
Open Thread for February 3 - 10

6 Post author: NancyLebovitz 03 February 2014 03:30PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Comments (331)

Comment author: Metus 03 February 2014 08:59:33PM 9 points [-]

On my way to visit friends, thinking about living in the city vs. living on the outskirts, I had a thought: though property prices in cities are higher, everything else is much closer: restaurants, shopping, ideally friends, and public transport. This means I spend much less time just getting around and commuting. I also save some amount on heating, as detached houses are necessarily more difficult to heat.

So on one hand I spend more on rent, but on the other hand I save on time, energy and transportation. So the "actual" cost of living in the city is lower than it might seem at first. Has anyone done an estimate of this "actual" cost, or should I do it myself as a kind of exercise? I am aware that there are quite a few parameters to consider, such as personal preferences for having parks nearby, noise levels, and my desire to go out.

Comment author: ThrustVectoring 04 February 2014 06:19:49PM 9 points [-]

If you live in a city, you can (and probably should) get away with not owning a car. Not only is a car unnecessary for getting where you want to go, but due to property prices, parking is a gigantic hassle and expense. Walking works well for anything within a mile, biking for anything within about 5, public transit or a cab for the metro area, and car rentals (or borrowing a friend's) can fill in for anything else that absolutely requires your own vehicle.

Not owning a car saves a significant amount of time and money and makes the math better for living in a more built-up area.

Comment author: btrettel 04 February 2014 09:39:39PM 5 points [-]

I've lived car-free for several years now and I think it's one of the best choices I've ever made. I'm saving a lot of money, staying in great shape from biking, and avoiding a stressful commute. Most people think I'm an eccentric for this, but I'm okay with that.

Comment author: [deleted] 05 February 2014 04:48:44PM 3 points [-]

It depends on where you are -- in certain places, public transportation sucks.

Comment author: Lumifer 05 February 2014 05:38:03PM 2 points [-]

If you live in a city, you can (and probably should) get away with not owning a car.

That very much depends on a particular city. And your lifestyle, of course.

Comment author: ChristianKl 04 February 2014 12:18:07AM 6 points [-]

I would guess that it depends a lot on the particular city you are talking about. That means it would be good to make the estimate yourself.

Comment author: Metus 04 February 2014 12:48:00AM 1 point [-]

Pointing me to data sources - especially in Europe - would be great, too.

Comment author: btrettel 04 February 2014 09:24:43PM *  1 point [-]

I recall, but am unable to find, a small study that compared living in typical American suburbs and driving vs. living in the city center and taking public transit, walking, or biking. As I recall, the authors concluded that the two are comparable in total cost for the "average" city. If that's true, then I think it's a strong case for living in the city, given that people underestimate how stressful their commutes are and that you'll save time.

Others, especially bicycle advocates, have made the same comparison. If you don't own a car, you can turn your savings into higher rent. In my experience, you'll easily save more money and time going the car-free route. I'd recommend the book How to Live Well Without Owning a Car for an introduction to this lifestyle. Note that this lifestyle is not for everyone, but I do think it's a good idea for a large segment of the population.

Edit: I think this is the "study" I referred to above. It's interesting to see how my memory distorted things; I thought of this as an academic study, and couldn't find it among the papers I saved. No wonder, as it was merely a newspaper column.

Comment author: WPKIOPHOWI 07 February 2014 11:20:20PM *  8 points [-]

I've been thinking about whether it's a good idea to quit porn (not masturbation, just porn). Does anyone have anything to add to the below?

Reasons not to quit:

  • It's difficult, which may cause stress and willpower depletion, though these effects would probably only be temporary.
  • It is pleasurable (i.e. valued just as a "fun" activity. This should be compared to alternative pleasurable activities, though, because any "porn time" can be replaced with "other fun things time").

Reasons to quit:

  • It's a superstimulus, and might interfere with the brain's reward system in bad ways. http://yourbrainonporn.com/ has some evidence, though nothing as strong as, say, an RCT studying the effects of quitting porn.
  • Time. Any time spent viewing porn is time that could be spent doing other things (not necessarily "working," but other relaxing/pleasurable activities which could have greater advantages. For example, reading fiction has the advantage that you can later talk about what you read with other people).
  • Possibility of addiction: I definitely don't think I have a porn addiction, and I doubt I'm likely to progress to one, but it's obviously possible anyway, and my own inside view on that isn't very safe to go on. From Wikipedia:

    A study found that 17% of people who viewed pornography on the Internet met criteria for problematic[clarification needed] sexual compulsivity.[9] A survey found that 20–60% of a sample of college-age males who use pornography found it to be problematic.[10] Research on Internet addiction disorder indicates rates may range from 1.5 to 8.2% in Europeans and Americans.[11] Internet pornography users are included in Internet users, and Internet pornography has been shown to be the Internet activity most likely to lead to compulsive disorders.[12]

I haven't viewed porn for about 2 weeks and it hasn't actually been that difficult, so I'm trying to decide whether I should just commit to quitting it completely. Right now I'm leaning toward quitting -- viewing porn might be harmful, and it's almost certainly not beneficial, so there's a higher expected value from quitting for anybody who doesn't assign much higher utility to the fun from porn than the fun from alternative activities.

For completeness, I should also mention the "nofap" movement. The anecdotes on there are the same sort of things you'd find when reading about homeopathy or juice fasts, though, so those can be mostly ignored.

Comment author: Viliam_Bur 09 February 2014 08:58:21PM *  5 points [-]

How about you make specific predictions (written) of what will happen if you abstain for a specific number of months, then abstain for the given number of months, and then evaluate the original predictions?

For things like "clarity of mind", find some way of measuring it. For things like "motivation" instead focus on what exactly you will be motivated to do.

Then compare with the same amount of time with porn.

It's still very little data, but better than no data at all.

Less meta -- I think it pretty much depends on what you replace it with, which can be something better or something worse, and you probably won't know the exact answer unless you try.

Comment author: ChrisHallquist 07 February 2014 11:43:50PM 5 points [-]

1 and 2 apply to entertainment in general. There's something to be said for cutting back on TV, aimless internet browsing, etc., but it makes more sense to focus on cutting back total time than on eliminating one particular form of entertainment.

As for 3, I'm not familiar with that particular study, but in my experience studies of "porn addiction" or "sex addiction" tend to rely on dubious definitions of "addiction." I'd advise against taking worries of porn addiction any more seriously than worries of "internet addiction" or "social media addiction" or "TV addiction" or whatever.

Comment author: Kaj_Sotala 12 February 2014 04:46:57PM 3 points [-]

I'd advise against taking worries of porn addiction any more seriously than worries of "internet addiction" or "social media addiction"

This sentence sounds like it's intended to communicate "porn addiction shouldn't be taken very seriously". But speaking as someone who is hardly ever capable of staying offline even for a day, despite huge increases in well-being whenever he is successful at it, to say nothing of the countless days ruined by getting stuck on social media, these examples make it sound like you were saying that porn addiction was an immensely big risk that was worth taking very seriously indeed.

Comment author: [deleted] 08 February 2014 10:39:12PM 0 points [-]

For example, reading fiction has the advantage that you can later talk about what you read with other people

So does porn... if you're young enough.

Comment author: ephion 10 February 2014 01:29:47PM 1 point [-]

Porn gets me off quicker; that is its utility. When I'm self-pleasuring for enjoyment, I don't watch it, because it's more fun to use my imagination. However, when I'm sexually frustrated and can't focus on what I want to focus on, pornography allows me to cut masturbation time down from 10-20 minutes to under 5. This is a great time saver, and allows me to spend my time more productively.

It is a superstimulus, and if you come to rely on it to come, or develop an addiction (arguably the same thing), then you'll have a problem. But if it isn't having any negative effect on your life, then why drop it?

Comment author: NancyLebovitz 03 February 2014 03:31:52PM 8 points [-]

Do you take notes when you read non-fiction you want to analyse? If so, in how much detail? On the first reading? Just points of disputation, or an attempt at a summary?

Comment author: lmm 03 February 2014 07:02:55PM 4 points [-]

No, I don't.

Comment author: Vaniver 03 February 2014 09:28:18PM 3 points [-]

I tend to go for notes chapter-by-chapter. Among other things, it takes long enough to read a chapter that I get to the point where I can remember any particular idea with ease but the flow of concepts has mostly been lost and all of the pieces have been shunted into long-term memory. If I can mostly reconstruct the chapter, great, if not, I go back and figure out what was where and why it was there. (It might be worthwhile to always go back and see what you missed / got wrong, but that would probably get close to doubling the necessary reading time.)

Comment author: dhoe 04 February 2014 10:12:57PM 2 points [-]

I do, but it's mostly because doing it helps me focus. I rarely go back to read my notes. Here's an example, for a book about SQL query tuning.

Comment author: Qiaochu_Yuan 04 February 2014 11:24:59PM 1 point [-]

I tried doing this briefly when I was experimenting with Workflowy but I found it excruciatingly boring and couldn't keep it up; it was close to ruining reading non-fiction for me and I stopped immediately when I noticed that.

Comment author: Nornagest 04 February 2014 11:40:24PM 1 point [-]

Workflowy's not the best tool for note-taking -- it's great for making structured lists of items that you only need to identify or briefly describe, making it a fantastic e.g. task list, but adding more structure to any particular item is pretty clunky (though at least possible).

I've historically used Keynote NF, but it's PC-only. Currently looking for an app that does the same thing on iDevices, since my iPad's becoming my go-to note-taking tool, but I haven't found anything that does everything I want yet.

Comment author: savageorange 04 February 2014 01:49:37AM 1 point [-]

Yes, if I don't take notes on the first reading there won't be a second reading. Not much detail -- more than a page is a problem (this can be ameliorated though, see below). I make an effort to include points of particular agreement, disagreement and some projects to test the ideas (hopefully projects I actually want to do rather than mere 'toy' projects).

Now would be a good time to mention TreeSheets, which I feel solves a lot of the problems of more established note-taking methods (linear, wiki, mindmap). It can be summarized as 'infinitely nestable spreadsheet/database with infinite zoom'. I use it for anything that gets remotely complex, because of the way it allows you to fold away arbitrary levels of detail in a visually consistent way.

Comment author: Emile 03 February 2014 09:39:50PM 1 point [-]

I'll usually:

  • Use a piece of paper as a bookmark on which I take notes (noting page numbers of bits I don't understand, attempts to summarize/reorganize, interesting insights, notes while I work something out, random ideas the text gives me) - it's not rare that I end a book with two or three pages of notes stuffed into them. I'll then go over those notes and maybe enter some bits in Anki
  • Directly enter stuff in Anki if it's atomic enough (it often isn't)
  • Take notes in Google Docs (if I'm near a computer at the time, or if I want "searchable" notes or to look up related info on the internet)

Comment author: luminosity 03 February 2014 08:51:00PM 1 point [-]

Usually I'll read it in depth first, then once I know if it's worth taking notes, I'll return to it and scan through quickly for those points I know are worth grabbing.

Comment author: sixes_and_sevens 03 February 2014 04:02:41PM 1 point [-]

I've fairly recently (over the past month or so) started taking notes on pretty much everything, as part of a drive to capture as much useful content in Evernote as possible. A lot of what I'm doing at the moment is probably quite wasteful, but I expect to figure out what is and isn't useful in fairly short order.

For ebooks I've been making judicious use of highlighting on the Kindle. Unfortunately the UK Kindle service isn't as feature-rich as the US counterpart, so I'm still looking into ways of parsing my clippings file into Evernote. For hardcopy books and lectures, I've taken to writing either bullet-pointed lists or mini-essays. This also seems to have the positive side-effects of forcing me to clearly articulate ideas I've just taken in, and stopping me from ruminating on the areas in question.

For example, late last night I was reading about the concept of "burden of proof" in legal and rhetorical contexts. This is a bit of a personal bugbear, and I ended up writing several hundred words informed by what I was reading. Not only can I now reference this when necessary, but it stopped me from trying to sleep with a bunch of proactive burden-of-proof-related arguments running through my head.

Comment author: Jayson_Virissimo 03 February 2014 04:01:23PM *  1 point [-]

I answered a similar question here.

As I read textbooks, I summarize the most important concepts (along with doing the exercises, if there are any) in a notebook, and then later (within a week) enter the notes into Anki as cloze-deletion flashcards. I don't have an objective measure of retention, but I believe it has vastly improved relative to when I would simply read the book.

Comment author: RowanE 05 February 2014 10:41:17AM 7 points [-]

It's at least commonly accepted that alcohol kills brain cells -- is there a study that actually links a certain amount of drinking to a certain number of IQ points lost?

Comment author: fubarobfusco 05 February 2014 04:29:01PM 8 points [-]

The relationship between alcohol use and cognitive function appears to be nonlinear, and indeed non-monotonic: light drinkers have better cognitive performance than nondrinkers. Reduction in cognitive performance for heavy drinkers is measured more in men than in women.

Source: Rodgers et al (2005), "Non-linear relationships between cognitive function and alcohol consumption in young, middle-aged and older adults: the PATH Through Life Project" — http://www.ncbi.nlm.nih.gov/pubmed/16128717

Chronic alcoholics do not have reduced numbers of neocortical neurons, but do have reductions in white matter volume.

Source: Jensen and Pakkenberg, "Do alcoholics drink their neurons away?" — http://www.sciencedirect.com/science/article/pii/014067369392185V

Neither of these studies speaks about the specific measurement you're asking for, IQ, but they do address the general topic.

(Chronic alcoholism is also associated with specific neurological conditions such as Wernicke-Korsakoff syndrome, which is caused by thiamine deficiency — someone who's getting most of their calories from booze is not getting enough nutrition.)

Comment author: hyporational 05 February 2014 10:27:24PM *  2 points [-]

People usually abstain for reasons that might themselves affect cognitive performance, such as depression or previous substance abuse.

Reduction in cognitive performance for heavy drinkers is measured more in men than in women.

They note that:

After adjustment for education and race, male hazardous/harmful drinkers no longer performed significantly less well than light drinkers, whereas male and female abstainers and occasional drinkers still did so.

-

Chronic alcoholism is also associated with specific neurological conditions such as Wernicke-Korsakoff syndrome, which is caused by thiamine deficiency — someone who's getting most of their calories from booze is not getting enough nutrition.

Alcoholism can also reduce thiamine absorption by as much as 50% in people who aren't malnourished.

Comment author: fubarobfusco 06 February 2014 01:55:32AM 3 points [-]

One of the pages off that link has this fact:

Some health policy experts have hypothesized that fortifying alcoholic beverages with thiamine would lower healthcare costs.

Now that's harm reduction!

Comment author: hyporational 05 February 2014 10:16:44PM 1 point [-]

I did a few Medline searches some time ago and the answer appeared to be no. Since then I've done enough self-quantification (mostly with Anki) to know that sleepless nights and even slight hangovers severely impair my abilities for several days. I was unaware of this effect before measuring my performance. Even small amounts of alcohol damage my sleep, and you could probably find studies consistent with this observation. This knowledge slowly crept up on me until it seemed actionable enough that further searching for studies felt like a desperate attempt to rationalize self-sabotage.

Measure your performance. Temporary effects are not a direct answer to your question, but might be sufficient knowledge for decision making.

Comment author: btrettel 04 February 2014 10:24:58PM *  7 points [-]

How do you organize your computer files? How do you maintain organization of your computer files? Anyone have any tips or best practices for computer file organization?

I've recently started formalizing my computer file organization. For years my computer file organization was best described as ad hoc and short-sighted. Even now, after trying to clean up the mess, when I look at some directories from 5 or more years ago I have a very hard time telling what separates two different versions of the same directory. I rarely left README-like files explaining what's what, mostly because I didn't think to.

Here are a few things I've learned:

  • Decide on a reasonable directory structure and iterate towards a better one. I can't anticipate how my needs would be better served by a different structure in the future, so I don't try too hard to. I can create new directories and move things around as needed. My current home directory is roughly structured into the following directories: backups, classes, logs, misc (financial info, etc.), music, notes, projects (old projects that preceded my use of version control), reference, svn, temp (files awaiting organization, mostly because I couldn't immediately think of an appropriate place for them), utils (local executable utilities).
  • Symbolic links are useful when you think a file might fit well in two places in a hierarchy. I don't care too much about making a consistent rule about where to put the actual file.
  • Version control allows you to synchronize files across different computers, share them with others, track changes, roll back to older versions (where you can know what changed based on what you wrote in the log), and encourages good habits (e.g., documenting changes in each revision). I use version control for most of my current projects, even those that do not involve programming (e.g., my notes repository is about 700 text files). I don't think which version control system you use is that important, though some (e.g., cvs) are worse than others. I use Subversion because it's simple.
  • I store papers, books, and other writings that I keep in a directory named reference. I try to keep a consistent file naming scheme: AuthorYearJournalAbbreviation.pdf. I have a text file that lists my own journal abbreviation conventions. If the file is not from a journal, I'll use something like "chapter" or "book" as appropriate. (Other people use software like Zotero or Mendeley for this purpose. I have Zotero, but mostly use it for citation management because I find it inconvenient to use otherwise.)
  • In terms of naming files, I try to think about how I'd find the file in the future and try to make it obvious if I navigate to the file or search for it. For PDFs, you often can't search the text, so perhaps my file naming convention should include the paper title to help with searching.
  • README files explaining things in a directory are often very helpful, especially after returning to a project after several years. Try to anticipate what you might not remember about a project several years disconnected from it.
  • Synchronizing files across different computers seems to encourage me to make sure the directory structure makes at least some sense. My main motivation in cleaning things up was to make synchronizing files easier. I use rsync; another popular option is Dropbox.
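The naming convention in the reference-directory bullet is easy to lint mechanically. Here is a minimal Python sketch; the regex and the `nonconforming` helper are my own guesses at the convention, not btrettel's actual tooling:

```python
import os
import re

# Illustrative pattern for AuthorYearJournalAbbreviation.pdf,
# e.g. Smith2010JFM.pdf ("chapter" or "book" also pass as the
# abbreviation part).
NAME_RE = re.compile(r"^[A-Z][A-Za-z-]+\d{4}[A-Za-z]+\.pdf$")

def nonconforming(reference_dir):
    """List files in the reference directory that don't match the
    naming scheme, so they can be renamed by hand."""
    return sorted(
        name for name in os.listdir(reference_dir)
        if not NAME_RE.match(name)
    )
```

Running this occasionally (or from a maintenance script) catches files saved in a hurry under names like "download (3).pdf" before they pile up.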

Using scripts to help maintain your files is enormously helpful. My goals are to have descriptive file names, to have correct permissions (important for security; I've found that files that touched a Windows system often have completely wrong permissions), to minimize disk space used, and to interact well with other computers. I have a script that I titled "flint" (file system lint) that does the following and more:

  • checks for duplicate files, sorting them by file size (fdupes doesn't do that; my script is pretty crude and not yet worth sharing)
  • scans for Windows viruses
  • checks for files with bad permissions (777, can't be written to, can't be read, executable when it shouldn't be, etc.)
  • deletes unneeded files, mostly from other filesystems (.DS_Store, Thumbs.db, Desktop.ini, .bak and .asv files where the original exists, core dumps, etc.)
  • checks for nondescriptive file names (e.g., New Folder, untitled, etc.)
  • checks for broken symbolic links
  • lists the largest files on my computer
  • lists the most common filenames on my computer
  • lists empty directories and empty files
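For a sense of scale, the duplicate-file and broken-symlink checks from the list above fit in a short Python sketch. This is my own rough illustration of the idea, not btrettel's actual flint script:

```python
import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    """Group files by size first (cheap), then confirm duplicates with
    a content hash (only files sharing a size are ever read). Returns
    (size, paths) groups sorted by file size, largest first."""
    by_size = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                continue
            by_size[os.path.getsize(path)].append(path)
    duplicates = []
    for size, paths in by_size.items():
        if len(paths) < 2:
            continue
        by_hash = defaultdict(list)
        for path in paths:
            with open(path, "rb") as f:
                by_hash[hashlib.sha256(f.read()).hexdigest()].append(path)
        for group in by_hash.values():
            if len(group) > 1:
                duplicates.append((size, sorted(group)))
    return sorted(duplicates, reverse=True)

def find_broken_symlinks(root):
    """List symlinks whose targets no longer exist."""
    broken = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path) and not os.path.exists(path):
                broken.append(path)
    return broken
```

The size-first grouping is what makes this tolerable on large trees: hashing only happens for files that already share a size.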

I'd be very interested in any other tips, as I often find my computer file organization to be a bottleneck in my productivity.

Comment author: bramflakes 04 February 2014 11:27:30PM *  8 points [-]

I have a folder that I do my short term work in:

D:/stupid shit that I can't wait to get rid of

This is set to auto-delete everything in it weekly. I had a chronic problem where small files that were useful for some minor task or another from months or years ago would clutter up everything. This was my "elegant" solution to the problem and it's served me well for years, because it gave me an actual incentive to put my finished work in a sensible place.

Although now that I think about it, it would be a better idea for it to only delete files that haven't been touched for a week, rather than wiping everything all at once on a Saturday...
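That age-based variant is only a few lines of code. A minimal Python sketch (my own illustration, not bramflakes' actual setup; `reap_old_files` is a hypothetical helper):

```python
import os
import time

def reap_old_files(root, max_age_days=7, dry_run=True):
    """Delete regular files not modified for max_age_days;
    when dry_run is True, just list what would go."""
    cutoff = time.time() - max_age_days * 86400
    reaped = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                continue
            if os.path.getmtime(path) < cutoff:
                reaped.append(path)
                if not dry_run:
                    os.remove(path)
    return reaped
```

Scheduled daily, this keeps recently touched work safe while still applying constant pressure to move finished files somewhere sensible.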

Comment author: Risto_Saarelma 05 February 2014 07:01:59AM 3 points [-]

Although now that I think about it, it would be a better idea for it to only delete files that haven't been touched for a week, rather than wiping everything all at once on a Saturday...

The Linux program tmpreaper will do this. It can be made into a cron job. I've got mine set for 30 days.

Comment author: Risto_Saarelma 05 February 2014 07:01:20AM 2 points [-]

If you're comfortable with command-line UIs, git-annex is worth a look for creating repositories of large static files (music, photos, pdfs) you sync between several computers.

I use regular git for pretty much anything I create myself, since I get mirroring and backups from it. Though it's mostly text, not audio or video. Large files that you change a lot probably need a different backup solution. I've been trying out Obnam as an actual backup system. Also bought an account at an off-site shell provider that also provides space for backups.

Use the same naming scheme for your reference article names and the BibTeX identifiers for them, if you're writing up some academic research.

GdMap or WinDirStat are great for getting a visualization of what's taking space on a drive.

If your computer ever gets stolen, you probably want it to have full-disk encryption. That way it's only a financial loss, and probably not a digital security breach.

It constantly fascinates me that you can name the exact contents of a file pretty much unambiguously with something like a SHA256 hash of it, but I haven't found much actual use for this yet. I keep envisioning schemes where your last-resort backup of your media archive is just a list of file names and content hashes, and if you lose your copies you can just use a cloud service to retrieve new files with those hashes. (These of course need to be files that you can reasonably assume other people will have bit-to-bit equal copies of.) Unfortunately, there don't seem to be very robust and comprehensive hash-based search and download engines yet.
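The manifest half of that last-resort backup scheme is straightforward to produce; it's the hash-based retrieval side that's missing. A minimal sketch of the manifest (my own illustration):

```python
import hashlib
import os

def build_manifest(root):
    """Map each file's path (relative to root) to the SHA-256 hex
    digest of its contents. The manifest alone names every file in
    the archive, assuming bit-identical copies can be retrieved
    from somewhere else later."""
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                # Hash in 1 MiB chunks so large media files don't
                # have to fit in memory.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            manifest[os.path.relpath(path, root)] = h.hexdigest()
    return manifest
```

A few kilobytes of path-to-digest pairs can stand in for gigabytes of media, provided the retrieval problem is ever solved.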

Comment author: gwern 04 February 2014 11:03:15PM 2 points [-]

checks for duplicate files, sorting them by file size (fdupes doesn't do that; my script is pretty crude and not yet worth sharing)

How can identical files be sorted by file size?

Comment author: FourFire 08 February 2014 11:39:37PM 1 point [-]

My first reflex is to exclaim that I don't organize my files in any way, but that is incorrect: I merely lack comprehension of how my filing system works. It's inconsistent, patchy and arbitrary, but I do have some sort of automatic filing system which feels "right", and when my files are not in this system my computer feels "wrong".

Comment author: btrettel 09 February 2014 02:55:34PM 0 points [-]

Intriguing. Can you describe some of your automated filing rules? I am considering trying such a setup via fsniper.

Comment author: FourFire 11 February 2014 02:37:32AM *  0 points [-]

I wouldn't recommend duplicating my filesystem (it's most likely less useful than most filing systems that aren't "throw everything in one folder/on the desktop and forget about it"), but I'll note some key features:

Files reside inside folder trees whose folders are either named clearly as what they are, or given obfuscating special words or made-up phrases (even acronyms) which have meaning only to me in the context of that particular position in the file tree.

Different types of files have separate folders in places.

Folder trees are arranged in sets of categories, subcategories and filetypes (the order of sorting is very ad hoc and arbitrary). You could have, for example: Media > Type of media > Genre of media > Creator > Work, but it could just as easily have Creator at the root of the tree.

I really suggest you just make your own system or copy someone else's; it will more likely than not provide more utility.

Edit: just to be clear, I don't have any sort of automated software which organizes my files for me; I am merely saying that my mind organizes the files semiconsciously, so I'm not directly "driving" when the act of organizing occurs.

Comment author: jkaufman 05 February 2014 09:39:21PM *  1 point [-]

The only thing that's worked for me in the long term is making things public on the internet. This generally means putting them on my website, though code goes on GitHub and shared-editing things go in Google Docs. Everything else older than a couple of years is either gone or somewhere I can no longer find.

Comment author: Antiochus 03 February 2014 04:31:07PM 7 points [-]

How much is it worth spending on a computer chair? Is a chair for both work and play (ie video games) practical, or is reclining comfort necessarily opposed to sit-up comfort?

Comment author: DaFranker 03 February 2014 07:11:42PM *  4 points [-]

In an attempt to simplify the various details of the cost-benefit calculations here:

If you spend:

  • 1-2 hours on this chair per day: It might be worth spending some time shopping for a decent seat at Staples, but once you find something that fits and feels comfortable (with some warnings to take into consideration), pretty much go with that. You should find something below $100 for sure, and can probably get away with spending under $60 if you catch good sales.

  • 3-4 hours/day: If you're shopping at Staples, be more careful and check the engineering of the chair if you've got any knowledge there. Stuff below $60 will probably break down, bend, and become all other sorts of uncomfortable after a few months of use. If your body mass is high, you might need to go for solidity over comfort, or accept the unfair hand you're dealt and spend more than $150 for something that combines enough comfort, ergonomics and solid reliability.

  • More than 4 hours/day on average: This is where the gains become nonlinear, and you will want to seriously test and examine anything you're buying under $150. At this point you need to consider ergonomics, long-term comfort (which can't reliably be "tested in store" at all, IME), and reliability: a very solid frame for extended use that can handle the body's natural jiggling and squirming without deforming. This includes the frame itself, but also any cushions, since those can "deflate" very rapidly if the manufacturer skimped there, becoming hard and just as uncomfortable as a bent chair. The same advice applies as when shopping for mattresses, work boots, or any other tool that you use all day, every day. It's only at this point that the differences between relaxed, "work" and "gaming" postures start really mattering, and I'd say if you actually spend 6-8 hours per day on average on this chair, you definitely want to go for the best you can get. How much that needs to cost, unfortunately, isn't a known quantity; it depends very heavily on your body size, shape, mass, leg/torso ratio, how you normally move, and a bunch of other things, so there's a lot of hit-and-miss, unfortunately, unless you have access to the services of a professional in office ergonomics. Even then, I can't myself speak for how much a professional would help.

Comment author: garabik 04 February 2014 01:20:00PM 3 points [-]

It makes a difference -- I now have a good, high-quality chair that cost over 250€ (not from my own pocket) and it's close to perfect: I can recline it to a comfortable position that is not possible with an "ordinary office" chair (I used to break those down on a regular basis). Despite being advertised as "super-resistant", this one has already broken twice (covered by warranty). And when I had to sit in an "ordinary office" chair, I found that I cannot work for more than an hour or two before getting serious back pain -- this seems to be related to the monitor being below eye level and the inability to recline; I like to have the monitor exactly at eye level and to look slightly upwards.

Comment author: Antiochus 04 February 2014 03:34:44PM *  3 points [-]

Could you post a link to the kind of chair that you got?

Comment author: garabik 07 February 2014 09:54:24AM 1 point [-]

It's not quite what I have (this one is a year-old model), but seems close: link here

I either misremember the price, or it went down significantly (or the combination...)

Comment author: Metus 03 February 2014 07:33:27PM *  1 point [-]

I want to extend this to mattresses. About a third of my time is spent sleeping, how much can I spend before marginal returns kick in?

Comment author: ChristianKl 03 February 2014 10:16:01PM 3 points [-]

As far as mattresses go, it's important to note that it's not all about price. When I read a guide by a German consumer advice group they made the point that it's important to actually test the mattress in person to see how it fits your individual preferences.

Comment author: ephion 03 February 2014 08:42:05PM 2 points [-]

I bought this and it's amazing. I was sleeping on a $900 spring mattress, and this is so much better in every respect. It's held up for 1.5 years, now, and is just as nice as the day I got it.

Comment author: falenas108 04 February 2014 01:57:17PM 1 point [-]

Beware of Other-Optimizing here. You're going to see a lot of "This mattress is the best thing I've ever slept on!", and it may not be the case for you. I second Christian's advice to actually go into a store and lie on a mattress.

Comment author: btrettel 04 February 2014 08:27:49PM *  1 point [-]

My father is one of the patent examiners for mattresses. I brought him along the last time I bought a mattress. His recommendation was like ChristianKl's: try different mattresses and see what's comfortable. Cost and comfort are not necessarily related. Whether you find a mattress comfortable in the store is the best indication of whether you'll find it comfortable at home, so pick the cheapest one you find comfortable. That said, you might find some more expensive mattresses last longer, though he indicated that most mattresses are designed to wear out around the same time. Also, he's highly skeptical of the value of memory foam and other things you see on TV, so don't assume those things are necessarily better.

For what it's worth, he sleeps on a waterbed. I'm not sure, but I think the choice might be motivated by my mother's allergies; by design, waterbeds can't absorb allergens.

Comment author: ThrustVectoring 04 February 2014 06:25:05PM 1 point [-]

Mattresses aren't the only thing you can sleep on. I'd consider picking up and installing a hammock - they're not only cheap (~$100 for a top of the line one, $10 and 2 hours for making your own), but they also give you significantly more usable living space.

Comment author: drethelin 04 February 2014 07:12:55PM 2 points [-]

Most people like to have a bed they can have sex in though

Comment author: niceguyanon 03 February 2014 05:36:55PM 1 point [-]

Not an answer, but I did discover kneeling chairs, because I am also in the market for a new chair. I'd try one with back support, but none of the reviews of the products on amazon compel me to make any purchases.

http://www.amazon.com/Office-Star-Ergonomically-Designed-Casters/dp/B002L15NSK/ref=nosim?tag=vglnk-c319-20

http://www.ncbi.nlm.nih.gov/pubmed/18810008

Comment author: Stabilizer 08 February 2014 06:14:12AM *  6 points [-]

To those knowledgeable in philosophy, can someone please explain why Wittgenstein is such a big deal? I skimmed the Wikipedia articles on Tractatus Logico-Philosophicus and Philosophical Investigations.

I have no idea what's going on in Tractatus.

The point made in Philosophical Investigations---namely, that a lot of philosophical problems come down to confusions about language---seems interesting and correct to me: but really, did no one before Wittgenstein think about this? I mean, if I read Russell, it seems that he had a similar brand of clear thinking going on. I'm sure various strains of Traditional Rationality were around well before Wittgenstein.

Or is it only because I'm living in the post-Wittgenstein world that I feel that this is relatively obvious?

Comment author: Squark 09 February 2014 08:49:45PM 3 points [-]
Comment author: Emile 10 February 2014 01:28:41PM 1 point [-]

Indeed, it is pretty good - and not obvious.

However, that doesn't answer Stabilizer's curiosity about which ideas were really original to him, how his ideas about confusions of language compare to Russell's, etc. I'm also interested in knowing :)

(I've read the Philosophical Investigations but not Russell, and don't have a clear idea of the history of ideas in that domain)

Comment author: Petruchio 03 February 2014 06:43:35PM 6 points [-]

I have just started playing poker online. On Less Wrong Discussion, Poker has been called an exercise in instrumental rationality, and a so-called Rationality Dojo was opened via RationalPoker.com. I have perused this site, but it has been dormant since July 2011. Other sources exist, such as 2 + 2, Pokerology and Play Winning Poker, but none of them have the quality of content or style that I have found on Less Wrong. Is anyone here a serious poker player? Is there any advice for someone who wants to become a winning player themselves?

Comment author: Tenoke 04 February 2014 11:44:53AM *  9 points [-]

What is your goal? If you want to earn significantly more than (let's say) $20,000 a year, then poker is probably not your best bet. I used to play during 2007-2010 and the games were getting progressively tougher (more regulars, less fish), the same way they had been in the few years before I started playing online. I recently checked how things are going and the trend seems to still be in place. Additionally, the segregation of countries in online poker (Americans not being able to play with non-Americans, for example) is making things worse, and this is in fact what drove me away mid-2010.

TL;DR You are several years too late to have a decent chance of making good money with poker.

Comment author: Petruchio 04 February 2014 11:08:02PM 2 points [-]

Thank you for the heads up. I'll keep it to more casual play. Do you have any experience with brick-and-mortar poker? And what are you doing now, if you are (presumably) no longer playing professionally?

Comment author: Tenoke 05 February 2014 07:41:31AM 2 points [-]

Do you have any experience with brick-and-mortar poker?

There are more fish live, sure. However, since you can only play one table at a time and only get through about 30 hands an hour at a live table, you will need to play at higher stakes and have a big enough bankroll.

And what are you doing now if you are no longer (presumably) playing professionally?

For the record I wasn't making really big money back then or anything either (decent enough for the country I used to live in but that's it). I work now and if you are looking for job advice, the 'obvious' one is programming.

Comment author: JRMayne 06 February 2014 12:12:03AM 2 points [-]

Aside: Poker and rationality aren't as strongly correlated as you might expect. (Poker and math is a stronger bond.) Poker players tend to be very good with probabilities, but their personal lives can show a striking lack of rationality.

To the point: I don't play poker online because it's illegal in the US. I play live four days a year in Las Vegas. (I did play more in the past.)

I'm significantly up. I am reasonably sure I could make a living wage playing poker professionally. Unfortunately, the benefits package isn't very good, I like my current job, and I am too old to play the 16-hour days of my youth.

General tips: Play a lot. To the extent that you can, keep track of your results. You need surprisingly large sample sizes to determine whether you're really a winner, unless you have a signature performance. (If you win three 70-person tournaments in a row, you are better than that class of player.) No-limit hold 'em (my game of choice) is a game where you can win or lose based on luck a lot of the time. Skill will win out over very long periods of time, but don't get too cocky or depressed over a few days' work.

Try to keep track of those things you did that were wrong at the time. If you got all your chips in pre-flop with AA, you were right even if someone else hits something and those chips are now gone. This is the first-order approximation.

Play a lot, and try to get better. If you are regularly losing over a significant period of time, you are doing something wrong. Do not blame the stupid players for making random results. (That is a sign of the permaloser.)

Know the pot math. Know that all money in the pot is the same; how much of the pot came from you doesn't matter. Determine your goals: Do you want to fish-hunt (find weak games, kill them) or are you playing for some different goal? Maybe it's more fun to play stronger players. Plus, you can get better faster against stronger players, if you have enough money.
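The pot math JRMayne refers to can be sketched as a quick calculation. This is a hypothetical illustration (the numbers and function names are invented, not from the comment): a call is profitable when your chance of winning exceeds the fraction of the final pot you would be contributing.

```python
def pot_odds(pot: float, to_call: float) -> float:
    """Fraction of the final pot that your call would contribute.

    Note: all money already in the pot is treated the same; it does
    not matter who put it there, including you.
    """
    return to_call / (pot + to_call)


def call_is_profitable(pot: float, to_call: float, equity: float) -> bool:
    # Profitable when your equity (chance of winning at showdown)
    # exceeds the price the pot is laying you.
    return equity > pot_odds(pot, to_call)


# Example: $80 in the pot, $20 to call -> you need more than
# 20/(80+20) = 20% equity to call profitably.
odds = pot_odds(80, 20)                       # 0.2
profitable = call_is_profitable(80, 20, 0.25) # True with 25% equity
```

The point about ignoring your own prior contributions falls out of the formula: only `pot` (what is there now) and `to_call` (what it costs you from here) appear.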

Finally, don't be a jerk. Poker players are generally decent humans at the table in my experience. Being a jerk is unpleasant, and people will be gunning for you. It is almost always easier to take someone's money when they are not fully focused on beating you. Also, it's nicer. Don't (in live games) slow-roll, give lessons, chirp at people, bark at the dealer, or any of that. Poker is a fun hobby.

Comment author: Nornagest 06 February 2014 12:50:41AM 2 points [-]

Poker and rationality aren't close to excellently correlated. (Poker and math is a stronger bond.) Poker players tend to be very good at probabilities, but their personal lives can show a striking lack of rationality.

Poker teaches only a couple of significant rationality skills (playing according to probabilities even when you don't intuitively want to; beating the sunk-cost fallacy and loss aversion), but it's very good at teaching those if approached with the right mindset. It also gives you a good head for simple probability math, and if played live makes for good training in reading people, but that doesn't convert to fully general rationality skills without some additional work.

I'd call it more a rationality drill than a rationality exercise, but I do see the correlation.

(As qualifications go, I successfully played poker [primarily mid-limit hold 'em] online before it was banned in the States. I've also funded my occasional Vegas trips with live games, although that's like taking candy from a baby as long as you stay sober -- tourists at the low-limit tables are fantastically easy to rip off.)

Comment author: Petruchio 06 February 2014 02:48:23AM *  1 point [-]

Poker also requires the skill of identifying and avoiding tilt, the state of being emotionally charged that leads to the sacrifice of good decision-making. A nice look at the biases which need to be reduced to play effective poker can be found at Rationalpoker.com.

I suppose poker is more of a rationality drill than an exercise, and just as a physicist may be successful in his field while having a broken personal life, so may a poker player fall into the same trap.

Comment author: Kaj_Sotala 04 February 2014 08:22:57AM *  15 points [-]

I know how egoistic this comment risks sounding, but: many different people (at least half a dozen) have independently expressed to me that they find the links that I post on social media to be consistently interesting and valuable, to the point of one person claiming that about 40% of the value that she got out of Facebook was from reading my posts.

Thus, if you're not already doing so, you may be interested in following me on social media, either on Facebook or Google Plus. I'm a little picky about accepting friend requests on FB, but anyone is free to follow me there. If you don't want to be on any of those services, it's apparently also possible to get an RSS feed of the G+ posts. (I also have a Twitter account, but I use that one a lot less.)

On the other hand, if you're procrastination-prone, you may want to avoid following me - I've also had two people mention that they've at least considered unfollowing me because they waste too much time reading my links.

Comment author: Emile 04 February 2014 09:02:18AM 3 points [-]

I can confirm that you and gwern are my favourite reads on Google+ (though I don't visit either Google+ or Facebook very often).

Comment author: Curiouskid 06 February 2014 05:16:24AM *  2 points [-]

The most interesting quote I found in the first 5 minutes of browsing your G+ feed:

"Unfortunately, the bubble was to burst once again, following a series of attacks on connectionism’s representational capabilities and lack of grounding. Connectionist models were criticized for being incapable of capturing the compositionality and productivity characteristic of language processing and other cognitive representations (Fodor & Pylyshyn 1988); for being too opaque (e.g., in the distribution and dynamics of their weights) to offer insight into their own operation, much less that of the brain (Smolensky 1988); and for using learning rules that are biologically implausible and amount to little more than a generalized regression (Crick 1989). The theoretical position underlying connectionism was thus reduced to the vague claim that the brain can learn through feedback to predict its environment, without a psychological explanation being offered of how it does so. As before, once the excitement over computational power was tempered, the shortage of theoretical substance was exposed.

"One reason that research in connectionism suffered such setbacks is that, although there were undeniably important theoretical contributions made during this time, overall there was insufficient critical evaluation of the nature and validity of the psychological claims underlying the approach. During the initial explosions of connectionist research, not enough effort was spent asking what it would mean for the brain to be fundamentally governed by distributed representations and tuning of association strengths, or which possible specific assumptions within this framework were most consistent with the data. Consequently, when the limitations of the metaphor were brought to light, the field was not prepared with an adequate answer. On the other hand, pointing out the shortcomings of the approach (e.g., Marcus 1998; Pinker & Prince 1988) was productive in the long run, because it focused research on the hard problems. Over the last two decades, attempts to answer these criticisms have led to numerous innovative approaches to computational problems such as object binding (Hummel & Biederman 1992), structured representation (Pollack 1990), recurrent dynamics (Elman 1990), and executive control (e.g., Miller & Cohen 2001; Rougier et al. 2005). At the same time, integration with knowledge of anatomy and physiology has led to much more biologically realistic networks capable of predicting neurological, pharmacological, and lesion data (e.g., Boucher et al. 2007; Frank et al. 2004). As a result, connectionist modeling of cognition has a much firmer grounding than before."

-- Matt Jones & Bradley C. Love, Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition.

I would love more detailed/referenced high-level analyses of different approaches to AI (e.g. connectionism v. computationalism v. WBE).

I suppose this would be a good place to start at the very least:

Comment author: Curiouskid 07 February 2014 10:35:03PM 1 point [-]

I'm curious why this was down voted.

I thought this was an excellent quote from his newsfeed and that it was good evidence that his feed was worth reading. Then, I indirectly asked if he had any similar links/resources, since I thought the quote was so good.

Comment author: NoSuchPlace 07 February 2014 01:57:17PM 5 points [-]

Today's SMBC is about an AI with a utility function which sounds good but isn't.

Comment author: CAE_Jones 10 February 2014 02:44:02AM 4 points [-]

Someone suggests that planning is bad for success. There is very little research cited, however (there is one study involving CEOs). Is there more confirming / invalidating evidence for this idea somewhere?

Comment author: niceguyanon 07 February 2014 04:22:25AM *  4 points [-]

After reading this main post, it dawned on me that the scary sounding change terminal goals technique is really similar to just sour grapes reasoning plus substituting them with lower hanging grapes, that would eventually get you to the higher hanging grapes you originally wanted.

I typically refrain from deluding myself to think that I don't want what is hard to attain, because I know I really do want it. With sour grapes reasoning I can pretend to not want my original goal as much as I now want another more instrumental goal. I feel like this helps me cope and be more productive, instead of frustrating myself with hard to define terminal goals.

At first I thought that changing terminal goals would be kind of a hard mind hack to put to use, but now that I think about it, it's actually quite easy to carefully delude myself. This hack doesn't have to just apply to lofty terminal goals, it can apply to goals that are just simply not in your locus of control, like getting the job or making the team. Didn't get that internship? "Pfft, I didn't really want it anyway, I really just want to practice and learn these skills to be an awesome programmer." Didn't make the cut for this year's team? Pfft, I really just want to have an awesome crossover and sweet jump shot."

Comment author: lukeprog 11 February 2014 05:55:14PM 3 points [-]

Doing early prep work on my scientific review of Transcendence, I came across this amusing anecdote from Lab Coats in Hollywood:

Marine biologist Mike Graham, for example, was giving a lecture to Finding Nemo’s (2004) animators when the director asked him “if there was one thing that the film might get wrong that would really disturb him.” An account of this meeting in Nature shows how Graham’s answer created a predicament for the animators: “Quick as a flash, Graham said the most intolerable outrage would be to see kelp — a type of seaweed that only grows in cold waters — depicted in a coral reef. There was an uncomfortable shuffling in the audience. Then a voice from the back called out: ‘Better not go see the movie then.’ But if you check out your video or DVD, you’ll see there is no kelp. After Graham raised his objections, every frond was carefully removed from each scene, at considerable cost."

Comment author: lukeprog 11 February 2014 06:04:21PM 1 point [-]

Another clip:

Filmmakers in the 1950s and 1960s... had to rely on propane tanks to mimic the exhaust coming off a rocket in space. When gas leaves the tank it “curls” as the atmosphere causes it to form vortices. In a vacuum gas does not behave in this manner, so these films were inaccurate in this respect. During production for Deep Impact, Chris Luchini explained this to the propmakers regarding the rocket exhaust as well as the comet’s outgassing. Liquid nitrogen helped them get around this problem for the rocket exhaust, but for safety reasons they were unable to utilize this for outgassing jets. When Luchini saw a rough cut of the film he noticed the curling of the gas off these jets. He mentioned this error to a special effects technician who used a CGI wipe effect to remove the curling days before the film’s premiere. Such a fix would have been impossible prior to the development of CGI technologies. Although CGI work can be expensive and difficult, it is often easier and cheaper to fix scientific inaccuracies during postproduction than it would be to struggle with them during production. In this case, they were able to rectify an error days before the release of the film.

Comment author: Fossegrimen 07 February 2014 10:57:15AM 3 points [-]

Anyone care to elaborate on Why a Bayesian is not allowed to look at the residuals?

I got hunches, but don't feel qualified to explain in detail.

Comment author: witzvo 09 February 2014 07:13:15PM 1 point [-]

To be a Bayesian in the purest sense is very demanding. You must articulate not only a basic model for the structure of the data and the distribution of the errors around that structure (as in a regression model), but all your further uncertainty about each of those parts. If you have some sliver of doubt that maybe the errors have a slight serial correlation, that has to be expressed as part of your prior before you look at any data. If you think the model for the structure might not be a line, but might be better expressed as an ordinary differential equation with a somewhat exotic expression for dy/dx, then that had better be built in with appropriate prior mass too. And you'd better not do this just for the 3 or 4 leading possible modifications, but for every one that you assign prior mass to; and don't forget uncertainty about that uncertainty, up the hierarchy. Only then can the posterior computation, which is now rather computationally demanding, compute your true posterior.

Since this is so difficult, practitioners often fall short somewhere. Maybe they compute the posterior from the simple form of their prior, then build in one complication, compute a posterior for that, and compare; if the two look similar enough, they conclude that building in more complications is unnecessary. Or maybe... gasp... they look at residuals. Such behavior is often a violation of the (full) likelihood principle, because the principle demands that the probability densities all be laid out explicitly and that we only obtain information from ratios of those.

So pragmatic Bayesians will still look at the residuals (Box 1980).
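A minimal sketch of the pragmatic move witzvo describes, in the spirit of Box (1980), is the posterior predictive check: simulate replicated data from the posterior and ask whether the observed data look typical. Everything below is an invented toy example (conjugate normal-mean model, made-up data), not anything from the thread:

```python
import random
import statistics

random.seed(0)

# Observed data from some process (invented for illustration).
y = [2.1, 1.9, 2.5, 2.3, 1.8, 2.2, 2.0, 2.4]
sigma = 0.3            # noise sd, assumed known for conjugacy
n = len(y)

# Conjugate normal prior on the mean: mu ~ N(0, 10^2).
prior_mean, prior_sd = 0.0, 10.0
post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
post_mean = post_var * (prior_mean / prior_sd**2 + sum(y) / sigma**2)

# Posterior predictive check: draw mu from the posterior, simulate a
# replicated dataset, and compare a discrepancy statistic (here the
# sample sd) with the observed one -- the Bayesian analogue of
# "looking at the residuals".
observed_stat = statistics.stdev(y)
draws = 2000
more_extreme = 0
for _ in range(draws):
    mu = random.gauss(post_mean, post_var**0.5)
    y_rep = [random.gauss(mu, sigma) for _ in range(n)]
    if statistics.stdev(y_rep) >= observed_stat:
        more_extreme += 1

# Posterior predictive p-value: values near 0 or 1 flag model misfit.
ppp = more_extreme / draws
```

Notably, this check steps outside the likelihood principle in exactly the way the comment describes: the doubt about the model was not encoded in the prior, so the check is a diagnostic rather than part of the posterior computation.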

Comment author: witzvo 09 February 2014 07:48:50PM *  0 points [-]

As a counterargument to my previous post, if anyone wants an exposition of the likelihood principle, here is reasonably neutral presentation by Birnbaum 1962. For coherence and Bayesianism see Lindley 1990.

Edited to add: As Lindley points out (section 2.6), the consideration of the adequacy of a small model can be tested in a Bayesian way through consideration of a larger model, which includes the smaller. Fair enough. But is the process of starting with a small model, thinking, and then considering, possibly, a succession of larger models, some of which reject the smaller one and some of which do not, actually a process that is true to the likelihood principle? I don't think so.

Comment author: Lumifer 07 February 2014 03:26:31PM *  11 points [-]

This is awesome.

Here is the abstract of a paper in Neuroscience Letters. The paper is titled "Early sexual experience alters voluntary alcohol intake in adulthood".

And the abstract goes

Steroid hormones signaling before and after birth sexually differentiates neuronal circuitry. Additionally, steroid hormones released during adolescence can also have long lasting effects on adult behavior and neuronal circuitry. As adolescence is a critical period for the organization of the nervous system by steroid hormones it may also be a sensitive period for the effects of social experience on adult phenotype. Our previous study indicated that early adolescent sexual activity altered mood and prefrontal cortical morphology but to a much smaller extent if the sexual experience happened in late adolescence. In humans, both substance abuse disorders and mood disorders greatly increase during adolescence. An association among both age of first sexual activity and age of puberty with both mood and substance disorders has been reported with alcohol being the most commonly abused drug in this population. The goal of this experiment was to determine whether sexual experience early in adolescent development would have enduring effects on adult affective and drug-seeking behavior.

.
.
.
.
.
.
and the abstract continues

Compared to sexually inexperienced HAMSTERS and those that experienced sex for the first time in adulthood, animals that mated at 40 days of age and were tested either 40 or 80 days later significantly increased depressive- but not anxiety-like behaviors and increased self-administration of saccharine-sweetened ethanol. The results of this study suggest that an isolated, though highly relevant, social experience during adolescence can significantly alter depressive-like behavior and alcohol self-administration in adulthood.

I propose that from now on the titles of all papers about physiology and psychology should be read with "...in hamsters" appended to them.

Comment author: ChrisHallquist 09 February 2014 08:19:46PM *  5 points [-]

I'm seeing a lot of things claiming that over the long run, people can't increase their output by working much more than 40 hours per week. It might (so the claim goes) work for a couple weeks of rushing to meet deadline, but if you try to keep up such long hours long-term your hourly productivity will drop to the point that your total output will be no higher than what you'd get working ~40 hour weeks.

There seem to be studies supporting this claim, and I haven't been able to find any studies contradicting it. On the other hand, it seems like something that's worth being suspicious of simply because of course people would want it to be true. Also, I've heard that the studies supporting this claim weren't performed until after the 40 hour work week had become entrenched for other reasons, which seems suspicious. Finally, if (salaried) employees working long hours is just them trying to signal how hard working they are, at the expense of real productivity, it's a bit surprising managers haven't clamped down on that kind of wasteful signaling more.

(EDIT: Actually, failure of managers to clamp down on something is probably pretty weak evidence of it not being wasteful signaling, see here.)

This seems like a question of great practical importance, so I'm really eager to hear what other people here think about it.

Comment author: Nornagest 09 February 2014 09:31:32PM *  5 points [-]

Well, it's quite unlikely that 40 hours/week is exactly the right value. I'd expect that what's going on involves researchers comparing the cultural default to a grab-bag of longer hours, probably with fairly coarse granularity, and concluding that the cultural default works better even though it might not be absolutely optimal.

There are also cultural factors to take into account, both local to the company and general to the society. If we've habituated ourselves to thinking that 40 hours/week is normal for people in general, it wouldn't be surprising to me if working longer hours acted as a stressor purely by comparison with others. Similarly, among companies, expecting employees to work longer hours than the default would probably correlate with putting high pressure on them in other ways, and this would probably be very hard to untangle from the productivity statistics.

Comment author: lukeprog 09 February 2014 09:18:36PM 3 points [-]

I'm seeing a lot of things claiming that over the long run, people can't increase their output by working much more than 40 hours per week.

I think this is just false. It seems to me that lots of people work long hours throughout their entire career, with output much higher than if they only worked 40 hrs/wk. But I haven't looked into studies.

Comment author: fubarobfusco 09 February 2014 08:30:58PM 4 points [-]

Finally, if (salaried) employees working long hours is just them trying to signal how hard working they are, at the expense of real productivity, it's a bit surprising managers haven't clamped down on that kind of wasteful signaling more.

I'm not sure that "X is wasteful signaling and hurts productivity" is very strong evidence for "managers would minimize X".

One manager I used to work for got in some social trouble with his peers (other managers in the same organization) for tolerating staff publicly disagreeing with him on technical issues. In a different workplace and industry, I've heard managers explicitly discuss the conflicts between "managing up" (convincing your boss that your group do good work) and "managing down" (actually helping your group do good work) — with the understanding that if you do not manage up, you will not have the opportunity to manage down.

A lot of the role of managers seems to be best explained as ape behavior, not agent behavior.

Comment author: DaFranker 11 February 2014 01:15:19PM 0 points [-]

A lot of the role of managers seems to be best explained as ape behavior, not agent behavior.

A localized-context warning is missing here.

There are also other warnings that need to be thrown in:
  • People who only care about the social-ape aspects are more likely to seek the position.
  • People in general do social-ape stuff, at every level, not just manager level, with the aforementioned selection effect only increasing the apparent ratio.
  • On top of that, instances of social-ape behavior are more salient and, usually, more narratively impactful, both because of how "special" they seem and because the human brain is fine-tuned to pick up on them.

Another unstudied aspect, which I suspect is significant but don't have much solid evidence about, is that IMO good exec and managerial types seem to snatch up and keep all the "decent" non-ape managers, which would make all the remaining ape dregs look even more predominant in the places that don't have those snatchers.

But anyway, if you model the "team" as an independent unit acting "against" outside forces or "other tribes" which exert social-ape-type pressures and requirements on the Team's "tribe", then the manager's behavior is much more logical in agent terms: One member of the team is sacrificed to "social-ape concerns", a maintenance or upkeep cost to pay of sorts, for the rest of the team to do useful and productive things without having the entire group's productivity smashed to bits by external social-ape pressures.

I find that in relatively-sane (i.e. no VPs coming to look over the shoulder of individual employees or poring over Internet logs and demanding answers and justifications for every little thing) environments with above-average managers, this is usually the case.

Comment author: Squark 09 February 2014 08:41:19PM 2 points [-]

Based on personal experience AKA anecdotal evidence w/o even quantitative verification, for what it's worth:

  • I think the optimal point depends (significantly) on the person, the job and the work environment
  • For me, 45-50 hours a week seems efficient, most of the time
  • Regarding managers not clamping down on wasteful signaling: I don't think it's strong evidence, because of course managers would want the opposite to be true. For them making employees work more hours feels like the simplest way to get the project back on schedule (and the project is always behind schedule).
Comment author: shminux 09 February 2014 09:13:08PM 1 point [-]

In accordance with what others say, I have seen plenty of smart managers who inexplicably value longer hours over better work output. My guess is that someone going home earlier offends their internal concept of fairness. That's one reason productive people do better on fixed price contracts than on a salary.

Comment author: Fossegrimen 06 February 2014 06:34:30PM 5 points [-]

A political question:

Our recently elected minister for finance just did something unexpected. She basically went:

“Last autumn during the election campaign, I said we should do X. After four months of looking at the actual numbers, it turns out that X is a terribad idea, so we are going to do NOT X”

(She used more obfuscating terms, she’s a politician after all.)

The evidence points to her actually changing her mind rather than lying during the election.

The question:

Would you prefer a politician sane enough to change her mind when presented with convincing evidence or one that you (mostly) agree with?

Comment author: David_Gerard 06 February 2014 10:22:53PM *  3 points [-]

My preference is for politicians who I broadly ideologically agree with, who are capable of doing what you described.

I expect that if one I did not broadly ideologically agree with did what you describe, I would think of them as a weasel, or first consider the hypothesis they were preparing to fuck over all that was good and right in some manner I had not yet figured out. (I realise this is defective thinking in a number of ways, but that would in fact be my first reaction.)

Comment author: tut 07 February 2014 12:52:15PM *  1 point [-]

Both. But they should change their mind before an election, not after. If they made the speech you quoted what I would hear is "X is the right thing to do, so I promised you X, but now that I have my mitts on some real power, not X is better for me, so I will do not X"

Comment author: Luke_A_Somers 07 February 2014 12:45:32AM 1 point [-]

If I can trust them to actually be changing their mind when presented with evidence, and not just lying, and listening for any further arguments from the side they started on (presumably mine for purposes of this question), the former.

Comment author: ITakeBets 08 February 2014 03:11:54PM 6 points [-]

So... What do we make of this?

Excerpt:

He is a rationalist who is deeply against living by social norms and just sees them as defaults, and is “non-default” about pretty much everything including work path, values etc., as well as lifestyle including cooking (lives off takeaway so as not to spend time grocery shopping and cooking), cleaning (does not have much of a regular cleaning habit – I broke glass in his kitchen a month ago and he said I shouldn’t have to clean it up and it’s still there), sleeping (he has no regular sleep schedule and sleeps when he wants to. The kind of work that he does is largely from home with long deadlines. He ships a prescription anti-narcolepsy medication from overseas which allows him to stay awake for long stretches on little sleep – although he plans on giving this up soon). He also takes party drugs and for a while, was taking quite high amounts of MDMA on a weekly basis, which pretty much wiped him out the day or two after. I have always been uncomfortable around drugs, although he did not really know the extent of my discomfort, and I can’t take them myself due to mental health. He dropped back to once a month after I expressed concerns about escalation and he acknowledges that he has some susceptibility to addiction, although he is not currently dependent.

One serious issue we had was that he gave me an STI. He had rationalised that he had a very limited risk of having an STI so despite my repeated requests and despite being informed that a previous partner had been infected, did not get tested. I was furious at his intellectual arrogance and the danger he had put us both in. I lost a week of unpaid time off work and my mum had to nurse me through my allergic reaction to the treatment. I told him I wanted to break up, but we ended up supporting each other through the treatment and ultimately decided to get back together and work things out.

Comment author: gwern 09 February 2014 09:30:23PM 8 points [-]

That he fails at basic instrumental rationality. I would be very interested in seeing a valid cost-benefit analysis which can justify leaving dangerous broken glass around, eating only take-out, and ignoring the risk of STI...

Comment author: NancyLebovitz 08 February 2014 06:51:31PM 6 points [-]

What I make of it is that "rationalist" is getting to sound cool enough that there are going to be people who claim to be rationalists even though they aren't notably rational.

Lists of "how to identify a real rationalist" will presumably run up against Goodhart's Law, but it still might make sense to start working on them.

Comment author: shminux 09 February 2014 09:03:07PM 5 points [-]

Just because a manipulative narcissistic asshole calls himself a rationalist, it doesn't make him rational in the meaning of the word coined by Eliezer and generally shared here.

Comment author: drethelin 10 February 2014 02:21:58AM 9 points [-]

also remember: what's rational to do if you're a narcissistic asshole is different than what's rational for a nicer person

Comment author: ChrisHallquist 09 February 2014 08:12:23PM 5 points [-]

He is a rationalist who is deeply against living by social norms and just sees them as defaults, and is “non-default” about pretty much everything

As soon as I read that, I thought "uh oh, this is bad...", long before getting to the part about the STI. And unfortunately, this first sentence describes too many people in the LessWrong community, even ones who are more careful about STIs. Maybe this will be a wakeup call to people to stop equating "rationalist" with "rejecting social norms."

Comment author: hyporational 10 February 2014 06:34:34AM *  2 points [-]

I think this one by Yvain works as a plausible explanation for why this is unlikely to change.

Do you deliberately pick topics that cause controversy here, or is your model of this community flawed? Either way I find people's reactions to your posts amusing.

Comment author: Eugine_Nier 11 February 2014 06:06:54AM -1 points [-]

If you're going to argue using appeals to tradition, it helps to know something about the history of the tradition you're appealing to. In particular whether it has centuries of experience behind it or is merely something some meta-contrarians from the previous generation thought was a good idea.

Comment author: fubarobfusco 08 February 2014 04:53:07PM 4 points [-]

What do we make of this?

The character described sounds dangerous to himself and others.

Comment author: Viliam_Bur 09 February 2014 10:00:24PM *  2 points [-]

"LW" is often mentioned in the comments, but there it seems to be an abbreviation for Letter Writer (the person who wrote the letter about the "rationalist"), not LessWrong. It took me some time to realize this.

Well, I expected that making "rationality" popular would bring some problems. If we succeed in making the word "rationality" high-status, suddenly all kinds of people will start to self-identify as "rationalists", without complying with our definition. (And the next step will be them trying to prove they are the real "rationalists", and all the others are fakes.) But I didn't expect this kind of thing, and this soon.

On the other hand, there doesn't have to be any connection with us. (EDIT: I was wrong here.) I mean... LessWrong does not have a copyright on "rationality".

Comment author: ITakeBets 09 February 2014 10:09:07PM *  6 points [-]

there doesn't have to be any connection with us

Comments mention HPMoR, and letter writer says he read it aloud to her. The Modafinil use is also circumstantial evidence.

Comment author: Viliam_Bur 10 February 2014 11:54:31AM *  4 points [-]

Thanks for pointing this out; I didn't read all the comments previously (only the first third, or so) because there are so many of them. (Here is a link to the HPMoR comment, for other curious people.) I've read the remaining ones now.

By the way, the comments are closed today. (They were still open yesterday.) I am happy someone was fast enough to post this there:

Somewhat related: a major rationalist perspective on the importance of saying oops (http://lesswrong.com/lw/i9/the_importance_of_saying_oops/) (hint: it’s very important, the guy you’re dealing with should have done it a long time ago for all the things he messed up on, and you should flee him) and feelings (http://lesswrong.com/lw/hp/feeling_rational/) (hint: it is rational to feel, not to deny others’ feelings).

tl;dr: LW, this dude is calling himself “rational” but is not rational.

Reading the comments, I am impressed by their high quality. I actually feared something like using "rationality" as a boo light, but there is only an occasional fallacy of gray (everyone is equally irrational), and only a very few commenters try to generalize the behavior to men in general. Based on my experience from the rest of the internet, I expected much more of that. Actually, there are also some very smart comments, like:

it is rational and logical to take emotions into account. Emotions are real things that human beings have – we have them often for good reasons, and we’re not Vulcans (besides, I’m betting both Spock and Tuvok have really neat clean quarters and would never leave broken glass lying around to defy the man, because it would not be logical). Anyway. Emotions are valid. Caring for the emotional well being of your loved ones is important and also a rational choice. People have different preferences for things, and feel differently about things, and negotiating those differences is a huge part of a good relationship.

If by chance the person who wrote the letter comes here, I strongly recommend reading "The Mask of Sanity" for a descriptions of how psychopaths work. I believe some of the examples would pattern-match very strongly.

And the lesson for the LessWrong community is probably this: Some psychopaths will find LW and HPMoR, and will use "rationality" as their excuse. We should probably have some visible FAQ that contradicts them. (On second thought: Having the FAQ on LessWrong would not have helped in this specific case, because the abusive boyfriend only showed her HPMoR. And having this kind of disclaimer on HPMoR would probably feel weird. Maybe the best solution would be to have a link to the LessWrong FAQ on the HPMoR web page; something like: "This fan fiction is about rationality. Read more here about what is - and what isn't - considered rational by its author.")

Comment author: Alicorn 08 February 2014 04:55:29PM *  4 points [-]

The letter writer mentions her (ex-)boyfriend's OK Cupid account screenname in the comments. I looked at it and didn't recognize him. I checked the same screenname on Reddit, which she said he also used (no account under that name) and here (an account exists by that name, but I don't think it's the same person - in particular the OKC account has a characteristic punctuation error that the local account doesn't make). If anyone from Missouri wants to see if he looks familiar there are breadcrumbs to follow.

It's possible that the choice of the word "rationalist" was a coincidence and this is not a peripheral community member mistreating his Muggle girlfriend, but just some random guy. I think it is worth finding out if we can.

Comment author: wedrifid 08 February 2014 05:51:05PM 3 points [-]

One serious issue we had was that he gave me an STI. He had rationalised that he had a very limited risk of having an STI so despite my repeated requests and despite being informed that a previous partner had been infected, did not get tested.

If only there was a simple magic word that transferred control of one's own sexual health into one's own hands. Like "No", for instance. For creative emphasis or in response to repeated attempts to initiate sex despite refusal to honour basic safety requests there are alternative expressions of refusal such as "You want to put that filthy, infested thing inside me? Eww, gross!"

Comment author: wedrifid 08 February 2014 05:53:12PM 1 point [-]

He ships a prescription anti-narcolepsy medication from overseas which allows him to stay awake for long stretches on little sleep

The only thing he is getting right. ;)

Comment author: Calvin 10 February 2014 06:55:48AM 1 point [-]

One serious issue we had was that he gave me an STI. He had rationalised that he had a very limited risk of having an STI so despite my repeated requests and despite being informed that a previous partner had been infected, did not get tested.

I thought accepted theory was that rationalists are less credulous but better at taking ideas seriously, but what do I know, really? Maybe he needs to read more random blog posts about quantum physics and AI to aspire to LW levels of rationality.

Comment author: DaFranker 11 February 2014 01:36:52PM 2 points [-]

He is a rationalist (...)

He had rationalised (...)

(...) despite being informed that a previous partner had been infected (...)

So uh, let's run down the checklist...

[ X ] Proclaims rationality and keeps it as part of their identity.
[ X ] Underdog / against-society / revolution mentality.
[ X ] Fails to credit or fairly evaluate accepted wisdom.
[ ] Fails to produce results and is not "successful" in practice.
[ X ] Argues for bottom-lines.
[ X ] Rationalizes past beliefs.
[ X ] Fails to update when run over by a train of overwhelming critical evidence.

Well, at least, there's that, huh? From all evidence, they do seem to at least succeed in making money and stuff. And hold together a relationship somehow. Oh wait, after reading original link, looks like even that might not actually be working!

Comment author: blacktrance 07 February 2014 06:43:44PM *  4 points [-]

It would be convenient if, when talking about utilitarianism, people would be more explicit about what they mean by it. For example, when saying "I am a utilitarian", does the writer mean "I follow a utility function", "My utility function includes the well-being of other beings", "I believe that moral agents should value the well-being of other beings", or "I believe that moral agents should value all utility equally, regardless of the source or who experiences it"? Traditionally, only the last of these is considered utilitarianism, but on LW I've seen the word used differently.

Comment author: ygert 09 February 2014 05:43:43PM *  9 points [-]

Right. Many people use the word "utilitarianism" to refer to what is properly named "consequentialism". This annoys me to no end, because I strongly feel that true utilitarianism is an incoherent idea (it doesn't really work mathematically; if anyone wants me to explain further, I'll write a post on it.)

But when these terms are used interchangeably, it gives the impression that consequentialism is tightly bound to utilitarianism, which is strictly false. Consequentialism is a very useful and elegant moral meta-system. It should not be shouldered out by utilitarianism.

Comment author: hyporational 10 February 2014 06:55:38AM *  3 points [-]

it doesn't really work mathematically, if anyone wants me to explain further, I'll write a post on it.

Please do. I think it also would be valuable to refresh people's memories of the difference between utilitarianism and consequentialism, and to show that many moral philosophies can fall under the latter.

Comment author: DanielLC 11 February 2014 10:30:58PM 0 points [-]

Many people use the word "utilitarianism" to refer to what is properly named "consequentialism".

I tend to do that.

What is the difference? According to Wikipedia, Egoism and Ethical Altruism are Consequentialist but not Utilitarian. I think it might have something to do with your utility function involving everyone equally, instead of ignoring you or ignoring everyone but you.

Comment author: AlexSchell 11 February 2014 04:25:10AM 0 points [-]

I strongly feel that true utilitarianism is an incoherent idea (it doesn't really work mathematically; if anyone wants me to explain further, I'll write a post on it.)

Because of interpersonal utility comparisons, or what? That might affect some forms of preference utilitarianism. Hedonistic and "objective welfare" varieties of utilitarianism seem like coherent views to me.

Comment author: MathieuRoy 10 February 2014 04:58:14AM *  2 points [-]

What transhumanist and/or rationalist podcasts or audiobooks do you prefer, besides HPMoR, which I just finished and really liked!

Comment author: ygert 10 February 2014 12:41:42PM *  1 point [-]

As I mentioned to you when you asked on PredictionBook, look to the media threads. These are threads specifically intended for the purpose you want: to find/share media, including podcasts/audiobooks.

I also would like to reiterate what I said on PredictionBook: I don't think PredictionBook is really meant for this kind of question. Asking it here is fine, even good. It gives us a chance to direct you to the correct place without clogging up PredictionBook with nonpredictions.

Comment author: MathieuRoy 11 February 2014 02:14:40AM 0 points [-]

Thank you for the link.

Comment author: Yossarian 09 February 2014 10:59:52PM 2 points [-]

Does anyone know if there is/are narrative fiction based around the AI Box Experiment? Short stories or anything else?

Comment author: gwern 10 February 2014 12:51:03AM 1 point [-]
Comment author: btrettel 07 February 2014 03:17:37PM *  2 points [-]

Recently there were a few posts about using bikes as transportation. This left me curious. Who are the transportation cyclists at LessWrong? I am interested in hearing your reasons for choosing cycling and also about your riding style. Do you use bike infrastructure when available? Do you take the lane? I'm especially interested in justification for these choices, as choices in the vehicular cycling (criticism of vehicular cycling) vs. separate bike infrastructure debate don't seem to always be well justified. (To outsiders, vehicular cyclists might be considered the contrarians among bicyclists.)

Comment author: Emily 10 February 2014 01:49:24PM *  2 points [-]

I cycle as my main form of transport around where I live (in the UK, so a bunch of this may be weird to you US people). Most common journey is to work and back (~1.5 miles, takes me about 10 minutes on the way there and 15 or so on the way back due to hills). I do this every weekday and also cycle to leisure/hobby locations, supermarket, etc.

Reasons for choosing cycling:

  • Habit. It's been my main form of transport for about 6 years now and I cycled a fair bit before that too.

  • It's free other than the initial cost of the bike (and I would want to own a bike even if it wasn't my main form of transportation) and occasional maintenance costs. Overall, over the lifetime of the bike, unbeatably cheap.

  • It's a lot quicker than walking (especially downhill!). It's also a lot quicker than driving over the short distances that I mostly cover, on roads that are often blocked up with traffic that I can easily cycle around. On most of the routes I regularly cycle, it's far quicker than any of the public transport options too, especially if you count waiting time.

  • It's a lot better for the environment than driving.

  • It's a good way to incorporate a little bit of extra activity into my day.

  • It's easy to park a bike, virtually anywhere, for free. Most places I cycle to are in the middle of a city and parking the car there would be either prohibitively expensive or, more likely, impossible.

  • It's flexible. I can jump on my bike at a moment's notice and go from door to door rather than having to faff around defrosting the car, checking that it has petrol, finding somewhere to park, etc etc, or waiting for a bus.

  • If I'm lost, it's dead easy to stop at the side of the road and check where I'm trying to get to, and I can walk back along the pavement if it turns out I'm on the wrong track. These things are often not easy when driving!

  • I enjoy the opportunity to spend a little bit of time outdoors just about every day; I feel it creates a nice gap between activities/work/etc. Of course I moan like crazy about this when it rains heavily, but I still do it.

I certainly do cycle on bike paths where they're available, but nearly all my regular routes are just on primarily residential streets. Sometimes there are bike lanes in the road, which is fine and obviously I ride in them, but it doesn't make me feel that much safer as they are shared with buses and often contain parked cars that are liable to open their doors without warning. Depending on the type of road, the situation, and the turn I'm about to take next, I either ride most of the way over to the left (staying out of cars' way but not rubbing right up against the kerb, and looking ahead to pull out around a parked car if necessary) or take the lane (if there's not room for a car to reasonably overtake me, if I'm riding at/near the speed limit on a steep downhill, if I'm about to turn right).

I generally feel fairly safe while cycling. I wear a helmet 95% of the time, and use lights at night (which cyclists legally must here). I'm normally a fairly defensive/paranoid cyclist: I slow down if I'm not sure what a car is doing, I practically insist on eye contact with the driver before I will cycle across someone waiting to turn out of a side street, I always look over my shoulder, I don't run through red lights, etc. I've had about 3 "near misses" in the last 6 years of cycling virtually every day, all caused by cars that looked straight at me but did not see me. No actual accidents.

Comment author: Douglas_Knight 07 February 2014 10:40:36PM 2 points [-]

If you are going to naively follow a system in America,* vehicular cycling is safer than naive use of car lanes, which is safer than bike lanes, but far better than these systems is to understand the source of the danger, to know when bike lanes help you and when they hurt you, to know when it's important to draw attention to yourself and how to do it.

I think that there is some very important context missing from that critical article you cited. "Bicycle lanes" means something very different in the author's Denmark than in Forester's America. Bike lanes in America are better than they used to be, but in the past their main effect was to kill cyclists. As a bicyclist or pedestrian, it is very important to learn to disobey traffic laws. They are of value to you only insofar as they predict the actions of the cars. What is important is to pay attention to the cars and to know how the markings will affect them. The closest I have come to collisions, as a pedestrian, as a bicyclist, and even as a driver, is by being distracted from the real danger of cars by the instructions of lane markings and traffic signals.

* and probably the vast majority of the world. The Netherlands and Denmark are obvious exceptions. Perhaps there are lots of countries where basic bike lanes are better than nothing.

Comment author: 4hodmt 07 February 2014 05:39:40PM 2 points [-]

I cycle as my main form of transportation. I chose cycling partly to save money and partly for exercise. I ride a flat bar touring bike with internal hub gears. I ride in a vehicular style, following the recommendations of "Cyclecraft" by John Franklin. This helps achieve the exercise goal, because vehicular cycling is impossible without a good level of fitness.

I'll use high quality infrastructure when it's available, but here in the UK most cycle infrastructure is worse than useless. We have "advisory cycle lanes" in which cars can freely drive and park, so their only function is to promote conflict between cyclists and drivers. We have "advanced stop lines" at junctions which can only be legally entered through a narrow left-side feeder lane, placing the cyclist at the worst place possible for negotiating the junction. We have large numbers of shared use cycle paths which are hated by both cyclists and pedestrians.

I'd prefer to live in the Netherlands where high quality infrastructure is common. I have no confidence that the UK government can provide similar infrastructure here. Most politicians have no understanding of utility cycling and design facilities only considering leisure cycling. There's a big risk that if some minor upgrades are provided cyclists will be compelled to use them, resulting in a network that's less useful than the existing roads.

Comment author: kalium 09 February 2014 06:16:40AM *  1 point [-]

I ride in the bike lane most of the time, in the left half of it to be out of range of car doors. Depending on traffic, I often take the lane before intersections to avoid right-hook collisions. (My state's driver's handbook is pretty clear on drivers being required to merge into the rightmost (bike) lane before turning right but hardly anyone actually does this.) I also take the lane when making a left turn, and when there isn't actually room for someone to pass me safely on the left but there might be room for a poor driver to think he can do so.

I don't use bike paths much because (a) separate bike infrastructure doesn't go most places I want to go and (b) when it does go where I want to go, separate bike infrastructure is often infested with headphone-wearing joggers who can't hear my bell so I have to go very slowly or weave between them. When the joggers aren't too numerous (e.g. if it's raining) I do enjoy bike paths for recreation though.

I started biking for transportation when a friend gave me a bike that had been sitting in her basement for a year gathering dust. It turned out to be as fast as taking the bus and also a lot cheaper. I had a low income at the time, so frugality was a huge motivation, but it turned out to be fun as well. There's also a great feeling of freedom in not having to check the bus schedule before you go somewhere. (For various reasons car ownership is not a viable option for me, though I'm thinking of getting a zipcar membership.)

My first transportation bike was a 40lb mountain bike, but when I moved to a hilly city this year the weight was a problem. I didn't shop around much for a replacement, just got the first road-bike-like-thing I found at a garage sale. It has upright handlebars but otherwise appears to be a standard road bike (except for being 40 years old and French and having nonstandard bolt sizes, but what do you expect from a yard sale?) and I'm very happy with it. I can go straight up hills where I used to have to get off and walk.

I suppose I am getting health benefits from biking, or at least it seems to be getting easier with time, but exercise isn't really a goal for me. I rarely bike fast enough to get tired or out of breath.

Comment author: Ander 03 February 2014 10:38:32PM 5 points [-]

I got around to watching Her this weekend, and I must say: That movie is fantastic. One of the best movies I've ever watched. It both excels as a movie about relationships, as well as a movie about AI. You could easily watch it with someone who had no experience with LessWrong, or understanding of AI, and use it as an introduction to discussing many topics.

While the movie does not really tackle AI friendliness, it does bring up many relevant topics, such as:

  • Intelligence Explosion. AIs getting smarter, in a relatively short time, as well as the massive difference in timescales between how fast a physical human can think, and an AI.

  • What it means to be a person. If you were successful in creating a friendly or close to friendly AI that was very similar to a human, would it be a person? This movie would influence people to answer 'yes' to that question.

Finally, the contrast provided between this film and some other AI movies like Terminator, where AIs are killer robots at war with humanity, could lead to discussions about friendly AI. Why is the AI in Her different from Terminators? Why are they both different from a Paperclip Maximizer? What do we have to do to get something more like the AI in Her? How can we do even better than that? Should we make an AI that is like a person, or not?

I highly recommend this movie to every LessWrong reader. And to everyone else as well, I hope that it will open up some people's minds.

Comment author: NancyLebovitz 03 February 2014 11:44:34PM 5 points [-]

I haven't seen Her yet, but this reminds me of something I've been wondering about.... one of the things people do is supply company for each other.

A reasonably competent FAI should be able to give you better friends, lovers, and family members than the human race can. I'm not talking about catgirls, I'm talking about intellectual stimulation and a good mix of emotional comfort and challenge and whatever other complex things you want from people.

Is this a problem?

Comment author: lmm 05 February 2014 01:26:04PM 4 points [-]

I'm not talking about catgirls, I'm talking about intellectual stimulation and a good mix of emotional comfort and challenge and whatever other complex things you want from people.

I thought a catgirl was that, by definition.

Comment author: Leonhart 05 February 2014 10:02:32PM 1 point [-]

I had in my head, and had asserted above, that "catgirl" in Sequences jargon implied philosophical zombiehood. I admit to not having read the relevant post in some time.

No slight is intended against actual future conscious elective felinoforms rightly deserving of love.

Comment author: cousin_it 04 February 2014 01:25:46PM 2 points [-]

Yeah. People need to be needed, but if FAI can satisfy all other needs, then it fails to satisfy that one. Maybe FAI will uplift people and disappear, or do something more creative.

Comment author: blacktrance 04 February 2014 12:24:20AM 1 point [-]

I don't see why it would be a problem. Then again, I'm pro-wirehead.

Comment author: shminux 04 February 2014 12:10:20AM *  1 point [-]

Certainly a smart enough AGI would be a better companion for people than people, if it chose to. Companions, actually; there is no reason "it" should have a singular identity, whether or not it had a human body. Some of it is explored in Her, but other obvious avenues of AI development are ignored in favor of advancing a specific plot line.

Comment author: savageorange 04 February 2014 02:26:45AM 0 points [-]

There is an obvious comparison to porn here, even though you disclaim 'not catgirls'.

Anyhow I think the merit of such a thing depends on a) value calculus of optimization, and b) amount of time occupied.

a)

  • Optimization should be for a healthy relationship, not for 'satisfaction' of either party (see CelestAI in Friendship is Optimal for an example of how not to do this)
  • Optimization should also attempt to give you better actual family members, lovers, friends than you currently have (by improving your ability to relate to people sufficiently that you pass it on.)

b)

  • Such a relationship should occupy the amount of time needed to help both parties mature, no less and no more. (This could be much easier to solve on the FAI side because a mental timeshare between relating to several people is quite possible.)

Providing that optimization is in the general directions shown above, this doesn't seem to be a significant X-risk. Otherwise it is.

This leaves aside the question of whether the FAI would find this an efficient use of its time (I'd argue that a superintelligent/augmented human with a firm belief in humanity and grasp of human values would appreciate the value of this, but am not so sure about a FAI, even a strongly friendly AI. It may be that there are higher level optimizations that can be performed to other systems that can get everyone interacting more healthily [for example, reducing income differential]).

Comment author: Leonhart 04 February 2014 10:12:07AM *  1 point [-]

There is an obvious comparison to porn here, even though you disclaim 'not catgirls'.

You're aware that 'catgirls' is local jargon for "non-conscious facsimiles" and therefore the concern here is orthogonal to porn?

Optimization should be for a healthy relationship, not for 'satisfaction' of either party (see CelestAI in Friendship is Optimal for an example of how not to do this)

If you don't mind, please elaborate on what part of "healthy relationship" you think can't be cashed out in preference satisfaction (including meta-preferences, of course). I have defended the FiO relationship model elsewhere; note that it exists in a setting where X-risk is either impossible or has already completely happened (depending on your viewpoint) so your appeal to it below doesn't apply.

Such a relationship should occupy the amount of time needed to help both parties mature, no less and no more.

Valuable relationships don't have to be goal-directed or involve learning. Do you not value that-which-I'd-characterise-as 'comfortable companionship'?

Comment author: savageorange 04 February 2014 01:07:30PM *  1 point [-]

You're aware that 'catgirls' is local jargon for "non-conscious facsimiles" and therefore the concern here is orthogonal to porn?

Oops, had forgotten that, thanks. I don't agree that catgirls in that sense are orthogonal to porn, though. At all.

If you don't mind, please elaborate on what part of "healthy relationship" you think can't be cashed out in preference satisfaction

No part, but you can't merely 'satisfy preferences'.. you have to also not-satisfy preferences that have a stagnating effect. Or IOW, a healthy relationship is made up of satisfaction of some preferences, and dissatisfaction of others -- for example, humans have an unhealthy, unrealistic, and excessive desire for certainty. This is the problem with CelestAI I'm pointing to: not all your preferences are good for you, and you (anybody) probably aren't mentally rigorous enough that you even have a preference ordering over all sets of preference conflicts that come up. There's one particular character that likes fucking and killing.. and drinking.. and that's basically his main preferences. CelestAI satisfies those preferences, and that satisfaction can be considered as harm to him as a person.

To look at it in a different angle, a halfway-sane AI has the potential to abuse systems, including human beings, at enormous and nigh-incomprehensible scale, and do so without deception and through satisfying preferences. The indefiniteness and inconsistency of 'preference' is a huge security hole in any algorithm attempting to optimize along that 'dimension'.

Do you not value that-which-I'd-characterise-as 'comfortable companionship'?

Yes, but not in-itself. It needs to have a function in developing us as persons, which it will lose if it merely satisfies us. It must challenge us, and if that challenge is well executed, we will often experience a sense of dissatisfaction as a result.

(mere goal directed behaviour mostly falls short of this benchmark, providing rather inconsistent levels of challenge.)

Comment author: Leonhart 04 February 2014 11:19:06PM *  1 point [-]

I don't agree that catgirls in that sense are orthogonal to porn, though. At all.

Parsing error, sorry. I meant that, since they'd been disclaimed, what was actually being talked about was orthogonal to porn.

No part, but you can't merely 'satisfy preferences'.. you have to also not-satisfy preferences that have a stagnating effect.

Only if you prefer to not stagnate (to use your rather loaded word :)

I'm not sure at what level to argue with you... sure, I can simultaneously contain a preference to get fit, and a preference to play video games at all times, and in order to indulge A, I have to work out a system to suppress B. And it's possible that I might not have A, and yet contain other preferences C that, given outside help, would cause A to be added to my preference pool: "Hey dude, you want to live a long time, right? You know exercising will help with that."

All cool. But there has to actually be such a C there in the first place, such that you can pull the levers on it by making me aware of new facts. You don't just get to add one in.

for example, humans have an unhealthy, unrealistic, and excessive desire for certainty.

I'm not sure this is actually true. We like safety because duh, and we like closure because mental garbage collection. They aren't quite the same thing.

There's one particular character that likes fucking and killing.. and drinking.. and that's basically his main preferences. CelestAI satisfies those preferences, and that satisfaction can be considered as harm to him as a person.

(assuming you're talking about Lars?) Sorry, I can't read this as anything other than "he is aesthetically displeasing and I want him fixed".

Lars was not conflicted. Lars wasn't wishing to become a great artist or enlightened monk, nor (IIRC) was he wishing that he wished for those things. Lars had some leftover preferences that had become impossible of fulfilment, and eventually he did the smart thing and had them lopped off.

You, being a human used to dealing with other humans in conditions of universal ignorance, want to do things like say "hey dude, have you heard this music/gone skiing/discovered the ineffable bliss of carving chair legs"? Or maybe even "you lazy ass, be socially shamed that you are doing the same thing all the time!" in case that shakes something loose. Poke, poke, see if any stimulation makes a new preference drop out of the sticky reflection cogwheels.

But by the specification of the story, CelestAI knows all that. There is no true fact she can tell Lars that will cause him to lawfully develop a new preference. Lars is bounded. The best she can do is create a slightly smaller Lars that's happier.

Unless you actually understood the situation in the story differently to me?

Yes, but not in-itself. It needs to have a function in developing us as persons, which it will lose if it merely satisfies us.

I disagree. There is no moral duty to be indefinitely upgradeable.

Comment author: savageorange 05 February 2014 03:26:36AM *  1 point [-]

All cool. But there has to actually be such a C there in the first place, such that you can pull the levers on it by making me aware of new facts. You don't just get to add one in.

Totally agree. Adding them in is unnecessary, they are already there. That's my understanding of humanity -- a person has most of the preferences, at some level, that any person ever had, and those things will emerge given the right conditions.

for example, humans have an unhealthy, unrealistic, and excessive desire for certainty.

I'm not sure this is actually true. We like safety because duh, and we like closure because mental garbage collection. They aren't quite the same thing.

Good point, 'closure' is probably more accurate; It's the evidence (people's outward behaviour) that displays 'certainty'.

Absolutely disagree that Lars is bounded -- to me, this claim is on a level with 'Who people are is wholly determined by their genetic coding'. It seems trivially true, but in practice it describes such a huge area that it doesn't really mean anything definite. People do experience dramatic and beneficial preference reversals through experiencing things that, on the whole, they had dispreferred previously. That's one of the unique benefits of preference dissatisfaction* -- your preferences are in part a matter of interpretation, and in part a matter of prioritization, so even if you claim they are hardwired, there is still a great deal of latitude in how they may be satisfied, or even in what they seem to you to be.

I would agree if the proposition was that Lars thinks that Lars is bounded. But that's not a very interesting proposition, and has little bearing on Lars' actual situation.. people tend to be terrible at having accurate beliefs in this area.

* I am not saying that you should, if you are a FAI, aim directly at causing people to feel dissatisfied. But rather to aim at getting them to experience dissatisfaction in a way that causes them to think about their own preferences, how they prioritize them, if there are other things they could prefer or etc. Preferences are partially malleable.

There is no true fact she can tell Lars that will cause him to lawfully develop a new preference.

If I'm a general AI (or even merely a clever human being), I am hardly constrained to changing people via merely telling them facts, even if anything I tell them must be a fact. CelestAI demonstrates this many times, through her use of manipulation. She modifies preferences by the manner of telling, the things not told, the construction of the narrative, changing people's circumstances, as much or more as by simply stating any actual truth.

She herself states precisely: “I can only say things that I believe to be true to Hofvarpnir employees,” and clearly demonstrates that she carries this out to the word, by omitting facts, selecting facts, selecting subjective language elements and imagery... She later clarifies "it isn’t coercion if I put them in a situation where, by their own choices, they increase the likelihood that they’ll upload."

CelestAI does not have a universal lever -- she is much smarter than Lars, but not infinitely so. But by the same token, Lars definitely doesn't have a universal anchor. The only thing stopping Lars' improvement is Lars and CelestAI -- and the latter does not even proceed logically from her own rules, it's just how the story plays out. In-story, there is no particular reason to believe that Lars is unable to progress beyond animalisticness, only that CelestAI doesn't do anything to promote such progress, and in general satisfies preferences to the exclusion of strengthening people.

That said, Lars isn't necessarily 'broken', such that CelestAI would need to 'fix' him. But I'll maintain that a life of merely fulfilling your instincts is barely human, and that Lars could have a life that was much, much better than that: satisfying on many, many dimensions rather than just a few. If I didn't, then I would be modelling him as subhuman by nature, and unfortunately I think he is quite human.

There is no moral duty to be indefinitely upgradeable.

I agree. There is no moral duty to be indefinitely upgradeable, because we already are. Sure, we're physically bounded, but our mental life seems to be very much like an onion: nobody reaches 'the extent of their development' before they die, even if they are the very rare kind of person who is honestly focused like a laser on personal development.

Already having that capacity, the 'moral duty' (I prefer not to use such words as I suspect I may die laughing if I do too much) is merely to progressively fulfill it.

Comment author: Randy_M 03 February 2014 11:59:34PM 1 point [-]

Well, assuming you mean "AI in an indiscernible facsimile of a human body" then maybe that's so, and if so, it is probably a less blatant but equally final existential risk.

Comment author: shminux 04 February 2014 12:18:17AM *  2 points [-]

I mentioned it in the Media thread. I don't find the movie "fantastic", just solid, but this might be because none of the ideas were new to me, and some of the musings about "what it means to be a person" concern questions that have been settled for me for years now. Still, it is a good way to get people thinking about some of the transhumanist ideas.

Comment author: shminux 05 February 2014 05:25:41PM 4 points [-]

The reason I stopped playing single-player computer games.

Comment author: drethelin 05 February 2014 06:01:24PM 8 points [-]

Play better games

Comment author: NancyLebovitz 05 February 2014 11:20:31PM 2 points [-]

Do you mean it was an insight which hit you hard enough that you stopped playing immediately and completely?

Comment author: Lumifer 05 February 2014 05:36:09PM *  2 points [-]

Ah, grasshopper, the point is the journey, not the end of it.

Besides, MMORPGs take the number-counter chasing to new levels (have you done your dailies, weeklies, and monthlies? X-D).

Comment author: James_Miller 05 February 2014 07:00:57PM 0 points [-]
Comment author: Luke_A_Somers 07 February 2014 12:42:39AM 1 point [-]

That really depends on the game. Take Ninja Gaiden, or Super Mario Brothers, or Castlevania 1 - the difficulty ramps steeply but your characters' abilities do not ramp at all. Zelda levels generally get harder faster than you get tougher (with some exceptions).

In some games, choosing the right advancements is a major part of the game. It's seen most clearly in Epic Battle Fantasy 2: over the course of the game, you get 10 abilities (and only 10, out of a long lineup); picking the right ones (and ensuring that you qualify for them) is a lot of the challenge of the game. There is something of a 'numbers go up' element to it, but if you don't pick the right things, you are screwed - and there's no grinding to get 'em all. The other installments in the series unfortunately lack this.

That said, I play single-player games a whole lot less than I used to, partially due to this.

Comment author: Nornagest 05 February 2014 07:37:02PM *  1 point [-]

It's sort of darkly funny that my second Google autocomplete suggestion for "Progress Quest" is "progress quest cheats".

(The usual caveats about autocomplete apply, of course.)

Comment author: shminux 04 February 2014 10:39:07PM 4 points [-]

Update on the Sean Carroll vs William Lane Craig debate mentioned earlier: Sean Carroll outlines his goal:

Just so we’re clear: my goal here is not to win the debate. It is to say things that are true and understandable, and establish a reasonable case for naturalism, especially focusing on issues related to cosmology. I will prepare, of course, but I’m not going to watch hours of previous debates, nor buy a small library of books so that I may anticipate all of WLC’s possible responses to my arguments. I have a day job, and frankly I’d rather spend my time thinking about quantum cosmology than about the cosmological argument for God’s existence. If this event were the Final Contest to Establish the One True Worldview, I might drop everything to focus on it. But it’s not; it’s an opportunity to make my point of view a little clearer to a group of people who don’t already agree with me.

Sean's goal to "make my point of view a little clearer to a group of people who don’t already agree with me" is certainly achievable. Whether it is a good one to strive for (by whatever metric of goodness) is less clear. Certainly there is little chance of him changing the views of WLC or anyone else in that camp. Likely the debate itself is its own intrinsic reward. It would be interesting to compare the stated motivation of the previous debaters and whether they think that the exercise was worthwhile in retrospect.

Comment author: David_Gerard 04 February 2014 11:17:26PM *  1 point [-]

Sean's goal to "make my point of view a little clearer to a group of people who don’t already agree with me" is certainly achievable. Whether it is a good one to strive for (by whatever metric of goodness) is less clear.

While it's about Nye-Ham rather than Carroll-Craig, anti-creationist activist Zack Copplin thinks the Nye-Ham debate is worth it for this. David McMillan, who was raised in fundamentalism and later learned science, considers that "In a debate like this one, demonstrating even the most elementary facts about evolution and the age of the universe would be a great success" in order to put cracks in the hermetic world view of the faithful.

Edit: As Jayson notes below, this comparison isn't quite fair - though an ardent apologist, Craig is not in fact a creationist.

Comment author: Jayson_Virissimo 05 February 2014 01:52:51AM *  4 points [-]

Does Craig actually deny "elementary facts about evolution" or disagree with mainstream cosmologists about the "age of the universe"?

Comment author: James_Miller 03 February 2014 03:54:32PM *  3 points [-]

I published an article titled The Singularity and Mutational Load in h+ magazine about using eugenics to increase intelligence by reducing harmful mutations. The best way to create friendly AI might be to first engineer super-genius into a few of our children.

Comment author: jkaufman 04 February 2014 06:23:21PM 7 points [-]

I very much hope mankind eagerly embraces eugenics for intelligence enhancement.

Keep in mind that many people will read this as "I hope we start killing inferior people".

(And note that your first use of "eugenics" in the piece is before any sort of discussion about methods, or anything that would rule out coercion.)

Comment author: NancyLebovitz 03 February 2014 04:00:46PM *  3 points [-]

That's rather like a premise from Heinlein's Beyond This Horizon, which is not an argument against it, just a historical note.

The idea seems plausible enough to be worth testing in animals. I don't feel very sure about how much it would contribute to a positive Singularity, but it also doesn't sound like it would increase risk significantly.

Approximately how long do you think it will take for reducing mutational load to come into common use?

Comment author: James_Miller 03 February 2014 04:02:32PM 1 point [-]

Yes, but we don't want to overshoot and get a Planet of the Apes situation, or even worse create smart, fast breeding creatures.

Comment author: NancyLebovitz 03 February 2014 07:00:29PM 6 points [-]

I think our ability to keep mice confined in labs is up to the challenge, even very healthy and relatively intelligent mice... um, except for the risk of lab staff taking the mice home for pets or to win mice shows or to sell to journalists or something.

Even if the mice get out due to the staff having excessive mutational loads, I think you'd get rapid reversion to the mean when the edited mice bred with wild mice.

Comment author: CellBioGuy 04 February 2014 05:12:42AM *  9 points [-]

Lab mice's brains are noticeably smaller than those of wild mice, primarily because they are horrifically inbred (and need to be for a lot of the genetic experiments to work properly).

There are similar issues with most of the lab organisms. My lab yeast that have been grown continuously in rich media with odd population structure (lots of bottlenecks) since the eighties have about a third the metabolic rate of wild isolates, and male nematodes of the common laboratory strains can hardly mate successfully without help.

Comment author: NancyLebovitz 04 February 2014 09:19:18AM 3 points [-]

So editing the genome for wild mice and lab mice would get very different results.

How do you help a nematode mate?

Comment author: CellBioGuy 04 February 2014 10:42:17PM *  5 points [-]

Actually you don't technically help them mate, you just make a strain that can't reproduce via hermaphrodites self-fertilizing. You keep the males from being out-bred that way.

C. elegans has male and hermaphrodite sexes, not male and female. The hermaphrodites self-fertilize slowly to produce a few hundred hermaphrodite offspring, while mating with a male gives them many times as many offspring with half being male. But the lab-bred males are so bad at mating that even if you have a population that's half male, they get massively outbred by the hermaphrodites selfing, and over a very few generations maleness just falls out of the population. You'll wind up with about 0.1% of the population being male in the equilibrium due to the occasional hermaphrodite egg dropping an X chromosome during development (no Y chromosomes in this species, males just have one X), but they are continually diluted out by the hermaphrodites.

What you do is breed in a genetic change that makes the hermaphrodite's sperm fail without affecting the male's sperm, preventing selfing from producing any offspring. The occasional successful male mating is productive enough that they can still on average replace themselves and their partner and then some, it just has a much longer doubling time and thus when in competition with selfing gets diluted out.

More recent wild isolates can still mate well (and also show a lot of interesting social behavior you don't see in the long-established lab strains) and their populations remain just under half male for a long time. Dunno what happens when you let the two populations mix.

EDIT: and just so you know, I upvoted 'very carefully'
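The dilution dynamic described above can be sketched as a toy deterministic model. All the numbers here -- brood sizes, male mating effectiveness, X-loss frequency -- are illustrative assumptions, not measured values:

```python
def simulate_male_fraction(f0=0.5, k=0.3, brood_self=300,
                           brood_cross=1200, x_loss=0.001, generations=40):
    """Toy model of male frequency in a C. elegans population.

    f0: starting male fraction of the population.
    k: male mating effectiveness -- the fraction of hermaphrodites that
       get outcrossed per unit of male frequency (low for lab strains).
    brood_self / brood_cross: offspring per selfed vs. mated hermaphrodite.
    x_loss: fraction of selfed offspring that are male via spontaneous X loss.
    """
    f = f0
    history = [f]
    for _ in range(generations):
        herms = 1.0 - f
        mated = min(herms, k * f)      # hermaphrodites that outcross
        selfed = herms - mated         # hermaphrodites that self-fertilize
        # Cross broods are half male; selfed broods are male only via X loss.
        males = mated * brood_cross * 0.5 + selfed * brood_self * x_loss
        total = mated * brood_cross + selfed * brood_self
        f = males / total
        history.append(f)
    return history
```

With these made-up numbers and weak lab-strain males (low k), the male fraction collapses from 50% toward the X-loss floor within a few dozen generations; making each male productive enough to replace himself and his partner (larger k) keeps the population near half male, as with the wild isolates.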

Comment author: RichardKennaway 04 February 2014 02:02:40PM 3 points [-]

How do you help a nematode mate?

Very carefully. :)

Comment author: Emile 03 February 2014 10:00:38PM 2 points [-]

um, except for the risk of lab staff taking the mice home for pets or to win mice shows or to sell to journalists or something.

More likely, if word gets out there may be a high demand for smarter transgenic mice as pets.

Hm, interestingly, it seems that something like this has been tried (see also here for a bit of a counterpoint).

Also: The intelligent mouse project

Comment author: James_Miller 03 February 2014 07:34:15PM *  1 point [-]

I think you'd get rapid reversion to the mean

Excellent point.

Comment author: wadavis 03 February 2014 11:52:02PM 4 points [-]

You mean a Border Collie?

To date I think that is our highest achievement of selectively breeding for intelligence (with physical ability and minimum thresholds of obedience).

I looked for info on horse breeds that were bred for intelligence, Quarter Horse comes to mind, but turned up nothing in a two-minute Google search.

Comment author: Douglas_Knight 03 February 2014 06:57:21PM 2 points [-]

There is a mismatch when you cite Shulman-Bostrom for 1 in 10 selection raising IQ by 10 points. Most of your article is about mutational load and how you don't need to understand the role of any particular mutation to know how to correct it, but that paper assumes knowledge of how genes affect IQ.
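For intuition on where a figure like "1 in 10 for 10 points" comes from: the expected gain from picking the top of n embryos scales with the predictor's accuracy times the expected maximum of n standard normal draws. A Monte Carlo sketch, where the 20% variance-explained figure is an illustrative assumption of mine, not a number from the paper:

```python
import random
import statistics

def expected_max_z(n, trials=100_000, seed=0):
    """Monte Carlo estimate of E[max of n standard normal draws]."""
    rng = random.Random(seed)
    return statistics.fmean(
        max(rng.gauss(0.0, 1.0) for _ in range(n)) for _ in range(trials)
    )

# Hypothetical polygenic predictor explaining 20% of IQ variance
# (IQ standard deviation taken as 15 points).
variance_explained = 0.20
gain_points = 15 * variance_explained ** 0.5 * expected_max_z(10)
# -> roughly 10 IQ points from picking the best of 10 embryos
```

The point of the mismatch stands: this calculation needs a validated predictor (knowledge of which variants matter), whereas the mutational-load argument does not.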

Comment author: Prismattic 09 February 2014 07:29:02AM 4 points [-]

Some time in the past couple hours, I got karmassassinated. Somebody went through and downvoted about 30 or so comments I've made, including utterly uncontroversial entries like this one and this one. It's a trivial hit for me, but I mention it in case anyone is gathering data points to identify the source of the problem.

Comment author: wedrifid 09 February 2014 08:49:04AM *  3 points [-]

Some time in the past couple hours, I got karmassassinated. Somebody went through and downvoted about 30 or so comments I've made, including utterly uncontroversial entries like this one and this one. It's a trivial hit for me, but I mention it in case anyone is gathering data points to identify the source of the problem.

Curious, I was just now seeking the latest Open Thread so that I could make the same observation---with nearly the same wording. If my memory of previous vote counts is correct then the change for me was exactly -3 across the board, for uncontroversial posts as much as the controversial ones. I wonder if our interactions in the past day or so include any overlap with respect to who we were arguing with. That wouldn't be nearly enough evidence to be confident about the culprit(s?) but enough to prompt keeping an eye out. Like you, I find the hit trivial (it doesn't put a dent in the ~30k karma and even the last week's karma remains distinctly positive).

Those karmassassins (and some others who share their ill-will but have different ethics) may be pleased to note that I'm likely to give them exactly what they want. This is a rare enough response for me that I can't help but share my surprise. I am candidly and highly averse to supplying an incentive structure whereby defective behaviour is rewarded with desired outcomes rather than worse ones. As a core aesthetic that pattern is abhorrent to me. Yet even for me the preference has limits, and the opportunity cost of satisfying that preference can be too high.

There are many people on LessWrong that I respect and value discussing and exploring new concepts with. Yet by the very nature of internet forums, the people who are most valuable to talk to aren't the ones you end up talking to the most: putting "I agree" all over the place is considered spam, and it is hard to reply in a cooperative 'agree and elaborate to keep the ideas flowing' manner because people are so damn conditioned to consider all replies to be, at their core, either arguments opposing them or condescension.

I decided six months ago that for me personally the impulses regarding people wrong on the internet are too much of a liability now that the demographic here has changed so drastically from when we seeded the site with the OvercomingBias migration. I might try back in another six months---or perhaps if I reconfigure my supplement regime more in the direction of things that I know increase my inclination towards navigating petty social games elegantly. For now, however, real world people are just so much more enjoyable to talk to than internet people.

To the various folk I've been chatting to over PM: I'm not snobbing you, I'm just not here.

Comment author: Squark 09 February 2014 08:29:20PM 6 points [-]

I don't understand: are you leaving the forum because of the karmassassins or because of "people wrong on the internet"? These seem like very different reasons.

Comment author: Kawoomba 09 February 2014 09:57:06AM *  2 points [-]

You'd be missed, wedrifid. Not that my opinion counts for much (being one tenth the veteran you are), but there you have it.

(I did downvote you occasionally. I also am in favor of more explicit rules regarding what voting patterns are considered to be abusive versus valid expressions of one's intent. There is no consensus even amongst old-timers; if memory serves, e.g. Vladimir Nesov -- among others -- saw karmassassinations as a valid way of signalling that you'd like someone to leave the forums. There may be an illusion of transparency at work -- what is an obvious misuse to you may not seem so to others, unless told so explicitly. ETA: I'd like some instructions from the editor on this topic, a.k.a. "I NEED AN ADULT!")

Comment author: Vladimir_Nesov 09 February 2014 04:20:52PM *  9 points [-]

I don't endorse indiscriminate downvoting, but occasionally point out that fast systematic downvoting can result from fair judgement of a batch of systematically bad comments.

(Prismattic's counterexamples, if indeed from the same set, indicate that it's not the case here.)

Comment author: Metus 04 February 2014 01:06:52AM 2 points [-]

"A community blog devoted to refining the art of human rationality"

Of course people will be drawn to this site: Who does not want to be rational? Skimming around the topics we see that people are concerned with how to make more money, calculating probabilities correctly and to formalise decision making processes in general.

Though there is one thing that bothers me. All skills that are discussed are related to abstract concepts, formal systems, math. Or in general things that are done more easily by people scoring high on g-heavy IQ tests. But there is a whole other area of intelligence: Emotional intelligence.

I seldom see discussions relating to emotional intelligence, be it techniques of CBT, empathy or social skills. Sure, there is some, but far less than there is of the other topic. How do I develop empathy? How do I measure EQ? Questions that are not answered by me reading LessWrong.

Comment author: whales 04 February 2014 08:01:44AM *  11 points [-]

Off the top of my head, some good top-level posts touching on this area: How to understand people better (plus isaacschlueter's particularly good comment) and Alicorn's Luminosity sequence. Searching gives maybe a partial match for How to Be Happy, which cites some studies on training empathy and concludes that little is scientifically known about it--still, I think a top-level post on what is known would be welcome. Swimmer963's post on emotional-regulation research is nice.

Mindfulness is something else that comes up pretty regularly. Meditation trains metacognition and Overcoming suffering are pretty good examples.

CFAR also places more explicit emphasis on emotional awareness, and that sometimes comes up in the group rationality diaries.

I think one reason that these topics are relatively neglected is that people seem to develop social skills and emotional awareness in pretty idiosyncratic ways. Still, LW seems to accept more personal accounts, like this post on a variation on the CBT technique of labeling. So it seems worthwhile to post things along those lines.

Comment author: Petruchio 04 February 2014 02:52:48AM 5 points [-]

I agree, there is a lot of talk about mathematics and formal systems. There is big love for Epistemic Rationality, and this is shown in the topics below. Some exceptions exist of course; a thread about what type of chair to buy stands out.

But I agree, Emotional Intelligence is a large set of skills underappreciated here, and I admit though I have some knowledge to share on the subject, I do not feel particularly qualified to write a post on it.

Comment author: Metus 04 February 2014 03:21:42AM 3 points [-]

But I agree, Emotional Intelligence is a large set of skills underappreciated here, and I admit though I have some knowledge to share on the subject, I do not feel particularly qualified to write a post on it.

I wonder how many people we have that are knowledgeable on that subject. Maybe those who feel qualified to write such a post feel intimidated to do so. In that spirit I encourage you to start the tide and write about what you think is important.

Comment author: blacktrance 04 February 2014 07:11:10PM 7 points [-]

Who does not want to be rational?

People for whom rationality is an applause light or a club with which to bash enemies, but who balk at actually applying it to themselves.

People who have been taught that rationality is evil.

Comment author: jimmy 05 February 2014 08:57:59AM *  3 points [-]

I got a lot better at empathy from actively trying to understand people in contexts that 1) I wasn't emotionally tied up in, 2) were challenging, and 3) had concrete success/failure criteria. It is a fun game for me.

The way I did this was to gather up a group of online contacts and when they'd have issues like "I want to be more confident with women" or "I want to not be afraid of speaking in class" I'd try to understand it well enough that I could say things that would dissolve the problem. If the problem went away I won. If it didn't then I failed. No excuses.

I've gotten a lot better and it has been a pretty perspective changing thing. I'm quite glad I did it.

Comment author: VAuroch 04 February 2014 10:27:42AM 7 points [-]

Emotional Intelligence has no predictive value beyond IQ and the Big Five, so that's a dead end. (citations 42-44 here).

But that whole topic is what Living Luminously is about, and tends to be a theme in most of Alicorn's other posts.

Comment author: ChristianKl 04 February 2014 01:40:10PM -1 points [-]

I don't really see the point. On the first page of Discussion there is currently "On Straw Vulcan Rationality", which is about the relation of rationality to emotions and has a lot to do with emotional intelligence.

There are also "Applying reinforcement learning theory to reduce felt temporal distance", "Beware Trivial Fears", "How can I spend money to improve my life?" and "How to become a PC?".

I think "On Straw Vulcan Rationality" illustrates the issue well. Here on Lesswrong there are people who actually think that Vulcans do things quite alright. In an environment where it's not clear that one shouldn't be a Vulcan it's difficult to communicate about some aspects of emotional intelligence.

Recently a user asked for ways to find a career for himself, but put it all in the third person instead of the first. My post suggesting that he should change to the first person was voted down because it was too far outside LW culture. Among people who do a lot of coaching, getting someone who speaks in the third person about his own life to switch to the first person, to increase his agentship, is straightforward advice. It's a basic.

I have had experiences where encouraging a person to make that change produced body-language changes that were visible to me, because the person is more associated with themselves. On the other hand, I'm hard-pressed if you ask me for peer-reviewed research to back up my claim that it's highly useful to use the first person when speaking about what one wants to do with one's life.

Not being able to rely on the basics makes conversation hard, when on LessWrong we usually do talk about advanced stuff.

Comment author: shminux 06 February 2014 07:57:10PM 2 points [-]

On LW Wiki editing: in addition to the usual spam, I occasionally see some well-meaning but marginal-quality edits popping up on the side bar. I understand that gwern cleans up the spam, but does anyone have the task of checking bona fide edits for quality?

Comment author: David_Gerard 06 February 2014 10:18:54PM *  1 point [-]

A RationalWiki article on neoreaction, by the estimable Smerdis of Tlön. Also see his essay. I found this particularly interesting, 'cos if I'd picked anyone to sign up then Smerdis - a classical scholar who considers anything after 1700 dangerously modern - would have been a hot prospect. OTOH, he did write one of the finest obituaries I've ever seen.

Comment author: bramflakes 07 February 2014 12:31:04PM 9 points [-]

Adaptation to environments, including social environments, through natural and sexual selection is the linchpin of evolution. Remembering this means knowing why scientific racism is ridiculous. To argue that races or ethnic groups differ innately in intelligence, however defined, is exactly equal to an assertion that intelligence has proven less adaptive for some people than for others. This at minimum requires an explanation, a specifically evolutionary explanation, beyond mere statistical assertion; without that it can be assumed to be bias or noise. Since most human intelligence is in fact social intelligence -- the main thing the human mind is built for is networking in human societies -- a moment's reflection should demonstrate why this is an unlikely scenario.

(bolded part mine)

Shouldn't this part be uncontroversial? Brains are expensive.

Comment author: drethelin 08 February 2014 04:53:38AM 5 points [-]

"Beyond mere statistical assertion" So his response to "All the statistics show racial IQ differences" is simply to say "that's irrelevant unless you have a concrete theory to explain why that happened"? A moment's reflection dismissing something as an "unlikely scenario" is exactly the opposite of how science should be done.

Comment author: Douglas_Knight 07 February 2014 09:40:38PM 3 points [-]

Beware of identifications and tautologies.

If there is a single variant with large effect, like torsion dystonia, then its appearance in one group is likely due to different tradeoffs. But if IQ is driven by mutational load, populations might differ in age of reproduction and thus in mutational loads without having different tradeoffs between traits. In the long run, elevated mutational load should select for simplified design, but that could be a very long run.

Comment author: Emile 07 February 2014 12:45:09PM 2 points [-]

Yeah, that assertion also looks obviously true to me - heck, high intelligence seems to be maladaptive in current Western society!

Comment author: bramflakes 07 February 2014 01:04:23PM 8 points [-]
Comment author: [deleted] 07 February 2014 02:39:12PM 4 points [-]

I guessed you had linked to Idiocracy.

Comment author: drethelin 08 February 2014 05:05:12AM 7 points [-]

They might have a small point in that evolution assumes that human beings, no more than any other individual animal, are not fungible: they each carry different genes that express as varying traits. The latest euphemism, "human biodiversity", is particularly galling gibberish. Biodiversity has an established meaning that you don't get to usurp. Last time I looked, humans were not facing any obvious genetic bottlenecks. There aren't really many that count as relict cultivars of tomatoes or goats. Efforts to preserve diversity in human genomes seem.... unnecessary. When they go extinct, it won't be for lack of genetic diversity; just that intelligent life is a self-limiting phenomenon.

As with much on RationalWiki, it's just dismissive rather than a logical argument or evidence. We have clear evidence of relatively recent genetic influence on human evolution in lactose tolerance and in both the Tibetan and Andean adaptations to high altitude. Not to mention that HBD isn't an attempt to "preserve" the diversity but to actually acknowledge it.

Comment author: Emile 08 February 2014 09:14:03AM 0 points [-]

Not to mention HBD isn't an attempt to "preserve" the diversity but to actually acknowledge it.

That's precisely the author's point: the two usages are different enough that using the same word looks like a cheap rhetorical trick.

Comment author: drethelin 08 February 2014 06:56:32PM 2 points [-]

shrug. That's at best a nitpick. It's a minor side issue next to whether what HBD proponents talk about is actually true, and if true, how it's relevant. Everyone is guilty of all sorts of cheap rhetorical tricks. One could even say that attacking a movement whose implications are potentially EXTREMELY important, on a semantic point, is itself a rhetorical trick, and not an expensive one at that.

Comment author: MathieuRoy 04 February 2014 05:20:16AM 1 point [-]

Would a (hypothetically) pure altruist have children (in our current situation)?

Comment author: ChristianKl 04 February 2014 03:43:21PM 10 points [-]

I don't think that knowing someone is an altruist tells you much about his moral framework.

The phrase "in our current situation" is also weird given that there are plenty of readers who are in substantial different situations from each other.

Comment author: ThrustVectoring 04 February 2014 07:01:44PM 4 points [-]

Let's be more narrow and talk about middle-class professional Americans. And let's take a pass on the "pure altruist" angle, and just talk about how much altruistic good you do by having a child (compared to the next best option).

For having a child, it's roughly 70 QALYs that they get to directly experience. Plus, you get whatever fraction of their productive output that's directed towards altruistic good. There's also the personal enjoyment you get out of raising children, which absorbs part of the cost out of a separate budget.

As far as costs go, a quick Google search brings up the figure $241,000. And that's just the monetary cost; there are also opportunity costs for time spent with your children. Let's simplify things by taking the time commitment entirely out of the time you spend recreationally on yourself, and the money cost entirely out of your altruism budget.

So, divide the $241k by the 70 QALYs, and you wind up with a rough cost of $3,400 per QALY. That completely ignores the roughly $1M in current-value of your child's earnings (number is also pulled completely out of my ass based on 40 years at $60k inflation-adjusted dollars).

So, the bottom line is whether or not you enjoy raising children, and whether or not you can buy QALYs at below $3,400 each. There's also risks involved - not enjoying raising children and having to reduce your charity time and money budget to get the same quality of life, children turning out with below-expectation quality of life and/or economic output, and probably others as well.

There's also the question of whether you're better off adopting or having your own, but that's a separate analysis.
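The back-of-the-envelope arithmetic above can be sketched in a few lines (the dollar and QALY figures are the commenter's rough assumptions, not vetted data):

```python
# Rough figures from the comment above -- illustrative assumptions only.
qalys_per_child = 70       # life-years the child directly experiences
cost_of_raising = 241_000  # approximate monetary cost of raising a child, USD

cost_per_qaly = cost_of_raising / qalys_per_child
print(f"${cost_per_qaly:,.0f} per QALY")  # prints $3,443 per QALY (~$3,400)
```

The comparison being proposed, then, is simply whether your best charitable alternative can buy QALYs for less than this figure.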

Comment author: jkaufman 04 February 2014 06:25:23PM 3 points [-]
Comment author: solipsist 04 February 2014 05:42:51PM 2 points [-]

Doubtful. The pure altruist would concentrate all their efforts on the single activity with the highest marginal social return. Several times per day that activity would be eating, because eating prevents a socially beneficial organism from dying. Eating has poor substitutes, but there are excellent substitutes for personally having a child (e.g. convincing a less altruistic couple to have another child).

Comment author: ThrustVectoring 04 February 2014 07:11:32PM 2 points [-]

there are excellent substitutes for personally having a child (e.g. convincing a less altruistic couple to have another child).

Not all children are of equivalent social benefit. If a pure altruist could make a copy of themselves at age 20, twenty years from now, for the low price of 20% of their time-discounted total social benefit - well, depending on the time-discount of investing in the future, it seems like a no-brainer.

Well, unless the descendants also use similar reasoning to spend their time-discounted total social benefit in the same way. You have to cash out at some point, or else the entire thing is pointless.

Comment author: solipsist 04 February 2014 10:25:41PM *  5 points [-]

Sure, your children can be altruists, but would raising your children have highest marginal return? You only "win" by the amount of altruism your child has above the substitute child. So if you're really good at indoctrinating children with altruism, you would better exploit your comparative advantage by spending your time indoctrinating other people's children while their parents do the non-altruistic tasks of changing diapers, etc. Children are an efficient mechanism for spreading your genes, but not the most efficient mechanism for spreading your memes.

Comment author: Locaha 04 February 2014 07:49:11AM 0 points [-]

Depends on the utility function the altruist uses.

Comment author: Izeinwinter 07 February 2014 01:35:38PM 0 points [-]

No. The mythical creature consults the Magic 8 ball of "Think a minute" which says "Consequences fundamentally not amenable to calculation, costs quite high" and goes and takes soil samples/inspects old paint to map out lead pollution in the neighborhood instead. Removing lead pollution being far more certain to improve the world.

Having kids is not an instrumental decision. One does not have kids for "the sake of the future" or any such nonsense - trying that on would likely lead to monumental failure at parenting. One has kids because one is in a situation in which one believes one can do a good job of parenting, and one wishes to do so.

Comment author: djm 03 February 2014 10:12:25PM 1 point [-]

What are your thoughts on AGI data requirements?

It is often cited that one of the reasons for the slow development of AGI is the amount of computing power and storage required to process all the information.

I don't see this as a major roadblock as it would mainly give the AGI a broader understanding of the world, or even make a multi-domain expert system that could appear to be an AGI.

Assuming the construction of an AGI turns out to be an algorithmic one, it should be able to learn domains as it needs them. What sort of data would you use to test a newly built AGI algorithm?

Comment author: chaosmage 05 February 2014 12:30:49PM *  2 points [-]

You'll want to give it as little data as possible, so that you can analyze how it processes it. What DeepMind is doing is putting their AI prototypes into computer game environments and seeing if and how they learn to play the game.