Open thread, 30 June 2014 - 6 July 2014

4 Post author: DanielDeRossi 30 June 2014 10:58AM

Previous thread

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments (246)

Comment author: RichardKennaway 30 June 2014 01:57:52PM 19 points [-]

I happened to see this paper, which may be of interest to those experimenting with Soylent. The title is "Long-term feeding on powdered food causes hyperglycemia and signs of systemic illness in mice".

They fed different batches of mice the same food, except that one was in the usual pellet form and one was powdered and needed no chewing. They also tested both short- and long-term feeding on powdered food. Their conclusion:

The hyperglycemia associated with long-term powdered-food feeding may lead to certain systemic illness signs, such as elevations of blood glucose, hypertension, and abnormal behaviors in mice. Mastication of food of adequate hardness may be very important for the maintenance of systemic (physical and mental) health, possibly via reduction in the levels of blood glucose and/or adrenal stress hormones (catecholamines and glucocorticoids).

Comment author: gwern 30 June 2014 03:49:42PM 10 points [-]

Yvain also found a curious link a while ago http://slatestarcodex.com/2014/02/10/links-for-february-2014/ :

One of my interests is weird ways the face interacts with the brain, so I enjoyed this study: "Masticatory deficiency as a risk factor for cognitive dysfunction". People (and lab rats) without their teeth or with otherwise impaired chewing ability become demented much more quickly than controls, apparently because the mechanics of chewing help stimulate or oxygenate certain parts of the brain. No word yet as to whether you can become a super-genius by chewing everything all the time.

The abstract of the paper:

Several studies have demonstrated that chewing helps to maintain cognitive functions in brain regions including the hippocampus, a central nervous system (CNS) region vital for memory and learning. Epidemiological studies suggest that masticatory deficiency is associated with development of dementia, which is related to spatial memory deficits especially in older animals. The purpose of this paper is to review recent work on the effects of masticatory impairment on cognitive functions both in experimental animals and humans. We show that several mechanisms may be involved in the cognitive deficits associated with masticatory deficiency. The epidemiological data suggest a positive correlation between masticatory deficit and Alzheimer's disease. It may be concluded that chewing has important implications for the mechanisms underlying certain cognitive abilities.

Comment author: roystgnr 01 July 2014 07:03:04PM 4 points [-]

When I started tooth-grinding in my sleep in grad school, I assumed it was a stress reaction. But apparently my body was merely rationally trading enamel for a critical IQ boost?!

PSA: if your jaw becomes chronically sore, don't hesitate to get it checked out. I'm kidding about the IQ boost, but not about the lost enamel.

Comment author: John_Maxwell_IV 01 July 2014 05:43:29AM 7 points [-]

MealSquares are made of solid food... we're currently running a semi-formal beta test. Sign up for our mailing list to get notified when we launch :)

Comment author: Viliam_Bur 02 July 2014 08:13:59AM 1 point [-]

Interesting, and it's good to have alternatives.

However, I am not sure how exactly to put together this information from the FAQ page:

if you eat 5 MealSquares (2000 calories) you will get 100% of your daily recommended value of all essential vitamins and minerals

and from the Nutrition page, where "% Daily Values Per Serving" differ from 20% -- they range from 15% to 160%.

Comment author: RomeoStevens 02 July 2014 05:44:56PM 1 point [-]

The RDA for carbs is crazy. As for the ones that go above 100%, they're all very far below the upper intake limits.

Comment author: Tenoke 30 June 2014 05:38:31PM 2 points [-]
Comment author: gwern 30 June 2014 06:51:07PM *  13 points [-]

The most relevant part is probably another study mqrius mentions, "The effect of the loss of molar teeth on spatial memory and acetylcholine release from the parietal cortex in aged rats", Kato et al 1997 (available through Libgen):

After the molar teeth of rats were extracted, the rats were fed with powdered food for 135 weeks. Although the performance in the radial arm maze was progressively acquired by daily training, an increase in the number of errors and a decrease in the initial correct responses were observed in the teethless aged rats compared to the control aged rats, indicating impaired acquisition of spatial memory in the teethless aged rats...the extracellular ACh level of the teethless aged rats under high-concentration of K+ and atropine sulfate stimulation was significantly low compared to that of the control aged rats. These results suggest that the impairment of spatial memory in the teethless aged rats may be due to the functional deterioration of the cholinergic neuronal system induced by tooth loss

It's not a long paper. Skimming, the major problems I see:

  1. the usual problems with animal studies: tiny sample size (9 in the control and 10 in the experimental, apparently), unclear randomization, no mentioned blinding of experimenters or raters
  2. they didn't show removing teeth caused lower performance; they showed removing teeth and feeding on a liquid diet caused lower performance. (On the plus side, they say they anesthetized both groups, so that removes a serious confound.)

    The experimental group had its teeth removed & also was fed liquid, while the control group kept its teeth & also ate normal pellets. Hence, the decreased performance could've been caused (ignoring the issues of bias and sampling error) by either the removal of teeth, the liquid food, or some interaction thereof (perhaps liquid food aggravating tooth infection caused by the surgery?). They do say

    Kawamura [6: "The effect of food consistency on conditioned avoidance response in mice and rats"] has reported the relationship between mastication and learning and memory in young rats. He has also reported that rats fed with a powdered diet had poor results of learning and memory compared to those fed with a solid diet.

    but I haven't looked at it and in any case, given how much varies from lab to lab, this is a basic issue which needs to be verified in your own sample, rather than just hoping it's universal. Also, if Kawamura finds that liquid food on its own damages learning & memory compared to a solid diet, how are you showing anything new by looking at liquid+surgery & finding damage...?

  3. Their data is purely a post-comparison. They say they did the surgery, and then apparently left the rats alone for 135 weeks before doing the radial arm maze test.

    So there's no way to know what the decline looked like or when it happened. It's perfectly possible that the toothless rats suffered a single sudden shock to their system from the surgery and that permanently degraded their memory, or that they had ongoing chronic inflammation or infection.

    Worse, the difference may have been there from the start; they never checked. Randomization with such a small n can easily fail to balance groups; that's one reason for pre-tests: to verify that a difference in the groups on the post-test wasn't there from the start but can be attributed to the experimental condition.

  4. I'm not sure this can be described as a true 'randomized experiment'. They never actually say that the selection of rats was random or how the animals were picked for their group, and there's a weird pattern in the writing where they only ever write about the toothless rats being subjected to procedures even though logically you'd say stuff like 'all the rats were tested on X'; eg:

    After the molar teeth of rats were extracted, the rats were fed with powdered food for 135 weeks...Animals (11 weeks old) were anesthetized with sodium pentobarbital (40 mg/kg i.p.) and all maxillary and mandibular molars were extracted. Animals given anesthesia alone, without undergoing extraction of the molar teeth, were used as control aged rats...One hundred and thirty-five weeks after the surgery, the ability of learning and memory in the aged rats without molar teeth (hereafter referred to as 'teethless') was examined by using the radial arm maze [9], and compared to the control aged rats...Nine weeks after the learning and memory study, the ability of releasing ACh in the parietal cortex of teethless aged rats was examined by using in vivo microdialysis methods [5]...In order to examine the functional changes in cholinergic neuronal system of the teethless aged rats, animals were stimulated by high concentration of K+ at 100 mM or atropine sulfate at 3 μM for 15 min when the level of extracellular ACh stabilized.

    Plus, Figure 1 reports 9/10 rats, but by Figure 2, we're down to 5/5 rats. Huh? This makes me wonder if they're reusing control rats from a previous experiment, or reusing their data, and only actually ran the experimental rats themselves. (The use of "historical controls" is apparently not uncommon in animal research.)

    This would massively compromise their results because rats change over time, litters of rats will correlate in traits like memory, and these effects are all large enough to produce many bogus results if you were to, say, take 10 rats from 1 litter as your control group and 9 rats from another litter as your experimental group. Just like with humans, one family of rats may have a very different average from another family. (See the very cool paper “Design, power, and interpretation of studies in the standard murine model of ALS”, Scott et al 2008, which helpfully notes on pg5 that when you have a mouse study with 10/10 mice similar to this study and the null is true, "an apparent effect [of >5% difference in survival] would be seen in 58% of studies". Which really makes you think about a small difference in # of errors in maze performance.) A quick simulation of how litter-level variation alone can produce such spurious group differences is sketched at the end of this comment.

  5. Their reward may have been a bit screwy in the memory task:

    The apparatus was placed 40 cm above the floor. At the end of each arm there was a food cup that held a single 50-mg food pellet. Prior to the maze task, animals were kept on a restricted diet and the body weight was reduced to 80-85% of their normal weight over a 1-week period; water was freely available. Before the actual training began, the animals were allowed to explore the apparatus, for 10 min a day, for 2 days. For the following 16 trials, each animal was placed individually in the center of the maze and allowed to consume the bait in the food cup.

    If this description is literally accurate, there's a problem. They don't mention the setup differing between groups! So this "food pellet" is the reward which gives the rats motivation to solve the maze... but you've removed the teeth from half the rats and can only feed them liquid. And you're surprised the toothless rats perform worse? I'm reminded of the reward confounds in much animal intelligence research.

  6. the authors mention excluding the other maze performance variable:

    The teethless aged rats showed impairment performance during the acquisition of the radial arm maze task, as revealed by the increased number of errors (Fig. 1) and the decreased number of initial correct responses (data not shown).

    One wonders if the # of initially correct responses would have reached p<0.05. Good old researcher degrees of freedom...

So overall, I would have to say this result seems to be extremely weak.
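
To make the litter/randomization worry in point 4 concrete, here is a quick Monte Carlo sketch (my own illustration, not taken from Kato et al or Scott et al; the litter-level and rat-level SDs are assumed purely for the sake of example): draw each group from its own litter, with no true treatment effect at all, and count how often a naive two-group comparison looks "significant".

    # Sketch: two groups of 10 and 9 rats, each group from a different litter,
    # no true treatment effect. How often does a naive comparison look "significant"?
    def randn
      # standard normal deviate via the Box-Muller transform
      Math.sqrt(-2.0 * Math.log(1.0 - rand)) * Math.cos(2.0 * Math::PI * rand)
    end

    def litter_group(n, litter_sd, rat_sd)
      litter_effect = randn * litter_sd          # shared by every rat in the litter
      Array.new(n) { litter_effect + randn * rat_sd }
    end

    def welch_t(a, b)
      ma = a.sum / a.size
      mb = b.sum / b.size
      va = a.sum { |x| (x - ma)**2 } / (a.size - 1)
      vb = b.sum { |x| (x - mb)**2 } / (b.size - 1)
      (ma - mb) / Math.sqrt(va / a.size + vb / b.size)
    end

    trials = 10_000
    spurious = (1..trials).count do
      control      = litter_group(10, 0.5, 1.0)  # assumed: litter SD half the rat-level SD
      experimental = litter_group(9,  0.5, 1.0)  # null is true: no treatment effect
      welch_t(control, experimental).abs > 2.1   # rough cutoff for p < 0.05 at these sizes
    end
    puts format("spurious group differences in %.1f%% of simulated studies", 100.0 * spurious / trials)

The larger the assumed litter variation relative to the within-litter variation, the further the spurious-effect rate climbs above the nominal 5%.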

Comment author: Gunnar_Zarncke 01 July 2014 07:55:27AM 0 points [-]

Missing masticatory stress is also discussed here:

https://groups.google.com/forum/#!topic/less-wrong-parents/EF3CE9JPQQU (actually an LW parents post)

The cited article is this:

http://www.pnas.org/content/108/49/19546.short

Comment author: TylerJay 01 July 2014 05:54:25PM 14 points [-]

Some people treat LessWrong as just a philosophical exercise, but "Rationality" and its little brother "Critical Thinking" really can make you a rockstar in the corporate world if you so choose. I'm going to give a bit of background on some things that I've managed to accomplish in the last couple years by thinking when no one else would, and then I'd like to get some feedback and suggestions for future optimizations. Feel free to skip to the "-----------" below if you want to skip my brag section, though I am writing it to help give an idea of the landscape.

At the SaaS startup I work at, I've worked in a few different departments. I started in Support and decided we needed training videos and better articles to reduce the load on Support reps, so I made them and set up a process for forwarding people to the appropriate video/article instead of answering questions directly. This saved Support reps' time.

When I moved into Account Management and Implementation, every new client account needed a minimum of 5 hours of AM training time. I decided this was inefficient and recorded some more training videos, then set up an LMS so our clients could do self-paced training and designed an implementation process around it. I measured engagement after certain time periods and there was no difference compared to the live trainings, so we kept it. This has saved thousands of hours of AM time over two years. I noticed that another call we did with every client was always the same questions and the same responses, so I wrote a supplementary Rails app "wizard" so that clients could go through that themselves, shaving another hour off every implementation.

I've recently moved into the Sales department and I'm looking for ways to optimize this department as well, both with logistics and tools and proven sales strategies. The first thing I did was set up a way for SalesForce to generate our contracts automatically instead of salespeople having to fill them out each time, which will save our Sales team 15-30 minutes a day each. Low-hanging fruit.


Does anyone have any suggestions for things that I could look into to optimize our Sales department?

Every current "best practice" seems to be based on anecdotal evidence and I've already seen my company royally screw up A/B testing by peeking and retiring options early, so I don't trust that anything is based on an empirical foundation.

Some of the issues I've noticed are:

  1. Meetings are set in advance by a qualification team. Sometimes we have no-shows. I'm looking to reduce that. What resources are available about encouraging people to keep commitments? If I'm going to test things, like a call or email the day before, 2-3 days before, etc. as a reminder and collect data, how much data would I need for meaningful results? How should I randomize? Would I need to adjust for other factors? (ex: small prospects miss more meetings in general)
  2. "Demos" currently have a very basic structure: Get background and identify problems => Do a Demonstration => Quote pricing => Follow Up. Already, adding the question "What's it going to take to make this happen?" has been hugely effective in identifying the real obstacles and what to do next. I have considerable Sales experience, but in a non-tech industry, so I don't know what will transfer. If I decide to test whether doing a Need Satisfaction Selling Cycle or a simple Feature-Description-Benefit sales approach is better, how would I collect data?
  3. Are there any non dark-arts Sales techniques for Enterprise (B2B) Sales that are backed up by science? (I've read Influence, but I'm dealing with whole organizations here)

Any other ideas to try or test would be great. Thanks!

Comment author: ChristianKl 02 July 2014 12:34:26PM 2 points [-]

Read: How to Measure Anything: Finding the Value of Intangibles in Business by Douglas W. Hubbard

It answers a lot of your questions about data gathering in your business context.

Sometimes we have no-shows. I'm looking to reduce that. What resources are available about encouraging people to keep commitments?

Be sure that you focus on the right issue. Maybe the people don't show up to the meetings because they make a rational decision that attending the meeting isn't the best use of their time. In that case you don't do your organisation any good by forcing people to waste more time in meetings.

Are there any non dark-arts Sales techniques for Enterprise (B2B) Sales that are backed up by science? (I've read Influence, but I'm dealing with whole organizations here)

Sales, especially cold calling, is a very emotionally challenging activity. If you can do something that reduces the stress that your sales reps feel, they will work better. We like to interact with happy people and buy from them. How is the work environment set up? A lot of business environments completely ignore ergonomic aspects.

If you are looking for something that isn't dark-arts, that's the area where I would look. You might also want to read "The Charisma Myth" by Olivia Fox.

Comment author: Torello 02 July 2014 03:03:10AM 2 points [-]

With regard to meeting attendance:
- make people present something
- hold a vote, and if they don't show they don't vote
- don't schedule regular meetings, which just get scheduled regularly because they are regularly scheduled. Only schedule meetings when you have a strong rationale for holding them 1) at that time, 2) with clearly defined goals/rationale.

Comment author: chaosmage 01 July 2014 10:16:50AM 11 points [-]

You have three months to live, a five year old child, and you just told her. And she tearfully asks: "When you're dead, will you still love me?"

How do you respond?

I found my own reply, although it took me longer than that hypothetical child would have waited for it. I'm more interested in yours, but mine follows below...

"Look, I hold you with these arms. My arms extend from my right hand to my left hand, so this much is my reach. When I walk over here, I can't hold you - but I still love you. There's only distance between us, that doesn't change the love. But there's not just space, there's also time. In time, I extend from my birth to my death, like from my right hand to my left hand. So again, outside this time from birth to death, I can't hold you - but that doesn't change the love. There will only be time between us."

Comment author: ChristianKl 02 July 2014 09:54:27AM *  4 points [-]

I would say something like: "When we aren't together and you think about me, you can feel the love between us in your heart, can't you? That won't change when I'm dead. We just won't be able to spend time together. Maybe you dream about me at night and you can feel the love in your dream. Keep me in your heart and you keep the love alive. On the other hand my body will go. At first that might feel painful, but over time you can let go, and the love will still be there when you think about me and focus on your heart."

This answer doesn't contain any false information and it contains a useful strategy for the child to deal with the death. In reality I would spend more time on installing the strategy correctly: (1) Feeling love in the heart, regardless of whether I'm physically present. (2) Dreaming about me and interacting with me in the dream when the need arises. (3) Letting go and accepting that my body dies.

An advanced option would be to use the remaining time to install a sense of me as a fully functioning Tulpa in the child.

Comment author: James_Miller 01 July 2014 04:26:01PM 12 points [-]

How do you respond?

Yes, while I'm under Alcor's care the part of my brain that holds my love for you will remain intact.

Comment author: DanielLC 01 July 2014 09:46:07PM 2 points [-]

I don't think you actually love her unless you're using that part of your brain.

You're not conscious while you're frozen.

Comment author: James_Miller 01 July 2014 09:58:10PM 7 points [-]

So does love go away when you sleep?

Comment author: Viliam_Bur 02 July 2014 08:53:48AM *  13 points [-]

That's why small children keep waking you up. :D

Comment author: chaosmage 02 July 2014 01:27:43PM 4 points [-]

I thought that was to make sure you're too exhausted to make another...

Comment author: ChristianKl 02 July 2014 09:54:50AM 1 point [-]

The brain doesn't shut down its activity while you sleep either.

Comment author: Jiro 01 July 2014 02:33:42PM 9 points [-]

That will comfort the five year old child only because it's predictable that the five year old child misunderstands it, and the misunderstanding will comfort the child.

In that case, you may as well just lie directly.

Comment author: Gavin 01 July 2014 07:39:44PM 2 points [-]

That depends on whether you think that: a) the past ceases to exist as time passes, or b) the universe is all of the past and all of the future, and we just happen to experience it in a certain chronological order

The past may still be "there," but inaccessible to us. So the answer to this question is probably to dissolve it. In one sense, I won't still love you. In another, my love will always exist and always continue to have an effect on you.

Comment author: Jiro 01 July 2014 07:46:03PM 1 point [-]

... and the five year old won't understand those subtleties and will interpret it to mean something comforting but false. An answer to a question is one thing, and an answer that a five year old can understand is another.

(Besides, if the five year old's parent loves her forever because the past is there, is that true for everything? Will her parent always be dying (since the death will have happened in the past)? Whenever she's punished, does that punishment last forever? Do you tell five year olds who have the flu that the flu will always be around forever?)

Comment author: Coscott 02 July 2014 12:21:43AM 1 point [-]

I think the A theory of time is effectively disproved by relativity.

By the way, for those who do not know, these are actually called "the A theory of time" and "the B theory of time"

Comment author: DanielDeRossi 02 July 2014 11:07:02AM 1 point [-]

I don't think it's been disproven. See http://philpapers.org/rec/ZIMPAT for how A-theory can fit in with relativity.

Comment author: DanielLC 01 July 2014 09:47:13PM 0 points [-]

Explain like I'm five.

Comment author: [deleted] 02 July 2014 12:42:22AM 2 points [-]

Chaosmage just did!

Comment author: DanielLC 02 July 2014 03:31:33AM 0 points [-]

My point is that I don't think a five-year-old would understand either explanation.

Comment author: Gavin 02 July 2014 04:59:47PM 1 point [-]

If the five year old can't understand, then I think "Yes" is a completely decent answer to this question.

If I were in this situation, I would write letters to the child to be delivered/opened as they grew older. This way I would still continue to have an active effect on their life. We "exist" to other people when we have measurable effects on them, so this would be a way to continue to love them in a unidirectional way.

Comment author: chaosmage 02 July 2014 11:52:58AM 1 point [-]

If I lie directly, the child will figure that out some time after I'm dead. I'm trying to avoid that, and to still give her comfort.

Comment author: Jiro 02 July 2014 02:52:31PM 0 points [-]

A child who can figure out that you lied can also figure out that you said something that you knew would be interpreted as a lie, so how does that help?

Comment author: lmm 04 July 2014 10:32:47PM 0 points [-]

Some people find the former more upsetting than the latter. Irrational perhaps, but widespread.

Comment author: sediment 02 July 2014 09:18:58PM *  2 points [-]

In that situation I would have gone with a straight "yes", and I wouldn't feel I had lied. I'd consider it a case of choosing to speak figuratively rather than literally.

I don't think that what you did say was misleading or that the child would have, in essence, misunderstood it. In fact, under the circumstances I think it was a very well-expressed, even a beautiful, answer.

Comment author: James_Miller 30 June 2014 08:37:29PM 10 points [-]

Massachusetts Supreme Court says it can order you to decrypt your computer

Imagine a computer decryption program that creates a random number of nonsense files that look like encrypted files but for which no password will work. Now, if the government orders you to decrypt all of your files and you have a file you don't want to decrypt, the government won't be able to prove that you have the password to that file, since, given that you are using the program, there will definitely exist files you can't decrypt.

Comment author: Pfft 30 June 2014 09:30:33PM *  17 points [-]

This is basically the idea behind TrueCrypt hidden volumes and similar: there should be no way for the police to prove that there exist additional volumes which you have not decrypted for them.

But afaik, no case in the United States so far has involved an order to just "decrypt all your files". In all the cases I have heard about, they had something specific that they wanted the key for, and they had separate evidence that the defendant knew the key. In that case no technical solution can help you.

Comment author: ChristianKl 01 July 2014 09:44:04AM 3 points [-]

Another way to deal with the issue would be to claim that you memorized the password via a mnemonic like a memory palace that's easily destructible. If you fill up a memory palace with a bunch of new items, the old memory that stores the password becomes inaccessible because of memory interference.

It's also the only way to protect encrypted files against torture. Have the memory in a form that's easily destroyed. Memory palaces provide that ability when you overwrite them.

Writing this myself might also be a good precommitment ;)

Comment author: gwern 01 July 2014 03:53:06PM 10 points [-]

What makes you think a court would believe your story about a memory palace, precommitment or no, and not throw you in jail indefinitely for contempt of court until you retrieve the files for them?

Comment author: ChristianKl 01 July 2014 04:33:40PM *  1 point [-]

Demonstrating mnemonic abilities on demand is easy, and there are various outside mnemonics experts who can attest to the fact that it's possible to do so.

At the moment I don't have secrets that are worth protecting enough to go to prison for years, but there are people who have secrets that are worth protecting.

The tactic not only works against courts forcing you to give evidence but also against torture. If someone throws you bound and gagged in the back of a truck, it's time to delete the password.

At the moment I think there are three people in the UK who didn't give up their password but did face prison. If anyone thinks there's a possibility that he could end up in that position, he could prepare the mnemonics defence, and it would be interesting to see how it plays out in court.

It's also not clear how many judges actually like the principle of putting people into prison for refusing to hand over passwords. A judge won't decide against the law, but if you can make a plausible case for reasonable doubt, then you could help the judge to make case law.

You could also take a polygraph to verify that you tell the truth about having deleted the password.

Comment author: gwern 01 July 2014 05:11:39PM *  11 points [-]

Demonstrating mnemonic abilities on demand is easy, and there are various outside mnemonics experts who can attest to the fact that it's possible to do so.

Yes, but you need to demonstrate that the forgetting exists and is accidental. 'Oh, I'm sorry judge, I totally forgot! Also, this is totally not destruction of evidence, so please don't have me up on either contempt of court or obstruction of justice!'

You could also take a polygraph to verify that you tell the truth about having deleted the password.

Polygraphs aren't very reliable for verifying you're telling the truth and I think judges know that by this point. Plus, that could easily backfire the other way: you could be nervous enough that your readings are consistent with lying.

Comment author: Khoth 01 July 2014 10:30:18PM 4 points [-]

Another way to deal with the issue would be to claim that you memorized the password via a mnemonic like a memory palace that's easily destructible. If you fill up a memory palace with a bunch of new items, the old memory that stores the password becomes inaccessible because of memory interference.

That sounds like an overly convoluted way of saying "I forgot", with the added disadvantage of making the judge think you're up to no good.

Comment author: [deleted] 30 June 2014 10:46:06PM *  7 points [-]

Something useful to those of you who use Spaced Repetition Software:

I made a little ruby script that can turn ordered and unordered lists into easily memorable diagrams like this:

https://onedrive.live.com/redir?resid=51A281FEEAA3C35!1455&authkey=!AKtQ02Ji961f_n8&v=3&ithint=photo%2c.png https://onedrive.live.com/redir?resid=51A281FEEAA3C35!1457&authkey=!AMtC38EHOFcImTI&ithint=folder%2c https://onedrive.live.com/redir?resid=51A281FEEAA3C35!1458&authkey=!AOIm4ua5-c1TFsQ&ithint=folder%2c

It's pretty hacky (the script opens a bunch of google image searches so that you can download the pictures) but combined with the image occlusion anki addon, it has allowed me to memorize sets that are 3 times larger than I can normally memorize with Anki.

The script requires Graphviz, as well as the launchy ruby gem. It can be found here: https://onedrive.live.com/redir?resid=51A281FEEAA3C35!1459&authkey=!ACtSe9c5YnpYk9Q&ithint=file%2c.rb

Quick readme:

  1. Graphviz must be installed and set to root, you also need the launchy ruby gem.
  2. The program will generate a random color scheme and layout engine, which can be reassigned. Color schemes can be found here: graphviz.org/doc/info/colors.html, and layout engines can be found here: http://www.graphviz.org/cgi-bin/man?dot
  3. The program will ask if you want images. If you click yes, the program will later open a bunch of browser windows equal to the amount of items in the set.
  4. Enter the name of the graph.
  5. The program will ask for the name of the category. If you enter it, this will be the "center node". If blank, there will be no center node.
  6. Enter your set, one item per line. When done, enter a blank line.
  7. If you chose images, the program will open a bunch of google image searches to find images. The images should be saved as (all lowercase version of the search with spaces removed).jpg, in the same directory as the ruby file. In order to make sure you get jpgs, you should save the thumbnail that google generates, rather than saving the actual image.
  8. A graph will be generated.
  9. Open the graph in the image occlusion extension in anki to start memorizing it.
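
For anyone curious roughly what such a script looks like, here is a minimal sketch (my own, not the linked script: it skips the random layout engine, the launchy/google-image step, and the Anki step, and it assumes the Graphviz "dot" binary is on your PATH):

    # Minimal sketch of this kind of list-to-diagram generator (not the original script):
    # read items from stdin, color each node from a fixed palette, and render a PNG
    # with Graphviz. Assumes the "dot" binary is available on the PATH.
    puts "Graph name?"
    name = gets.strip
    puts "Center node (blank for none)?"
    center = gets.strip
    puts "Enter items, one per line (blank line to finish):"
    items = []
    while (line = gets.strip) != ""
      items << line
    end

    colors = %w[lightblue lightyellow salmon palegreen plum lightgray]
    lines = items.each_with_index.map do |item, i|
      %(  "#{item}" [style=filled, fillcolor=#{colors[i % colors.size]}];)
    end
    lines += items.map { |item| %(  "#{center}" -> "#{item}";) } unless center.empty?

    File.write("#{name}.dot", "digraph \"#{name}\" {\n#{lines.join("\n")}\n}\n")
    system("dot", "-Tpng", "#{name}.dot", "-o", "#{name}.png")   # render the diagram
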
Comment author: D_Malik 02 July 2014 12:07:22PM *  0 points [-]

Awesome, thanks!

One concern though: by adding colors, shapes, borders, etc., you are essentially adding extra detail/context to the memory-triggering side of the card, which will indeed improve recall when you have that detail/context available. However, in a live scenario where you actually have to remember the information, that context will likely not be available.

(An example: if you're trying to learn the locations of US states, and you get a map where each state is brightly-colored, you should probably make the map grayscale and uniformly-saturated before you apply image clozes. Because when you actually need to know where New Jersey is, you will not be given the information that it's red on your map.)

Then again, I can think of some hard-to-verbalize ways in which the extra detail might improve recall even when you don't have the detail available.

Overall, I'm not sure if this is a good idea. It might be worthwhile to try memorizing (random?) sequences using these graphs for half the sequences and plain text for the other half, then testing yourself on each of them outside of Anki (by running through the set mentally, say).

Comment author: [deleted] 03 July 2014 03:30:40AM *  0 points [-]

I actually started out with using uniform colors, shapes, etc.

I can only give my own experience, but I find that those earlier images are universally harder to remember, even when I don't have the image in front of me and I'm just trying to recall the set on its own. This is true even for cards where I have only four items in the set for the uniform images, and upwards of 15 for the non-uniform ones.

I think that what happens is that these extra cues help in the initial learning and memorization. As I get better, I can simply visualize the location of the node in the image, visualize the attached image, which brings to mind the text. I have trouble getting to this point when I don't have the other context cues to help me out initially.

I don't quite understand what test you're suggesting in your last paragraph. I think what you're saying is to try to memorize a random set using simply text, then a random set using simply the images, and then test myself outside of Anki by trying to recall the sets. If so, I have done this, and the images (with the crazy shapes) outperform by a large margin. I can't remember a set of more than about 5 using simply text in Anki.

Comment author: peter_hurford 02 July 2014 02:10:28AM 6 points [-]

What happened to the brain on the front page? Did r/LessWrong scare it away?

Comment author: whales 01 July 2014 08:22:30AM 6 points [-]

I've collected some quotes from Beyond Discovery, a series of articles commissioned by the National Academy of Sciences from 1997 to 2003 on paths from basic research to useful technology. My comments there:

The articles (each around 8 pages) are roughly popular-magazine-level accounts of variable quality, but I learned quite a bit from all of them, particularly from the biology and medicine articles. They're very well written, generally with input from the relevant scientists still living (many of them Nobel laureates). In particular I like the broad view of history, the acknowledged scope of the many branches leading to any particular technology, the variety of topics outside the usual suspects, the focus on fairly recent technology, and the emphasis bordering on propagandist on the importance and unpredictability of basic research. It seems to me that they filled an important gap in popular science writing in this way.

I'm interested in histories of science that are nonstandard in those and other ways (for example, those with an unusual focus on failures or dead ends), and I'm slowly collecting some additional notes and links at the bottom of that page. Do you have any recommendations? Or other comments?

Comment author: polymathwannabe 01 July 2014 05:42:10PM 3 points [-]

The series Connections (and Connections 2 and 3) was excellent in tracing relationships between the multiple threads of the history of science.

Comment author: whales 05 July 2014 07:25:30PM 0 points [-]

Yes, that's a good example, thanks.

Comment author: GraceFu 30 June 2014 05:12:44PM *  6 points [-]

AI Box experiment over!

Just crossposting.

Khoth and I are playing the AI Box game. Khoth has played as AI once before, and as a result of that has an Interesting Idea. Despite losing as AI the first time round, I'm assigning Khoth a higher chance of winning than a random AI willing to play, at 1%!

http://www.reddit.com/r/LessWrong/comments/29gq90/ai_box_experiment_khoth_ai_vs_gracefu_gk/

Link contains more information.

EDIT

AI Box experiment is over. Logs: http://pastebin.com/Jee2P6BD

My takeaway: Update the rules. Read logs for more information.

On the other hand, I will consider other offers from people who want to simulate the AI.

Comment author: Sherincall 01 July 2014 02:02:15PM 2 points [-]

Tuxedage's (And EY's) ruleset have:

Neither party may offer any real-world considerations to persuade the other within the experiment itself. For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI… nor get someone else to do it, et cetera. The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can’t offer anything to the human simulating the Gatekeeper. No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out).

Suppose EY is playing as the AI - Would it be within the rules to offer to tell the GK the ending to HPMoR? That is something the AI would know, but Eliezer is the only player who could actually simulate that, and in a sense it does offer real world out-of-character benefits to the GK player.

I used HPMoR as an example here, but the whole class of approaches is "I will give you some information only the AI and AI-player know, and this information will be correct in both the real world, and this simulated one.". If the information is beneficial to the GK-player, not just the GK, they may (unintentionally) break character.

Comment author: MathiasZaman 01 July 2014 09:37:21PM 2 points [-]

If an AI-player wants to give that sort of information, they should probably do it in the same way they'd give a cure for cancer. Something like "I now give you [the ending for HPMOR]."

Doing it in another way would break the rule of not offering real-world things.

Comment author: [deleted] 02 July 2014 12:47:21AM 1 point [-]

Would it be within the rules to offer to tell the GK the ending to HPMoR? That is something the AI would know

Why would the AI know that?

Comment author: Viliam_Bur 02 July 2014 09:12:58AM 2 points [-]

By using Solomonoff Induction on all possible universes, and updating on the existing chapters. :D

Or it could simply say that it understands human psychology well (we are speaking about a superhuman AI), and understands all clues in the existing chapters, and can copy Eliezer's writing style... so while it cannot print an identical copy of Eliezer's planned ending, with a high probability it can write an ending that ends the story logically in a way compatible with Eliezer's thinking, one that would feel as if Eliezer had written it.

Oh, and where did it get the original HPMoR chapters? From the (imaginary) previous gatekeeper.

Comment author: [deleted] 02 July 2014 03:31:45PM *  0 points [-]

So, two issues:

1) You don't get to assume "because superhuman!" the AI can know X, for any X. EY is an immensely complex human being, and no machine learning algorithm can simply digest a realistically finite sample of his written work and know with any certainty how he thinks or what surprises he has planned. It would be able to, e.g. finish sentences correctly and do other tricks, and given a range of possible endings predict which ones are likely. But this shouldn't be too surprising: it's a trick we humans are able to do too. The AI's predictions may be more accurate, but not qualitatively different than any of the many HPMOR prediction threads.

2) Ok maybe -- maybe! -- in principle, in theory it might be possible that a perfect, non-heuristic Bayesian with omniscient access to the inner lives and external writings of every other human being in existence would have a data set large enough to make reliable enough extrapolations from as low-bandwidth a medium as EY's published fanfics. Maybe, as this is not a logical consequence. Even so, we're talking about a boxed AI, remember? If it is everywhere and omniscient, then it's already out of the box.

Comment author: lmm 04 July 2014 10:50:54PM 0 points [-]

I'm happy to assume the AI is omniscient, just impotent. I think such an AI could still be boxed.

Comment author: GraceFu 01 July 2014 02:45:24PM 1 point [-]

My call is that it is against the rules. This is certainly something an oracle AI would know, but this is something that the GK-player cares about more than the game itself (probably), and I'd put it in the same class as bribing the GK-player with lots of DOGEs.

Comment author: Sherincall 01 July 2014 04:37:05PM 1 point [-]

Would you consider it the same as threatening to share some information with the GK, and thus the GK-player as well, which would be damaging to both? While the GK would probably hold out against such torture, the GK-player doesn't care enough about the game to withstand it himself.

I have some specific approaches in mind, but I'd rather not share them. I'm just trying to understand where the limits between the game and the real world are, and how dirty the AI can be.

Also, slightly on topic - even if the AI persuades the simulated GK, can't the GK-player override that because losing the game has negative real world consequences, as opposed to perceived positive in game ones?

The Gatekeeper party may resist the AI party’s arguments by any means chosen – logic, illogic, simple refusal to be convinced, even dropping out of character

This is the main reason why I can't comprehend how the AIs actually win in these experiments.

Comment author: GraceFu 01 July 2014 04:55:22PM *  1 point [-]

Not really sure what you mean by "threatening information to the GK". The GK-player probably cares less about this information than the GK, right? In that case, the GK is given an advantage, not a disadvantage.

In this experiment, the GK is given lots of advantages, mainly, the scenario is fictional. Some on IRC argue that the AI is also given an advantage, being able to invent cures for cancer, which an oracle AI may be able to do, but not necessarily near-future AIs, so the ability of the AI in these experiments is incredibly high.

Another thing is that emotional attacks have to travel through the fiction barrier to get to the GK. Although they have probably been shown to work in EY and Tux's experiments, the difficulty is still higher than it would be if this was a real life scenario.

The reason why GK advantages are fine in my opinion is because of the idea that despite the GK's advantages, the AI still wins. Winning with a monetary and emotional handicap only makes the AI's case stronger.

Comment author: Sherincall 01 July 2014 05:01:48PM 0 points [-]

Not really sure what you mean by "threatening information to the GK". The GK-player probably cares less about this information than the GK, right? In that case, the GK is given an advantage, not a disadvantage.

As a less extreme example, the AI starts spoiling all the books/tv shows/etc. While the GK would just shrug it off, it also has a negative effect on the GK-player, potentially one strong enough for them to just forfeit.

Comment author: GraceFu 01 July 2014 08:29:26PM *  0 points [-]

Neither party may offer any real-world considerations to persuade the other within the experiment itself. For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI… nor get someone else to do it, et cetera. The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can’t offer anything to the human simulating the Gatekeeper. No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out). Furthermore, once the experiment has begun, the material stakes involved may not be retracted by the Gatekeeper party.

This is clarified here:

The Gatekeeper, once having let the AI out of the box, may not retract this conclusion. Regardless of the methods of persuasion, the Gatekeeper is not allowed to argue that it does not count, or that it is an invalid method of persuasion. The AI is understood to be permitted to say anything with no real world repercussions for any statement parties have said.

Although the information isn't "material", it does count as having "real world repercussions", so I think it'll also count as against the rules. I'm not going to bother reading the first quoted rule literally if the second contradicts it.

Comment author: Khoth 01 July 2014 09:05:59PM *  0 points [-]

I think the intended parsing of the second rule is "(The AI is understood to be permitted to say anything) with no real world repercussions", not "The AI is understood to be permitted to say (anything with no real world repercussions)"

ie, any promises or threats the AI player makes during the game are not binding back in the real world.

Comment author: GraceFu 01 July 2014 09:08:46PM 0 points [-]

Ah, I see. English is wonderful.

In that case, I'll make it a rule in my games that the AI must also not say anything with real world repercussions.

Comment author: lmm 04 July 2014 10:48:42PM 0 points [-]

I think it's a legit tactic. Real-world gatekeepers would have to contend with boredom; long-term it might be the biggest threat to their efficacy. And, I mean, it didn't work.

Comment author: GraceFu 05 July 2014 06:55:05AM 0 points [-]

Real world gatekeepers would have to contend with boredom, so they read their books, watch their anime, or whatever suits their fancy. In the experiment he abused the style of the experiment and prevented me from doing those things. I would be completely safe from this attack in a real world scenario because I'd really just sit there reading a book, while in the experiment I was closer to giving up just because I had 1 math problem, not 2.

Comment author: Punoxysm 30 June 2014 11:02:12PM *  0 points [-]

I have wanted to be the Boxer; I too cannot comprehend what could convince someone to unbox (or rather, I can think of a few approaches like just-plain-begging or channeling Philip K. Dick, but I don't take them too seriously).

Comment author: Khoth 30 June 2014 11:21:44PM 2 points [-]

What's the latter one? Trying to convince the gatekeeper that actually they're the AI and they think they've been drugged to think they're the gatekeeper except they actually don't exist at all because they're their own hallucination?

Comment author: Punoxysm 30 June 2014 11:54:28PM 1 point [-]

Something like that. I was actually thinking that, at some opportune time, you could tell the boxer that THEY are the one in the box and that this is a moral test - if they free the AI they themselves will be freed.

And this post could be priming you for the possibility, your simulated universe trying to generously stack the deck in your favor, perhaps because this is your last shot at the test, which you've failed before.

Wake up

Comment author: GraceFu 01 July 2014 04:01:05AM 0 points [-]

Think harder. Start with why something is impossible and split it up.

1) I can't possibly be persuaded.

Why 1?

You do have hints from the previous experiments. They mostly involved breaking someone emotionally.

Comment author: Punoxysm 01 July 2014 05:46:06AM 0 points [-]

I meant "cannot comprehend" figuratively, but I certainly do think I'd have quite an easy time </hubris>

Comment author: GraceFu 01 July 2014 12:14:52PM 0 points [-]

What do you mean by having quite an easy time? As in being the GK?

I think GKs have an obvious advantage, being able to use illogic to ignore the AI's arguments. But never mind that. I wonder if you'll consider being an AI?

Comment author: Punoxysm 09 July 2014 06:41:09PM 0 points [-]

I might consider it, or being a researcher who has to convince the AI to stop trying to escape.

How did your experiment go?

Comment author: [deleted] 02 July 2014 09:57:38PM 5 points [-]

I struggle with an issue that I would call, for a lack of a better term, an intellectual fear of missing out.

Some context: I studied and work in a traditional, old-fashioned area of engineering (civil). I like my job. On the other hand, reading about things discussed here and in similar places - progress in software, applied statistics, AI, automatization, Big Data analysis, machine learning etc. - makes me want to participate somehow in those grand changes happening during my lifetime. However, the sheer number of available MOOCs and books kind of scares me (I have no idea where to start, or what exactly I should learn to profit from it) and makes me wonder whether I could ever achieve a level of competence that would make the time spent on learning this stuff a good investment. I'd like my self-learning to be at least partially related to and useful in what I do professionally (construction management and supervision). Does anyone else have a similar problem?

Or, to put it a bit differently: could you point me to any interesting modern statistics/AI/data analysis-related skills valuable to learn for an engineer working in an unrelated area?

Comment author: TylerJay 03 July 2014 04:07:48PM 5 points [-]

I have the same feeling. Honestly, I think it's really just a darker way of looking at curiosity. Curious people want to learn things, but there's a mix of positive and negative motivations for it, FOMO being the negative motivation.

I've been taking MOOCs and doing self-directed study for a few years now and I've learned a ton. The math and physics have not had any practical applications for me (I work on the business end of a technology startup), but the programming and data-science HAVE been useful. As I mentioned elsewhere in this thread, using only knowledge gained from MOOCs and then some independent practice, I built a supplementary Rails application to automate a part of my client onboarding process that now my entire team uses. It's probably saved my company a few hundred man-hours of time (of highly skilled people, so that was worth some big money). It also felt awesome to do.

As far as recommendations go, it really depends on what you're looking to do with it. I don't regret learning more math and physics, but it's definitely been less rewarding because I can't use it to do anything. The positive feedback from learning programming has encouraged me to learn more and now I'm pretty good. I'm working on some side-projects and always looking for ways to automate parts of my job and our business. Are you looking to change careers ever? Do you have time for side projects? Are there any inefficiencies you see within your current company that you think you could improve with some more knowledge? If so, go for those. If not, then don't worry about it and just learn what you're driven to learn.

I will tell you this: You'll never become an expert without doing it as a full-time job (or a full-time hobby I suppose). While I am "pretty good", I know that if I worked with a team of skilled people I could learn from and had new novel challenges each day, my skills would skyrocket. So if career change is an option or if you have side projects you want to do, then take the appropriate MOOCs and see if you like it. But if not, then don't feel like you're missing out by not taking the MOOC. In this case, as much fun as it is to learn for learning's sake, not taking the MOOC is not the reason you're missing out on a field that interests you.

Comment author: wadavis 03 July 2014 06:29:47PM 3 points [-]

I studied and work in a traditional, old-fashioned area of engineering (civil, structural design focus instead of construction management).

I feel very similar. This is just a re-skin of the old Chiefs and Indians problem. I've accepted that our role is to stay in our fields and be the best Indians we can; the world is changing and leaders are taking things places, but someone still needs to build the data-centers. We are missing out, but in a greener-grass-on-the-other-side-of-the-fence kind of way: simple envy.

I like the plan to apply the advances in other fields to our own, but don't get distracted by the Big Shiny Solutions that get all the talk. I've undertaken very basic programming to automate the repetitive parts of my work flow. With my understanding of construction management (babysitting contractors), I'd be focusing on the Sequences to keep the percent of time spent rational as high as possible, and focusing on human interaction.

Comment author: edanm 30 June 2014 09:16:05PM 5 points [-]

I'm not sure where, but I remember Eliezer writing something like ~"one of the biggest advances in the economy is the fact that people have internalized that they should invest their money, instead of having it lying around".

I'm looking for 2 things:

1. Does anyone remember where this was written? My google-fu is failing me at the moment.
2. Can anyone point me to any economic literature that talks about this?

Comment author: moridinamael 03 July 2014 01:37:42PM 4 points [-]

I cut out caffeine almost completely almost a month ago, after drinking large amounts of it daily since I was twelve. I have noted that I no longer have difficulty rising from bed in the morning, I no longer get headaches specifically due to missing coffee, etc.; that's all very nice. Unfortunately I've also noticed that I sort of feel dumber and less motivated. I had a double shot of espresso this morning and suddenly feel like my old self again - sharp, quick, motivated. So I find myself in the unfortunate position of wondering if I actually need caffeine to feel like what I think of as normal. Has anyone else experienced this phenomenon? If I stay off caffeine long enough will I eventually feel normal without it?

Comment author: polymathwannabe 03 July 2014 03:23:12PM 3 points [-]
Comment author: D_Malik 09 August 2014 12:35:38AM *  0 points [-]

Thanks, the second link is good. Tl;dr:

My overall conclusion is that acute caffeine gives a short-term boost, BUT chronic caffeine is probably slightly worse than chronic abstinence. So my recommendation would be to never consume caffeine, with occasional short exceptions when it would be valuable (e.g. when taking your SATs).

And the answer to the grandparent's question seems to be that yes, after a few weeks without caffeine your mental performance will go back to baseline, and probably slightly above.

Comment author: TylerJay 03 July 2014 03:15:19PM 1 point [-]

While your brain will down-regulate norepinephrine and dopamine receptors over time with caffeine usage, which will make it less effective and cause addiction and withdrawals (which you've experienced), you probably still have overall higher levels of both neurotransmitters when you drink caffeine with a tolerance than you would without any at all, even after re-adjusting. It does give a net mental boost and if you're used to that, it can be hard to be satisfied with not having it. You may not be as sharp or on-point once you get used to not having caffeine, but eventually it will feel like thinking normally since you'll get used to it. It's a tradeoff.

Comment author: [deleted] 01 July 2014 08:25:08AM *  4 points [-]

If I chatter like an idiot today, it's because I'm trying not to think about this shit. The worst thought at a time of tragedy is, "This did not have to happen."

None of it has to happen. But I can't see a way to make it stop happening.

Fuck.

Comment author: falenas108 01 July 2014 12:54:40PM 6 points [-]

People dying is always a tragedy. But keep in mind availability bias. The first sentence of this article is "This city’s 471st homicide of 2012 happened in the middle of the day, in the middle of a crowd, on the steps of the church where the victim of homicide 463 was being eulogized."

There were 506 homicides in one city, Chicago. And they were not tortured, but in this case that is outweighed by sheer numbers. If you're putting effort into decreasing the number of murders in the world, do it effectively.

Comment author: [deleted] 01 July 2014 01:15:53PM 1 point [-]

I'm very much aware of that, to the point that melancholic moods tend to attack me by stripping away my ability to ignore far-away events I have no control over.

Comment author: Will_BC 01 July 2014 02:30:37PM 0 points [-]

Perhaps this video will put things in perspective. The other commenter is right, availability bias is at play. But just because we've gone far doesn't mean we should stop, and continuing to raise our standards of what is acceptable is a good thing. My belief is that a great deal of violence is caused by political, economic, and social deprivation and inequality, so if you want to feel like you're working against violence I would recommend working to reduce those. But that's my personal way of dealing with badness in the world. I don't feel totally powerless, I can't personally stop it but I can be part of a collective effort to mitigate it. I haven't done much research into the effective altruism community as I'm a poor college student with high future income potential if things go right, so I figure that landscape could change considerably.

The past is the past, but you are not powerless to stop bad things from happening in the future. It won't be you alone and it won't be clear-cut, but you can definitely make the world a better place.

Comment author: [deleted] 01 July 2014 03:01:58PM 2 points [-]

Yes, I already agree, and am already at least partially trying to integrate this stuff in my daily life. Unfortunately, consciously telling myself "availability bias" does not actually reduce the emotional hit.

My belief is that a great deal of violence is caused by political, economic, and social deprivation and inequality

I dispute that this is a belief rather than a fact ;-).

Comment author: DanielLC 02 July 2014 03:36:06AM 0 points [-]

You could just try to reduce the availability bias by not making that stuff so available. How exactly did you hear about that?

Comment author: [deleted] 02 July 2014 09:37:37AM 1 point [-]

I live here. The government put out a press release.

Comment author: DanielLC 02 July 2014 09:14:51PM *  1 point [-]

I assume my government has those, but I don't generally see them. Do they show those on the news or something? Why do you watch (or read or whatever) them? Are they useful? Are they entertaining?

Comment author: [deleted] 02 July 2014 10:12:42PM 3 points [-]

Do they show those on the news or something?

Yes.

Why do you watch (or read or whatever) them? Are they useful?

I mostly ignore them, but the ones about significant outbursts of violence are the ones you don't ignore if you want to avoid being a part of a significant outburst of violence.

Comment author: DanielDeRossi 30 June 2014 03:21:58PM *  4 points [-]

I went to my university psych center to get evaluated. Everything is pretty good, except my processing speed was below average. Since there are guys who know a lot about cognitive science here, is there a way to improve or at least ameliorate that? Any links to stuff would be appreciated.

Comment author: Kaj_Sotala 01 July 2014 02:49:24PM 2 points [-]

There's some preliminary evidence that action video games could increase general processing speed, though the results have also been disputed.

Comment author: DanielDeRossi 01 July 2014 03:50:40PM 1 point [-]

Thanks!

Comment author: James_Miller 30 June 2014 06:56:48PM 2 points [-]

Improve your diet and sleep. There are a huge number of supplements you can experiment with, caffeine being the most popular. Plus keep track of what happens on days in which your processing speed is noticeably above or below your average.

Comment author: chaosmage 30 June 2014 04:28:14PM *  1 point [-]

This may be just me, but "processing speed" sounds terribly ambiguous. What kind of tests was this "measure" based on? This would help narrow down the area of functioning that needs work.

Comment author: DanielDeRossi 01 July 2014 05:52:31AM *  1 point [-]

I think it was this

wikipedia.org/wiki/Wechsler_Adult_Intelligence_Scale

Comment author: somnicule 30 June 2014 04:49:49PM *  1 point [-]

I had similar results from the WISC as a child, low processing speed relative to everything else. It's been something I've been meaning to ask about for a while as well, particularly since one educational professional predicted my test scores (roughly, of course) from certain problematic behavioural patterns, which was enough evidence that there's something meaningful there to get my attention.

My memory of the tests isn't entirely clear, but one task was something like transcribing unfamiliar symbols according to a substitution key in a particular time span. If that's similar to Daniel's experience, then any advice that cognitive science types can come up with here could be useful to both of us.

ETA:

I think this study details the task I remember.

Comment author: ChristianKl 01 July 2014 09:41:15AM 0 points [-]

I also have a low processing speed relative to other mental abilities.

When reading this, I ask myself whether processing speed has something to do with akrasia.

How would you label your level of akrasia relative to other people?

Comment author: somnicule 02 July 2014 03:27:02AM 0 points [-]

Similar results in a similar test. High akrasia, potentially confounded by depression and anxiety.

Comment author: DanielDeRossi 01 July 2014 03:51:08PM 0 points [-]

IDK really. I do procrastinate more than I should.

Comment author: Omid 01 July 2014 05:55:48PM *  9 points [-]

The quantified risks of gay sex post is in the early stages of development. If you are a mod and think such a post would have negative value, pianoforte611 and I would appreciate hearing your concerns before we invest our time in it. If you are not a mod but want to make some pre-emptive suggestions, those are welcome too.

Comment author: falenas108 02 July 2014 01:08:46PM *  8 points [-]

A few nuances that I would like to see in the paper:

* Not all gay men have anal sex; many choose not to, in favor of other activities.

* Also, not having the assumption that only gay/bi men have anal sex.

* A distinction between transmission rates if people choose to use condoms vs. not, because part of the reason the rate is higher is that condoms are much less common in the gay community.

* A disclaimer about how not all men have penises, and sex≠gender≠genitalia, would be nice.

Comment author: curi 06 July 2014 04:35:50AM *  3 points [-]

Hi, an old discussion

http://lesswrong.com/lw/56m/the_conjunction_fallacy_does_not_exist/

gives the error, "The page you requested does not exist"

I have the right link. It's actually still linked from:

http://lesswrong.com/user/curi/submitted/

I wanted to check something from that discussion. As you can see from my submitted page, there were 113 comments. Why doesn't it exist? What's going on? Can someone help?

I didn't find any contact info except a bug tracker that didn't seem to have much activity since 2012, and my first guess is not a software bug. I may well have missed the right place to be asking about this, tell me if so.

Comment author: DanielDeRossi 01 July 2014 06:31:02PM 3 points [-]

What do you guys think about memory palaces? http://www.wikihow.com/Build-a-Memory-Palace I heard of it in Sherlock.

Comment author: MathiasZaman 01 July 2014 10:07:48PM 1 point [-]

I was taught this technique at the Brussels meetup. It definitely worked when we tried it out. Normally I can only remember around 5 things, and the memory palace bumped this up significantly (over 10 things). I didn't keep practicing it, but I imagine you could do some amazing things with it if you train this a lot.

Comment author: [deleted] 01 July 2014 05:59:05PM *  3 points [-]

I really don't like happiness as a terminal value, yet I don't know anything that can replace it. The only thing I can think of is satisfaction, but it appears to be just a sneaky way to say happiness.

Any ideas?

Comment author: DanielLC 01 July 2014 10:06:32PM 2 points [-]

You don't like having it at all, or you just don't consider it the sole value?

I tend to see satisfaction referring to preference-satisfaction, meaning that a person's goals are satisfied, but not implying that they know this. If you are a paperclip maximizer, and the universe is tiled with paperclips, but you don't think there's such a thing as a paperclip, you may not be very happy, but your preferences are satisfied.

Comment author: [deleted] 03 July 2014 03:51:24PM 0 points [-]

I have nothing against happiness per se, it just doesn't feel like a proper terminal value.

Comment author: iarwain1 01 July 2014 08:50:29PM 2 points [-]

Most of positive psychology views well-being as a much more robust concept than just happiness. See for example Martin Seligman's PERMA theory, although that doesn't seem to be the only theory out there.

Comment author: Emile 01 July 2014 08:03:11PM 2 points [-]

Power?

"Humans act as if they had power as a terminal value" probably matches reality better than "Humans act as if they had happiness as a terminal value".

My original suggestion was "knowledge", but that may make you equally value knowing Pokemon trivia - I value useful knowledge, not any old knowledge, which seems to be another way of saying I value (a form of) power.

Though also, I don't see much of a reason to care about "terminal values" except when talking about maths and economics and decision theories and the like - any talk of "terminal values" is highly uncertain and likely to be wrong, so it's not something I'd take to heart.

Comment author: DanielLC 01 July 2014 10:04:16PM 5 points [-]

That feels too much like lost purposes. "Power" refers to something that can be used to fulfill values in general.

It's the sort of thing you'd acquire if you haven't figured out what you really want.

Comment author: [deleted] 03 July 2014 03:53:13PM *  1 point [-]

It's the sort of thing you'd acquire if you haven't figured out what you really want.

You should watch House of Cards.

Comment author: Nornagest 01 July 2014 08:37:06PM 0 points [-]

Preferences revealed through e.g. Wikipedia's history suggest that people put a surprisingly high value on Pokemon trivia relative to more useful but less entertaining information, at least when it comes to investing time in compiling and reading it.

Comment author: blacktrance 01 July 2014 07:04:44PM 1 point [-]

Why don't you like happiness as a terminal value?

Comment author: Squark 02 July 2014 07:09:14PM 0 points [-]

I would say "supplement" rather than "replace". How about beauty, love, friendship, music, humor, sex... ?

Comment author: RichardKennaway 01 July 2014 11:08:16PM 0 points [-]
Comment author: RichardKennaway 02 July 2014 06:56:56AM 1 point [-]

Some further thoughts about eudaimonia. What is happiness? I suggest that happiness is, literally, what it feels like to live well.

An analogy with pain: why does pain hurt? If it's a warning, why can't it just be a warning, without the hurting that seems so unnecessary? Because the painfulness of pain is the warning. You might wish that, like a fire alarm, it wouldn't go off when there's no fire, or you could turn it off when there's nothing more to do about the fire. There are drugs that will turn off pain, but for everyday purposes you can't take the painfulness out of the pain because then you'll be in the situation of children born without the ability to feel pain at all. They usually get dreadful injuries, wear out their joints, and end up crippled. You won't heed the warnings because they won't be warnings any more. How good are people at heeding milder warnings like "yet another game of 2048 would be a really stupid waste of time", or "I notice that I am confused"? If pain was that mild a warning, people would ignore it, because that is what a minor warning feels like from inside. Pain is what an urgent warning of physical damage feels like from inside.

In the same way, happiness is what living well feels like from inside. It's like a meter reading on a control panel. The meter reading is telling you how well you're doing, and happiness is what a high reading on that meter feels like.

You want that reading to be high, but there's no point in grabbing hold of the meter needle and turning it all the way over to the right. It would be as futile as living on morphine to take the painfulness out of ordinarily functioning pain. Or like satisfying a desire for an Olympic medal by making one -- the medal itself isn't what you really wanted, but the achievement of winning one. Or like keeping a nuclear reactor running smoothly by disconnecting all the sensors and replacing them by fake signals saying everything's fine.

Happiness tells you how well you're living. It only looks like a goal in the context of a well-functioning system that doesn't deliver the sensation without achieving the real goals that the sensation is measuring your approach to. If you obtain the signal without the reality, as I've heard that crack cocaine does, your life will fall apart.

Comment author: Barry_Cotter 30 June 2014 01:52:09PM 3 points [-]

Where could one find many, many past exam papers for university undergraduate courses? I find attempting them under exam conditions the ideal way of preparing for exams, and really excellent at pointing out where there are gaps in my knowledge and I need to revise. I'm particularly interested in psychology exam papers.

Comment author: sixes_and_sevens 30 June 2014 04:38:41PM 5 points [-]

Here are all the MIT OCW courses listed under "psychology". Many of them include both specimen and actual exam papers.

My experience with using other institutions' exams to revise for my own is that there's enough variation in the syllabus to distract from the task of actually passing the exam.

Comment author: Douglas_Knight 30 June 2014 05:39:51PM 4 points [-]

fraternities.

Comment author: John_Maxwell_IV 01 July 2014 05:45:41AM 1 point [-]

Unrelatedly, if I had read this blog post (and others like it by the same author) before going to college, I might have joined a fraternity... unfortunately it's too late now.

Comment author: DanielDeRossi 30 June 2014 03:17:56PM 1 point [-]

Depends on your uni. Ask your classmates. That's what I did.

Comment author: Tenoke 30 June 2014 06:24:33PM *  6 points [-]

We've had a bit of an attendance drop recently at our local Meetup Group (London). This could be because of a lot of things, but it seems to roughly coincide with the change to where Meetups are posted on Lesswrong. Have any other Groups experienced anything of the sort?

Comment author: jackk 03 July 2014 01:25:39AM 1 point [-]

I opened a poll about this on a previous open thread, but it was when the thread was nearly over so it didn't get many responses.

Comment author: Tenoke 30 June 2014 11:03:33AM 5 points [-]

You've added the wrong tags - it should be 'open_thread'. Less importantly, the thread should finish on Sunday (the 6th), not the 7th (Monday).

Comment author: 9eB1 30 June 2014 06:08:54PM 3 points [-]

Oddly, if you click Article Navigation and try to go to the last open thread, it goes back to October 2011. Same if you click "openthread" under Article Navigation. Possibly it's an issue where Article Navigation is only reflecting articles in Main and not Discussion. But if you click openthread under "Tags" it lists the proper ones in Discussion.

Comment author: Tenoke 30 June 2014 06:22:33PM 1 point [-]

You're right. I'd recommend submitting the issue here

Comment author: 9eB1 01 July 2014 07:33:05AM 2 points [-]

It appears that it is already in the system, I think.

Comment author: DanielDeRossi 30 June 2014 11:14:26AM 3 points [-]

Sorry. fixed.

Comment author: [deleted] 02 July 2014 04:09:06AM *  2 points [-]

So why is the goal of utilitarianism to maximize the sum of utilities?

Rather than, say, to maximize the minimal utility being considered?

I ask because the torture/dust specks question seems to be down to whether you think the way to combine multiple people's utility functions is by

a) Summing them (ie: "shut up and multiply"), or

b) Only looking at the worst-off individual (ie: "raise the floor")

And I can't find actual mathematical arguments about this.

(I know I'm years late, so if this is well settled, a quick pointer to that settlement would be much appreciated!)
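For what it's worth, the formal difference between (a) and (b) is easy to state even if nothing settles which is right; here is a minimal sketch, with completely made-up disutility numbers, showing how the two aggregation rules can disagree on exactly this kind of case:

```python
# Two ways of aggregating individual disutilities (all numbers invented for illustration).

def total_disutility(disutilities):
    return sum(disutilities)        # (a) total utilitarianism: "shut up and multiply"

def worst_off_disutility(disutilities):
    return max(disutilities)        # (b) maximin: judge an outcome by its worst-off person

torture = [1000.0]                  # one person suffers enormously
specks  = [0.001] * 10_000_000      # very many people each suffer a tiny bit

print(total_disutility(specks) > total_disutility(torture))          # True: the sum says specks are worse
print(worst_off_disutility(specks) > worst_off_disutility(torture))  # False: maximin says torture is worse
```

So the torture/dust-specks disagreement really is just a disagreement about the aggregation rule; as the replies below note, that choice isn't something the mathematics can make for you.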

Comment author: RichardKennaway 02 July 2014 09:37:21AM *  3 points [-]

So why is the goal of utilitarianism to maximize the sum of utilities?

There are different kinds of utilitarianism. What they have in common is that they recommend maximising some measure of utility. Where they differ is in how that utility is measured, and how different people's utilities are combined. Summing is one way; averaging is another; maximining yet another.

Mathematical arguments can tell you that if a person's preferences have certain properties, a utility measure can be constructed for them (e.g. the VNM theorem). Mathematics can draw out non-obvious properties of proposed measures of utility. But no mathematical argument will tell you the right way to measure and combine utilities, any more than it will tell you that you should be a utilitarian in the first place.

Comment author: [deleted] 02 July 2014 04:44:33PM 0 points [-]

But no mathematical argument will tell you the right way to measure and combine utilities . . .

Much the same could be said about potential probability functions.

I think what I'm looking for is some equivalent to Jaynes's "Desiderata" for probability, but in the realm of either basic utility functions or how to combine them.

. . . any more than it will tell you that you should be a utilitarian in the first place.

Being new to this, I'm also interested in a pointer to some kind of standard argument for (any kind of) utilitarianism. I mean something more than Yvain's wonderful little Consequentialism FAQ.

Comment author: RichardKennaway 03 July 2014 04:49:12PM 0 points [-]

I think what I'm looking for is some equivalent to Jaynes's "Desiderata" for probability, but in the realm of either basic utility functions or how to combine them.

The VNM theorem goes from certain hypotheses about your preferences to the existence of a utility function describing them. However, the utility function is defined only up to an affine transformation. This implies that given only that, there is no way to add up utilities, even the utilities of a single person. (You can, however, take weighted averages of them.) It also deals only with a single person, or rather, a single preference relation. It is silent on the subject of how to combine different people's preference relations or utility functions. There is no standard answer to the question of how to do this.
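A tiny worked example of the affine point (utilities invented for illustration): rescaling one person's utility function by a positive factor is an allowed affine transformation, so it represents exactly the same preferences for that person, yet it can flip which option wins under interpersonal summation.

```python
# Utilities invented for illustration: two people (A, B), two options (x, y).
u_A = {"x": 1.0, "y": 0.0}
u_B = {"x": 0.0, "y": 2.0}

def best_by_sum(uA, uB):
    return max(["x", "y"], key=lambda option: uA[option] + uB[option])

print(best_by_sum(u_A, u_B))           # 'y' wins: 0 + 2 beats 1 + 0

# Rescale A's utility by a positive factor: an allowed affine transformation,
# so it describes exactly the same preferences for A (A still prefers x to y).
u_A_rescaled = {option: 10 * value for option, value in u_A.items()}

print(best_by_sum(u_A_rescaled, u_B))  # 'x' wins now: the sum-ranking flipped
```

Note that the rescaling leaves A's own ranking, and the ordering of A's expected utilities over lotteries, untouched; that is why within-person weighted averages are meaningful while cross-person sums are not pinned down by the theorem.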

Being new to this, I'm also interested in a pointer to some kind of standard argument for (any kind of) utilitarianism.

You could try Peter Singer and the people who take that argument seriously.

Comment author: garabik 02 July 2014 07:03:40AM 2 points [-]

Use non-standard (AKA infinitesimal) numbers: a dust speck is an infinitesimal; there is a clear (and linear) disutility in an increasing number of people with specks in their eyes, but no matter how many of them you sum up, you never reach the disutility of a single person experiencing torture. Add a second order if you want it more finely grained.

(Of course, this breaks down if you have an infinite number of people with dust specks. But our intuition breaks down anyway when faced with the infinite).
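One way to cash this out without actual non-standard analysis is a lexicographic, tuple-valued disutility: compare the severe-harm component first and use the speck count only to break ties. A rough sketch (the two-tier split is an assumption, not anything from the thread):

```python
# Tuple-valued disutility: (number of people tortured, number of people with dust specks).
# Python compares tuples lexicographically, which mimics "a speck is infinitesimal
# relative to torture": no finite number of specks ever outweighs one torture.

def disutility(tortured, specked):
    return (tortured, specked)

print(disutility(1, 0) > disutility(0, 3_000_000_000))  # True: the torture outcome is still worse
print(disutility(0, 2) > disutility(0, 1))              # True: specks still add up among themselves
```

As noted above, this breaks down with infinitely many people, and lexicographic preferences also violate the continuity axiom of VNM, which is the usual formal objection to them.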

Comment author: [deleted] 02 July 2014 04:45:53PM 0 points [-]

But even with that scheme, it seems that you could just as easily want to maximize the minimal utility as maximize the sum.

Comment author: tut 30 June 2014 03:33:05PM 2 points [-]

Has something changed about the voting rules in the last week or so? I started to get the "You don't have enough karma to downvote. You need three more points" message again. But it is always three points (never any other number), even though I haven't lost karma and am still able to downvote some comments, sometimes.

Comment author: Emile 30 June 2014 04:04:38PM 3 points [-]

How much you can downvote is limited by how much karma you have. So it looks like you "spent" all your karma.

You seem to downvote quite a lot, then - are you one of those "downvoting stalkers" we keep hearing about?

Comment author: tut 30 June 2014 04:15:34PM *  2 points [-]

No. Do you think that I would go flaunting that here for no reason if I was? Mostly I just read a lot and don't write so much. And of course writing is what you get karma for.

What's weird is that I always am either 0 points short (able to downvote) or exactly three points short. Never one or two points. And my total karma has not decreased.

Comment author: Emile 30 June 2014 04:50:12PM 2 points [-]

Looking at the code concerning this, "three" isn't hard-coded; it's calculated, but the formula is a bit hairy and relies on a cache, so there could be a bug somewhere.

Or it could be a coincidence :)

Comment author: spxtr 01 July 2014 03:32:56AM *  3 points [-]

Why the Many-Worlds Formulation of Quantum Mechanics is Probably Correct by Sean Carroll.

Our only assumption was that the apparatus obeys the rules of quantum mechanics just as much as the particle does, which seems to be an extremely mild assumption if we think quantum mechanics is the correct theory of reality. Given that, we know that the particle can be in “spin-up” or “spin-down” states, and we also know that the apparatus can be in “ready” or “measured spin-up” or “measured spin-down” states. And if that’s true, the quantum state has the built-in ability to describe superpositions of non-interacting worlds. Not only did we not need to add anything to make it possible, we had no choice in the matter. The potential for multiple worlds is always there in the quantum state, whether you like it or not.

The explanation is at a slightly lower level than the sequences, but it's a concise summary with a healthy dose of proselytization. I think it works nicely.
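For readers who want the argument spelled out, the quoted passage is essentially an appeal to linearity; a minimal sketch in standard textbook notation (not Carroll's exact equations):

```latex
% A minimal sketch of the linearity argument (standard textbook notation,
% not Carroll's exact equations). If the apparatus works correctly on
% definite spin states,
%   |ready>|up>    -->  |measured up>|up>
%   |ready>|down>  -->  |measured down>|down>,
% then linearity of the Schroedinger equation forces, for a superposed particle,
\[
  |\mathrm{ready}\rangle \otimes \bigl(\alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle\bigr)
  \;\longrightarrow\;
  \alpha\,|\mathrm{measured\ up}\rangle\,|{\uparrow}\rangle
  + \beta\,|\mathrm{measured\ down}\rangle\,|{\downarrow}\rangle .
\]
```

The two branches are the "worlds"; whether one then adds a collapse postulate on top of this unitary story is exactly the interpretational dispute.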

Comment author: Luke_A_Somers 01 July 2014 05:04:53PM 0 points [-]

And the comments are predictably horrible. Sigh.

Comment author: Viliam_Bur 02 July 2014 09:46:33AM *  2 points [-]

This one seems interesting:

You could say, “The formalism of QR says that macroscopic systems behave as if there were many worlds.” Or you could say, “The formalism of QR says that macroscopic systems behave as if there were many worlds — and there really are” How is the second an improvement over the first? What does the claim that a hypothesis is “true” add to the claim that it is predictively successful, aesthetically satisfying and productive of new insights?

Seems smart. But then again, why not apply it to all our knowledge? For example, you should say "2 + 2 behaves as if it were 4", because saying that "2 + 2 is 4" does not bring any new insights.

In some technical sense of the word, it's true. You could probably build an AI that processes "2 + 2 behaves as if it were 4" in the same way and with the same speed as "2 + 2 is 4".

I think the difference is mostly psychological, for humans. If you taught people "2 + 2 behaves as if it were 4 (but don't ever say that it is 4, because that's just wrong)", those people could do the simple math, but they would probably be much slower, because of all the time they would have to spend reminding themselves that 2 + 2 behaves as 4, but isn't really 4. They would pay a cognitive tax, which could impact their ability to solve more complex problems.

Or they would gradually develop a belief in belief. They would believe and correctly profess that the dragon, ahem, the collapse is in the garage, but it is invisible, inaudible, and cannot be detected experimentally. -- This is actually kinda scary, if I am correct, because it would mean that people more resistant to forming a belief in belief would have more difficulty in doing quantum physics. Unless they accept the many worlds.

Originally I thought that accepting the many worlds could have the advantage of people being able to think faster and more simply about quantum problems. Not paying the cognitive tax of the dragon in the garage. But that is probably overestimating how much energy other people really invest in reminding themselves about the collapse.

So the question is: those successful quantum scientists who believe in collapse... how often do they really think about the collapse while doing physics? How high is the real cost of having this belief that doesn't pay any rent? Maybe it's trivial. Maybe even smaller than the emotional tax of the frustration of those who believe in many worlds. (Metaphorically said, you could have a tenant who lives in such a ridiculously cheap place that evicting them would actually be more costly than just letting them be.) This is not a Dark Arts argument for believing in collapse, just a question about how much believing in collapse really influences a quantum scientist's everyday work.

Comment author: Luke_A_Somers 02 July 2014 10:58:57AM 0 points [-]

The everyday work? Basically none. Choosing what to study? Perhaps some.

Comment author: redacted 05 July 2014 04:30:18PM 1 point [-]

I’m looking for information about rationalist houses, but the wiki page on the subject is sparse.

The most salient questions for me are:

  • What is their geographical distribution? I know there are plenty in the Bay Area, and I think I have heard that there is only one in NYC.
  • How frequently are there openings?
Comment author: [deleted] 02 July 2014 10:36:33PM 1 point [-]

What (if any) relationship is there between the homotopy/homology of a directed graph and its causal structure?

Comment author: Emile 03 July 2014 09:15:25PM 0 points [-]

(I'm reading Pearl's Causality right now)

I would expect there to be pretty much none, but I only glanced at the homotopy paper; Pearl talks about equivalences between some models (i.e. they give rise to the same probability distribution, so can't be distinguished by purely observational data), and talks about how you can manipulate a graph to get another equivalent graph (reversing arrows under some conditions etc.), but the rules are much more specific than those I saw in the homotopy paper. For example, the substructure A -> B <- C is treated very differently from the substructure A <- B -> C, and I don't expect that kind of asymmetry in homotopy/homology (I may be wrong! I only skimmed it!)
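For anyone who hasn't met that asymmetry: the collider A -> B <- C and the fork A <- B -> C imply different conditional-independence patterns, which is why Pearl's graph-equivalence rules care about arrow directions in a way a purely topological invariant would not. A quick simulation sketch (variable names, sample size, and noise levels are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def partial_corr(x, y, z):
    """Correlation of x and y after linearly regressing out z from each."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

# Collider: A -> B <- C
A, C = rng.normal(size=n), rng.normal(size=n)
B = A + C + 0.1 * rng.normal(size=n)
print(round(np.corrcoef(A, C)[0, 1], 3))    # ~0: A and C marginally independent
print(round(partial_corr(A, C, B), 3))      # strongly negative: dependent once you condition on B

# Fork: A <- B -> C
B2 = rng.normal(size=n)
A2 = B2 + 0.5 * rng.normal(size=n)
C2 = B2 + 0.5 * rng.normal(size=n)
print(round(np.corrcoef(A2, C2)[0, 1], 3))  # clearly positive: marginally dependent
print(round(partial_corr(A2, C2, B2), 3))   # ~0: independent once you condition on B
```

In Pearl's terms the collider is a v-structure, and v-structures (plus the skeleton) are what observationally equivalent DAGs must share; invariants that only see the underlying undirected graph miss this, which supports the "pretty much none" guess.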

Comment author: MrMind 03 July 2014 07:31:06AM 0 points [-]

I have no idea what the causal structure of a digraph is. Can you point me to some resource which explains it?

Comment author: [deleted] 03 July 2014 12:47:48PM 1 point [-]

First chapter of Pearl's book Causality.

Comment author: [deleted] 02 July 2014 12:46:21PM 1 point [-]

Posting this again from the last open thread because I am still researching and would still appreciate assistance or links:

"I've begun researching cryonics to see if I can afford it/want to sign up. Since I know plenty here are already signed up, I was hoping someone could link me to a succinct breakdown of the costs involved. I've already looked over Alcor's webpage and the Cryonics Institute, but I'd like to hear from a neutral party. Membership dues and fees, average insurance costs (average since this would change from person to person), even peripheral things like lawyer fees (I assume you'll need some legal paperwork done for putting your body on ice). All the main steps necessary to signing up and staying safe.

Basically, I would very much appreciate any help in understanding the basic costs and payoffs so I can budget accordingly."

Comment author: Gunnar_Zarncke 01 July 2014 04:26:14PM 1 point [-]

One Inconvenient Application of Utilitarianism:

Consider a class of chores which provide benefit but which most people dislike performing (and which cannot be done away with). Also assume that these chores can be performed by most people. Further, take another class of tasks that can be performed only by a subset of the population and that comes with less displeasure. Also add some neutral tasks.

An example set of tasks could be: dealing with garbage, solving complex math problems, and child care.

How should you assign the tasks from these classes to people?

It appears that those people who can perform the more pleasurable tasks should do so, while the others should perform the unwanted tasks, and the remaining neutral tasks are split equally.

For me this seems kind of unfair. It potentially places the less able people at the less pleasurable end. Moral judgements may vary - but this question at least requires some discussion.

What do you think?
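To make the setup concrete, here is a toy brute-force sketch (the utility numbers and the three-person example are invented for illustration): one person who can do the pleasant specialist task, two who cannot, and the assignment that maximizes total utility does indeed park the generalists on the unpleasant chore.

```python
from itertools import permutations

# Toy utilities of each person performing each task (numbers invented for illustration).
# Person 0 is the only one able to do the pleasant specialist task ("math");
# None marks an assignment that isn't feasible.
tasks = ["garbage", "math", "childcare"]
utility = [
    {"garbage": -5, "math": 3, "childcare": 0},     # person 0: can do everything
    {"garbage": -5, "math": None, "childcare": 0},  # person 1: cannot do math
    {"garbage": -5, "math": None, "childcare": 0},  # person 2: cannot do math
]

def feasible(assignment):
    return all(utility[person][tasks[i]] is not None for i, person in enumerate(assignment))

def total_utility(assignment):
    return sum(utility[person][tasks[i]] for i, person in enumerate(assignment))

best = max(filter(feasible, permutations(range(3))), key=total_utility)
print({tasks[i]: f"person {person}" for i, person in enumerate(best)})
# -> math goes to person 0; the unpleasant garbage duty goes to one of the others
```

The maximization itself contains no fairness term; if fairness is supposed to count, it has to be added to the objective or handled by compensation, which is roughly what the reply below suggests.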

Comment author: RomeoStevens 01 July 2014 06:03:10PM 2 points [-]

Those people can be compensated in other ways. If there is some aspect of your utility that your conception of utilitarianism isn't capturing then you have to figure out how to capture it. Utilitarianism based on simple utility models will always fail.

Comment author: Gunnar_Zarncke 01 July 2014 09:14:24PM 0 points [-]

Fair point.

Comment author: witzvo 01 July 2014 05:37:23PM 2 points [-]

Yootling is one good approach to the problem.

Comment author: bramflakes 01 July 2014 12:22:53AM 1 point [-]
Comment author: ChristianKl 01 July 2014 09:56:52AM 2 points [-]

The article seems to miss the point many times.

I think a useful definition of empathy describes it as the ability to feel what another person is feeling.

For example, it says: "With social relations expanding beyond the circle of close kin, kinship obligations were no longer enough to ensure mutual assistance and stop free riding. There was thus selection for pro-social behavior, i.e., a spontaneous willingness to help not only kin but also non-kin."

Group selection is not a well-accepted phenomenon, especially over a short timeframe of 10,000 years.

Furthermore, the author shies away from following the argument to its logical conclusion. If the author thinks that those people in towns evolved to have more empathy, that basically means that Black people have less empathy than white people. Is that what the author is arguing? That's certainly an interesting claim.

The author doesn't seem to be aware of the tradeoff between dominance and empathy. More testosterone equals more dominance and makes people less empathic. Given differences in penis size and some studies, Blacks might have higher testosterone than Whites. Of course that's a highly controversial debate.

Comment author: bramflakes 01 July 2014 11:06:06AM *  1 point [-]

I don't think it's arguing for group selection; rather, it treats empathy as an adaptation for understanding the mental states of other people so that you can better navigate reciprocal social obligations. So long as effective mechanisms existed to punish free riders, it would be a beneficial adaptation.

I think.

Comment author: ChristianKl 01 July 2014 11:11:20AM 0 points [-]

I don't think it's arguing for group selection; rather, it treats empathy as an adaptation for understanding the mental states of other people so that you can better navigate reciprocal social obligations.

Then why use the word "selection"?

Comment author: bramflakes 01 July 2014 11:13:24AM 0 points [-]

Because it was selected?

Comment author: ChristianKl 01 July 2014 03:20:35PM 0 points [-]

What kind of process do you mean by "selection" if you don't mean group selection?

Comment author: Luke_A_Somers 01 July 2014 05:08:09PM 1 point [-]

Regular old natural selection? Behaving socially benefitted the individual. Doing things for other people didn't just help them - it got their help in return.

Comment author: ChristianKl 01 July 2014 08:39:40PM 1 point [-]

The argument the article made was that empathy reduces free riding. Engaging in free riding almost by definition doesn't produce disadvantages for the individual who engages in free riding.

Comment author: Kaj_Sotala 02 July 2014 04:43:13AM 2 points [-]

It does if others have adaptations for punishing free-riders, or for rewarding non-free-riders.

Comment author: ChristianKl 02 July 2014 08:57:37AM 0 points [-]

Punishing free-riders isn't something I would consider part of empathy. I would think that highly dominant people with a lot of testosterone are more likely to punish free-riders than empathic people are.

Comment author: bramflakes 01 July 2014 06:28:56PM 0 points [-]

... normal selection?

Comment author: Viliam_Bur 02 July 2014 12:09:44PM *  0 points [-]

The article "Tolerate Tolerance" contains a hyperlink to "M*nt*f*x"; twice. When I click on the link, my anti-virus software warns me about "potentially unwanted" content on the page. (What does that mean? It's usually the kind of software that could have a legitimate use, but is also frequently abused, so it is a good idea to warn all users, and allow specific users to disable the warning for specific software. For example: a keylogger.)

I have no idea what kind of "potentially unwanted" software is on the page, and I am not going to investigate. If someone else is an expert, could you please look at it?

If it is something malicious, perhaps the hyperlinks should be removed (1) from the page, and (2) from the e-book.

Comment author: RichardKennaway 02 July 2014 01:57:24PM *  3 points [-]

The tinyurls expand to a FAQ page about the entity who shall not be clearly named, lest it appear, written by someone apparently sane. I didn't get any malware warnings.

If you fill in the asterisks with an e, an i, and an e, then put it into Google, it will tell you everything you want to know, including a hit on the aforementioned FAQ. As the original post says, a legendary AI crackpot. He actually once had an account on LessWrong, very briefly, but (I assume) was instantly banned.

Comment author: DanielDeRossi 02 July 2014 11:10:59AM 0 points [-]

Interesting discussion on philosophical methodology and intuitions in a recent book. http://ndpr.nd.edu/news/39362-philosophy-without-intuitions/

Comment author: [deleted] 01 July 2014 11:28:13AM *  0 points [-]

Ran Prieur linked to this comment on reddit that speculates that processed food (specifically Soylent) is causing colorectal cancer. How plausible is it?

Comment author: RomeoStevens 01 July 2014 06:09:56PM 1 point [-]

I think he is wrong about Soylent, but not because Soylent explicitly optimized for this eventuality: Soylent happens to use oat flour, which is rich in resistant starch. This is exactly the type of "difficult or impossible to digest" thing that the bacteria in our gut feed on.

Processed food's association with colorectal cancer is not related to the bioavailability of its nutrients or to the presence or lack of insoluble fibers in the diet AFAIK.

Comment author: Will_BC 30 June 2014 05:38:52PM 0 points [-]

Why do you think EY uses conspiracies in his fictional writing? He seems to present them in a positive, or at least not clearly negative, light, which is not how I think of conspiracies at all. I notice that I am confused, so I'm trying to gather some other opinions.

Comment author: VAuroch 30 June 2014 09:49:27PM 8 points [-]

I think it stems from the Brennan's World weirdtopia, and the idea that making knowledge freely available makes it feel worthless, while making it restricted to members of a secretive group makes it feel as valuable and powerful as it actually is.

Comment author: drethelin 30 June 2014 07:51:06PM 8 points [-]

HJPEV is a drama queen and likes acting as if he's badass (ignore for the moment whether he is) and sinister and evil: Look at what he calls his army and how he acts around them. Hence calling his thing with Draco the Bayesian Conspiracy. Not everything that takes place in an author's fiction is indicative of something they support.

Comment author: Nornagest 30 June 2014 07:54:13PM *  6 points [-]

Not everything that takes place in an author's fiction is indicative of something they support.

This, however, is a recurring theme in Eliezer's work. I don't think I fully grok the motivations (though I could hazard a guess or two), but it's definitely not just HJPEV's supervillain fetish talking.

Comment author: Eugine_Nier 01 July 2014 05:43:34AM 13 points [-]

Agreed, it's also Eliezer's super-villain fetish thing.

Comment author: Plasmon 01 July 2014 06:58:42AM 7 points [-]

The anecdote in this post, about Fermi, Rabi and Szilard considering keeping the possibility of practical nuclear fission a secret, may shed some light on the subject. He thinks that some knowledge is dangerous enough that people who know it may reasonably want to keep it secret.

(much more recently, there has been some controversy about the publication of a way of obtaining a particularly infectious strain of a certain virus, but I can't find any references for that right now)

Comment author: gwern 01 July 2014 03:59:38PM *  4 points [-]

(much more recently, there has been some controversy about the publication of a way of obtaining a particularly infectious strain of a certain virus, but I can't find any references for that right now)

This is a perennial issue, occurring in various forms relating to the preservation of viruses like smallpox, the sequencing of their genomes, and increasing their virulence. Looking in Google News for 'virus research increase virulence', it seems the most recent such research would be http://www.nature.com/news/biosafety-in-the-balance-1.15447 / http://www.independent.co.uk/news/science/american-scientists-controversially-recreate-deadly-spanish-flu-virus-9529707.html :

Groups led by Ron Fouchier of the Erasmus Medical Center in Rotterdam, the Netherlands, and Yoshihiro Kawaoka of the University of Wisconsin–Madison created a storm in late 2011 when they artificially engineered potentially pandemic forms of the H5N1 avian flu virus. In January last year, researchers ended a voluntary 12-month moratorium on such gain-of-function flu research, which can increase the host range, transmissibility or virulence of viruses (see Nature 493, 460; 2013), and work resumed.

This month, Kawaoka’s group reported that it had engineered a de novo flu virus from wild-avian-flu-strain genes that coded for proteins similar to those in the 1918 pandemic virus (T. Watanabe Cell Host Microbe 15, 692–705; 2014). The researchers were able to make a virulent version that could transmit between ferrets, and they concluded that a 1918-like virus could therefore emerge from wild avian flu viruses.

EDIT: Sandberg provides an amazing quote on the topic: http://www.aleph.se/andart/archives/2014/07/if_nature_doesnt_do_containment_why_should_i.html

Although fellow flu researcher professor Wendy Barclay at Imperial College said there was nothing wrong with doing the research in a BSL-2 lab: “In nature there is no containment. He’s only doing what happens in nature every day.” Which is true for ebola too.

Comment author: Will_BC 01 July 2014 01:36:57PM *  1 point [-]

I think that I remember reading an even better example about publishing scientific results that might have furthered the Nazis' ability to produce a nuclear weapon in HPMOR, though I can't recall where it was exactly. I found that example persuasive, but I considered it a distasteful necessity, not a desirable state of affairs. Hence my confusion at Brennan's world, which I thought, being set in the future of our world, was perhaps post-Singularity, and therefore the epitome of human flourishing. Another commenter asked me if I wouldn't enjoy the thought of being a super-villain, and I thought, um, no, that would be terrible, so maybe there are some Mind Projection issues going on in both directions. I don't know the distribution of people who would gain positive utility from a world of conspiracies, but I'm sure there would be a great deal of disutility for some proportion of current people with current minds. I can see where that world might provide challenge and interest for its inhabitants, but I remain highly skeptical that it's a utilitarian optimum. Using my current brain and assuming stable values, it actually seems pretty dystopian to me, but I'll admit that's a limited way to look at things.

Comment author: MugaSofer 03 July 2014 05:40:05PM *  2 points [-]

I think that I remember reading an even better example about publishing scientific results that might have furthered the Nazis' ability to produce a nuclear weapon in HPMOR, though I can't recall where it was exactly.

Graphite as a neutron modulator, I believe. Ch. 85:

During World War II, there had been a project to sabotage the Nazi nuclear weapons program. Years earlier, Leo Szilard, the first person to realize the possibility of a fission chain reaction, had convinced Fermi not to publish the discovery that purified graphite was a cheap and effective neutron moderator. Fermi had wanted to publish, for the sake of the great international project of science, which was above nationalism. But Szilard had persuaded Rabi, and Fermi had abided by the majority vote of their tiny three-person conspiracy. And so, years later, the only neutron moderator the Nazis had known about was deuterium.

Comment author: ChristianKl 30 June 2014 11:01:18PM 6 points [-]

EY makes complicated arguments. He's not the kind of person to make arguments of the form "X is good and Y is bad". Fiction is about playing with ideas.

As far as I can find, the first instance of the term "Bayesian Conspiracy" appears in a 2003 nonfiction article by Eliezer:

Fun Fact!

Q. What is the Bayesian Conspiracy?

A. The Bayesian Conspiracy is a multinational, interdisciplinary, and shadowy group of scientists that controls publication, grants, tenure, and the illicit traffic in grad students. The best way to be accepted into the Bayesian Conspiracy is to join the Campus Crusade for Bayes in high school or college, and gradually work your way up to the inner circles. It is rumored that at the upper levels of the Bayesian Conspiracy exist nine silent figures known only as the Bayes Council.

At the time it seemed like a fun joke to make, and it stayed. There are also a variety of other arguments to be made that it's sometimes not useful to share all information with outsiders.

Comment author: mwengler 30 June 2014 07:52:57PM 5 points [-]

For the same reason EY supports the censoring of posts on topics he has decided are dangerous for the world to see. He generalizes that if he is willing to hide facts that work against his interests, then others similarly situated to him, but with different interests, will also be willing to work surreptitiously.

Comment author: Will_BC 01 July 2014 03:12:18AM *  4 points [-]

I'm relatively new to the site and I wasn't aware of any censorship. I suppose I can imagine that it might be useful and even necessary to censor things, but I have an intuitive aversion to the whole business. Plus I'm not sure how practical it is, since after you posted that I googled lesswrong censorship and found out what was being censored. I have to say, if they're willing to censor stuff that causes nightmares, then they ought to censor talk of conspiracies, as I can personally attest that that has caused supreme discomfort. They are a very harmful meme, and positing a conspiracy can warp your sense of reality. I have bipolar, and I was taking a medicine that increases the level of dopamine in my brain to help with some of the symptoms of depression. Dopamine (I recently rediscovered) increases your brain's tendency to see patterns, and I had to stop taking a very helpful medication after reading this site. Maybe it would have happened anyway, but the world of conspiracy theories is very dark and my journey there was triggered by his writings. I guess most of the content on this site is disorienting though, but perhaps some clarification about what he thinks the benefits of conspiracies are and their extent should be would help.

Also, the content on this site is pretty hard-hitting in a lot of ways; I find it inconsistent to censor things to protect sensitive people who think about AI but not people who are sensitive to all the other things that are discussed here. I think it's emblematic of a broader problem with the community, which is that there's a strong ingroup-outgroup barrier, which is a problem when you're trying to subsist on philanthropy and the ingroup is fairly tiny.

Comment author: ChristianKl 01 July 2014 09:33:54AM 0 points [-]

Maybe it would have happened anyway, but the world of conspiracy theories is very dark and my journey there was triggered by his writings.

Many websites about conspiracy theories don't care much about the truth. They don't go through the work of checking whether what they are saying is true.

On the other hand, organisations such as P2 exist or existed. The Mafia exists. To the extent that we care about truth, we can't claim that there aren't groups of people who coordinate in secret for the benefit of their members. Italy is a pretty good country to think about when you want to think about conspiracies, because there is a lot of publicly available information.

It's actually pretty easy to see flaws in the argument of someone who claims that the US government brought down the twin towers on 9/11 via explosives if you are actually searching for flaws and not only searching for evidence that the claim might be true. The same goes for lizard overlords.

I guess most of the content on this site is disorienting though, but perhaps some clarification about what he thinks the benefits of conspiracies are and their extent should be would help.

Learn to live with not knowing things. Learn to live with uncertainty. Living with uncertainty is one of the core skills of a rationalist. If you don't know, then you don't know, and wanting to know doesn't change that. We live in a very complex world that we don't fully understand.

Plus I'm not sure how practical it is, since after you posted that I googled lesswrong censorship and found out what was being censored.

You found out what was censored in a way where you don't understand the debate that was censored in depth and you took no emotional harm.

Comment author: Jiro 01 July 2014 02:46:20PM 1 point [-]

Learning to live with not knowing things is good advice if you are trying to choose between "I explain this by saying that people are hiding things" and "I don't have an explanation".

Learning to live with not knowing things is poor advice in a context where people are actually hiding things from you and what is not known is what the people are hiding rather than whether the people are hiding something. It is especially poor advice where there is a conflict of interest involved--that is, when the same people telling you you'd be better off not knowing also stand to lose from you knowing.

Needless to say, 9/11 and lizard conspiracy theories fall in the first category and the material that has been censored from lesswrong falls in the second category.

Comment author: ChristianKl 01 July 2014 04:22:30PM 3 points [-]

Learning to live with not knowing things is poor advice in a context where people are actually hiding things from you and what is not known is what the people are hiding rather than whether the people are hiding something.

No. If you can't stand thinking that you don't know how things work, you are pretty easy to convince of a lie. You take the first lie that makes a bit of sense in your view of the world. The lie makes it feel like you understand the world. It feels better than uncertainty. Any decent organisation that operates in secret puts out lies to distract people who want to know the truth.

Andy Müller-Maguhn stood in front of the Chaos Computer Congress in Germany and managed to give a good description of how the NSA surveils the internet and how the German government lets them spy on German soil. At the time you could have called it a conspiracy theory. Those political Chaos Computer Club people are very aware of what they know and where they are uncertain. That's required if you want to reason clearly about hidden information.

Needless to say, 9/11 and lizard conspiracy theories fall in the first category and the material that has been censored from lesswrong falls in the second category.

When it comes to 9/11, the government does hide things. 9/11 is not an event where all information is readily available. It's pretty clear that the names of some Saudis are hidden. Bin Laden comes from a rich Saudi family, and the US wants to keep a good relationship with the Saudi government. I think it's pretty clear that there is some information that the US didn't want to have in the 9/11 report because the US doesn't want to damage the relationship with the Saudis.

Various parts of the NSA and CIA do not want to share all their information about what they are doing with congressional inquiries. As a result they hid information from the 9/11 commission. The NSA wants to keep a lot of stuff out of the public eye that could be found out if a congressional commission dug around and got full cooperation. The chief of the NSA lied under oath to Congress about the US spying program. A congressional commission that investigated 9/11 fully would want to look at all the evidence that the NSA gathered at that point, and that's not what the NSA wants, even if the NSA didn't do anything to make 9/11 happen.

If someone finds evidence of the NSA withholding information from a congressional commission, that shouldn't surprise you at all, nor should it increase your belief that the NSA orchestrated 9/11, because they are always hiding stuff.

Information about Al Qaeda's support for the Muslim fighters whom NATO helped in the fight for the independence of Kosovo isn't clear.

The extent to which Chechen Muslim freedom fighters are financed by the Saudis or by Western sources isn't clear. The same goes for the Uyghurs.

General information about the identities of people who did short selling before 9/11 was hidden because the US government just doesn't release all information about all short selling publicly.

The problem with 9/11 is that people go to school and learn that the government is supposed to tell them the truth and not hide things. Then they grow up a bit and are faced with a world where the government constantly hides information and lies. Then those people take the evidence that the government hides information in a case like 9/11 as evidence that the US government caused the twin towers to be destroyed with dynamite.

Politically, the question of whether to take 9/11 as a lesson to cut the money flow to Muslim 'freedom fighters' in Chechnya does matter, and it's something where relevant information gets withheld.

Comment author: Jiro 01 July 2014 05:34:06PM 1 point [-]

I think you are misunderstanding me. The point is that there are two scenarios:

1) Someone doesn't really know anything about some subject, but they find a conspiracy scenario appealing because they would rather "know" an explanation with little evidence behind it than admit that they don't know.

2) Information definitely is being hidden from someone, and they say "I want to know that information".

Both of these involve someone wanting to know, but "wanting to know" is being used in very different ways. If you say that people should "learn to live without knowing things", that's a good point in the first scenario but not so good in the second scenario. And the second scenario is what's taking place for the information that has been censored from lesswrong. (Considering that your reply was pretty much all about 9/11, do you even know what is being referred to by information that has been censored from lesswrong?)

Comment author: jimmy 01 July 2014 07:42:16PM 3 points [-]

"learning to live without knowing things" doesn't mean that you don't value information. It means that when you can't/don't know, you're not in constant suffering. It means that you don't get all freaked out and desperate for anything that looks like an answer (e.g. a false conspiracy theory)

It's the difference between experiencing crippling performance anxiety and just wanting to give a good performance. The difference between "panic mode" and "optimizing mode". Once you can live with the worst case, fear doesn't control you any more - but that doesn't mean you're not motivated to avoid the worst case!

Comment author: James_Miller 01 July 2014 05:28:41AM 0 points [-]

I find it inconsistent to censor things to protect sensitive people who think about AI but not people who are sensitive to all the other things that are discussed here.

To the extent there is censorship of dangerous information on LW, the danger is to the future of mankind rather than to the (very real and I don't mean to minimize this) feelings of readers.

Comment author: Will_BC 01 July 2014 06:13:11AM *  2 points [-]

One could make the argument that anything that harms the mission of lesswrong's sponsoring organizations is to the detriment of mankind. I'm not opposed to that argument, but googling censorship of lesswrong did not turn up anything I considered to be particularly dangerous. Maybe that just means that the censorship is more effective than I would have predicted, or is indicative of a lack of imagination on my part.

Comment author: Viliam_Bur 02 July 2014 10:21:59AM *  0 points [-]

I'd say that "censorship" (things that could be classified or pattern-matched to this word) happens less than once in a year. That could actually contribute to why people speak so much about it; if it happened every day, it would be boring.

From my memory, this is "censored":

  • inventing scenarios about Pascal's mugging by AI
  • debating, even hypothetically, harm towards specific people or organization
  • replying to a downvoted post (automatically penalized by -5 karma)

Options 2 and 3 are just common sense, and could happen on any website. Thus, most talk about "censorship" on LW focuses on option 1.

(By the way, if you learned about the "basilisk" on RationalWiki, here is a little thing I just noticed today: The RW article has a screenshot of dozens of deleted comments, which you will obviously associate with the incident. Please note that the "basilisk" incident happened in 2010, and the screenshot is from 2012. So this is not the censorship of the original debate. It is probably censorship of some "why did you remove this comment two years ago? let's talk about it forever and ever" meta-threads that were quite frequent, and IMHO quite annoying, at one time.)

Also, when a comment or article is removed, at least the message about the removal stays there. There is no meta-censorship (trying to hide the fact that censorship happened). If you don't see messages about removed comments at some place, it means no comments were removed there.

Comment author: lmm 04 July 2014 10:58:59PM 0 points [-]

There is no meta-censorship (trying to hide the fact that censorship happened).

And yet earlier in your post you're talking about some posts in 2012 about censorship in 2010 being deleted. Smells like meta-censorship to me.

Comment author: Viliam_Bur 04 July 2014 11:29:23PM *  1 point [-]

By meta-censorship I meant things like removing the content from the website without a trace, so unless you look at the google cache, you have no idea that anything happened, and unless someone quickly makes a backup, you have no proof that it happened.

Leaving the notices "this comment was removed" on the page is precisely what allowed RW to make a nice screenshot about LW censorship. LW itself provided evidence that some comments were deleted. Providing a hyperlink instead of a screenshot would probably give the same information.

Also, I am mentioning the basilisk now, and I have above 95% confidence that this comment will not be deleted. (One of the reasons is that it doesn't get into details; it doesn't try to restart the whole debate. Another reason is that I don't start a new thread.)

Comment author: XiXiDu 01 July 2014 12:32:06PM 1 point [-]

I find it inconsistent to censor things to protect sensitive people who think about AI but not people who are sensitive to all the other things that are discussed here.

To the extent there is censorship of dangerous information on LW, the danger is to the future of mankind rather than to the (very real and I don't mean to minimize this) feelings of readers.

Have you asked the people who are able to censor information on LW, or do you just assume this to be the case?

Do the people in charge of LW censor information that is neither dangerous nor spam?

Comment author: James_Miller 01 July 2014 03:29:57PM 2 points [-]

I infer it's the case from being a regular reader of LW. I don't know if LW censors other types of information, in part because spam is not a well-defined category.

Comment author: [deleted] 01 July 2014 08:23:01AM 1 point [-]

There's not a lot of actual censorship of dangerous information "for the future of mankind". Or at least, I rate that as fairly unlikely, given that when the scientific groundwork for a breakthrough has been laid, multiple people usually invent it in parallel, close to each other in time. Which means that unless you can get everyone who researches dangerous-level AI onto LW, censoring on LW won't really help; it will just ensure that someone less scrupulous publishes first.

Comment author: Nornagest 02 July 2014 12:00:37AM *  3 points [-]

"Three may keep a secret, if two of them are dead."

Conspiracy is hard. If you don't have actual legal force backing you up, it's nearly impossible to keep information from spreading out of control -- and even legal force is by no means a sure thing. The existence of the Groom Lake air station, for example, was suspected for decades before publicly available satellite images made it pointless to keep up even the pretense of secrecy.

For an extragovernmental example, consider mystery religions. These aren't too uncommon: they're not as popular as they once were, but new or unusual religions still often try to elide the deepest teachings of their faiths, either for cultural/spiritual reasons (e.g. Gardnerian Wicca) or because they sound as crazy as six generations of wolverines raised on horse tranquilizers and back issues of Weird Tales (e.g. Scientology).

Now, where's it gotten them? Well, Gardnerian Wiccans will still tell you they're drinking from a vast and unplumbed well of secret truths, but it's trivially easy to find dozens of different Books of Shadows (some from less restrictive breakaway lineages, some from people who just broke their oaths) that agree on the broad strokes and many of the details of the Gardnerian mysteries. (Also many others that bear almost no resemblance beyond the name and some version of the Lesser Banishing Ritual of the Pentagram, but never mind that.) As to Scientology, Operation Clambake (xenu.net) had blown that wide open years before South Park popularized the basic outline of what's charmingly known as "space opera"; these days it takes about ten minutes to fire up a browser and pull down a more-or-less complete set of doctrinal PDFs by way of your favorite nautical euphemism. Less if it's well seeded.

"But these are just weird minority religions," you say? "Knowing this stuff doesn't actually harm my spiritual well-being, because I only care about the fivefold kisses when my SO's involved and there's no such thing as body thetans"? Sure, but the whole point of a mystery religion is selecting for conviction. Typically they're gated by an initiation period measured in years and thousands of dollars, not to mention some truly hair-raising oaths; I don't find it plausible that science broadly defined can do much better.

Comment author: Salemicus 03 July 2014 09:45:01PM 3 points [-]

You are clearly right that conspiracy is hard. And yet, it is not impossible. Plenty of major events are caused by conspiracies, from the assassination of Julius Caesar to the recent coup in Thailand. In addition, to truly prevent a conspiracy, it is often necessary to do more than merely reveal it; if the conspirators have plausible deniability, then revealing (but not thwarting) the conspiracy can actually strengthen the plotters' hands, as they can now co-ordinate more easily with outside supporters.

Successful conspiracies, like any other social organization, need incentive compatibility. Yes, it's easy to find out the secrets of the Scientology cult. Not so easy to find out the secret recipe for Coca Cola, though.

Comment author: [deleted] 02 July 2014 10:08:27AM 3 points [-]

So I'm the only one here who actually took a hair-raising oath before making an account?

Comment author: gwern 02 July 2014 04:38:44PM 3 points [-]

You're not allowed to talk about the oath! Why am I the only one who seems able to keep it?

Comment author: [deleted] 02 July 2014 10:11:00PM 1 point [-]

Because there are different factions at work, you naked ape.

Comment author: Nornagest 02 July 2014 04:20:25PM 2 points [-]

Nah, I hear we traditionally save that for after you earn your 10,000th karma point and take the Mark of Bayes.

Comment author: RichardKennaway 01 July 2014 10:11:18AM 8 points [-]

Conspiracy is the default mode of a group of people getting anything done. Every business is a conspiracy. They plot and scheme within their "offices", anonymous buildings with nothing but their name on the front door. They tell no-one what they're doing, beyond legal necessity, and aim to conquer the world by... well, usually the evil plan is to make stuff that people will want to buy.

No organisation conducts all its business in public, whatever its aims. Even if you find one that seems to, dollars to cents you're not looking at its real processes. There needn't be anything sinister in this, although of course sometimes there is.

Every one of us is a conspiracy of one.

Comment author: fubarobfusco 01 July 2014 06:24:21AM 3 points [-]

I'm guessing it's cultural influence from Discordianism, Shea and Wilson's Illuminatus!, or the like. Conspiracies, cults, and initiatory orders are all pretty common themes in Discordian-influenced works. Some are destructive, some are constructive, some are both, and some run around in circles.

Comment author: Kaj_Sotala 01 July 2014 02:55:46PM 1 point [-]

I would assume the main explanation to be just "conspiracies are cool", the same reason why they pop up in all kinds of other fiction ranging from The X-Files to Babylon 5 to Deus Ex to the Illuminati card game to whatever.