
Open Thread, May 19 - 25, 2014

2 Post author: somnicule 19 May 2014 04:49AM

Previous Open Thread

 

You know the drill - If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

 

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should start on Monday, and end on Sunday.

4. Open Threads should be posted in Discussion, and not Main.

 

Comments (289)

Comment author: Gunnar_Zarncke 28 May 2014 08:24:56PM 1 point [-]

What do you think about using visualizations for giving "rational" advice in a compact form?

Case in point: I just stumbled over relationtips by informationisbeautiful and thought: That is nice.

This also reminds me of the efficiency of visual presentation explained in The Visual Display of Quantitative Information by Tufte.

And I wonder how I might quote these in the Quotes Thread...

Comment author: Oscar_Cunningham 29 May 2014 12:33:31PM 1 point [-]

Modern "infographics" like those by informationisbeautiful are very often terrible in exactly the ways that Tufte warns against. They are often beautiful, but rarely excel at their original purpose of displaying data.

Comment author: Gunnar_Zarncke 29 May 2014 04:37:10PM 0 points [-]

I agree that many infographics (the contraction is the first hint) are more beautiful than informational.

But yours was a general remark. Did you mean it to imply that the idea isn't good or just my particular example?

Comment author: Oscar_Cunningham 30 May 2014 10:43:16AM 0 points [-]

I think that good graphics illustrating some point about rationality would be a really cool thing to have in the quotes thread.

Comment author: Raythen 25 May 2014 11:46:07AM *  2 points [-]

Asking "Would an AI experience emotions?" is akin to asking "Would a robot have toenails?"

There is little functional reason for either of them to have those, but they would if someone designed them that way.

Edit: the background for this comment - I'm frustrated by the way AI is represented in (non-rationalist) fiction.

Comment author: ChristianKl 28 May 2014 05:23:01AM 1 point [-]

I think you are plain wrong.

There's a lot of thought in AI development about mimicking human neural decision-making processes, and it's quite possible that the first human-level AGI will be similar in structure to human decision making. Emotions are a core part of how humans make decisions.

Comment author: Raythen 28 May 2014 08:53:32AM *  0 points [-]

I should probably make clear that most of my knowledge of AI comes from LW posts, I do not work with it professionally, and that this discussion is on my part motivated by curiosity and desire to learn.

Emotions are a core part of how humans make decisions.

Agreed.


Your assessment is probably more accurate than mine.

My original line of thinking was that while AIs might use quick-and-imprecise thinking shortcuts triggered by pattern-matching (which is sort of how I see emotions), human emotions are too inconveniently packaged to be of much use in AI design. (While being necessary, they also misfire a lot; coping with emotions is an important skill to learn; in some situations emotions do more harm than good; all in all this doesn't seem like good mind design.) So I was wondering if whatever an AI uses for its thinking, we would even recognize as emotions.

My assessment now is that even if AI uses different thinking shortcuts than humans do, they might still misfire. For example, I can imagine a pattern activation triggering more patterns, which in turn trigger more and more patterns, resulting in a cascade effect not unlike emotional over-stimulation/breakdown in humans.
So I think it's possible that we might see AI having what we would describe as emotions (perhaps somewhat uncanny emotions, but emotions all the same).


P. S. For the sake of completeness: my mental model also includes biological organisms needing emotions in order to create motivation (rather than just drawing conclusions). (example: fear creating motivation to escape danger).
An AI should already have a supergoal, so it does not need "motivation". However, it would need to see how its current context connects to its supergoal, and create/activate subgoals that apply to the current situation; here once again thinking shortcuts might be useful, perhaps not too unlike human emotions.

Example: the AI sees a fast-moving object that it predicts will intersect its current location, and a thinking shortcut activates a dodging strategy. This is a subgoal to the goal of surviving, which in turn is a subgoal to the AI's supergoal (whatever that is).

Having a thinking shortcut (this one we might call "reflex" rather than "emotion") results in faster thinking. Slow thinking might be inefficient to the point of being fatal: "Hm... that object seems to be moving mighty fast in my direction... if it hits me it might damage/destroy me. Would that be a good thing? No, I guess not - I need to be functional in order to achieve my supergoal. So I should probably dodg.. <CRASH>"
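The fast-path/slow-path idea above can be sketched in a few lines. This is only a toy illustration; the percept fields, thresholds, and function names are all made up for the example:

```python
def slow_deliberation(percept):
    # Full goal-tree reasoning: checks the threat against the supergoal.
    # Accurate, but too slow when the object is already inbound.
    if percept["speed"] > 10 and percept["on_collision_course"]:
        return "dodge"
    return "continue"

# Cached (pattern predicate, response) pairs: the "reflexes".
REFLEXES = [
    (lambda p: p["speed"] > 10 and p["on_collision_course"], "dodge"),
]

def act(percept):
    # Fast path: a matching pattern fires immediately, no deliberation.
    for pattern, response in REFLEXES:
        if pattern(percept):
            return response
    # Slow path: fall back to deliberate goal reasoning for everything else.
    return slow_deliberation(percept)
```

The failure mode described earlier (cascading pattern activations) would correspond to reflexes whose responses themselves trigger further reflexes.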

Comment author: ChristianKl 28 May 2014 12:28:47PM 0 points [-]

AI should already have a supergoal so it does not need "motivation".

We know relatively little about what it takes to create an AGI. Saying that an AGI should have feature X or feature Y to be a functioning AGI is drawing too many conclusions from the data we have.

On the other hand, we know that the architecture on which humans run produces "intelligence", so that's at least one possible architecture that could be implemented in a computer.

Bootstrapping AGI from Whole Brain Emulations is one of the ideas under discussion, even on LessWrong.

Comment author: Mitchell_Porter 26 May 2014 07:34:10AM 3 points [-]

What sort of AIs have emotions? How can I tell whether an AI has emotions?

Comment author: polymathwannabe 26 May 2014 12:58:46PM 0 points [-]

Given how emotions are essential to decision-making, I'd ask what sort of AI doesn't have emotions.

Comment author: DanielLC 04 June 2014 12:45:48AM 1 point [-]

I'd say that a chess-playing program does not have emotions, and a norn does.

Comment author: DanielLC 04 June 2014 12:44:20AM 0 points [-]

Define "emotion".

I find it highly unlikely robots would have anything corresponding to any given human emotion, but if you just look at the general area in thingspace that emotions are in, and you're perfectly okay with the idea of finding a new one, then it would be perfectly reasonable for robots to have emotions. For one thing, general negative and positive emotions would be pretty important for learning.

Comment author: XiXiDu 26 May 2014 08:07:45AM *  0 points [-]

I have never thought about this, so this is a serious question. Why do you think evolution resulted in beings with emotions and what makes you confident enough that emotions are unnecessary for practical agents that you would end up being frustrated about the depiction of emotional AIs created by emotional beings in SF stories?

From Wikipedia:

Emotion is often the driving force behind motivation, positive or negative. An alternative definition of emotion is a "positive or negative experience that is associated with a particular pattern of physiological activity."

...cognition is an important aspect of emotion, particularly the interpretation of events.

Let's say the AI in your story becomes aware of an imminent and unexpected threat and allocates most resources to dealing with it. This sounds like fear. The rest is semantics. Or how exactly would you tell that the AI is not in fear? I think we'll quickly come up against the hard problem of consciousness here and whether consciousness is an important feature for agents to possess. And I don't think one can be confident enough about this issue in order to become frustrated about a science fiction author using emotional terminology to describe the AIs in their story (a world in which AIs have "emotions" is not too absurd).

Comment author: Error 25 May 2014 02:26:54AM 2 points [-]

Probably too late, but: I have the impression there's a substantial number of anime fans here. Are there any lesswrongers at or near MomoCon (taking place in Atlanta downtown this weekend) and interested in meeting up?

Comment author: patrickmclaren 24 May 2014 12:42:42AM *  9 points [-]

I've been searching LessWrong for prior discussions on Anxiety and I'm not getting very many hits. This surprised me. Obviously there have been well-developed discussions on akrasia and ugh fields, yet little regarding their evil siblings Anxiety, Panic, and Mania.

I'd be interested to hear what people have to say about these topics from a rationalist's perspective. I wonder if anyone has developed any tricks, to calm the storm, and search for a third alternative.

Of course, first, and foremost, in such situations one should seek medical advice.

EDIT: Some very slightly related discussions: Don't Fear Failure, Hoping to start a discussion about overcoming insecurity.

Comment author: ChristianKl 30 May 2014 05:50:45PM 0 points [-]

I think you'll probably find some relevant hits if you search for depression. In particular you will find recommendations of Burns's Feeling Good Handbook.

Comment author: TylerJay 27 May 2014 01:10:28AM 0 points [-]

A combination of controlled breathing, visualization, and mantra is pretty effective for me at battling acute anxiety and panic attacks. Personally, I use the Litany Against Fear from Dune. I'm happy to elaborate if there's any interest.

Comment author: shminux 23 May 2014 08:55:20PM *  12 points [-]

A few questions about cryonics I could not find answers to online.

What is the fraction of deceased cryo subscribers who got preserved at all? Of those who are, how long after clinical death? Say, within 15 min? 1 hour? 6 hours? 24 hours? Later than that? With/without other remains preservation measures in the interim?

Alcor appears to list all its cases at http://www.alcor.org/cases.html , and CI at http://198.170.115.106/refs.html#cases , though the last few case links are dead. So, at least some of the statistics can be extracted. However, it is not clear whether failures to preserve are listed anywhere.

Some other relevant questions which I could not find answers to:

  • How often do cryo memberships lapse and for what reasons?

  • How successful are last-minute cryo requests from non-subscribers?

Comment author: NancyLebovitz 23 May 2014 03:59:08PM 3 points [-]

Slatestarcodex isn't loading for me. It's obviously loading for other people-- I'm getting email notifications of comments. I use chrome.

Anyone have any idea what the problem might be?

Comment author: Yvain 24 May 2014 02:48:38AM *  2 points [-]

It wasn't working for me either all day. A few hours ago it mysteriously reappeared. I called tech support. They said they had no explanation.

It should be up again now. I will investigate better hosting solutions.

Comment author: CAE_Jones 23 May 2014 07:03:50PM *  0 points [-]

It's not loading for me, either; I'm getting my ISP's "website suggestions" page, which tells me it's probably a DNS issue (this page theoretically only shows up when a domain name is not registered).

I wound up googling the URL in the "Recent on Rationality Blogs" sidebar, and was able to read Google's cache of the latest post. Said cache includes no comments. I did not try to comment from the cached page (I didn't expect it to work).

[edit: This is with Firefox.]

Comment author: Randy_M 23 May 2014 06:20:58PM *  0 points [-]

Working on my cheap mobile phone, not on my new laptop with IE. Which is a shame, because it's a very good post, but I'm going to be too far behind to contribute to any comment threads.

edit: Shame for me, I mean, not for the observer concerned with signal-to-noise ratio.

Comment author: Nornagest 23 May 2014 05:23:22PM 0 points [-]

It's down for me, too. Ping is failing to resolve the address, so I think we're looking at a DNS issue.

Comment author: Error 23 May 2014 05:20:04PM *  0 points [-]

On my Firefox it works fine. If it's loading for everyone else and not you, some things you might look at: See if you can ping the site, and see if it works under a clean browser profile. I'm not sure how to get one on Chrome but I'm sure there's a way.

You might also post whatever error message you're getting, if any. "Not loading" covers a fairly broad range of behavior.

[Edit: It worked when I was at work, but does not work at home. And yes, it looks like a DNS issue.]

Comment author: jaime2000 23 May 2014 04:49:19PM 0 points [-]

Using Chrome as well, not having a problem. Have you e-mailed Yvain at the reverse of gro.htorerihs@ttocs?

Comment author: Oscar_Cunningham 23 May 2014 04:45:39PM 0 points [-]

It's not loading for me either, nor for downforeveryoneorjustme.com. I use firefox.

Comment author: Leonhart 23 May 2014 02:40:28PM *  2 points [-]

ETA: Problems solved, LW is amazing, love you all, &c.

I am in that annoying state where I vaguely recall the shape of a concept, but can't find the right search terms to let me work out what it was I originally read. Does anyone recognise either of the things below?

  • a business-ethics test-like-thing where someone left confectionary and a donation box out unsupervised, and then looked at who paid, in some form.

(One of the many situations where googling "morality of doughnuts" doesn't help much)

  • a survey-design concept where instead of asking people "do you do x", you ask them "do you think your co-workers do x" and that is taken as more representative, or used to debias the first answer; or, um, something.

Any help appreciated!

Comment author: Unnamed 23 May 2014 06:09:09PM 2 points [-]
Comment author: Leonhart 24 May 2014 12:06:16PM 0 points [-]

The very thing. Thank you!

Comment author: [deleted] 23 May 2014 03:08:11PM 2 points [-]

I wonder if you aren't thinking of this bagel vendor.

Comment author: Leonhart 23 May 2014 03:20:36PM 0 points [-]

Bingo. Thank you!

Comment author: JoshuaFox 23 May 2014 01:24:50PM *  1 point [-]

Could someone write a Wiki article on Updateless Decision Theory? I'm looking for an article that is not too advanced and not too basic, and I think that a typical wiki article would be just right.

Comment author: charlemango 23 May 2014 02:25:10AM 6 points [-]

What would happen if citizens had direct control over where their tax dollars went? Imagine a system like this: the United States government raises the average person's tax by 3% (while preserving the current progressive tax rates). This will be a "vote-with-your-wallet" tax, where the citizen can choose where the money should go. For example, he may choose to allocate his tax funds towards the education budget, whereas someone else may choose to put the money towards healthcare instead. Such a system would have the benefit of being democratic in deciding the nation's priorities, while bypassing political gridlock. What would be the consequences of this system?

Comment author: [deleted] 25 May 2014 02:41:12PM 3 points [-]

In Italy there's something similar: you can choose whether 0.8% of your income taxes goes to the government or to an organized religion of your choice (if you don't choose, it's shared in proportion to the number of people who choose each church), and 0.5% goes to a non-profit or research organization of your choice.

Comment author: Nornagest 23 May 2014 06:18:49PM *  9 points [-]

The biggest problem I can see with this is inefficient resource allocation. Others have mentioned ways of giving money to yourself, but we could probably minimize that with conflict-of-interest controls or by scoping budgetary buckets correctly. But there's no reason, even in principle, to think that the public's willingness to donate to a government office corresponds usefully to its actual needs.

As a toy example, let's say the public really likes puppies and decides to put, say, 1% of GDP into puppy shelters and puppy-related veterinary programs. Diminishing returns kick in at 0.1% of GDP; puppies are still being saved, but at that point marginal dollars would be doing more good directed at kitten shelters (which were too busy herding cats to spend time on outreach in the run-up to tax season). The last puppy is saved at 0.5% of GDP, and the remaining 0.5% -- after a modest indirect subsidy to casinos and makers of exotic sports cars -- goes into the newly minted Bureau for Puppy Salvation's public education fund.

Next tax cycle, that investment pays off and puppies get 2% of GDP.
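The misallocation in this toy example is easy to make concrete. The curve shapes and all the numbers below are invented to match the story, nothing more:

```python
def puppies_saved(gdp_frac):
    # Diminishing returns set in past 0.1% of GDP; the last puppy is
    # saved at 0.5%, so the curve is completely flat beyond that point.
    return min(gdp_frac, 0.005) ** 0.5

def kittens_saved(gdp_frac):
    # Assume an identical but entirely unfunded kitten program.
    return min(gdp_frac, 0.005) ** 0.5

def marginal(f, x, dx=1e-8):
    # Approximate marginal benefit of the next dollar at allocation x.
    return (f(x + dx) - f(x)) / dx

# At the voted allocation (1% to puppies, 0% to kittens), marginal
# puppy dollars buy nothing while marginal kitten dollars buy a lot.
assert marginal(puppies_saved, 0.010) == 0.0
assert marginal(kittens_saved, 0.000) > 0
```

A budget office equalizing marginal returns would move the last 0.9% of GDP elsewhere; a popularity vote has no mechanism that does this.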

Comment author: JoshuaFox 23 May 2014 06:43:27AM 3 points [-]

What would happen if citizens had direct control over where their tax dollars went?

That's what the free market looks like -- and the dollars involved are no longer tax.

I suppose the government could still tax and then ask you whether you'd rather use it to buy a flatscreen TV for your living room or better air conditioning for Army tents in Afghanistan, or they could even restrict options to typical government spending.

Take a look at Hanson's proposals for allocating government resources with prediction markets.

Comment author: MathiasZaman 23 May 2014 09:41:11AM 0 points [-]

The scenario described is different from a free market in that you still have to pay taxes. You just get more control over how the government can spend your tax money. You can't use the money to buy a flatscreen TV, but you can decide if it gets spent on healthcare, the military, NASA...

Comment author: ygert 23 May 2014 02:42:25PM *  1 point [-]

or they could even restrict options to typical government spending.

JoshuaFox noted that the government might tack on such restrictions.

That said, it's not so clear where the borders of such restrictions would be. Obviously you could choose to allocate the money to the big budget items, like healthcare or the military. But there are many smaller things that the government also pays for.

For example, the government maintains parks. Under this scheme, could I use my tax money to pay for the improvement of the park next to my house? After all, it's one of the many things that tax money often works towards. But if you answer affirmatively, then what if I work for some institute that gets government funding? Could I increase the size of the government grants we get? After all, I always wanted a bigger budget...

Or what if I'm a government employee? Could I give my money to the part of government spending that is assigned as my salary?

I suppose the whole question is one of specificity. Am I allowed to give my money to a specific park, or do I have to give it to parks in general? Can I give it to a specific government employee, or do I have to give it to the salary budget of the department that employs that employee? Or do I have to give it to that department "as is", with no restrictions on what it is spent on?

The more specificity you add, the more abusable it is, and the more you take away, the closer it becomes to the current system. In fact, the current system is merely this exact proposal, with the specificity dial turned down to the minimum.

Think about the continuum between what we have now and the free market (where you can control exactly where your money goes), and it becomes fairly clear that the only points which have a good reason to be used are the two extreme ends. If you advocate a point in the middle, you'll have a hard time justifying the choice of that particular point, as opposed to one further up or down.

Comment author: asr 23 May 2014 03:55:39PM *  3 points [-]

Think about the continuum between what we have now and the free market (where you can control exactly where your money goes), and it becomes fairly clear that the only points which have a good reason to be used are the two extreme ends. If you advocate a point in the middle, you'll have a hard time justifying the choice of that particular point, as opposed to one further up or down.

I don't follow your argument here. We have some function that maps from "levels of individual control" to happiness outcomes. We want to find the maximum of this function. It might be that the endpoints are the max, or it might be that the max is in the middle.

Yes, it might be that there is no good justification for any particular precise value. But that seems both unsurprising and irrelevant. If you think that our utility function here is smooth, then sufficiently near the max, small changes in the level of social control would result in negligible changes in outcome. Once we're near enough the maximum, it's hard to tune precisely. What follows from this?
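The "near the maximum, small changes don't matter" point is just the second-order Taylor argument, and it is easy to check numerically. The utility curve below is entirely hypothetical, chosen only to be smooth with an interior peak:

```python
# Hypothetical smooth utility over "level of individual control" in [0, 1],
# peaked somewhere in the middle.
def utility(control):
    return -(control - 0.6) ** 2

best = 0.6
# Moving a distance d off the peak costs only ~d^2 in utility, so small
# tuning errors near a smooth maximum cause negligible harm.
losses = [utility(best) - utility(best + d) for d in (0.1, 0.01, 0.001)]
```

Each tenfold shrink in the tuning error shrinks the utility loss a hundredfold, which is why "we can't justify a precise point" is compatible with "a point in the middle is best".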

Comment author: ygert 25 May 2014 10:38:14AM 0 points [-]

Hmm. To me it seemed intuitively clear that the function would be monotonic.

In retrospect, this monotonicity assumption may have been unjustified. I'll have to think more about what sort of curve this function follows.

Comment author: Lumifer 23 May 2014 03:26:22PM *  2 points [-]

If you advocate a point in the middle, you'll have a hard time justifying the choice of that particular point, as opposed to one further up or down.

Trouble with justifying does not necessarily mean that the choice is unjustified.

I like to wash my hands in warm water. I would have a hard time justifying a particular water temperature, as opposed to one slightly colder or slightly warmer. This does not mean that "the only points which have a good reason to be used" are ice-cold water and boiling water.

Comment author: Randy_M 23 May 2014 05:56:44PM 0 points [-]

You can't justify a point, but you could justify a range by specifying temperatures where it becomes uncomfortable. Actually, specifying a range is just specifying the given point with less resolution.

Comment author: NancyLebovitz 23 May 2014 02:35:02AM 4 points [-]

There would be a lot of advertising.

Comment author: charlemango 23 May 2014 03:19:47AM 1 point [-]

I think it would be a plus. Americans would be forced to actually consider which issues are important to them.

Comment author: niceguyanon 23 May 2014 09:40:25AM 1 point [-]

I find using a chess timer in conjunction with Pomodoros helpful in restricting break time overflow. Tracking work vs break time via the chess timer motivates me to keep the ratio in check. It is also satisfying to get your "score" up; a high work to break ratio at the end of a session feels good.
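The chess-clock setup reduces to two accumulators and a ratio. A minimal sketch (the class and method names are made up for illustration):

```python
class WorkBreakTimer:
    """Chess-clock style tracker for Pomodoro sessions: time accrues to
    either the work side or the break side, and the "score" is their ratio."""

    def __init__(self):
        self.work_secs = 0.0
        self.break_secs = 0.0

    def log_work(self, seconds):
        self.work_secs += seconds

    def log_break(self, seconds):
        self.break_secs += seconds

    def score(self):
        # Work-to-break ratio; a standard 25/5 Pomodoro scores 5.0.
        if self.break_secs == 0:
            return float("inf")
        return self.work_secs / self.break_secs
```

Watching the score drop during an overlong break is the motivational nudge described above.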

Comment author: CronoDAS 23 May 2014 03:44:00AM 1 point [-]

(Inspired by sci-fi story)

A new intelligence enhancing brain surgery has just been developed. In accordance with Algernon's Law, it has a severe drawback: your brain loses its ability to process sensory input from your eyes, causing you to go blind.

How big of an intelligence increase would it take before you'd be willing to give up your eyesight?

Comment author: Alicorn 23 May 2014 05:04:24AM 4 points [-]

Enough to make learning Braille, meaningfully improving existing screenreader software if I don't care for it, and figuring out how to echolocate relatively short-term projects so that I could move on to other things instead of spending forever trying to reconstruct the anatomy of my life.

I almost said "enough to be able to route around this drawback somehow", but no, I don't think it's quite that dire.

Comment author: ike 22 May 2014 07:56:38PM *  4 points [-]

Hack the SAT essay:

First, some background: the SAT has an essay, graded on a scale from 1 to 6. The essay scoring guidelines are here. I'll quote the important ones for my purposes:

“Each essay is independently scored by two readers on a scale from 1 to 6. These readers' scores are combined to produce the 2-12 scale. The essay readers are experienced and trained high school and college teachers.” “Essays not written on the essay assignment will receive a score of zero”

Reports vary, but apparently most graders spend between 90 seconds and two and a half minutes on each essay.

My challenge, inspired by the AI-box experiment, is as follows. You are an AI taking the test. You need to write something off-topic that will convince both graders to give you a six. (Or, if the two graders disagree by more than one point, a third grader takes over, and you only need to convince them.) You have 25 minutes to actually write it, but unlimited time to plan in advance. You could probably draw something rather than write, but you run the risk of the graders seeing a picture and immediately giving a zero without having time to get hacked.

I've come up with two ideas so far:

  • Writing a sob story about how the essay prompt is misprinted on your page (although I don't think that would work)
  • Threatening to commit suicide if the grader doesn't give you a six (would probably result in them calling the police)

I didn't think either of them were very good, but I like the concept. Some rules: No paying them off or threatening them with physical harm.

Can anyone come up with better ideas?

I'm putting this on open thread because it's my first real post, and I'm not sure of the reaction.

Comment author: palladias 24 May 2014 02:06:03AM 3 points [-]

Heh, part of the strategy I used when I took the SAT was slightly darkening my "two-bit" words with my pencil and making sure to fill the exact amount of space provided-minus-one line. I had read (don't have the citation at hand) that length of essay tracks score pretty well. And, to clinch it, I wanted their (very brief) attention to be drawn to good words, used correctly.

Result: 12.

(Though, I think the main thing was just committing to writing a tight, formulaic essay. I outscored some friends who I thought were better writers than I was, because they were trying to write a good essay rather than a good SAT essay.)

Comment author: chaosmage 23 May 2014 06:29:40PM 1 point [-]

Write a subtly but powerfully persuasive narrative about how you've long been planning to become a teacher, and rate essays like this one, because obviously that is the job that ultimately decides what kind of minds will be in charge in the next generation. Include a mention of the off topic problem, and claim that the "official" topic of your essay is merely an element in a more important and more real topic: this situation, happening right now, of a real and complex relationship between the writer and rater that will, in a sense, continue for the rest of both people's lives, even if they never meet again.

I'd rate that a 6 anyway.

Comment author: Alejandro1 22 May 2014 09:15:54PM 8 points [-]

First observation: Surely any entity intelligent enough to hack the essay according to the rules you have set is also intelligent enough to get the maximum grade (much more easily) by the usual means of writing the assigned essay…

Second observation: Since the concept of "being on topic" is vague (essentially, anything that humans interpret as being on a certain topic is on that topic) maybe the easiest way to hack it following your rules would be to write an essay that is not on topic by the criteria the designers of the exam had in mind, but that is close enough that it can confuse the graders into believing it is on topic. An analogy could be how some postmodernists confused people into believing they were doing philosophy...

Comment author: ike 23 May 2014 03:17:07PM 0 points [-]

On the point that any AI smart enough to do this could write a 12-scoring essay: remember that you don't know the essay topic in advance. You only have 25 minutes to write on-topic, while if you go off-topic, you have unlimited time to prepare.

Comment author: DanielLC 22 May 2014 11:33:46PM 5 points [-]

This reminds me of something I've read about Isaac Asimov doing. He said that people tended not to believe him when he told them he didn't know anything about the subject he was asked to give a speech on. As a result, he started changing the subject.

He gave an example in which he was asked to give a speech on information retrieval or something. He didn't know anything about it beyond that it was apparently called "information retrieval". He basically said that Mendelian inheritance was discovered long before it was needed to solve certain problems in the theory of evolution, but nobody knew about it so it took a while to figure out the answer, so a better way to retrieve information would be helpful. Mostly he was just talking about Mendelian inheritance.

Comment author: shminux 22 May 2014 10:14:36PM 1 point [-]

You have to reliably convince a grader in the 1-2 min they spend on it that your essay is in the top 1% or so (that's the fraction of perfect 12s), and the grader intuitively knows the score she'll give you within one point after 30 seconds or less. I doubt there is a sure way to do it without hitting their mental model of a perfect essay on all counts.

Comment author: ike 23 May 2014 03:16:18PM *  0 points [-]

You need to reliably convince a grader that they should 1. Take more time to look at the essay or 2. Give a six, regardless of merit.

Few restrictions on how, like with AIbox. (You could tell them you're an AI, or an alien, or whatnot, as long as it's believable.)

Comment author: ShardPhoenix 22 May 2014 11:38:24AM 11 points [-]

Where is somewhere to go for decent discussion on the internet? I'm tired of how intellectually mediocre reddit is, but this place is kind of dead.

Comment author: MathiasZaman 22 May 2014 11:13:16PM 13 points [-]

Alternative: Liven up Less Wrong. I'm not sure how to do that, but it's a possible solution to your problem.

Comment author: John_Maxwell_IV 23 May 2014 06:28:48AM *  7 points [-]

If you want to make LW livelier, you should downvote less on the margin... downvoting disincentivizes posting. It makes sense to downvote if there's lots of content and you want to help other people cut through the crap. But if there's too little it's arguably less useful.

Also develop your interesting thoughts and create posts out of them.

Comment author: blacktrance 22 May 2014 04:59:45PM 9 points [-]

Slate Star Codex comments have smart people and a significant overlap with LW, but the interface isn't great (comment threading stops after it gets to a certain level of depth, etc). Alternatively, it may help to be more selective on reddit - no default subreddits, for example.

Comment author: spqr0a1 22 May 2014 03:01:14PM 3 points [-]

Check out metafilter.

Comment author: Lumifer 22 May 2014 04:44:20PM 3 points [-]

Check out metafilter.

Its survival is in doubt. In particular, "The site is currently and has been for several months operating at a significant loss. If nothing were to change, MeFi would defaulting on bills and hitting bankruptcy by mid-summer."

Comment author: [deleted] 22 May 2014 12:58:33PM 2 points [-]

Also looking for LW replacement, with no current success.

Comment author: shminux 22 May 2014 05:12:14PM 4 points [-]

This question occasionally comes up on #lesswrong, too, especially given the perceived decline in the quality of LW discussions in the last year or so. There are various stackoverflow-based sites for quality discussions of very specific topics, but I am not aware of anything more general. Various subreddits unfortunately tend to be swarmed by inanity.

Comment author: Metus 22 May 2014 01:48:29PM 1 point [-]

So LW but bigger? I think you are out of luck there.

Comment author: Risto_Saarelma 22 May 2014 08:02:20AM 10 points [-]

Scott Aaronson isn't convinced by Giulio Tononi's integrated information theory for consciousness.

But let me end on a positive note. In my opinion, the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed. Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy, and malleable that they can only aspire to wrongness.

Comment author: gwern 22 May 2014 02:54:32AM *  11 points [-]

Bad news, guys - we're probably all charismatic psychotics; from "The Breivik case and what psychiatrists can learn from it", Melle 2013:

The court reports clearly illustrate the odd effect Breivik seems to have had on all his evaluators, including the first, in generating reluctance to explore what might lie behind some of his strange utterances. As an illustration, when asked if he ever was in doubt about Breivik's sanity, one of the witnesses stated that he was that once, when Breivik in a discussion suggested that in the future people's brains could be directly linked to a computer, thus circumventing the need for expensive schooling. Instead of asking Breivik to extrapolate, the witness stated that he “rapidly said to himself that this was not a psychotic notion but rather a vision of the future”.

It's a good thing Breivik didn't bring up cryonics.

Comment author: Prismattic 22 May 2014 03:30:33AM *  6 points [-]

Second Livestock

I feel there are many possible Lesswrong punchlines in response to this.

Comment author: jaime2000 21 May 2014 01:46:11PM *  10 points [-]

I just realized you can model low time preference as a high degree of cooperation between instances of yourself across time, so that earlier instances of you sacrifice themselves to give later instances a higher payoff. By contrast, a high time preference consists of instances of you each trying to do whatever benefits them most at the time, later instances be damned.
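Read as a toy model, the contrast can be sketched in a few lines; the payoffs and discount factors below are invented purely for illustration, not taken from any formal treatment.

```python
# Toy model: each "time-slice" of an agent chooses between an immediate
# payoff and a costly action that pays off to a later slice.
# All numbers here are made up for illustration.

def slice_utility(action, discount):
    """Utility as seen from the current slice, discounting future slices.

    'indulge' pays 1 now; 'invest' costs 1 now but pays 3 one step later.
    """
    if action == "indulge":
        return 1.0
    else:  # invest
        return -1.0 + discount * 3.0

def chosen_action(discount):
    return max(["indulge", "invest"], key=lambda a: slice_utility(a, discount))

# High time preference (steep discounting): each slice defects on its successors.
print(chosen_action(0.5))   # indulge
# Low time preference: slices cooperate, accepting a cost now for a later gain.
print(chosen_action(0.9))   # invest
```

The discount factor here plays exactly the role of "degree of cooperation": below some threshold each slice grabs what it can, above it the slices act as a team.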

Comment author: Raythen 21 May 2014 04:08:53PM *  8 points [-]

That makes sense. Even cooperating across short time frames might be problematic - "I'll stay in bed for 10 more minutes, even if it means that me-in-10-minutes will be stressed out and might be late for work"

I prefer to see long-term thinking as increased integration among different time-selves rather than a sacrifice, though - it's not a sacrifice to take actions with a delayed payoff if your utility function puts a high weight on your future-selves' wellbeing.

Comment author: AlexSchell 31 May 2014 06:07:12AM *  0 points [-]

Your definition of sacrifice seems to exclude some instances of literal self-sacrifice.

Comment author: witzvo 21 May 2014 05:24:24PM *  2 points [-]
Comment author: Vaniver 22 May 2014 05:30:48PM *  3 points [-]

The square brackets are greedy. What you want to do is this:

\[Link\]: [Why do people persist in believing things that just aren't true?](http://www.newyorker.com/online/blogs/mariakonnikova/2014/05/why-do-people-persist-in-believing-things-that-just-arent-true.html?utm_source=www&utm_medium=tw&utm_campaign=20140519&mobify=0)

which looks like:

[Link]: Why do people persist in believing things that just aren't true?

Comment author: witzvo 24 May 2014 04:10:19AM 2 points [-]

fixed. Thanks.

Comment author: witzvo 21 May 2014 08:06:06PM 1 point [-]

I notice that I have a hard time getting myself to make decisions when there are tradeoffs to be made. I think this is because it's really emotionally painful for me to face actually choosing to accept one or another of the flaws. When I face making such a decision, often, the "next thing I know" I'm procrastinating or working on other things, but specifically I'm avoiding thinking about making the decision. Sometimes I do this when, objectively, I'd probably be better off rolling a die and getting on with one of the choices, but I can't get myself to do that either. If it's relevant, I'm bad at planning generally. Any suggestions?

Comment author: wadavis 22 May 2014 03:01:23PM *  2 points [-]

Spend some time deciding if decisiveness is a virtue. Dwell on it until you've convinced yourself that decisiveness is good, and have come to terms with the fact that you are not decisive. Around here it may be tempting to label decisiveness as rash, or to rationalize your behavior as not worth the work of changing; if so, return to step one and reaffirm that you think it is good to be decisive. Now step outside your comfort zone and practice being decisive: practice at the restaurant, at work, doing chores. Have reminders to practice; set your desktop or phone background to "Be Decisive" in plain text (or whatever suits your esthetic tastes). Pick a role model who takes decisive action. After following these steps, you will have practiced making decisions and following through on them, and you will have decided that making a choice and not dwelling on it is a virtue. Now you can update your image of yourself as a decisive person. From there it should be self-sustaining.

Comment author: RolfAndreassen 24 May 2014 07:01:24AM 2 points [-]

Dwell on it until you've convinced yourself that decisiveness is good

Or not, as the case may be! And then there's the possibility that more data is needed.

Comment author: witzvo 24 May 2014 04:56:39AM *  1 point [-]

Whoa. Fascinating! Thanks! I really like the idea of this approach. I'm, ironically, not sure I'm decisive enough to decide that decisiveness is a virtue, but this is worth thinking about. Where should I go to read more about the general idea that if I can decide that something is a virtue and practice acting in accord with that virtue that I can change myself?

Thinking about it just for a minute, I realize that I need a heuristic for when it's smart to be decisive and when it's smart to be more circumspect. I don't want to become a rash person. If I can convince myself that the heuristic is reliable enough, then hopefully I can convince myself to put it into practice like you say. I don't know if this means I'm falling into the rationalization trap that you mentioned or not, though. I don't think so; it would be a mistake to be decisive for decisiveness' sake.

I can spend some time thinking more about role-models in this regard and maybe ask them when they decide to decide versus decide to contemplate, themselves. In particular, I think my role-models would not spend time on a decision if they knew that making either decision, now, was preferable to not making a decision until later.

Heuristic 1a: If making either decision now is preferable to making the decision later, make the decision promptly (flip coins if necessary).

In the particular case that prompted my original post, my current heuristics said it was a situation worth thinking about -- the options had significant consequences both good and bad. On the other hand, agonizing over the decision wouldn't get me anywhere and I knew what the consequences would be in a general sense -- I just didn't want to accept that I was responsible for the problems that I could expect to follow either decision, I wanted something more perfect. That's another situation my role-models would not fall prey to. Somehow they have the stomach to accept this and get on with things when there's no alternative....

Goal: I will be a person with the self-respect to stomach responsibility for the bad consequences of good decisions.

Heuristic 1b: When you pretty-much know what the consequences will be of all the options and they're all unavoidably problematic to around the same degree (multiply the importance of the decision by the error in the degree to define "around"), force yourself to pick one right away so you can put the decision-making behind you.

Am I on the right track? I'm not totally sure how important it is to put decision-making behind yourself.

Comment author: wadavis 09 June 2014 04:21:25PM 1 point [-]

Sorry for the late reply, I couldn't decide how to communicate my point.

You strongly self-identify as not decisive and celebrate cautiousness as a virtue; if you desire to change, that must change first. In all your examples you already know what has to be done and just want to avoid committing to action, and now you are contemplating finding methods to decide whether you should be decisive on a decision-by-decision basis. That is a stalling tactic; stop it.

The goal to stomach the consequences is bang on, that might be some foundation work that is required first or something that develops with taking accountability and making decisions.

Comment author: Torello 21 May 2014 10:09:32PM 2 points [-]

If you're not familiar with the ideas read "The Paradox of Choice" by Barry Schwartz or watch a talk about it.

Other ideas:

give yourself a very short deadline for most decisions (most decisions are trivial); i.e. I will make this decision in the next two minutes and then I will stick with it. For long-term life decisions, maybe not so much.

Flip a coin. This is a good way to expose your gut feelings. A pros-and-cons type of weighing the options allows you to weigh lots of factors. Flipping a coin produces fewer reactions (in my experience): "Shoot, I really wish I had the other option" (good information), "I don't feel too strongly about the outcome" (good information), or "I'm content with this flip" (good information).

Comment author: iarwain1 21 May 2014 02:42:24PM *  2 points [-]

What's the best way to learn programming from a fundamentals-first perspective? I've taken / am taking a few introductory programming courses, but I keep feeling like I've got all sorts of gaps in my understanding of what's going on. The professors keep throwing out new ideas and functions and tools and terms without thoroughly explaining how and why it works like that. If someone has a question the approach is often, "so google it or look in the help file". But my preferred learning style is to go back to the basics and carefully work my way up so that I thoroughly understand what's going on at each step along the way.

Comment author: Antiochus 21 May 2014 03:08:15PM 2 points [-]

This might be counter-intuitive and impractical for self-teaching, but for me it was an assembly language course that made it 'click' for how things work behind the scenes. It doesn't have to be much and you'll probably never use it again, but the concepts will help your broader understanding.

If you can be more specific about which parts baffle you, I might be able to recommend something more useful.

Comment author: Raythen 21 May 2014 01:24:25PM 2 points [-]

Is there a way to get email notifications on receiving new messages or comments? I've looked under preferences, and I can't find that option.

Comment author: [deleted] 21 May 2014 02:31:06PM *  1 point [-]

Tetlock thinks improved political forecasting is good. I haven't read his whole book but maybe someone can help me cheat. Why is improved forecasting not zero-sum? Suppose the USA and Russia can both forecast better but have different interests. So what?

[Edit] My guess might be that in areas of common interest, like economics, improved forecasting is good. But in foreign policy...?

Comment author: [deleted] 22 May 2014 07:35:34AM 5 points [-]

International politics is zero-sum once you've already reached the Pareto frontier and can only move along it, but if forecasting is sufficiently bad you might not even be close to the Pareto frontier.

Comment author: Brian_Tomasik 22 May 2014 10:35:44AM 2 points [-]

Right. A lot of politics is not zero-sum. Reduced uncertainty and better information may enable compromises that before had seemed too risky. Forecasting could help identify which compromises would work and which wouldn't. Etc.

Comment author: [deleted] 06 June 2014 12:41:29PM *  0 points [-]

Thanks army and bramflakes for illustrating. My guess is to agree - but I still have doubts. Maybe they have nothing to do after all with "zero sum." I think I'm concerned that forecasting could be used by governments against citizens. Before participating again I may need to read something in more detail about why this is unlikely - and also about why I shouldn't participate in SciCast instead!

Comment author: bramflakes 21 May 2014 07:18:25PM 7 points [-]

A greatly simplified example: two countries are having a dispute and the tension keeps rising. They both believe that they can win against the other in a war, meaning neither side is willing to back down in the face of military threats. Improved forecasting would indicate who would be the likely winner in such a conflict, and thus the weaker side will preemptively back down.

Comment author: Lumifer 21 May 2014 03:43:39PM *  7 points [-]

improved political forecasting is good. .... Why is improved forecasting not zero sum?

For the simple reason that politics is not zero-sum, foreign policy included.

Comment author: NancyLebovitz 21 May 2014 03:09:08PM 4 points [-]

Improved forecasting might mean that both sides do fewer stupid (negative sum) things.

Comment author: pcm 21 May 2014 02:44:05PM 1 point [-]

I don't think Tetlock talks about that much.

Imagine a better forecast about whether invading Iraq reduces terrorism, or about whether Saddam would survive the invasion. Wouldn't both sides make wiser decisions?

Comment author: Stefan_Schubert 20 May 2014 01:43:34PM *  5 points [-]

Lots of people are arguing that governments should provide all citizens with an unconditional basic income. One problem with this is that it would be very expensive. If the government gave each person, say, 30% of GDP per capita (not a very high standard of living), then that would force it to raise 30% of GDP in taxes to cover the cost.

On the other hand, means-tested benefits have disadvantages too. They are administratively costly. Receiving them is seen as shameful in many countries. Most importantly, it is hard to create a means-tested system that doesn't create perverse incentives for those on benefits, since when you start working, you both lose your benefits and start paying taxes under such a system. That may mean that net income is a very small proportion of gross income for certain groups, incentivizing them to stay unemployed.

One middle route I've been toying with is that the government could provide people with cheap goods and services. People who were satisfied with them could settle for them, whereas those who wanted something more fancy would have to pay out of their own pockets. The government would thus provide people with no-frills food - Soylent, perhaps - no-frills housing, etc, for free or for highly subsidized prices (it is important that they produce enough and/or set the prices so that demand doesn't outstrip supply, since otherwise you get queues - a perennial problem of subsidized goods and services).

Of course some well-off people might choose to consume these subsidized goods and services, and some poor people might choose not to. Still, the system should in general be quite redistributive. The advantage over the basic income system is that it would be considerably cheaper, since these goods and services would only be used by part of the population. The advantage over the means-tested system is that people will still be allowed to use these goods and services if their income goes up, so it doesn't create perverse incentives.

Another advantage of this system is that it could perhaps rein in rampant consumerism somewhat. Parts of the population will be habituated to smaller apartments and less fancy food. Those who want to distinguish themselves from the masses - who want to consume conspicuously - will also be affected, since they will have to spend less to stand out from the crowd.

I guess this system already exists to some extent - e.g. in many countries, the government provides you with education and health care, but rich people opt for private health care and private education. So the idea isn't novel - my suggestion is just to take it a bit further.

Comment author: ChristianKl 30 May 2014 06:39:54PM 2 points [-]

Lots of people are arguing governments should provide all citizens with an unconditional basic income. One problem with this is that it would be very expensive.

You are missing the point. It's cheaper to give the poor an unconditional basic income than to maintain a huge bureaucratic administration that makes sure they meet certain conditions to be eligible for welfare payments.

That might mean a low basic income, but it would still be an unconditional basic income. Don't confuse the debate over an unconditional income with the debate about how high it - or welfare payments to the poorest - should be.

I guess this system to some extent exist - e.g. in many countries, the government does provide you with education and health care

Actually you are looking at the wrong countries. Countries like Iran would be an example where essential goods like food get heavily subsidized.

There are many reasons why subsidies are a bad idea. They produce incentives for companies to lobby heavily to be included. They encourage people to waste products that get subsidized. They need bureaucracy to be organised. They prevent innovation, because new products usually don't fit into the template along which old products are subsidized.

Comment author: DanielLC 04 June 2014 12:41:20AM 0 points [-]

It's cheaper to give the poor unconditional basic income than to have a huge bureaucratic administration that makes sure that they pass certain conditions to be eligible for welfare payments.

I decided to see what I could find on how much the administrative costs are, and I found this: http://mediamatters.org/research/2005/09/21/limbaugh-dramatically-overstated-administrative/133859

The most useful part seems to be this line:

Finally, the report estimated that the federal administrative costs amounted to $12,452,000,000 for the 11 programs studied -- 6.4 percent of total federal expenditures on these programs.

That doesn't sound like much of an issue.

Comment author: drethelin 22 May 2014 04:46:40AM 1 point [-]

how is this better than Walmart and McDonald's?

Comment author: Lumifer 21 May 2014 03:28:18PM 2 points [-]

One middle route I've been toying with is that the government could provide people with cheap goods and services.

This is a popular practice in the third world.

See e.g. this or this.

Comment author: Kaj_Sotala 21 May 2014 06:57:44AM 4 points [-]

The advantage over the basic income system is that it would be considerably cheaper, since these goods and services would only be used by a part of the population. The advantage over the means-tested system is that people will still be allowed to use these goods and services if their income goes up, so it doesn't create perverse incentives.

The universal basic income schemes that seem the most reasonable to me adjust the taxation so that, while the UBI itself is never taxed, if you make a lot of money then your non-UBI earnings get an extra tax so that the whole reform ends up having very little direct effect on you. In effect, that ends up covering the "only used by a part of the population" criterion. The perverse incentives can't be avoided entirely, but they can be mitigated somewhat if the tax system is set up so that you're always better off working than not working.

For a concrete example, there's e.g. this 2007 proposal by the Finnish Green party. Your working wage (in euros per month) is on the X-axis, your total income after taxes and transfers is on the Y-axis. Light green is the basic income, dark green is your after-tax wage, red is paid in tax. According to their calculations, this scheme would have been approximately cost-neutral (compared to what the Finnish state normally gets in tax income and pays out in welfare).
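The structure can be sketched in a few lines; the flat tax rate below is invented for illustration (it is not the rate from the Greens' proposal), and only the untaxed-UBI shape is taken from the description above.

```python
# Sketch of the "UBI plus clawback tax" idea: the UBI itself is never
# taxed, wages carry a flat tax, and the rate is set high enough that
# high earners roughly pay the UBI back. All rates are invented here.

UBI = 440.0        # euros/month, untaxed
TAX_RATE = 0.39    # hypothetical flat tax on wage income

def net_income(wage):
    return UBI + wage * (1 - TAX_RATE)

# "Always better off working": net income is strictly increasing in wage.
for w0, w1 in [(0, 500), (500, 2000), (2000, 5000)]:
    assert net_income(w1) > net_income(w0)

# If the pre-reform tax rate was, say, 0.30, the reform is a wash at the
# wage where the extra 9 percentage points of tax equal the UBI.
break_even = UBI / (0.39 - 0.30)
print(round(break_even))   # 4889 euros/month
```

Above that break-even wage the reform is a net cost to the earner, below it a net transfer, with no cliff anywhere in between.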

Comment author: Stefan_Schubert 21 May 2014 10:28:36AM *  2 points [-]

Thanks, that's interesting. 440 euros is not a lot, though - could you live in Helsinki on that (in 2007)? Is this supposed to replace, for instance, unemployment benefits (which I'm sure are much higher)? If so, this system would make some people who aren't that well off worse off.

One thing that is seldom noted is that the Scandinavian "general welfare states" are in effect half-way to a UBI. In Sweden, and I would guess the other Scandinavian countries as well, everyone gets a significant pension no matter what, child benefits are not means-tested, etc. Also, virtually everyone uses public schools, public health care, public universities and public child care (all of which are either heavily subsidized or free). So it's not a question of either an Anglo-Saxon system where benefits mostly go to the poor or a UBI system; there are other options.

Comment author: Kaj_Sotala 21 May 2014 12:10:30PM *  3 points [-]

440 euros is almost the same amount as direct student benefits were in 2007, though that's not taking into account the fact that most students also have access to subsidized housing which helps substantially. On the other hand, the proposed UBI model would have maintained as separate systems the current Finnish system of "housing benefits" (which pays a part of your rent if you're low-income, exact amount depending on the city so as to take into account varying price levels around the country) as well as "income support", which is supposed to be a last-resort aid that pays for your expenses if you can show that you have reasonable needs that you just can't meet in any other way. So we might be able to say that in total, the effective total support paid to someone on basic income would have been roughly comparable to that paid to a student in 2007.

Some students manage to live on that whereas some need to take part-time jobs to supplement it, which seems to be roughly the right level to aim for - doable if you're really frugal about your expenses, but low enough that it will still encourage you to find work regardless. Might need to increase child benefits a bit in order to ensure that it's doable even if you're having a family, though.

The Greens' proposed UBI would have replaced "all welfare's minimum benefits", so other benefits that currently pay out about the same amount. That would include student benefits and the very lowest level of unemployment benefit (which you AFAIK get if your former job paid you hardly anything, basically), but it wouldn't replace e.g. higher levels of unemployment benefits.

Comment author: Stefan_Schubert 21 May 2014 02:47:37PM 1 point [-]

Thanks, that's interesting and comprehensive.

Housing benefits are an alternative to the idea discussed here, i.e. subsidizing particular low-cost, low-standard flats. However, the problem with housing benefits is that you tend to get more of them if you have a higher rent, and thus you in effect reward people with more expensive tastes, which leads to a general increase in housing consumption. My proposal is intended to have the exact opposite consequence.

I'm not that averse to the UBI, but there is something counter-intuitive about the idea that rich people first pay taxes and then get benefits back. This forces you to either lower the level of the basic income (or other government expenditure) or raise taxes. My suggestion is intended to take care of this without resorting to means-testing.

Comment author: kevin_p 21 May 2014 02:53:30AM 8 points [-]

"Those who want to distinguish themselves from the masses - who want to consume conspiciously - will also be affected, since they will have to spend less to stand out from the crowd" - maybe I've misunderstood this, but surely it would have the opposite result? Let's say rents are ~$20/sqm (adjust for your own city; the principle stays the same). If I want my apartment to be 50 sqm rather than 40 sqm, that's an extra $200. But if 40 sqm apartments were free, the price difference would be the full $1000/month price of the bigger apartment. You've still got a cliff, just like in the means-tested welfare case; it's just that now it's on the consumption side.
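The arithmetic of that cliff can be checked directly with the numbers given above (a hypothetical ~$20/sqm rent and a 40 sqm baseline):

```python
# The consumption cliff: upgrading from 40 sqm to 50 sqm costs only the
# margin when the 40 sqm flat is market-priced, but the full price when
# the 40 sqm flat is free. Numbers are kevin_p's illustrative figures.

RENT_PER_SQM = 20   # dollars per square meter per month
FREE_SIZE = 40      # sqm of the baseline apartment

def upgrade_cost(new_size, basic_is_free):
    """Extra monthly cost of renting new_size sqm instead of the baseline."""
    full_price = new_size * RENT_PER_SQM
    baseline = 0 if basic_is_free else FREE_SIZE * RENT_PER_SQM
    return full_price - baseline

print(upgrade_cost(50, basic_is_free=False))  # 200: pay only the margin
print(upgrade_cost(50, basic_is_free=True))   # 1000: forfeit the free flat
```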

In practice this would probably destroy the market for mid-priced goods - who wants to pay $1000/month just for an extra 10 square meters? Non-subsidized goods will only start being attractive when they get much better than the stuff the government provides, not just slightly better.

Also, if you give out goods rather than money, you're going to have to provide a huge range of different goods/services, because otherwise there will be whole categories of products that people who legitimately can't work (elderly, disabled etc) won't have access to. And if you do that, the efficiency of your economy is going to go way down - not just because the government is generally less efficient than the free market, but also because people can't use money to allocate resources according to their own preferences.

Comment author: Stefan_Schubert 21 May 2014 10:14:44AM *  1 point [-]

You've still got a cliff, just like in the means-tested welfare case; it's just that now it's on the consumption side.

Yes, that's what it's like (only the cliff is actually usually less steep under means-tested welfare). And you're also right about this:

In practice this would probably destroy the market for mid-priced goods - who wants to pay $1000/month just for an extra 10 square meters?

To clarify, I should say that my idea was that these subsidized or free goods and services would be so frugal that they would in effect not be an option to the majority of the population. Hence, it's not exactly the market for mid-priced goods, but the market for "low-priced but not extremely low-priced goods" that would get destroyed.

To your main point: since some people go down in standard, thanks to the fact that by doing so they can get significantly cheaper goods, the average standard will go down. Now say that to get the average standard before this reform you had to pay 1000 dollars a month, but after the reform you only have to pay 900 dollars a month (because the average standard is now lower). Then those who want a higher-than-average standard will only have to pay more than 900 dollars rather than more than 1000.

The actual story might be more complicated than this - e.g., what some people really might be interested in is having a higher standard than the mean, or than the first eight deciles, or what-not. But generally it seems intuitive to me that if parts of the population lower their standards, then those who want to consume conspicuously will also lower their standards.

Also, if you give out goods rather than money, you're going to have to provide a huge range of different goods/services, because otherwise there will be whole categories of products that people who legitimately can't work (elderly, disabled etc) won't have access to.

I don't see this as a comprehensive system: rather, you would just use it for some important goods and services: food, housing, education, health, public transport (in fact, the system is already used in the three latter; possibly housing too, though most subsidized housing is means-tested which it wouldn't be under this system). The system would be too complicated otherwise. Possibly it could be combined with a low UBI.

Comment author: chaosmage 20 May 2014 05:38:52PM 10 points [-]

A sharp divide between basic, subsidized, no-frills goods and services and other ones didn't work in the socialist German Democratic Republic (long story, reply if you need it). What does seem to work for various countries is different rates of value-added tax depending on the good or service - the greater the difference in taxation, the closer you get to the system you've described, but it is more gradual and can be fine-tuned. Maybe that could work for sales tax, too?

Comment author: [deleted] 20 May 2014 09:26:13PM 10 points [-]

A sharp divide between basic, subsidized, no-frills goods and services and other ones didn't work in the socialist German Democratic Republic (long story, reply if you need it).

I'd be interested in hearing about this.

Comment author: chaosmage 21 May 2014 01:17:46PM *  25 points [-]

I'm no economist, but as a former citizen of that former country, this is what I could see.

There was a divide between basic goods and services and luxury ones. Basic ones would get subsidies and be sold pretty much at cost; luxury ones would get taxed extra to finance those subsidies.

The (practically entirely state-owned) industries that provided the basic type of goods and services were making very little profit and had no real incentive to improve their products, except to produce them cheaper and more numerously. Nobody was doing comparison shopping on those, after all. (Products from imperialist countries were expected to be better in every way, but that would often be explained away by capitalist exploitation, not seen as evidence that homemade ones could be better.) So for example, the country's standard (and almost only) car did not see significant improvements for decades, although the manufacturer had many ideas for new models. The old model had been defined as sufficient, so to improve it was considered wasteful and all such plans were rejected by the economy planners.

The basic goods were of course popular, and due to their low price, demand was frequently not met. People would chance upon a shop that happened to have gotten a shipment of something rare and stand in line for hours to buy as much of that thing as they would be permitted to buy, to trade later. In the case of the (Trabant) car, you could register to buy one at a seriously discounted price if you went via an ever-growing waiting list that, near the end, might have you wait for more than 15 years. Of course many who got a car this way sold it afterwards, and pocketed a premium the buyer paid for not waiting.

Arguably more importantly, money was a lot better at getting you basic goods than luxury ones. So people tended to use money mostly for basic goods and services, and would naturally compare a luxury buy's value with those. When you can buy a (luxury) color TV at ten times the price of a (basic) black-and-white TV, it feels like you'd pay nine basic TVs for adding color to the one you use. Empirically, people often simply saved their money and thus kept it out of circulation.

Housing was a mess, too. Any rent was decreed to have to be very small. So there was no profit in renting out apartments, which again created a shortage of supply. (Private landownership was considered bourgeois and thus not subsidized.) It got so bad that many young couples decided to have a child as early as possible, because that'd help them in the application to receive a flat of their own and move out from their parents. And of course most buildings fell into disrepair - after all, there was no incentive to invest in providing higher quality for renters. This demonstrates again that making a basic good or service meant you'd always have demand, but that demand wouldn't benefit you much.

The production of luxury goods went better, partly because these were often exported for hard currency. The GDR had some industries that were fairly skilled at stealing capitalist innovations and producing products that had them, for sale at fairly competitive prices. Artificially low prices and subsidies for certain goods and products made pretty sure most of domestic consumption never benefitted from that skill.

Comment author: [deleted] 20 May 2014 07:20:23PM 2 points [-]

A sharp divide between basic, subsidized, no-frills goods and services and other ones didn't work in the socialist German Democratic Republic

Nor did it in other Soviet block countries, e.g. People's Republic of Poland.

Comment author: DanielLC 20 May 2014 06:17:37PM 5 points [-]

If the government would give each person say 30 % of GDP per capita to each person (not a very high standard of living), then that would force them to raise 30 % of GDP in taxes to cover for that.

In 2002, total U.S. social welfare expenditure constitutes over 35% of GDP

I think that would be too high anyway. Since anyone who bothers to work can make more than that, and the reduction in labor supply would increase pay, and any money you save will last you longer, there's little reason to make it enough for people to be well off, as opposed to getting just enough to scrape by.

It's also worth noting that most people will get a significant portion of that money back. If you make below the mean income (which most people do, since it's positively skewed) you will end up getting all of it back.

It seems unfair to charge people the entire price to get slightly better goods. Thus, if you want to get slightly better goods, the government should still reimburse you for the price of the cheap goods. At this point, it's just unconditional basic income with the government selling cheap goods.

As a minor point, Soylent as it is now can't be considered no-frills food. If you buy it ready-made, it costs around $10 a day.

Comment author: NancyLebovitz 20 May 2014 04:03:55PM 5 points [-]

If a government produces goods, the results tend to be low quality (education may be an exception in some places).

The cost of a guaranteed minimum income may not be quite as high as you think-- it would replace a lot of more complicated government support. Also, it might be possible to build in some social rewards for not taking it if you don't need it.

Comment author: gmzamz 20 May 2014 04:23:09AM *  5 points [-]

Regarding networks: is there a colloquially accepted term for when one has a ton of descriptive words (furry, bread-sized, purrs when you pet them, claws, domesticated, hunts mice, etc.) but you do not have the colloquially accepted term (cat) for the network? I have searched high and low and the most I have found is reverse definition search, but no actual term.

Comment author: knb 21 May 2014 01:31:44AM 3 points [-]

Sounds kind of like the Tip of the Tongue Effect

Comment author: [deleted] 21 May 2014 05:17:58PM 1 point [-]

That's a particular subcase of it, when you know that there's a word for that concept and you've heard it but you can't remember it. But other times it's more like “there should be a word for this”.

Comment author: satt 21 May 2014 10:36:16PM 1 point [-]

But other times it's more like “there should be a word for this”.

However, that's distinct from what gmzamz asked about: occasions when "you do not have the colloquially accepted term" for something.

Comment author: satt 21 May 2014 12:59:16AM 1 point [-]

I've heard "anomia" and "being able to talk all around the idea of an [X] but not the word [X] itself".

Comment author: Emily 20 May 2014 09:34:19AM 4 points [-]

Not quite what you're looking for I think, but if someone is having that problem they might have anomic aphasia.

Comment author: RichardKennaway 20 May 2014 05:59:14AM 4 points [-]

"Not having a word for it"? Or in the technical vocabulary of linguistics, the concept is not "lexicalised".

Comment author: wnoise 20 May 2014 03:35:55AM 4 points [-]

A video of Daniel Dennett giving an excellent talk on free will at the Santa Fe Institute: https://www.youtube.com/watch?v=wGPIzSe5cAU It largely follows the general Less Wrong consensus, but dives into how this construction is useful in the punishment and moral agent contexts more than I've seen developed here.

Comment author: Transfuturist 20 May 2014 07:50:20AM *  1 point [-]

I've posted this before but I want to make it more clear that I want feedback.

I've written an essay on the effects of interactive computation as an improvement for Solomonoff-like induction. (It was written in two all-nighters for an English class, so it probably still needs proofreading. It isn't well-sourced, either.)

I want to build a better formalization of naturalized induction than Solomonoff's, one designed to be usable by space-, time-, and rate-limited agents, and interactive computation was a necessary first step. AIXI is by no means an ideal inductive agent.

Comment author: Manfred 26 May 2014 05:53:36AM *  1 point [-]

Your essay was interesting. What did you think of a similar post I recently wrote?

Feedback (entirely on the writing): The first goal when editing this should be to eliminate words from sentences. Use short and familiar words whenever possible. Change around a paragraph's structure to get it shorter. Since this is for English class, cut out every bit of jargon you can. If there's a length requirement, you can always fill it with story.

The best lesson of my dreadful college writing class was that nonfiction can have a story too - and the primary way you engage with a nontechnical audience is with this story. Solomonoff induction practically gets a character arc - the hope for a universal solution, the impotence at having to check every possible hypothesis, then being built back up by hard work and ancient wisdom to operate in the real world.

When you shift gears, e.g. to talk about science, you can make it easier on the reader by cutting technical explanations for historical or personal anecdotes. This only works once or twice per essay, though.

You can make your paragraphs more exciting. Rather than starting with "An issue similar in cause to separability is the idea of the frontier," and then have the reader go in with the mindset that they have to hear about a definition (English professors hate reading about definitions), try to give the reader a very concise big-picture view of the idea and immediately move on to the exciting applications, which is where they'll learn the concept.

Comment author: Transfuturist 26 May 2014 12:50:42PM *  0 points [-]

Thanks for the in-depth critique! I haven't read your post yet, but it piqued my interest.

Also, moving on to the "exciting applications" isn't very effective when there aren't any. :I

Comment author: Manfred 26 May 2014 09:24:59PM 0 points [-]

Also, moving on to the "exciting applications" isn't very effective when there aren't any. :I

Bah humbug.

Comment author: tgb 19 May 2014 03:33:11PM 13 points [-]

This just struck me: people always credit WWII with getting the US out of the Great Depression. We've all seen the graph (like the one at the top of this paper) where the standard of living drops precipitously during the Great Depression then more than recovers during WWII.

How in the world did that work? Why is it that suddenly pouring huge resources out of the country into a massive utility-sink that didn't exist until the start of the war rapidly brought up the standard of living? This makes no sense to me.

The only plausible explanation I can think of is that they somehow borrowed from the future, using the necessities of war as justification. I'd expect that to involve a dip in the growth rate after WWII - and there is one, but it just dips back down to the trend-line, not below it as I would expect if they had genuinely borrowed enough from the future to offset such a large downturn as the Great Depression. The only other thing seems to be externalities.

However this goes, this seems to be a huge argument in favor of big-government spending (if we get this much utility from the government building things that literally explode themselves without providing non-military utility, then in a time of peace, we should be able to get even more by having the government build things like high-tech infrastructure, places of beauty, peaceful scientific research, large-scale engineering projects, etc.). So should we be spending 20-40% of our GDP on peace-time government mega-projects? It's either that or this piece of common knowledge is wrong (and we all know how reliable common knowledge is!).

Or I'm wrong, of course. So what is it?

(Bonus question: why didn't WWI see a similar boost in living standards?)

Comment author: knb 21 May 2014 01:16:46AM *  6 points [-]

However this goes, this seems to be a huge argument in favor of big-government spending (if we get this much utility from the government building things that literally explode themselves without providing non-military utility, then in a time of peace, we should be able to get even more by having the government build things like high-tech infrastructure, places of beauty, peaceful scientific research, large-scale engineering projects, etc.). So should we be spending 20-40% of our GDP on peace-time government mega-projects? It's either that or this piece of common knowledge is wrong (and we all know how reliable common knowledge is!).

I'm surprised no one has explained this yet, but this is wrong according to standard economic theory as I understand it.

  1. The United States suffered from terrible monetary policy during the Great Depression.
  2. Due to "animal spirits" and "sticky wages" this caused large scale unemployment and output well below our production possibilities frontier.
  3. World War II caused the government to kickstart production for the war effort.
  4. Living standards actually didn't rise, although GDP did (GDP per capita is NOT the same as living standards). Consumption was dramatically deferred during the war. People had fewer babies, bought fewer consumer products (and fewer were produced) and shifted toward home production for some necessities.
  5. There was a short recession as the end of the war lowered demand, but pent-up consumer demand quickly re-stabilized the economy.

The point is WWII helped the economy because we were well under our production possibilities frontier during the depression. Peace-time mega projects would only be helpful under recessed/depressed conditions, and fortunately, we now can use monetary policy to produce similar effects.

Anyway, the argument you were making seems pretty common among people who don't follow economics debates, and in fact is one of the major policy recommendations of the oddball Lyndon LaRouche cult.

Comment author: solipsist 21 May 2014 12:07:13AM 4 points [-]

The labor force of the 1930s was sapped by over-allocation in unproductive industries. Specifically, much of the labor share was occupied in the sitting around feeling depressed and wishing you had a job industry. Economic conditions improved as workers shifted out of that industry and into more productive ones, such as all of them.

Comment author: chaosmage 20 May 2014 12:06:22PM 8 points [-]

I'm not sure how much it influenced the overall picture, but there was quite a brain drain to the US before and during WWII (mostly Jewish refugees) as well as after (Wernher von Braun and the like). Migrating away from the Nazi and Stalinist spheres of influence demonstrates intelligence, and the ability to enter the US despite the complex “national origins quota system” that went into effect in 1929 demonstrates persistence, affluence and/or marketable skills, so I estimate these immigrants gave a significant boost to the US economy.

Comment author: [deleted] 20 May 2014 09:24:17PM 4 points [-]

Also: salt iodization in 1924. Possibly also widespread flour enrichment in the early 1940s due to both Army incentivization and the need for alternate nutrient sources during rationing.

Comment author: Vaniver 19 May 2014 06:20:09PM *  16 points [-]

How in the world did that work?

It didn't. This is the argument in image form, and you can find similar ones for employment (basically, when you conscript people, unemployment goes down. Shocking!). There are lots of libertarian articles on the subject--this might be an alright introduction--but the basic argument is that standards of living dropped (that's what happens when food is rationed and metal is used for tanks instead of cars or household appliances) but the government spending on bombs and soldiers made the GDP numbers go up, and then the post-war boost in standards of living was mostly due to deferred spending.

Comment author: solipsist 20 May 2014 01:01:37AM 3 points [-]

Note: as the article implies, the above viewpoint is not representative of mainstream economic consensus.

Comment author: knb 21 May 2014 12:44:00AM 2 points [-]

What tgb stated above was factually incorrect--WWII did not increase living standards. While most economists credit WWII with kickstarting GDP growth and cutting unemployment, I don't know anyone who would actually argue that living standards rose during WWII.

Comment author: Unnamed 19 May 2014 07:45:21PM 12 points [-]

One simple model which seems to fit the "WWII ending the depression" piece of data (and which might have some overlap with the truth) is that it's relatively difficult to put idle resources into use, and significantly easier to repurpose resources that have been in use for other uses.

During the depression, a bunch of people were unemployed, factories were not running, storefronts were empty, etc. According to this model, under those economic conditions there were significant barriers to taking those idle resources and putting them to productive use.

Then WWII came and forced the country to mobilize and put those resources to use (even if that use was just to make stuff which would be shipped off to Europe and the Pacific to be destroyed). Once the war was over, those resources which had been devoted to war could be repurposed (with relatively little friction) to uses with a much more positive effect on people's standard of living. So things became good according to meaningful metrics like living standards, not merely according to metrics like unemployment rate or total output which ignore the fact that building a tank to send to war isn't valuable in the same way as building a car for local consumers.

The glaring open question here is why there might be this asymmetry between putting idle resources to use and repurposing in-use resources. Which is closely related to the question of why recessions/depressions exist at all (as more than momentary blips): once a recession hits and bunch of people become unemployed (and other resources go idle), why doesn't the market immediately jump in to snap up those idle resources? This article gets into some of the attempts to answer those questions.

(Bonus answer: World War One did not happen during a depression, so mobilizing for war mostly involved repurposing resources which had served other uses in peacetime rather than bringing idle resources into use.)

Comment author: pcm 20 May 2014 02:25:16AM 3 points [-]

Part of it is that deflation in the early 1930s meant that workers were overpaid relative to the value of goods they produced (wages being harder to cut than prices). That caused wasteful amounts of unemployment. WWII triggered inflation, and combined with wage controls caused wages to become low relative to goods, shifting the labor supply and demand to the opposite extreme.

The people who were employed pre-war presumably had their standard of living lowered in the war (after having it increased a good deal during the deflation).

I won't try to explain here why deflation and inflation happened when they did, or why wages are hard to cut (look for "sticky wages" for info about the latter).

Comment author: Daniel_Burfoot 19 May 2014 09:09:10PM 4 points [-]

Are there any math/stats/CS theory types out there who are interested in suggestions for new problems?

I am finding that my large scale lossless data compression work is generating some mathematical problems that I don't have time to solve in their full generality. I could write up the problem definition and post to LW if people are interested.

Comment author: Punoxysm 20 May 2014 12:41:04AM 5 points [-]

Sure, lay it on us. If nothing else, writing it up clearly should help you.

Comment author: cousin_it 19 May 2014 10:35:50PM 3 points [-]

Try posting some problems in the open threads here. MathOverflow has also worked really well for me.

Comment author: Markas 20 May 2014 12:33:39AM 2 points [-]

I buy a lot of berries, and I've heard conflicting opinions on the health risks of organic vs. regular berries (and produce in general). My brief Google research seems to indicate that there's little additional risk, if any, from non-organic produce, but if anyone knows more about the subject, I'd appreciate some evidence.

Comment author: Punoxysm 20 May 2014 03:35:04AM 4 points [-]

Without citation: minimal "organic" labeling standards often aren't a very high or impressive barrier to clear.

Comment author: Punoxysm 20 May 2014 04:06:21AM 1 point [-]

I am thinking of doing an article digesting a handful of research papers by some researcher or on some theme that would be of interest to less-wrongers. Any suggestions for what papers/theme, and any suggestions on how to write this mini-survey?

Comment author: JoshuaFox 23 May 2014 06:44:38AM *  1 point [-]

Please write a clear layperson's intro to UDT. You can also mention TDT etc. A good way to do this is a Wiki article.

A citation of related literature would be good too. Alex Altair's paper was good, but I'd like to read about UDT in more depth yet still in an accessible form.

Comment author: Punoxysm 23 May 2014 05:12:25PM *  0 points [-]

I am not interested in UDT/TDT. And people already write tons about it here. Thank you for the suggestion though.

Comment author: Metus 19 May 2014 07:44:34PM 4 points [-]

Apparently I don't forget ideas; they just move places in my consciousness.

In the first week of last September I mused about writing a handbook of rationality for myself, akin to how the ancient Stoics wrote handbooks for themselves. Nothing came of it; I plainly and simply forgot about it. The next week I mused about writing a book using LaTeX and git, since the git model allows many parallel versions of the book and there needs to be no canon for it to work (as opposed to a wiki), while still allowing collaboration. Now, there already is a book written with git, and writing a document with git is not a new idea at all.

Thinking about parallel legal systems or organisation forms with the explicit goal of copying the viable parts reminded me of using git to write source code. Indeed, there is no difference between writing down social rules and personal maxims with this principle, so I came to the obvious conclusion only a couple of hours ago: use git to write a handbook of rationality, encourage other people to fork it and do their own edits, keeping the viable parts and rejecting the questionable stuff.

Actions speak louder than words though lack of knowledge and other commitments can be an impediment, so I made a repository with only just the hint of a structure. Please provide your content and your thoughts about this.

Comment author: Vaniver 20 May 2014 07:31:28PM 2 points [-]

I think this is a good idea, and I'm curious to see how it goes. I'll be watching, and as I complete some of my other writing duties I think this has a good chance of becoming one.

Something else that might be interesting: this comment and the idea it's a response to in the OP.

Comment author: Viliam_Bur 19 May 2014 11:59:33AM *  15 points [-]

I'm reading the "You're Calling Who A Cult Leader?" again, and now the answer seems obvious.

"I publicly express strong admiration towards the work of Person X." -- What could possibly be wrong about this? Why are our instincts screaming at us not to do this?

Well, assigning a very high status to someone else is dangerous for pretty much the same reason as assigning a very high status to yourself. (A possible exception is if the person you admire happens to be the leader of the whole tribe. Even so, who are you to speak about such topics? As if your opinion had any meaning.) You are challenging the power balance in the tribe. Only instead of saying "Down with the current tribe leader; I should be the new leader!" you say "Down with the current tribe leader; my friend here should be the new leader!"

Either way, the current tribe leader is not going to like it. Neither will his allies, nor the neutral people who merely want to prevent another internal fight in which they have nothing to gain. All of them will tell you to shut up.

There is nothing bad per se about suggesting that e.g. Douglas R. Hofstadter should be the king of the nonconformist tribe. Maybe we can't unite behind this king, but neither can we unite behind any competitor, so... why not. At worst, some of us will ignore him.

The problem is, we live in the context of a larger society that merely tolerates us, and we know it. Praise Hofstadter too highly and someone outside of our circle may notice it. And suddenly the rest of the tribe might decide that it is going to get rid of our ill-mannered faction once and for all. (Not really, but this is what would happen in the ancient jungle.) So we had better police ourselves... unless we are ready to take the fight to the current leadership.

Being a strong fan of Douglas R. Hofstadter means challenging those who are strong fans of e.g. Brad Pitt. There is only so much room at the top of the status ladder, and our group is not strong enough to nominate even the highest-status one among us. So we'd rather not act as if we were ready for open confrontation.

The irony is that if Douglas Hofstadter or Paul Graham or Eliezer Yudkowsky actually had their small cults, if they acted like dictators within the cult and ignored the rest of the world, the rest of the world would not care about them. Maybe people would even invent rationalizations about why everything is okay, and why anyone is free to follow anyone or anything. -- The problem starts with suggesting that they could somehow be important in the outside world; that the outside world has a reason to listen to them. That upsets people: it's a power change that might concern them. Cultish behavior well-contained within the cult doesn't. Saying that all nerds should read Hofstadter, that's okay. -- Saying that even non-nerds lose something valuable when they don't read something written by a member of our faction... now that's a battle call. (Are you suggesting that Hofstadter deserves a status similar to e.g. Dostoyevsky's? Are you insane or what? Look at the size of your faction, our faction, and think again.)

Comment author: David_Gerard 20 May 2014 04:40:16PM *  9 points [-]

I was talking to the loved one about this last night. She is going for ministry in the Church of England. (Yes, I remain a skeptical atheist.)

She is very charismatic (despite her introversion) and has the superpower of convincing people. I can just picture her standing up in front of a crowd and explaining to them how black is white, and the crowd each nodding their heads and saying "you know, when you think about it, black really is white ..." She often leads her Bible study group (the sort with several translations to hand and at least one person who can quote the original Greek) and all sorts of people - of all sorts of intelligence levels and all sorts of actual depths of thinking - get really convinced of her viewpoint on whatever the matter is.

The thing is, you can form a cult by accident. Something that looks very like one from the outside, anyway. If you have a string of odd ideas, and you're charismatic and convincing, you can explain your odd ideas to people and they'll take on your chain of logic, approximately cut'n'pasting them into their minds and then thinking of them as their own thoughts. This can result in a pile of people who have a shared set of odd beliefs, which looks pretty damn cultish from the outside. Note this requires no intention.

As I said to her, "The only thing stopping you from being L. Ron Hubbard is that you don't want to. You better hope that's enough."

(Phygs look like regular pigs, but with yellow wings.)

Comment author: Punoxysm 20 May 2014 03:27:52AM 6 points [-]

I think you're overcomplicating it. People like Eliezer Yudkowsky and Paul Graham are certainly not cult leaders, but they have many strong opinions that are well outside the mainstream; they don't believe in, and in fact actively scorn, hedging/softening their expression of these opinions; and they have many readers, a visible subset of whom uncritically pattern all their opinions, mainstream or not, after them.

And pushback against excitement over Hofstadter can stem from legitimate disagreement about the importance/interestingness of his work. The pushback is proportional to the excitement that incites it.

Comment author: philh 19 May 2014 01:33:56PM *  10 points [-]

The sanitised LW feedback survey results are here: https://docs.google.com/spreadsheet/ccc?key=0Aq1YuBYXaqWNdDhQQmQ3emNEOEc0MUFtRmd0bV9ZYUE&usp=sharing

I'll be writing up an analysis of results, but that takes time.

Locations that received feedback:

  • (1) Amsterdam, Netherlands
  • (3) Austin, TX
  • (5) Berkeley, CA
  • (1*) Berlin, Germany
  • (1*) Boston, MA
  • (4) Brussels, Belgium
  • (2) Cambridge, MA
  • (2) Cambridge, UK
  • (3) Chicago, IL
  • (1) Hamburg, Germany
  • (3) Helsinki, Finland
  • (14) London [None of them specified which London, but my current guess is that they all meant London, UK.]
  • (3) Los Angeles, CA
  • (4) Melbourne, Australia
  • (2) Montreal, QC
  • (1) Moscow, Russia
  • (1) Mountain View, CA
  • (5) New York City
  • (1**) Ottawa, ON
  • (1) Philadelphia, PA
  • (1*) Phoenix, AZ
  • (1) Portland, OR
  • (1**) Princeton, NJ
  • (1*) San Diego, CA
  • (2) Seattle, WA
  • (2) Sydney, Australia
  • (1) Toronto, ON
  • (2) Utrecht, Netherlands
  • (3) Washington, DC

  • (1) No local meetup

  • (9) Not given

(*) means the feedback is from someone who hasn't attended because it's too far away, so seeing the specific response is probably not very helpful. (**) means the group name is written in the public results, so you can just search for it to find your feedback.

There were 78 responses, and four of them listed two or more cities, so these sum to 82.

If you organize one of these groups, and haven't already done so, please get in touch so I can send your feedback to you! (Or if you'd rather not receive it, it would be helpful if you could let me know that as well, so that I don't spend time trying to track you down.) I haven't yet sent anyone their feedback, and don't promise that I'll do it super quickly, but it will happen.

Comment author: sixes_and_sevens 19 May 2014 05:02:36PM 4 points [-]

A while ago I mentioned how I'd set up some regexes in my browser to alert me to certain suspicious words that might be indicative of weak points in arguments.

I still have this running. It didn't have the intended effect, but it is still slightly more useful than it is annoying. I keep on meaning to write a more sophisticated regex that can somehow distinguish the intended context of "rather" from unintended contexts. Natural language is annoying and irregular, etc., etc.

Just lately, I've been wondering if I could do this with more elaborate patterns of language. It's recently come to my attention that expressions of the form "in saying [X] (s)he is [Y]" are often indicative of sketchy value-judgement attribution. They're also very easy to capture with a regex. It's gone in the list.
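A minimal sketch of such a check in Python (the exact pattern and the helper function below are my own illustration, not sixes_and_sevens's actual regex):

```python
import re

# Flag sketchy value-judgement attributions of the form
# "in saying X, (s)he is Y" (case-insensitive, optional comma).
PATTERN = re.compile(
    r"\bin saying\b.*?,?\s*\b(?:he|she|they)\s+(?:is|are)\b",
    re.IGNORECASE,
)

def flag_sentences(text):
    """Return the sentences in text that match the suspicious pattern."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if PATTERN.search(s)]

sample = ("In saying that taxes are theft, he is admitting he hates the poor. "
          "The weather was mild yesterday.")
print(flag_sentences(sample))
```

A browser userscript could run something like this over page text and highlight the matches, which is roughly the sanity-lint idea above.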

So, my question: what patterns of language are (a) indicative of sloppy thinking, weak arguments, etc., and (b) reliably captured by a regex?

(In the back of my mind, I am imagining some sort of sanity-equivalent of a spelling and grammar check that you can apply to something you've just written, or something you're about to read. This is probably one of those projects I will start and then abandon, but for the time being it's fun to think about.)

Comment author: satt 26 May 2014 08:36:09PM 1 point [-]

The pair "tend to always" or "always tend to". Sometimes they come off to me as a way to exploit the rhetorical force of "always" while committing only to a hedged "tend to", in which case they can condense a two-step of terrific triviality into three words. There are likely other phrases that can provide plausibly deniable pseudo-certainty but I can't think of any.

More generally, the Unix utility diction tries to pick out "frequently misused, bad or wordy diction", which is a kinda related precedent.
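For the specific pair satt names, a small phrase list compiled into one alternation is enough; a sketch in Python (the phrase list and function are illustrative, not taken from diction):

```python
import re

# Phrases that pair a hedge with a universal, giving
# plausibly deniable pseudo-certainty.
HEDGED_UNIVERSALS = [
    r"\btend(?:s)? to always\b",
    r"\balways tend(?:s)? to\b",
]
CHECK = re.compile("|".join(HEDGED_UNIVERSALS), re.IGNORECASE)

def pseudo_certain(text):
    """Return every hedged-universal phrase found in text."""
    return CHECK.findall(text)

print(pseudo_certain("Critics always tend to say markets tend to always win."))
```

Extending the list is just a matter of appending more phrase patterns, which is how a diction-style checker grows.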

Comment author: sixes_and_sevens 26 May 2014 11:12:15PM 1 point [-]

two-step of terrific triviality

When they come in the form of portentous pronouncements, Daniel Dennett calls these "deepities": ambiguous expressions with one meaning that is trivially true but unimportant, and another that is obviously false but would be earth-shatteringly significant if it were true.

Also related in cold reading is the Rainbow Ruse.

Comment author: TsviBT 20 May 2014 03:16:42PM 1 point [-]

"[...]may be the case[...]"

Sometimes this phrase is harmless, but sometimes it is part of an important enumeration of possible outcomes/counterarguments/whatever. If "the case" does not come with either a solid plan/argument or an explanation why it is unlikely or not important, then it is often there to make the author and/or the audience feel like all the bases have been covered. E.g.,

We should implement plan X. It may be the case that [important weak point of X], but [unrelated benefit of X].

Comment author: moridinamael 19 May 2014 06:53:27PM *  1 point [-]

I had the notion a while ago to try to write a linter to aid in tasks beyond code correctness by automatically detecting the desired features in a plethora of objects. Kudos on actually doing it and in a not hare-brained fashion.

Comment author: Kaj_Sotala 19 May 2014 11:05:28AM *  9 points [-]

DragonBox has been mentioned on this site a few times, so I figured that people might be interested in knowing that its makers have come up with a new geometry game, Elements. It's currently available for Android and iOS platforms.

DragonBox Elements takes its inspiration from “Elements”, one of the most influential works in the history of mathematics. Written by the Greek mathematician Euclid, “Elements” describes the foundations of geometry using a singular and coherent framework. Its 13 volumes have served as a reference textbook for over 23 centuries. The book also introduced the axiomatic method, which is the system of argumentation that forms the basis for the scientific method we still use today. DragonBox Elements makes it possible for players to master its essential axioms and theorems after just a couple of hours playing!

Geometry used to be my least favorite part of math and as a result, I hardly remember any of it. Playing this game with that background is weird: I don't really have a clue of what I'm doing or what the different powers represent, but they do have a clear logic to them, and now that I'm not playing, I find myself automatically looking for triangles and quadrilaterals (had to look up that word!) in everything that I see. Plus figuring out what the powers do represent makes for an interesting exercise.

I'd be curious to hear comments from anyone who was already familiar with Euclid before this.

Comment author: philh 19 May 2014 01:17:29PM 1 point [-]

Not an expert, but Euclid made some mistakes, like using superposition to prove some theorems. I'm curious how they handle those. (e.g. I think Euclid attempted to prove side-angle-side congruence, but Hilbert had to include it as an axiom.)

Comment author: RichardKennaway 19 May 2014 08:13:01AM *  8 points [-]

This is a test posting to determine the time zone of the timestamps, posted at 09:13 BST / 08:13 UTC.

ETA: it's UTC.

Comment author: zedzed 19 May 2014 05:55:19AM *  10 points [-]

I have the privilege of working with a small group of young (12-14) highly gifted math students for 45 minutes a week for the next 5 weeks. I have extraordinary freedom with what we cover. Mathematically, we've covered some game theory and Bayes' theorem. I've also had a chance to discuss some non-mathy things, like Anki.

I only found out about Anki after I'd taken a bunch of courses, and I've had to spend a bunch of time restudying everything I'd previously learned and forgotten. It would have been really nice if someone had told me about Anki when I was 12.

So, what I want to ask LessWrong, since I suspect most of you are like the kids I'm working with except older, is: what blind spots did 12-14-year-old you have that I could point out to the kids I'm working with?

Comment author: AspiringRationalist 25 May 2014 01:14:55AM 7 points [-]

Instill the importance of a mastery orientation (basically, optimizing for developing skills rather than proving one's innate ability). My 12-14 year old self had such a strong performance orientation as to consider things like mnemonics and study skills to be only for the lazy and stupid. Anyone stuck in the performance orientation won't even be receptive to things like Anki.

Comment author: Creutzer 26 May 2014 10:49:21AM 0 points [-]

This. My upbringing screwed me up horribly in this respect.

Comment author: CAE_Jones 23 May 2014 08:47:37PM *  9 points [-]

I never learned how to put forth effort, because I didn't need to do so until after I graduated high school.

I got into recurring suboptimal ruts, sometimes due to outside forces, sometimes due to me not being agenty enough, that eroded my conscientiousness to the point that I'm quite terrified about my power (or lack thereof) to get back to the level of ability I had at 12-14.

I suppose, if I had to give my younger self advice in the form of a sound bite, it'd be something like: "If you aren't--maybe at least monthly--frustrated, or making less progress than you'd like, you aren't really trying; you're winning at easy mode, and Hard Mode is likely to capture you unprepared. Of course, zero progress is bad, too, so pick your battles accordingly."

Also, even if you're on a reasonable difficulty setting, it pays to look ahead and make sure you aren't missing anything important. My high school calculus teacher missed some notational standards in spite of grasping the math, and my first college-level course started off painful for it; I completely missed the cross and dot products in spite of an impressive math and physics high school transcript, and it turns out those matter a good deal (and at the time, the internet was very unsympathetic when I tried researching them).

Comment author: therufs 23 May 2014 07:54:41PM 4 points [-]

I had these blind spots as a 20some year old, so I assume I had them when I was 12-14 too:

  • I assumed that if I was good at something, I would be good at it forever. Turns out skills atrophy over time. Surprise! (This seems similar to your Anki revelation.)

  • I am agenty. I had no concept of the possibility that I might be able to cause* some meaningful effect outside my immediate circle of interaction.

* I did, of course, daydream about becoming rich and famous through no fault of my own; I wouldn't say I actually expected this to happen, but I thought it was more likely than becoming rich and famous under my own steam.

Comment author: [deleted] 20 May 2014 09:19:45PM *  10 points [-]

what blind spots did 12-14-year-old you have

  • Social capital is important. Build it.
  • Peer pressure is far more common and far more powerful than you think. Find an ingroup that puts it to constructive ends.
  • Don't major in a non-STEM field. College is job training and a networking opportunity. Act accordingly.
  • Something about time management, pattern-setting, and motivation management -- none of which I've managed to learn yet.
Comment author: Viliam_Bur 21 May 2014 08:22:17AM 11 points [-]

Social capital is important. Build it.

Some actionable advice: Keep written notes about people (don't let them know about that). For every person, create a file that will contain their name, e-mail, web page, facebook link, etc., and the information about their hobbies, what you did together, whom they know, etc. Plus a photo.

This will come very useful if you haven't been in contact with the person for years, and want to reconnect. (Read the whole file before you call them, and read it again before you meet them.) Bonus points if you can make the information searchable, so you can ask queries like "Who can speak Japanese?" or "Who can program in Ruby?".

This may feel a bit creepy, but many companies and entrepreneurs do something similar, and it brings them profit. And the people on the other side like it (at least if they don't suspect you of using a system for this). Simply think of your hard disk as your extended memory. There would be nothing wrong or creepy if you simply remembered all this stuff; and there are people with better memories who would.

Maybe make some schedule to reconnect with each person once in a few years, so they don't forget you completely. This also gives you an opportunity to update the info.

If you start doing it while young, your high-school and university classmates will already make a decent database. Then add your colleagues. You will appreciate it ten years later, when you would naturally forget most of them.

When you have a decent database, you can provide a useful social service by connecting people. -- Your friend X asks you: "Do you know someone who can program in Ruby?" "Uhm, not sure, but let me make a note and I'll think about it." Go home, look at the database. Find Y. Ask Y whether it is okay to give their contact to someone interested in Ruby. Give X contact to Y. At this moment, your friend X owes you a favor, and if X and Y do some successful business, Y owes you a favor too. The cost to you is virtually zero, apart from the cost of maintaining the database, which you would do anyway.

An important note: there is of course a huge difference between close friends and random acquaintances, but both can be useful in some situations, so you want to keep a database of both. Don't be selective. If your database has too many people, think about better navigation, but don't remove entries.
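A minimal sketch of what such a searchable "people notes" database could look like (all names, fields, and entries below are invented for illustration; any flat-file or spreadsheet equivalent works just as well):

```python
# Minimal sketch of a searchable contacts database, as described above.
# Field names and example entries are made up for illustration.
contacts = [
    {"name": "X", "email": "x@example.com",
     "skills": ["ruby"], "languages": ["english"]},
    {"name": "Y", "email": "y@example.com",
     "skills": ["python"], "languages": ["japanese", "english"]},
]

def who_can(field, value):
    """Answer queries like 'Who can speak Japanese?' or 'Who can program in Ruby?'"""
    return [c["name"] for c in contacts if value in c.get(field, [])]

print(who_can("languages", "japanese"))  # ['Y']
print(who_can("skills", "ruby"))         # ['X']
```

The point is only that the notes be structured enough to query; plain text files with consistent tags would support the same lookups via grep.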

Comment author: Gunnar_Zarncke 19 May 2014 10:16:23PM 2 points [-]

I was in such a program when I was 12-14 (run by the William Stern foundation in Hamburg, Germany), and the curriculum consisted mostly of very diverse 'math' problems, prepared in a way that made them accessible to us in a fun way without introducing too much up-front terminology or notation. Examples I remember off the top of my head:

  • Turing machines (dressed up as short-sighted busy beavers)

  • generalized Nim (for real, with lots of matches)

  • tilings of the plane

  • conveys game of life (easy on paper)

More I just looked up in an older folder:

  • distance metrics on a graph

  • multi-way balances

  • continuous fractions (cool for approximations; I still use this)

  • logical derivations about the beliefs of people whose dreams are indistinguishable from reality

  • generalized magical squares

  • Fibonacci sequences and http://en.wikipedia.org/wiki/Missing_square_puzzle

  • Drawing fractals (the iterated function ones; with printouts of some)

In general, only an exposition was given, with no set task to solve -- or some introductory questions. The patterns to be detected were the primary reward.

We were not introduced to really practical applications, but I'm unsure whether that would have been helpful, or even interesting. My interest at that time stemmed from the material being systematic patterns that I could approach abstractly and symbolically and 'solve'. I'm not sure the Sequences would be interesting in that way; their patterns are clear only in hindsight.

What should work is Bayes' rule - at least in the form that can be visualized (tiling of the 1×1 grid) or symbolically derived easily.
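As a concrete check of that unit-square picture: split the square by the prior, shade the likelihood within each strip, and the posterior is a ratio of shaded areas. The numbers below are an arbitrary worked example of mine:

```python
# Bayes' rule as areas in the 1x1 square (arbitrary example numbers).
# Split the square horizontally by P(H); within each strip, shade P(E|H).
p_h = 0.25           # prior P(H)
p_e_given_h = 0.8    # likelihood P(E|H)
p_e_given_not_h = 0.2

# Areas of the two shaded regions:
area_h_and_e = p_h * p_e_given_h               # 0.20
area_noth_and_e = (1 - p_h) * p_e_given_not_h  # 0.15

# Posterior = shaded area inside H / total shaded area.
posterior = area_h_and_e / (area_h_and_e + area_noth_and_e)
print(posterior)  # 0.5714... (= 4/7)
```

Kids can do the same thing by literally coloring a grid, which is the appeal of this form.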

Also, guessing and calibration games should work. You can also take standard games and add a layer of complexity to them (not an arbitrary one, but a helpful one; a minimal example: play Uno, but cards don't have to match on color+number, but on some number-theoretic identity, e.g. +(2,5) modulo (4,10)).

Comment author: Douglas_Knight 19 May 2014 11:27:08PM 1 point [-]

conveys game of life

I assume you mean Conway's game of life.

Comment author: Gunnar_Zarncke 20 May 2014 08:26:55AM 1 point [-]

Yes, of course. That, and we tried variations of the rule-set. We also discovered the glider.

It is interesting what can come out of this seed. When I later had an Atari, I wrote an optimized simulator in assembly which aggregated over multiple cells, and I even tried to use the blitter, reducing the number of clock cycles per cell as far as I could. This seed became part of the mosaic of concepts that sits behind my understanding of complex processes now.

Comment author: Benito 19 May 2014 02:33:24PM *  6 points [-]

Speaking as a somewhat gifted seventeen year old, I'd really like to have known about AoPS, HPMOR and the Sequences.

Also, I'd like to have had in my mind the notion that my formal education is not optimised for me, and that I really need to optimise it myself. Speaking more concretely, I think that most teenagers in Britain pick their A Levels (if they do them at all) based on what classes the other people around them are doing, which isn't very useful. Speaking to a friend though, I realised that when he was picking his third A Level to study, there was no other A Level he needed to study to get into his main area of specialisation (jazz musician), and his time would be better spent not doing the A level at all; he needed to think more meta. He was just doing an A level because that's what everyone seems to think you should do. I'm about to give up a class because it's not going to help me get anywhere, I can use the time better and learn what I want to better alone anyway. So, really optimise.

Don't know if that helps. And AoPS is ridiculously useful.

Comment author: Viliam_Bur 19 May 2014 07:06:52AM *  15 points [-]

what blind spots did 12-14-year-old you have

Heh, if I was 12-14 these days, the main message I would send to me would be: Start making and publishing mobile games while you have a lot of free time, so when you finish university, you have enough passive income that you don't have to take a job, because having a job destroys your most precious resources: time and energy.

(And a hyperlink or two to some PUA blogs. Yeah, I know some people object against this, but this is what I would definitely send to myself. Sending it to other kids would be more problematic.)

I would recommend Anki only for learning languages. For other things I would recommend writing notes (text documents); although this advice may be too me-optimized. One computer directory called "knowledge", subdirectories per subject, files per topic -- that's a good starting structure; you can change it later, if you need. But making notes becomes really important at the university level.

I would stress the importance of other things than math. Gifted kids sometimes focus on their strong skills, and ignore their weak skills -- they put all their attention to where they receive praise. This is a big mistake. However, saying this without providing actionable advice does not help. For example, my weak spots were exercise and social skills. For social skills a list of recommended books could help; with emphasis that I should not only read the books, but also practice what I learned. For exercise, a simple routine plus HabitRPG could do the job. Maybe to emphasise that I should not focus on how I compare with others, but how I compare with yesterday's me.

Something about the importance of keeping in contact with smart people, and the insanity of the world in general. As a smart person, talking with other smart people increases your powers: both because you develop with them the ideas you understand, and because you can ask them about things you don't understand. (A stupid person will not understand what you are saying, and will give you harmful advice about the things you asked.) In school you are supposed to work alone, but in real life a lot of success is achieved by teams; and the best teams are composed of good people, not of random people.

Another advice that is risky to give to other kids: Religion is bullshit and a waste of time. People will try to manipulate you, using lies and emotional pressure. Whatever other positive traits they have, try to find other people that have the same positive traits, but without the mental poison; even if it takes more time, it's worth it.

Comment author: MathiasZaman 19 May 2014 12:51:44PM 2 points [-]

I think most of my blindspots before roughly the age of 18 involved not understanding that I'm personally responsible for my success and the extent of my knowledge and that "good enough" doesn't cut it. If I were to send a message back to 14-year-old!Me, I'd tell him that he has a lot of potential, but that he can't rely on others to fulfill that potential.

Comment author: sixes_and_sevens 19 May 2014 10:19:43AM 1 point [-]

what blind spots did 12-14-year-old you have

I don't know how much of this falls under your remit, but I had quite a few educational blind spots I inherited from my parents, who didn't come from a higher-education background. If any of your students are in a similar position, it's worth checking they don't have any ludicrous expectations about the next several years of education which no-one close to them is in a position to correct.

Comment author: Metus 19 May 2014 10:38:14AM 1 point [-]

Blind spots such as?

Comment author: sixes_and_sevens 19 May 2014 12:26:41PM *  1 point [-]

I'm not sure any specific examples from my own experience would generalise very well.

If I were to translate my comment into a specific piece of generally-applicable advice, it would be to give students a realistic overview of what their forthcoming formal education involves, what it expects from them, and what options they have available.

As mentioned, this may be outside of the OP's remit.

Comment author: somnicule 19 May 2014 05:32:07PM 2 points [-]

The specific examples may not be used, but would clarify what sort of thing you're talking about.

Comment author: sixes_and_sevens 19 May 2014 05:47:55PM 5 points [-]

One example: certain scholastic activities are simply less important than others. If your model is "everything given to me by an authority figure is equally important", you don't manage your workload so well.

Comment author: Viliam_Bur 19 May 2014 10:02:45AM *  3 points [-]

I have a random mathematical idea, not sure what it means, whether it is somehow useful, or whether anyone has explored this before. So I guess I'll just write it here.

Imagine the most unexpected sequence of bits. What would it look like? Well, probably not what you'd expect, by definition, right? But let's be more specific.

By "expecting" I mean this: You have a prediction machine, similar to AIXI. You show the first N bits of the sequence to the machine, and the machine tries to predict the following bit. And the most unexpected sequence is one where the machine makes the most guesses wrong; preferably all of them.

More precisely: The prediction machine starts with imagining all possible algorithms that could generate sequences of bits, and it assigns them probability according to the Solomonoff prior. (Which is impossible to do in real life, because of the infinities involved, etc.) Then it receives the first N bits of the sequence, and removes all algorithms which would not generate a sequence starting with these N bits. Now it normalizes the probabilities of the remaining algorithms, and lets them vote on whether the next bit would be 0 or 1.

However, our sequence is generated in defiance of the prediction machine. We actually don't have any sequence in advance. We just ask the prediction machine what the next bit is (starting with the empty initial sequence), and then do the exact opposite. (There is some analogy with Cantor's diagonal proof.) Then we send the sequence with this new bit to the machine, ask it to predict the next bit, and again do the opposite. Etc.

There is this technical detail, that the prediction machine may answer "I don't know" if exactly half of the remaining algorithms predict that the next bit will be 0, and other half predicts that it will be 1. Let's say that if we receive this specific answer, we will always add 0 to the end of the sequence. (But if the machine thinks it's 0 with probability 50.000001%, and 1 with probability 49.999999%, it will output "0", and we will add 1 to the end of the sequence.)

So... at the beginning, there is no way to predict the first bit, so the machine says "I don't know" and the first bit is 0. At that moment, the prediction of the following bit is 0 (because the "only 0's" hypothesis is very simple), so the first two bits are 01. I am not sure here, but my next prediction (though I am predicting this with naive human reasoning, no math) would be 0 (as in "010101..."), so the first three bits are 011. -- And I don't dare to speculate about the following bits.

The exact sequence depends on how exactly the prediction machine defines the "algorithms that generate the sequence of bits" (the technical details of the language these algorithms are written in), but can something still be said about these "most unexpected" sequences in general? My guess is that to a human observer they would seem like random noise. -- Which contradicts my initial words that the sequence would not be what you'd expect... but I guess the answer is that the generation process is trying to surprise the prediction machine, not me as a human.
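The construction can be sketched in code, though AIXI itself is uncomputable. The sketch below substitutes Laplace's rule of succession for the Solomonoff mixture (an assumption of mine, not the comment's actual machine) and applies the same "do the opposite" step, including the "append 0 on a 50/50 tie" convention:

```python
# Toy version of the "most unexpected sequence" construction. The stand-in
# predictor uses Laplace's rule of succession on the bits seen so far;
# the diagonalization step always emits the opposite of its prediction.
def predict(bits):
    """Return predicted next bit, or None for 'I don't know' (exactly 50/50)."""
    p_one = (sum(bits) + 1) / (len(bits) + 2)  # Laplace estimate of P(next = 1)
    if p_one == 0.5:
        return None
    return 1 if p_one > 0.5 else 0

def unexpected_sequence(n):
    bits = []
    for _ in range(n):
        guess = predict(bits)
        # On "I don't know" append 0, as in the comment; otherwise defy the guess.
        bits.append(0 if guess is None else 1 - guess)
    return bits

print(unexpected_sequence(8))  # [0, 1, 0, 1, 0, 1, 0, 1]
```

Against this weak predictor the defier settles into a plain alternating pattern, which Laplace's rule never learns; the point of using Solomonoff induction is precisely that no such simple pattern would survive.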

Comment author: witzvo 20 May 2014 03:40:12AM 1 point [-]

Just in case anyone wants pointers to existing mathematical work on "unpredictable" sequences: Algorithmically random sequences (wikipedia)

Comment author: Nisan 19 May 2014 03:53:19PM 2 points [-]

In order to capture your intuition that a random sequence is "unsurprising", you want the predictor to output a distribution over {0,1} — or equivalently, a subjective probability p of the next bit being 1. The predictor tries to maximize the expectation of a proper scoring rule. In that case, the maximally unexpected sequence will be random, and the probability of the sequence will approach 2^{-n}.

Allowing the predictor to output {0, 1, ?} is kind of like restricting its outputs to {0%, 50%, 100%}.

Comment author: Viliam_Bur 19 May 2014 06:27:33PM *  1 point [-]

the maximally unexpected sequence will be random

In a random sequence, AIXI would guess on average half of the bits. My goal was to create a specific sequence, where it couldn't guess any. Not just a random sequence, but specifically... uhm... "anti-inductive"? The exact opposite of lawful, where random is merely halfway opposed. I don't care about other possible predictors, only about AIXI.

Imagine playing rock-paper-scissors against someone who beats you all the time, whatever you do. That's worse than random. This sequence would bring the mighty AIXI to tears... but I suspect to a human observer it would merely seem pseudo-random. And is probably not very useful for other goals than making fun of AIXI.

Comment author: Nisan 20 May 2014 01:53:57AM *  3 points [-]

Ok. I still think the sequence is random in the algorithmic information theory sense; i.e., it's incompressible. But I understand you're interested in the adversarial aspect of the scenario.

You only need a halting oracle to compute your adversarial sequence (because that's what it takes to run AIXI). A super-Solomonoff inductor that inducts over all Turing machines with access to halting oracles would be able to learn the sequence, I think. The adversarial sequence for that inductor would require a higher oracle to compute, and so on up the ordinal hierarchy.

Comment author: Khoth 19 May 2014 10:36:49AM *  1 point [-]

My guess is that to a human observer they would seem like random noise. -- Which contradicts my initial words that the sequence would not be what you'd expect... but I guess the answer is that the generation process is trying to surprise the prediction machine, not me as a human.

"What is the specific pattern of bits?" and "Give a vague description that applies to both this pattern and asymptotically 100% of possible patterns of bits" are very different questions. You're asking the machine the first question and the human the second question, so I'm not surprised the answers are different.

Comment author: jaibot 19 May 2014 06:42:48AM 4 points [-]

The OpenWorm Kickstarter ends in a few hours, and they're almost to their goal! Pitch in if you want to help fund the world's first uploads.

Comment author: MathiasZaman 20 May 2014 09:26:43AM 4 points [-]

Update: They made it.

Comment author: Jayson_Virissimo 19 May 2014 05:40:17AM *  4 points [-]

Yann LeCun, head of Facebook's AI-lab, did an AMA on /r/MachineLearning/ a few days ago. You can find the thread here.

In response to someone asking "What are your biggest hopes and fears as they pertain to the future of artificial intelligence?", LeCun responds that:

Every new technology has potential benefits and potential dangers. As with nuclear technology and biotech in decades past, societies will have to come up with guidelines and safety measures to prevent misuses of AI.

One hope is that AI will transform communication between people, and between people and machines. Ai will facilitate and mediate our interactions with the digital world and with each other. It could help people access information and protect their privacy. Beyond that, AI will drive our cars and reduce traffic accidents, help our doctors make medical decisions, and do all kinds of other things.

But it will have a profound impact on society, and we have to prepare for it. We need to think about ethical questions surrounding AI and establish rules and guidelines (e.g. for privacy protection). That said, AI will not happen one day out of the blue. It will be progressive, and it will give us time to think about the right way to deal with it.

It's important to keep mind that the arrival of AI will not be any more or any less disruptive than the arrival of indoor plumbing, vaccines, the car, air travel, the television, the computer, the internet, etc.

EDIT: I didn't see this one the first time. In response to someone asking "What do you think of the Friendly AI effort led by Yudkowsky? (e.g. is it premature? or fully worth the time to reduce the related aI existential risk?)", LeCun says that:

We are still very, very far from building machines intelligent enough to present an existential risk. So, we have time to think about how to deal with it. But it's a problem we have to take very seriously, just like people did with biotech, nuclear technology, etc.

Comment author: XiXiDu 19 May 2014 08:41:07AM *  10 points [-]

I'd love to see a discussion between people like LeCun, Norvig, Yudkowsky and e.g. Russell. A discussion where they talk about what exactly they mean when they think about "AI risks", and why they disagree, if they disagree.

Right now I often have the feeling that many people mean completely different things when they talk about AI risks. One person might mean that a lot of jobs will be gone, or that AI will destroy privacy, while the other person means something along the lines of "5 people in a basement launch a seed AI, which then turns the world into computronium". These are vastly different perceptions, and I personally find myself somewhere between those positions.

LeCun and Norvig seem to disagree that there will be an uncontrollable intelligence explosion. And I am still not sure what exactly Russell believes.

Anyway, it is possible to figure this out. You just have to ask the right questions. And this never seems to happen when MIRI or FHI talk to experts. They never specifically ask about their controversial beliefs. If you e.g. ask someone if they agree that general AI could be a risk, a yes/no answer provides very little information about how much they agree with MIRI. You'll have to ask specific questions.

Comment author: Vulture 23 May 2014 02:02:01AM 1 point [-]

Is it possible that MIRI knows privately (which is good enough for their own strategic purposes) that some of these high-profile people disagree with them on key issues, but they don't want to publicly draw attention to that fact?

Comment author: JoshuaFox 19 May 2014 10:21:51AM *  1 point [-]

If you could magically stop all human-on-human violence, or stop senescence (aging) for all humans, which would it be?

Comment author: Lumifer 23 May 2014 03:30:09PM 5 points [-]

If you could magically stop all human-on-human violence, or stop senescence (aging) for all humans, which would it be?

All governments rely on an implicit or explicit threat of force, of "human-on-human violence".

If no one can apply violence to me why should I pay any taxes, or, more crudely, pay for that apple I just grabbed off a street stall?

Comment author: Benito 20 May 2014 04:40:40PM *  2 points [-]

My problem with these questions is that it sorta gets difficult quickly. If you stopped aging today, I imagine there would very quickly be overpopulation issues, many old patients in hospitals wouldn't die, etc., and yet I find it difficult to think of major issues with the ending of violence (boxing champions would be out of a job). And even now, I'm sure someone's thought of a counterexample, and then the discussion would be harder. So even though I think that aging is more important than violence as a focus, the question asks about a hypothetical that is never going to occur (being able to just make that decision, I mean) and takes us away from reality into the nitty-gritty of a literal non-problem.

Why did you ask?

Edit: I didn't mean to make a case for either side, I was trying to suggest that the question itself seems unhelpful. We'll end up with a complicated technical discussion which is unlikely to have any practical value.

Comment author: Kaj_Sotala 21 May 2014 07:23:11AM *  16 points [-]

If you stopped aging today, I imagine there would very quickly be overpopulation issues

To give a sense of proportion: suppose that tomorrow, we developed literal immortality - not just an end to aging, but also prevented anyone from dying from any cause whatsoever. Further suppose that we could make it instantly available to everyone, and nobody would be so old as to be beyond help. So the death rate would drop to zero in a day.

Even if this completely unrealistic scenario were to take place, the overall US population growth would still only be about half of what it was during the height of the 1950s baby boom! Even in such a completely, utterly unrealistic scenario, it would still take around 53 years for the US population to double - assuming no compensating drop in birth rates in that whole time.

DR. OLSHANSKY: [...] I did some basic calculations to demonstrate what would happen if we achieved immortality today. And I compared it with growth rates for the population in the middle of the 20th Century. This is an estimate of the birth rate and the death rate in the year 1000, birth rate roughly 70, death rate about 69.5. Remember when there's a growth rate of 1 percent, very much like your money, a growth rate of 1 percent leads to a doubling time at about 69 to 70 years. It's the same thing with humans. With a 1 percent growth rate, the population doubles in about 69 years. If you have the growth rate — if you double the growth rate, you have the time it takes for the population to double, so it's nothing more than the difference between the birth rate and the death rate to generate the growth rate. And here you can see in 1900, the growth rate was about 2 percent, which meant the doubling time was about five years. During the 1950s at the height of the baby boom, the growth rate was about 3 percent, which means the doubling time was about 26 years. In the year 2000, we have birth rates of about 15 per thousand, deaths of about 10 per thousand, low mortality populations, which means the growth rate is about one half of 1 percent, which means it would take about 140 years for the population to double.

Well, if we achieved immortality today, in other words, if the death rate went down to zero, then the growth rate would be defined by the birth rate. The birth rate would be about 15 per thousand, which means the doubling time would be 53 years, and more realistically, if we achieved immortality, we might anticipate a reduction in the birth rate to roughly ten per thousand, in which case the doubling time would be about 80 years. The bottom line is, is that if we achieved immortality today, the growth rate of the population would be less than what we observed during the post World War II baby boom.
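The rule-of-70 arithmetic in the quoted passage is easy to check directly (a quick sketch; the function name is mine):

```python
import math

# Doubling time for a population growing at a constant annual rate r,
# where r = (births - deaths) per person per year.
def doubling_years(r):
    return math.log(2) / math.log(1 + r)

print(round(doubling_years(0.010)))  # 70  -- the "1 percent -> ~69-70 years" rule
print(round(doubling_years(0.005)))  # 139 -- year-2000 rates: births 15, deaths 10 per 1000
print(round(doubling_years(0.015)))  # 47  -- zero deaths, births 15 per 1000
```

The first two match the transcript's ~70 and ~140; the 53- and 80-year immortality figures quoted come out somewhat longer than this formula gives for the stated birth rates, but the qualitative conclusion, that even a zero death rate yields slower growth than the baby boom, is unchanged.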

Comment author: JoshuaFox 20 May 2014 05:57:21PM *  3 points [-]

[the question] sorta gets difficult

Sure does!

boxing champions would be out of a job

I don't count that as violence -- it is consensual (and there's a modicum of not-always-successful effort to prevent permanent harm).

overpopulation

This has been discussed at great depth and refuted, e.g. by Max More and de Grey.

Why did you ask

No particular reason: every now and then a thought comes to mind.

Comment author: polymathwannabe 20 May 2014 06:37:05PM 1 point [-]

If you take into account the risk of permanent brain damage, boxing (as well as rugby/football) is sacrificeable.

Comment author: JoshuaFox 23 May 2014 06:40:14AM 0 points [-]

Never did any of those myself, but I think that being consensual, they don't count as violence.

Comment author: polymathwannabe 23 May 2014 03:20:23PM 1 point [-]

It's complicated. Power dynamics at school and at home, as well as joblessness in some countries, may make a sports career less than voluntary.

Comment author: Jayson_Virissimo 20 May 2014 03:56:04AM 5 points [-]

Ending aging would almost certainly greatly diminish human-on-human violence, since increasing expected lifespans would lower time preference. Right?

Comment author: Kaj_Sotala 21 May 2014 07:07:43AM *  8 points [-]

I don't think it works that way. Currently most human-on-human violence is committed by young people (specifically young men), who by this logic should have the lowest time preference, since they can expect to have the most years left to live.

Comment author: JoshuaFox 20 May 2014 07:46:45AM 1 point [-]

Then again, if you had more to lose, maybe that would increase your incentive to protect yourself by getting the other guy before he gets you.

Comment author: NancyLebovitz 20 May 2014 03:58:33PM 2 points [-]

I would assume there's a sorting effect-- people would tend to figure out eventually that it's better to live among low-violence people.

One big question is... ok, we want anti-aging, but what age do you aim for? 17 has some advantages, but how about 25? 35? 50?

Comment author: lmm 20 May 2014 06:25:25PM 3 points [-]

I've read that cell death overtakes cell division at around 35, so perhaps a body in some longer-term equilibrium condition would look 35?

(I suspect that putting a single age on is too crude though. The optimal age for a set of lungs may not be the same as that for a liver)

Comment author: NancyLebovitz 20 May 2014 08:03:56PM *  2 points [-]

Optimal age is also relative to what you want to do-- different mental abilities peak at wildly different ages. If you stabilize your body at age 25 and then live to be 67 (edited-- was 53), will your verbal ability increase as much as if you let yourself age to 67?

Athletic abilities don't all peak at the same time, either. Strength doesn't peak at the same time as strength-to-weight ratio. Would you rather be a weightlifter or a gymnast? I believe coordination peaks late-- how do you feel about dressage?

Comment author: RichardKennaway 20 May 2014 09:34:49PM 5 points [-]

Optimal age is also relative to what you want to do-- different mental abilities peak at wildly different ages. If you stabilize your body at age 25 and then live to be 53, will your verbal ability increase as much as if you let yourself age to 67?

Staying physically 25 doesn't mean you have to stop learning or physically developing. Surely the development of abilities in adult life is the result of exercising body and mind over the years, not part and parcel of senescence?

Comment author: NancyLebovitz 20 May 2014 09:38:54PM *  1 point [-]

Surely the development of abilities in adult life is the result of exercising body and mind over the years, not part and parcel of senescence?

I don't think we know. I have no idea why verbal ability would peak so late, so I don't know whether brain changes associated with aging are part of the process.

Comment author: DanArmak 19 May 2014 06:15:28PM *  14 points [-]

I'm much more likely to die of aging than of violence; so I'd rather stop aging.

This seems to generalize well to the rest of humanity. I am surprised that most others who replied disagree. ISTM that most existential risks are not due to deliberate violence, but rather unintended consequences.

Comment author: Metus 19 May 2014 10:39:12AM 16 points [-]

The latter. The former is already decreasing at an incredible speed but I see no trend for the latter.

Comment author: Tenoke 19 May 2014 12:42:44PM 4 points [-]

The former is a major existential risk, while the latter is probably going to be solved soon(er), so the former.

Comment author: JoshuaFox 19 May 2014 01:17:37PM 1 point [-]

Good point! Then again, a lot of the existential risks we talk about have to do with accidental extinction, not caused by aggression per se.