
Comment author: PhilGoetz 22 May 2015 01:29:34PM 0 points

Something I just noticed: If you click on the graph, it switches to a graph of the probability of survival over time. But that graph doesn't match up at all with the reported numbers. It shows about 16% survival for 100% stocks, and about 0% survival for a 50/50 mix of stocks and bonds.

Comment author: fortyeridania 22 May 2015 07:31:54AM 2 points
I think they can only mean either "variance" or "badness of worst case"

In the context of financial markets, risk means dispersion around the expected return, usually measured by the variance or standard deviation. My finance professor emphasized that although in everyday speech "risk" refers only to bad things, in finance we talk of both downside and upside risk.
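To make the distinction concrete, here is a minimal sketch (the return numbers are hypothetical, not from any source) contrasting the textbook measure, which counts dispersion in both directions, with a downside-only measure closer to the everyday sense of "risk":

```python
import statistics

# Hypothetical annual returns, for illustration only (not real data).
returns = [0.12, -0.05, 0.30, 0.07, -0.18, 0.22]

mean = statistics.mean(returns)

# "Risk" in the finance-textbook sense: dispersion around the mean,
# counting moves in both directions.
total_risk = statistics.stdev(returns)

# Everyday "risk" usually means only the bad outcomes; downside
# semideviation captures that narrower notion.
downside = [min(r - mean, 0.0) for r in returns]
downside_risk = (sum(d * d for d in downside) / (len(returns) - 1)) ** 0.5

print(f"mean return:       {mean:.3f}")
print(f"std dev (risk):    {total_risk:.3f}")     # penalizes upside surprises too
print(f"downside semidev.: {downside_risk:.3f}")  # penalizes only shortfalls
```

On these made-up numbers the two measures differ because the big positive years inflate the standard deviation but contribute nothing to the downside semideviation.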

Comment author: PhilGoetz 22 May 2015 01:10:59PM 2 points

So "risk" really does mean surprise to them. Do you think this impairs their ability to reason about risk? E.g., would they try to minimize their risk because that's a good thing, for the ordinary definition of risk, but then actually minimize their variance?

Comment author: Autolykos 22 May 2015 12:20:14PM 0 points

Exactly. Stocks are almost always better long-term investments than anything else (if mixed properly; single points of failure are stupid). The point of mixing in "slow" options like bonds or real estate is that it gives you something to take money out of when stocks are low (and to replenish it when stocks are high). That may look suboptimal, but it still beats the alternatives of borrowing money to live on or selling off stocks you expect to rise mid-term. The simulation probably does a poor job of reflecting that.
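A rough Monte Carlo sketch of that point, with made-up return distributions and dollar amounts (none of this comes from the original simulation): spend from a bond buffer after down years instead of selling depressed stocks, and compare ruin rates against an all-stocks withdrawal plan.

```python
import random

def simulate(years=30, trials=10_000, start=1_000_000, spend=40_000,
             bond_buffer=200_000, seed=0):
    """Compare 'always sell stocks' with 'spend a bond buffer in down years'.
    All return distributions and dollar amounts are assumptions."""
    rng = random.Random(seed)
    ruined_a = ruined_b = 0
    for _ in range(trials):
        stocks_a = float(start)                  # strategy A: 100% stocks
        stocks_b = float(start - bond_buffer)    # strategy B: stocks + bond buffer
        bonds_b = float(bond_buffer)
        alive_a = alive_b = True
        for _ in range(years):
            r_stock = rng.gauss(0.07, 0.18)      # assumed stock return
            r_bond = rng.gauss(0.02, 0.05)       # assumed bond return
            # Strategy A: sell stocks every year, whatever the market did.
            if alive_a:
                stocks_a = stocks_a * (1 + r_stock) - spend
                if stocks_a < 0:
                    ruined_a += 1
                    alive_a = False
            # Strategy B: spend from bonds after a down year, from stocks otherwise.
            if alive_b:
                stocks_b *= 1 + r_stock
                bonds_b *= 1 + r_bond
                if r_stock < 0:
                    bonds_b -= spend
                else:
                    stocks_b -= spend
                # If the chosen bucket ran dry, cover the shortfall from the other.
                if bonds_b < 0:
                    stocks_b += bonds_b
                    bonds_b = 0.0
                if stocks_b < 0:
                    bonds_b += stocks_b
                    stocks_b = 0.0
                if bonds_b < 0:
                    ruined_b += 1
                    alive_b = False
                    continue
                # After a good year, top the buffer back up out of stock gains.
                if r_stock > 0 and bonds_b < bond_buffer and stocks_b > spend:
                    top_up = min(bond_buffer - bonds_b, stocks_b - spend)
                    stocks_b -= top_up
                    bonds_b += top_up
    print(f"ruin rate, all-stocks withdrawals: {ruined_a / trials:.1%}")
    print(f"ruin rate, with bond buffer:       {ruined_b / trials:.1%}")

simulate()
```

The exact numbers are meaningless; the point is that the buffer's job is to absorb sequence-of-returns risk, not to raise expected return.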

Comment author: PhilGoetz 22 May 2015 12:53:35PM 0 points

That's no reason to tell someone with hundreds of thousands of dollars to put half of it in bonds. The market isn't going to stay down for 10 years.

"Risk" means surprise

2 PhilGoetz 22 May 2015 04:47AM

I lost about $20,000 in 2013 because I didn't notice that a company managing some of my retirement funds had helpfully reallocated them from 100% stocks into bonds and real estate, to "avoid risk". My parents are retired, and everyone advising them tells them to put most of their money in "safe" investments like bonds.

Comment author: PhilGoetz 01 May 2015 07:45:47PM 1 point

There is no question that colonization will reduce the risk of many forms of Filters

Actually there is. It just hasn't been thought about AFAIK. The naive belief is that there's safety in numbers, or that catastrophes have local impact. But filters, after all, are disruptions that don't stop locally. World wars. The Black Death. AI. The Earth is already big enough to stop most things that can be stopped locally, except astronomical ones like giant asteroids.

There is a probability distribution of catastrophes of different sizes. If that's a power-law distribution, as it very likely is, and if the big catastrophes are mostly human-caused, as they probably are, then the more we spread out, the more likely it is that someone, somewhere in our colonized space, will trigger a catastrophe big enough to wipe us all out.
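A back-of-the-envelope version of that argument, with made-up numbers: if each settled world independently has some small yearly chance p of producing an unbounded, human-caused catastrophe, then with n worlds the chance that someone triggers one is 1 − (1 − p)^n, which grows roughly in proportion to n as long as n·p is small.

```python
# Back-of-the-envelope sketch of the argument above, with made-up numbers.
# p is an assumed per-settlement, per-year chance of triggering a catastrophe
# too big to be stopped locally. With n settlements acting independently,
# the chance that *someone* triggers one this year is 1 - (1 - p)**n.

p = 1e-5  # assumption, not an estimate from any source

for n in (1, 10, 100, 1_000, 10_000):
    p_any = 1 - (1 - p) ** n
    print(f"{n:>6} settlements -> P(someone triggers it this year) = {p_any:.2e}")
```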

In response to Weekly LW Meetups
Comment author: PhilGoetz 24 April 2015 10:15:35PM 2 points

I have a suggestion for people near Baltimore: There's a bioprinting symposium tomorrow (April 25) from noon to 5, at the Baltimore Under Ground Science Space, 101 North Haven Street, Suite 105, Baltimore, MD 21224. It is only $75. The organizers are losing a lot of money on this.

You could organize a meetup at this event. HOWEVER, don't walk there, and don't plan to walk around there to get lunch or dinner. I haven't been there, but it looks on the map like this spot is on the edge of the biggest slum in Baltimore.

Comment author: RobertLumley 18 July 2011 05:07:53PM 0 points

I'm not sure how familiar with voting theory (or cake cutting theory) the average LessWrong reader is, so I may be preaching to the choir. But Arrow's theorem (You can wiki it, I can't give a precise mathematical definition off the top of my head.) pretty much states that having a decent voting system is impossible. Of course, we use the worst one possible (plurality) so anything would be an improvement. But mathematically, any solution proposed here will not be perfect, or perhaps even any good.
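Arrow's theorem itself needs more setup than a comment allows, but the Condorcet paradox (hypothetical ballots below, not taken from anything in this thread) illustrates the underlying difficulty any ranked voting method has to live with: majorities can prefer A to B, B to C, and C to A all at once.

```python
from itertools import permutations

# Hypothetical ballots: three equal voter blocs, each ranking candidates A, B, C.
ballots = [
    (["A", "B", "C"], 1),  # bloc 1: A > B > C
    (["B", "C", "A"], 1),  # bloc 2: B > C > A
    (["C", "A", "B"], 1),  # bloc 3: C > A > B
]

def prefers(ranking, x, y):
    """True if this ballot ranks x above y."""
    return ranking.index(x) < ranking.index(y)

total = sum(w for _, w in ballots)

# Pairwise majority comparisons: every candidate loses to someone.
for x, y in permutations("ABC", 2):
    votes = sum(w for r, w in ballots if prefers(r, x, y))
    if votes > total / 2:
        print(f"a majority prefers {x} over {y} ({votes} of {total})")
```

Whatever a ranked method does with these ballots, some majority gets overruled.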

Comment author: PhilGoetz 23 April 2015 02:04:35AM 0 points

But Arrow's theorem (You can wiki it, I can't give a precise mathematical definition off the top of my head.) pretty much states that having a decent voting system is impossible.

In that case, we should reinstate the monarchy right now, since no system of voting is worthwhile.

Comment author: PhilGoetz 01 April 2015 03:25:57AM 4 points

An important form of strategic analysis is the search for crucial considerations. (p257)

Crucial consideration: idea with the potential to change our views substantially, e.g. reversing the sign of the desirability of important interventions. (p257)

Yes, but... a "crucial consideration" is then an idea that runs counter to an answer we already feel certain about, on an important question. This means that we should not just be open-minded, but should specifically seek out dissenting opinions on the matters we are most confident about.

How can you do this without being obliged to consider what seem to you like crackpot theories?

Comment author: KatjaGrace 31 March 2015 04:29:27AM 4 points

Are there things that someone should maybe be doing about AI risk that haven't been mentioned yet?

Comment author: PhilGoetz 01 April 2015 03:13:30AM 2 points

The entire approach of planning a stable ecosystem of AIs that evolve in competition, rather than one AI to rule them all and in the darkness bind them, was dismissed in the middle of the book with a few pages amounting to "it could be difficult".

Comment author: PhilGoetz 01 April 2015 03:10:23AM 2 points

For many questions in math and philosophy, getting answers earlier does not matter much.

I disagree completely. Looking at all the problems we need to solve, the field that lags furthest behind in doing its part is philosophy. The hardest questions raised in Superintelligence are philosophical problems of value, of what we even mean by "value". I believe that philosophy must be done by scientists, since we need to find actual answers to questions. For example, one could understand nothing of ethics without first understanding evolution. So it's true that philosophical advances rely on scientific ones. But philosophers haven't even learned how to ask testable questions or frame hypotheses yet.

The ideal allocation of resources, if a world dictator were inclined to reduce existential risk, would be to slow all scientific advance and wait for philosophy to catch up. Additionally, philosophy presents fewer existential risks than any (other?) science.
