Comment author: ESRogs 12 March 2014 01:20:38AM 1 point [-]

On the other hand after that decade you'll be without money, without a job

Yes, true. It would probably not be a good idea to attempt to retire with only one decade's worth of funds and plan never to work again. On the other hand, you could see how things go for the first 5 years and then go back to work if needed.

The problem is that you're looking specifically at the US stock market

So would you expect a US + international market cap-weighted index fund like Vanguard's Total World Stock Index Fund (bonus: available as an ETF) to have more variance or do worse than the US stock market by itself? That would surprise me.
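For intuition on why that would surprise me: a cap-weighted blend of two imperfectly correlated markets cannot be more volatile than its riskiest component. A toy calculation with made-up volatility and correlation numbers (not actual market statistics):

```python
import math

def portfolio_std(w1, sigma1, sigma2, rho):
    """Standard deviation of a two-asset portfolio with weights w1 and 1 - w1."""
    w2 = 1.0 - w1
    var = (w1 * sigma1) ** 2 + (w2 * sigma2) ** 2 \
        + 2 * w1 * w2 * rho * sigma1 * sigma2
    return math.sqrt(var)

# Hypothetical annualized volatilities: 17% US, 20% ex-US, correlation 0.8,
# with a roughly cap-weighted 55% US share. All numbers are illustrative.
us, intl, rho = 0.17, 0.20, 0.8
blended = portfolio_std(0.55, us, intl, rho)
print(round(blended, 4))
```

With correlation below 1 the blended volatility always lands below the weighted average of the two components, which is why adding international exposure shouldn't add variance.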

Or were you just saying you think the US was exceptional during the 20th century, and investors should not expect similar returns (either by diversifying across nations, or reliably picking a winning nation) in the 21st? Hmm, now I am curious what stock market returns looked like for the whole world in the 20th C.

there is the issue of survivorship bias

Unfortunately I wasn't able to determine whether that particular chart took into account survivorship bias, but I did find this blog post written by the author of the book the chart was taken from, suggesting that he's at least familiar with the issue.

Comment author: quanticle 12 March 2014 08:31:54AM 3 points [-]

On the other hand, you could see how things go for the first 5 years and then go back to work if needed.

Will you be allowed back into the labor force? Many employers, especially in the IT industry, will almost certainly turn you away if you have an unexplained five-year hole in your resume. Basically, the only explanation that can cover a five-year gap is education of some kind (usually something like graduate school). If you say, "Oh, I just retired for five years, but now I'm looking for a job again," that's not going to help your chances of landing a job.

Comment author: quanticle 11 March 2014 07:13:02AM 18 points [-]

It depends on what you mean by "job". It seems like you're saying that not having a job is equivalent to not working. I'd argue otherwise. You still do a lot of work. It's just that the work that you're doing doesn't fit into the traditional capitalist view of working for an employer, so you don't see it as a "job".

You bring up a number of examples: the Argentinian who left graduate economics to travel the world. Puneet Sahani. The Uruguayan couple. They don't have jobs in the traditional American sense of working for an employer for money. But I'd argue that their lifestyle is no less arduous than someone who does have a job. They still have to make arrangements for food, clothing, shelter and travel, and presumably they're doing something of value to earn those resources. That's work, even if it isn't a job, as traditionally defined.

Moreover, such a lifestyle requires a certain type of personality. It requires a personality that is willing to accept extreme levels of uncertainty, in some cases to the point of not knowing where one is going to sleep the next night. For that reason, I'd argue that getting a job is the rational decision for most people. It makes sense to trade a certain amount of freedom for the certainty of knowing that when you go home, you'll have a home to go to, with food in the fridge and clothes in the closet. The fact that some people are able to be happy without having that certainty doesn't mean that everyone will be happy in such a lifestyle, or even that you will be happy in such a lifestyle.

A job is truly an instrumental goal, and your terminal goals certainly do have chains of causation leading to them that do not contain a job for 330 days a year.

This is true, but the uncertainty around those other chains of causation is considerably higher than around the chains of causation that do involve having a job. Sure, I can scrape by without a job, hitchhiking my way along to wherever I'm trying to go. Or I can travel with relative certainty in a train or a jetliner, with tickets purchased using money from my job. Which route you choose depends on your tolerance for uncertainty and risk. I, for one, am glad for my job. It provides me the resources by which I carve out a tiny bubble of relative certainty in an uncertain world.

Comment author: gwern 04 September 2013 05:34:44PM 6 points [-]

I can't even get past the introduction:

  1. your header should not take up an entire screen
  2. "its", not "it's"
  3. you capitalize 'i' when it's a pronoun
  4. you punctuate the end of sentences, even in parenthetical comments
  5. spaces are a Good Thing

You are the reason Paul Graham made that comment.

Comment author: quanticle 04 September 2013 06:48:28PM *  1 point [-]

6. Line breaks should go in between paragraphs, not in between sentences in a paragraph. This is prose, not free verse.

EDIT: Markdown's auto-numbering of lists is infuriating.

Comment author: diegocaleiro 31 May 2013 04:40:31AM 8 points [-]

This may be exactly what you are looking for: Minimal Reading Sequence for Philosophy of Mind and Language

Comment author: quanticle 31 May 2013 05:31:52AM 4 points [-]

That list certainly does look promising. I've read a few things on it, and I look forward to reading the rest. I've also followed the link to Lukeprog's The Best Textbooks on Every Subject, which also has quite a few philosophical texts. Enough, at least, to keep me busy for a few months at my current rate of study. Thanks for the pointer.

Comment author: Neotenic 31 May 2013 04:56:20AM *  9 points [-]

You may want to change the title to "Analytic Philosophy" or "Contemporary Philosophy" since Modern Philosophy usually refers to something far removed from anything related to "Good and Real" by Drescher.

Comment author: quanticle 31 May 2013 05:13:00AM 5 points [-]

And that's exactly the sort of advice I'm looking for. I'm at such a low level, I don't even know what the proper name is for the thing I want to study! I've changed the title to "Contemporary philosophy". I think that better reflects the sort of things I wish to learn more about.

Comment author: Alicorn 04 September 2012 11:04:50PM 0 points [-]

I'm not enjoying it as much as I thought I might. It seems basically competent, but the writing doesn't propel me along. (The rationalist MLP fic does so propel.)

Comment author: quanticle 05 September 2012 02:30:28AM 0 points [-]

As gwern states in a sibling post, once Littlepip starts assembling her party, the story starts proceeding along nicely. If you've gotten past the introduction of the first two party members, and you still think it's slow, then I'd suggest skipping it.

Comment author: quanticle 11 July 2012 04:22:21AM 4 points [-]

One site that was recommended to me is Trello. It's a very flexible project management/to-do/brainstorming tool. It's organized as a number of boards, each of which has one or more lists or cards. You can move these cards between lists and between boards.

The general workflow I've established is to create a board for each project I'm working on, and have three lists: to-do, doing, and done. As you might suspect, cards start out in the "to-do" list, move to the "doing" list when I start on them and go to the "done" list when I finish. However, the tool, as such, does not force you into any particular workflow. That's an important consideration for me, because I've abandoned other task management software when its theoretical workflow model failed to match my real world needs. Trello is flexible enough to allow me to easily construct my own "pipeline" for tasks, with as many or as few steps as necessary, and have different pipelines for different projects.
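The pipeline described above is simple enough to model directly; a minimal sketch of the idea in plain Python (this is my own illustration, not the Trello API):

```python
class Board:
    """A project board with an ordered pipeline of named lists holding cards."""

    def __init__(self, name, pipeline=("to-do", "doing", "done")):
        self.name = name
        self.lists = {stage: [] for stage in pipeline}

    def add_card(self, title, stage="to-do"):
        """New cards start in the first pipeline stage by default."""
        self.lists[stage].append(title)

    def move_card(self, title, src, dst):
        """Move a card between lists, e.g. from 'doing' to 'done'."""
        self.lists[src].remove(title)
        self.lists[dst].append(title)

board = Board("Write blog post")
board.add_card("Outline")
board.move_card("Outline", "to-do", "doing")
print(board.lists["doing"])  # ['Outline']
```

The point is that the pipeline stages are just data, so a different project can use a different pipeline without changing the model, which is exactly the flexibility I value in Trello.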

Trello is a hosted application. However, they have a fairly easy-to-use export function that exports your boards and cards to a JSON document, so you're free to walk away with your data at any time. They also have an API, which you can use to further automate your task management.
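Because the export is plain JSON, inspecting your data offline takes only a few lines. A sketch against a simplified, hypothetical export structure (the real export schema has many more fields, so treat the shape below as illustrative):

```python
import json

# A toy stand-in for an exported board; real exports are much larger.
export = json.loads("""
{
  "name": "Write blog post",
  "lists": [{"id": "l1", "name": "to-do"}, {"id": "l2", "name": "done"}],
  "cards": [{"name": "Outline", "idList": "l2"},
            {"name": "Draft", "idList": "l1"}]
}
""")

# Map list ids to names, then count how many cards sit in each stage.
list_names = {lst["id"]: lst["name"] for lst in export["lists"]}
counts = {}
for card in export["cards"]:
    stage = list_names[card["idList"]]
    counts[stage] = counts.get(stage, 0) + 1
print(counts)
```

This kind of script is what "free to walk away with your data" means in practice: the export is self-describing enough to reconstruct your boards without the hosted application.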

Comment author: snarles 03 April 2012 12:53:01PM 1 point [-]

Try to convert your non-rationalist friends.

Comment author: quanticle 03 April 2012 06:38:55PM 10 points [-]

I don't think that's a good idea, to be honest. Conversion of other individuals is one of the more difficult things you can do as an aspiring rationalist. Let's face it, a lot of irrational arguments have very very strong intuitive appeal. Unless you are very familiar with the standard arguments for rationalism, you're more likely to simply alienate those around you and further isolate yourself by attempting to convert your non-rationalist friends.

Comment author: gwern 02 March 2012 06:14:59PM *  23 points [-]

Tipler paper

Wow, that's all kinds of crazy. I'm not sure how crazy, as I'm not a mathematical physicist - MWI and quantum mechanics implied by Newton? Really? - but one big red flag for me is pg. 187-188, where he doggedly insists that the universe is closed, although as far as I know the current cosmological consensus is the opposite, and I trust the cosmologists a heck of a lot more than a fellow who tries to prove his Christianity with his physics.

(This is actually convenient for me: a few weeks ago I was wondering on IRC what the current status of Tipler's theories was, given that he had clearly stated they were valid only if the universe were closed and if the Higgs boson's mass fell within certain values, IIRC, but I was feeling too lazy to look it all up.)

And the extraction of a transcendent system of ethics from a Feynman quote...

A moment’s thought will convince the reader that Feynman has described not only the process of science, but the process of rationality itself. Notice that the bold-faced words are all moral imperatives. Science, in other words, is fundamentally based on ethics. More generally, rational thought itself is based on ethics. It is based on a particular ethical system. A true human level intelligence program will thus of necessity have to incorporate this particular ethical system. Our human brains do, whether we like to acknowledge it or not, and whether we want to make use of this ethical system in all circumstances. When we do not make use of this system of ethics, we generate cargo cult science rather than science.

This is just too wrong for words. This is like saying that looking both ways before crossing the street is obviously a part of rational street-crossing - a moment's thought will convince the reader (Dark Arts) - and so we can collapse Hume's fork and promote looking both ways to a universal meta-ethical principle that future AIs will obey!

An AI program must incorporate this morality, otherwise it would not be an AI at all.

Show me this morality in the AIXI equation or GTFO!

After all, what is a computer program but a series of imperative sentences?

A map from domain to range, a proof in propositional logic, or a series of lambda expressions and reductions all come to mind...
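To make the point concrete, here is the same computation written both ways; only the first reads as "a series of imperative sentences" (a sketch in Python):

```python
from functools import reduce

# Imperative: a sequence of commands mutating state, step by step.
def factorial_imperative(n):
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

# Declarative: a single expression built from function application,
# with no mutation and no sequencing of commands.
factorial_expr = lambda n: reduce(lambda acc, i: acc * i, range(1, n + 1), 1)

print(factorial_imperative(5), factorial_expr(5))  # 120 120
```

Both denote the same function, but the second is a reduction of expressions rather than a list of orders, which is exactly why "a series of imperative sentences" fails as a definition of a program.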

In fact, I claim that an ethical system that encompasses all human actions, and more generally, all actions of any set of rational beings (in particular, artificial intelligences) can be deduced from the Feynman axioms. In particular, note that destroying other rational beings would make impossible the honestly Feynman requires.

One man's modus ponens is another man's modus tollens. That the 'honestly' requires other entities is proof that this cannot be an ethical system which encompasses all rational beings.

Hence, they will be part of the community of intelligent beings deciding whether to resurrect us or not. Do not children try to see to their parents’ health and well-being? Do they not try and see their parent survive (if it doesn’t cost too much, and in the far future, it won’t)? They do, and they will, both in the future, and in the far future.

Any argument that rests on a series of rhetorical questions is untrustworthy. Specifically, sure, I can in 5 seconds come up with a reason they would not preserve us: there are X mind-states we can be in while still maintaining identity or continuity; there are Y (Y < X) that we would like or would value; with infinite computing power, we will exhaust all Y. At that point, by definition, we could choose to not be preserved. Hence, I have proven we will inevitably choose to die even if uploaded to Tipler's Singularity.

(Correct and true? Dunno. But let's say this shows Tipler is massively overreaching...)

What a terrible paper altogether. This was a peer-reviewed journal, right?

Comment author: quanticle 02 March 2012 10:41:23PM *  7 points [-]

The quote that stood out for me was the following:

The nineteenth century physicists also believed in the aether, as did Newton. There were many aether theories available, but only one was consistent with observation: H.A. Lorentz's theory, which simply asserted that the Maxwell equations were the equations for the aether. In 1904, Lorentz showed (Einstein et al., 1923) that this theory of the aether - equivalently the Maxwell equations - implied that absolute time could not exist, and he deduced the transformations between space and time that now bear his name. [...] That is, general relativity is already there in 19th century classical mechanics.

Now, all that's well and good, except for one, tiny, teensy little flaw: there is no such thing as aether. Michelson and Morley proved that quite conclusively in 1887. Tipler, in this case, appears to be basing his argument on a theory that was discredited over a century ago. Yes, some of the conclusions of aetheric theory are superficially similar to the conclusions of relativity. That, however, doesn't make the aetheric theory any less wrong.

Comment author: quanticle 02 March 2012 10:07:34PM 5 points [-]

Our reason for placing the Singularity within the lifetimes of practically everyone now living who is not already retired, is the fact that our supercomputers already have sufficient power to run a Singularity level program (Tipler, 2007). We lack not the hardware, but the software. Moore’s Law insures that today’s fastest supercomputer speed will be standard laptop computer speed in roughly twenty years (Tipler, 1994).

Really? I was unaware that Moore's law was an actual physical law. Our state of the art is already pressing against the absolute physical limits of transistor design - we have single-atom transistors in the lab. So, if you'll forgive me, I'll be taking the claim that "Moore's law ensures that today's fastest supercomputer speed will be the standard laptop computer speed in 20 years" with a grain of salt.
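For scale, note what the claim requires. A back-of-the-envelope check, assuming the classic doubling-every-two-years cadence:

```python
# Transistor-count doubling every ~2 years, compounded over 20 years,
# is what the "supercomputer becomes a laptop" claim implicitly assumes.
years = 20
doubling_period = 2
speedup = 2 ** (years / doubling_period)
print(speedup)  # 1024.0
```

That is roughly a thousandfold improvement, which has to come from somewhere once shrinking transistors stops delivering it.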

Now, perhaps we'll have some other technology that allows laptops twenty years hence to be as powerful as supercomputers today. But to handwave that enormous engineering problem away by saying, "Moore's law will take care of it," is fuzzy thinking of the worst sort.
