
Comment author: BrassLion 28 October 2014 03:03:55AM 14 points [-]

(I'd be remiss if I didn't link this Mr. Money Mustache post on index funds that explains why they are a good idea)

To buy an index fund, you buy shares of a mutual fund. That mutual fund invests in every stock in the chosen index, weighted according to whatever criteria the index uses (usually market capitalization). Each share of the mutual fund is worth a portion of the underlying investment. At no point do you own the separate stocks - you own shares of the fund instead.

Toy example: You have an index fund that invests in every stock listed on the New York Stock Exchange. The fund invests $1,000,000 in stock split evenly among every stock on the NYSE, then issues a thousand shares of the fund itself. You buy one share. Your share is worth $1,000. You can sell that share back to the fund and they will give you $1,000. Over the next year, some stocks go up and some stocks go down. The fund doesn't buy any more stock or issue any more shares. On average, the nominal value of the NYSE goes up by about 7% a year. The fund now owns $1,070,000 of stocks. Your one share is now worth $1,070.
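(For the numerically inclined, here's a minimal Python sketch of that toy fund. The numbers - the $1,000,000 portfolio, the thousand shares, the 7% year - are just the ones from the example above, not real market data.)

# Toy index fund: the fund's value is split evenly across the shares it has issued.
portfolio_value = 1_000_000   # dollars invested across every NYSE stock
shares_outstanding = 1_000    # shares the fund has issued

price_per_share = portfolio_value / shares_outstanding
print(price_per_share)        # 1000.0 -> each share is worth $1,000

# A year later the underlying stocks are up about 7% on average.
portfolio_value *= 1.07
price_per_share = portfolio_value / shares_outstanding
print(price_per_share)        # 1070.0 -> your one share is now worth $1,070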

The dividends go wherever you want them to. The one share (out of a thousand) you bought above entitles you to 1/1000 of the dividends paid by the stocks in the fund's entire investment. If you're smart, they go to buy more shares of the fund, because compounding will make you rich. You can have them disbursed to you as money you can exchange for goods and services, though.
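(To make the compounding point concrete, here's a rough Python sketch. The 5% price growth, 2% dividend yield, 30-year horizon, and $1,000 starting amount are hypothetical round numbers for illustration, not a forecast.)

# Hypothetical: 5% annual price growth, 2% dividend yield, 30 years, $1,000 invested.
years, price_growth, dividend_yield = 30, 0.05, 0.02

reinvested = 1_000.0    # dividends used to buy more fund shares
cashed_out = 1_000.0    # dividends taken as spending money instead
dividends_spent = 0.0

for _ in range(years):
    # Reinvesting: the dividend is added back, so the whole ~7% compounds.
    reinvested *= (1 + price_growth + dividend_yield)
    # Cashing out: only the 5% price growth compounds; the 2% is spent each year.
    dividends_spent += cashed_out * dividend_yield
    cashed_out *= (1 + price_growth)

print(round(reinvested))                    # ~7,612
print(round(cashed_out + dividends_spent))  # ~5,651 (final share value plus all dividends spent)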

Investing in an index fund is very easy. You will pay by direct withdrawal from a bank account, so you will have to do something to confirm you own the account, but other than that it's like buying anything else online.

Index funds cover their costs - which are low, because buying more stock and re-balancing existing stock can be done by a not-that-sophisticated computer program - by charging you a small percentage of your investment. This is reflected in your shares (and dividends) not being worth quite 100% of the fund's value. Index funds are good because they have a very low expense ratio. Many normal mutual funds charge upwards of 1% annually. A good index fund can charge about 0.05%-0.20%. On a $10,000 investment, that means you pay your fund about $20 for the privilege of making you about $700, every year.
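(Here's a hedged back-of-the-envelope Python sketch of how much that fee difference matters over time. The $10,000 starting balance, flat 7% gross return, and 30-year horizon are assumptions for illustration only.)

# Hypothetical: $10,000 growing at 7%/year gross, under two different expense ratios.
def value_after(principal, gross_return, expense_ratio, years):
    """Grow principal for `years` years, deducting the fund's fee each year."""
    for _ in range(years):
        principal *= (1 + gross_return - expense_ratio)
    return principal

start, gross, years = 10_000, 0.07, 30
print(round(value_after(start, gross, 0.0100, years)))   # ~57,435 with a typical 1% mutual fund
print(round(value_after(start, gross, 0.0005, years)))   # ~75,063 with a 0.05% index fund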

Opinion time: I own shares in index funds. They are amazing. For a few hours of work setting up an automatic transfer and filling out paperwork, I am slowly getting rich. I don't need the money any time this decade, so even if the market crashes tomorrow in a 2008-level event, the occasional 1990s-style boom cancels that out over time, leaving real growth at about 5% per year, assuming you use any dividends to purchase more shares.

I will let you skip the next part of this process and recommend a specific fund: the Vanguard Total Stock Market Index Fund, VTSMX. It invests in every stock listed on the NYSE and NASDAQ. If you have $10k invested in it, the expense ratio is a super-low 0.05%, and American stocks are very broad and exposed to world conditions as a whole (this is good - you want to spread out your portfolio as much as possible to reduce risk). Go to vanguard.com; you can figure it out online.

I think I could talk about the minutiae of investing all day. It's fascinating. I should write that post about investing and the Singularity one day.

Comment author: Nectanebo 28 October 2014 09:04:22AM 1 point [-]

Thanks for the detailed response. The link was very good, too.

Comment author: Nectanebo 28 October 2014 01:55:55AM 8 points [-]

Index funds have been recommended on LW before. I have a hard time understanding how investing in one would actually work, though. Do you actually own the separate stocks in the fund's index, or do you technically own something else? Where does the dividend money go?

Comment author: Nectanebo 23 October 2014 04:45:22AM 62 points [-]

Took the survey. I always feel like I did the last one only recently.

Comment author: Lumifer 12 September 2014 03:55:01PM *  6 points [-]

Peter Thiel did an AMA on Reddit, and mentioned friendly AI and such (and even neoreaction :-D).

Comment author: Nectanebo 12 September 2014 08:03:50PM *  2 points [-]

One of the better AMAs I've read.

Peter is an interesting guy. Is his book worth reading?

Comment author: Lumifer 08 September 2014 05:28:03PM 4 points [-]

What's supposed to happen if an expanding FAI friendly to civilization X collides with an expanding FAI friendly to civilization Y?

Comment author: Nectanebo 08 September 2014 08:20:41PM *  2 points [-]

If their ideas of friendliness are incompatible with each other, perhaps a conflict? Superintelligent war? It may be the case that one will be 'stronger' than the other, and that there will be a winner-take-all(-of-the-universe?) resolution?

If there is some compatibility, perhaps a merge, a la Three Worlds Collide?

Or maybe they co-operate, try not to interfere with each other? This would be more unlikely if they are in competition for something or other (matter?), but more likely if they have difficulties assessing the risks of not co-operating, or if there is mutually assured destruction?

It's a fun question, but I mean, Vinge had that event horizon idea, about how fundamentally unpredictable things are for us mere humans when we're talking about hypothetical intelligences of this caliber, and I think he had a pretty good point on that. This question is taking a few extra steps beyond that, even.

Comment author: Nectanebo 22 August 2014 05:48:01AM 3 points [-]

Isn't this kind of thing a subset of the design space of minds post? Like, we don't know exactly what kind of intelligence could end up exploding and there are lots of different possible variations?

Comment author: zzrafz 11 August 2014 04:20:42PM 0 points [-]

Playing devil's advocate here, the original poster is not that wrong. Ask any other living species on Earth and they will say their life would be better without humans around.

Comment author: Nectanebo 11 August 2014 05:26:11PM *  9 points [-]

Apart from the fact that they wouldn't say anything (because generally animals can't speak our languages ;)), nature can be pretty bloody brutal. There are plenty of situations in which our species' existence has made the lives of other animals much better than they would otherwise be. I'm thinking of veterinary clinics that often perform work on wild animals, pets that don't have to worry about predation, that kind of thing. Also, I think there are probably a lot of species that have done alright for themselves since humans showed up; animals like crows, and their equivalents in that niche around the world, seem to do quite well in urban environments.

As someone who cares about animal suffering, is sympathetic to vegetarianism and veganism, and is even somewhat sympathetic to more radical ideas like eradicating the world's predators, I think that humanity represents a very real possibility of decreasing suffering in the world, including animal suffering, especially as we grow in our ability to shape the world in the way we choose. Certainly, I think that humanity's existence provides real hope in this direction, remembering that the alternative is for animals to continue to suffer on nature's whims perhaps indefinitely, rather than on ours perhaps temporarily.

Comment author: [deleted] 06 August 2014 05:43:29PM 3 points [-]

That's one reason. As an example, Goertzel seems to fall somewhat in (1) with his cosmist manifesto.

But more importantly I think are issues of hard takeoff timeline and AGI design. The mainstream opinion, I think, is that a hard-takeoff would take years at the minimum, and there would be both sufficient time to recognize what is going on and to stop the experiment. Also MIRI seems for some reason to threat-model its AGI's as some sort of perfectly rational alien utility-maximizer, whereas real AGIs are implemented with all sorts of heuristic tricks that actually do a better job of emulating the quirky way humans think. Combined with the slow takeoff, projects like OpenCog intend to teach robot children in a preschool like environment, thereby value-loading them in the same way that we value-load our children.

In response to comment by [deleted] on Six Plausible Meta-Ethical Alternatives
Comment author: Nectanebo 06 August 2014 07:21:29PM *  1 point [-]

Yeah, I was thinking of Goertzel as well.

So you don't think MIRI's work is all that useful? What probability would you assign to a hard takeoff happening at the speed they're worried about?

Comment author: Nectanebo 06 August 2014 03:50:14PM 1 point [-]

So is this roughly one aspect of why MIRI's position on AI safety concerns is different from that of similar parties - that they're generally more sympathetic to possibilities further away from (1) than their peers are? I don't really know, but that's what the pebblesorters/value-is-fragile strain of thinking seems to suggest to me.

Comment author: gwern 05 August 2014 11:03:36PM 0 points [-]

Edit made months later: It turned to shit. No longer recommended.

All the more reason to try to only consume finished works. In-progress recommendations are treacherous.

Comment author: Nectanebo 06 August 2014 12:46:17PM 0 points [-]

All the more reason to try to only consume finished works.

I agree with the sentiment because it's frustrating not being able to complete something right away, but with AnH I really did enjoy following it month by month. I think that some pieces of entertainment are suited to that style of consumption and are fun to follow, even if they don't turn out to be very good in the end and aren't worth it for those who would go back and consume it all at once.
