
Comment author: ESRogs 12 March 2014 01:20:38AM 1 point [-]

On the other hand after that decade you'll be without money, without a job

Yes, true. It would probably not be a good idea to attempt to retire with only one decade's worth of funds and plan never to work again. On the other hand, you could see how things go for the first 5 years and then go back to work if needed.
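A minimal sketch of that "reassess after five years" reasoning, as a crude Monte Carlo; every number here (starting balance, annual spend, return mean and volatility) is a hypothetical placeholder, not advice:

```python
import random

def survival_rate(start=250_000, spend=25_000, years=10,
                  mean=0.05, sd=0.18, trials=10_000):
    """Fraction of simulated decades in which the savings outlast
    the spending. All parameters are hypothetical placeholders."""
    survived = 0
    for _ in range(trials):
        balance = start
        for _ in range(years):
            # one year of (noisy) returns, then one year of spending
            balance = balance * (1 + random.gauss(mean, sd)) - spend
            if balance <= 0:
                break
        else:
            survived += 1
    return survived / trials

print(f"decade survival rate: {survival_rate():.1%}")
```

Running a grid over the spend parameter would show how sharply the survival rate depends on the withdrawal rate, which is the crux of the "go back to work if needed" plan.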

The problem is that you're looking specifically at the US stock market

So would you expect a US + international market cap-weighted index fund like Vanguard's Total World Stock Index Fund (bonus: available as an ETF) to have more variance or do worse than the US stock market by itself? That would surprise me.
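For intuition on why the diversified fund should have less variance, not more: two-asset portfolio variance falls as the correlation between the assets falls. A minimal sketch with made-up volatility numbers (the weights and standard deviations below are illustrative, not Vanguard's actual figures):

```python
import math

def portfolio_sd(w_us, sd_us, sd_intl, corr):
    """Standard deviation of a two-asset (US + international) mix."""
    w_intl = 1 - w_us
    var = (w_us**2 * sd_us**2 + w_intl**2 * sd_intl**2
           + 2 * w_us * w_intl * corr * sd_us * sd_intl)
    return math.sqrt(var)

sd_us, sd_intl = 0.17, 0.19     # hypothetical annual volatilities
for corr in (0.5, 0.8, 1.0):
    blended = portfolio_sd(0.55, sd_us, sd_intl, corr)
    print(f"corr={corr}: blended sd={blended:.4f} vs US-only {sd_us}")
```

With these illustrative numbers the blend only looks worse than US-only at correlation 1, and then only because the made-up international volatility is higher.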

Or were you just saying you think the US was exceptional during the 20th century, and investors should not expect similar returns (either by diversifying across nations, or reliably picking a winning nation) in the 21st? Hmm, now I am curious what stock market returns looked like for the whole world in the 20th C.

there is the issue of survivorship bias

Unfortunately I wasn't able to determine whether that particular chart took into account survivorship bias, but I did find this blog post written by the author of the book the chart was taken from, suggesting that he's at least familiar with the issue.

Comment author: diegocaleiro 01 February 2018 02:50:20AM 1 point [-]

Eric Weinstein argues strongly against returns being at 20th-century levels, and says they are now vector fields, not scalars. I concur (not that I matter)

Comment author: Alicorn 17 March 2017 01:46:56AM 21 points [-]

If you like this idea but have nothing much to say please comment under this comment so there can be a record of interested parties.

Comment author: diegocaleiro 17 March 2017 03:37:35AM *  0 points [-]

This sounds cool. Somehow it reminded me of an old, old essay by Russell on architecture.

It's not that relevant; this is just in case people are curious.

Comment author: diegocaleiro 28 March 2013 09:36:40PM 6 points [-]

California has had massive selection filters for hundreds of years: only the few and the brave moved there. California is definitely my personal pick for a great place to be. Outside filtered areas, such as the universities you mentioned, I think my claim remains true.

So I predict that people in California who have not been preselected for wanting to work crazy hours, start a startup, or become an academic will be less productive than the same people in Boston.

I maintain, to a lesser extent, that undergrads in California also work fewer hours.

People who moved there during adulthood might report on that.

Comment author: diegocaleiro 03 March 2017 08:53:32AM 0 points [-]

I am now a person who moved during adulthood, and I can report that past me was right, except he did not account for rent.

Comment author: RomeoStevens 23 February 2013 05:22:42AM *  1 point [-]

Why would I supplant my near self with my far self when my far self cares way less about my happiness?

edit: this is an honest question not an attempt at humor or snark.

Comment author: diegocaleiro 22 February 2017 12:42:10PM 0 points [-]

It seems to me the far self is more nearly orthogonal to your happiness: you can still try to optimize for maximal long-term happiness.

Comment author: Pentashagon 05 December 2015 03:41:21AM 0 points [-]

I looked at the flowchart and saw the divergence between the two opinions into mostly separate ends: settling exoplanets and solving sociopolitical problems on Earth on the slow-takeoff path, vs focusing heavily on how to build FAI on the fast-takeoff path, but then I saw your name in the fast-takeoff bucket for conveying concepts to AI and was confused that your article was mostly about practically abandoning the fast-takeoff things and focusing on slow-takeoff things like EA. Or is the point that 2014!diego has significantly different beliefs about fast vs. slow than 2015!diego?

Comment author: diegocaleiro 05 December 2015 04:46:57AM 0 points [-]

Interesting that I conveyed that. I agree with Owen Cotton-Barratt that we ought to focus efforts now on the sooner paths (fast takeoff soon) rather than the other paths, because more resources will be allocated to FAI in the future, even if a fast takeoff soon is a low-probability scenario.

I personally work on inserting concepts and moral concepts into AGI because, for almost anything else I could do, there are already people who will do it better, and this is an area that overlaps with a lot of my knowledge areas while still being AGI-relevant. See the link in the comment above with my proposal.

Comment author: [deleted] 01 December 2015 02:43:47PM 3 points [-]

Note that the billionaires disagree on this. Thiel says that people should think more like calculus and less like probability, while Musk (the inspiration for "The Cook and the Chef") says that people think in certainties when they should think in probabilities.

Comment author: diegocaleiro 02 December 2015 05:30:12AM 0 points [-]

Not my reading. My reading is that Musk thinks people should not consider the historical probability of succeeding as a spacecraft startup (0%), but should instead reason from first principles, such as thinking about which materials a rocket is made from and then building up the costs from the ground up.
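That first-principles costing can be caricatured in a few lines: price the rocket from its raw materials rather than anchoring on the market price of finished launch vehicles. The material masses and prices below are pure placeholders, not real rocket economics:

```python
# First-principles cost build-up (toy numbers, purely illustrative).
materials = {                    # (tonnes, $ per tonne), hypothetical
    "aluminium alloy": (25, 3_000),
    "titanium":        (2, 18_000),
    "copper":          (1, 9_000),
    "carbon fibre":    (4, 30_000),
}
raw_cost = sum(tonnes * price for tonnes, price in materials.values())
print(f"raw materials: ${raw_cost:,}")
# The point of the exercise: this total is a small fraction of what
# finished rockets have historically sold for.
```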

Comment author: Algernoq 30 November 2015 07:20:05AM 4 points [-]

You sound unhappy. Do you still hold these conclusions when you are very happy?

Comment author: diegocaleiro 30 November 2015 02:04:18PM 2 points [-]

You have correctly identified that I wrote this post while very unhappy. The comments, as you can see from their lighthearted tone, I wrote while pretty happy.

Yes, I stand by those words even now (that I am happy).

Comment author: passive_fist 30 November 2015 05:18:48AM 3 points [-]

All true points, but consider your V4 example. We have software that is gradually approaching mammalian-level ability for visual information processing (not human-level just yet, but our visual cortex is larger than most animals' entire cortices, so that's not surprising). So, as far as building AI is concerned, so what if we don't understand V4 yet, if we can produce software that is that good at image processing?
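For concreteness, here is roughly what "producing software that good" looks like with an off-the-shelf pretrained classifier; a minimal sketch, assuming torchvision >= 0.13 and a hypothetical local image file, with no model of V4 anywhere in the pipeline:

```python
import torch
from torchvision import models
from PIL import Image

# Load a pretrained ImageNet classifier and its matching preprocessing.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

# "cat.jpg" is a hypothetical local file used for illustration.
img = weights.transforms()(Image.open("cat.jpg")).unsqueeze(0)
with torch.no_grad():
    probs = model(img).softmax(dim=1)

top = probs.argmax().item()
print(weights.meta["categories"][top], probs[0, top].item())
```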

Comment author: diegocaleiro 30 November 2015 06:13:39AM *  1 point [-]

I am more confident that we can produce software that can classify images, music and faces correctly than I am that we can integrate the multimodal aspects of these modules into a coherent being that thinks it has a self, goals, and identity, and that can reason about morality. That's what I tried to address in my FLI grant proposal, which was rejected (correctly so, by the way: it needed the latest improvements, and clearly, if they actually needed it, AI money should reach Nick, Paul and Stuart before our team). We'll be presenting it in Oxford, tomorrow?? Shhh, don't tell anyone, here, just between us, you get it before the Oxford professors ;) https://docs.google.com/document/d/1D67pMbhOQKUWCQ6FdhYbyXSndonk9LumFZ-6K6Y73zo/edit

Comment author: passive_fist 30 November 2015 03:40:55AM 2 points [-]

Our knowledge about the brain, given our goals about the brain, is at the level of knowledge of physics of someone who found out that spraying water on a sunny day causes the rainbow. It’s not even physics yet.

I'd tend to disagree with this; we have a pretty good idea of how some areas of the brain work (the V1 cortex), we are making good progress in understanding how other parts work (cortical microcircuits, etc.), and we haven't seen anything to indicate that other areas of the brain work on principles wildly alien to what we already know.

But I've always considered 'understanding the brain' to be a bit overrated, as the brain is an evolutionary hodge-podge, a big snowball of accumulated junk that's been rolling down the slope for 500 million years. In the future we're eventually going to understand the brain, for sentimental reasons, but I'd give only a 1% probability that understanding it is necessary for the intelligence explosion to occur. Already we have machines capable of doing tasks corresponding to brain areas we have no idea how to explain. In fact we aren't even sure how our machines work either! We just know they do. We're far more likely to stumble upon AI than to create it through a forced effort of brain emulation.

Comment author: diegocaleiro 30 November 2015 04:21:12AM 2 points [-]

We have unconfirmed, simplified hypotheses with nice drawings for how microcircuits in the brain work. They ignore more than a million things (literally: count the specific synapses, the multiplicity of synaptic connections, and so on that the model has to leave out, and you get to about that many things). I'm fine with simplifying assumptions, but the cortical microcircuit models are a butterfly flying in a hurricane.

The only reason we understand V1 is that it is an inverted retinotopic map that has been through very few non-linear transformations (same for the tonotopic auditory areas). By V4, we are already completely lost. (For those who don't know: the brain has between 100 and 500 areas depending on how you count, and we have a middling guess at a simplified model that applies well to two of them and moderately well to some 10-25.) And even if you could say which functions V4 participates in most, that would not tell you how it performs them.
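The simplified V1 story referred to here, simple cells as oriented Gabor filters over a retinotopic map, really does fit in a few lines, which is part of why V1 counts as "understood" while V4 does not. A minimal numpy sketch:

```python
import numpy as np

def gabor(size=31, wavelength=8.0, theta=0.0, sigma=5.0):
    """Gabor filter: the textbook simplified model of a V1 simple
    cell's receptive field (an oriented sinusoid under a Gaussian
    envelope; isotropic envelope used here for brevity)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_rot / wavelength)
    return envelope * carrier

# A bank of orientations, as in the classic V1 picture; nothing this
# compact exists for V4, which is the commenter's point.
bank = [gabor(theta=t) for t in np.linspace(0, np.pi, 8, endpoint=False)]
print(bank[0].shape)  # (31, 31)
```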

Comment author: gjm 30 November 2015 12:43:45AM 0 points [-]

What happened? That sounds very weird.

Comment author: diegocaleiro 30 November 2015 02:29:14AM 1 point [-]

Oh, so boring..... It was actually me screwing up a link, I think :(

Skill: being censored by people who hate censorship. Status: not yet accomplished.
