
Comment author: Alicorn 17 March 2017 01:46:56AM 19 points [-]

If you like this idea but have nothing much to say please comment under this comment so there can be a record of interested parties.

Comment author: diegocaleiro 17 March 2017 03:37:35AM *  0 points [-]

This sounds cool. Somehow it reminded me of an old, old essay by Russell on architecture.

It's not that relevant, so just if people are curious

Comment author: diegocaleiro 28 March 2013 09:36:40PM 6 points [-]

California has had massive selection filters for hundreds of years: only the few and brave moved there. California is definitely my personal pick for a great place to be. Outside filtered areas, such as the universities you mentioned, I think my claim remains true.

So I predict that people in California who have not been preselected precisely for wanting to work crazy hours, start a startup, or become an academic will be less productive than the same people would be in Boston.

I maintain, to a lesser extent, that undergrads in California also work fewer hours.

People who moved there during adulthood might be able to report on that.

Comment author: diegocaleiro 03 March 2017 08:53:32AM 0 points [-]

I am now a person who moved during adulthood, and I can report past me was right except he did not account for rent.

Comment author: RomeoStevens 23 February 2013 05:22:42AM *  1 point [-]

Why would I supplant my near self with my far self when my far self cares way less about my happiness?

edit: this is an honest question not an attempt at humor or snark.

Comment author: diegocaleiro 22 February 2017 12:42:10PM 0 points [-]

It seems to me the far self is more orthogonal to your happiness. You can try to optimize for maximal long term happiness.

Comment author: Pentashagon 05 December 2015 03:41:21AM 0 points [-]

I looked at the flowchart and saw the divergence between the two opinions into mostly separate ends: settling exoplanets and solving sociopolitical problems on Earth on the slow-takeoff path, vs focusing heavily on how to build FAI on the fast-takeoff path, but then I saw your name in the fast-takeoff bucket for conveying concepts to AI and was confused that your article was mostly about practically abandoning the fast-takeoff things and focusing on slow-takeoff things like EA. Or is the point that 2014!diego has significantly different beliefs about fast vs. slow than 2015!diego?

Comment author: diegocaleiro 05 December 2015 04:46:57AM 0 points [-]

Interesting that I conveyed that. I agree with Owen Cotton-Barratt that we ought to focus efforts now on the sooner paths (fast takeoff soon) rather than on the other paths, because more resources will be allocated to FAI in the future, even if a fast takeoff soon is a low-probability scenario.

I personally work on inserting concepts, including moral concepts, into AGI because for almost anything else I could do there are already people who will do it better, and this is an area that intersects with a lot of my knowledge areas while still being AGI-relevant. See the link in the comment above with my proposal.

Comment author: [deleted] 01 December 2015 02:43:47PM 3 points [-]

Note that the billionaires disagree on this. Thiel says that people should think more like calculus and less like probability, while Musk (the inspiration for "The Cook and the Chef") says that people think in certainties when they should think in probabilities.

Comment author: diegocaleiro 02 December 2015 05:30:12AM 0 points [-]

Not my reading. My reading is that Musk thinks people should not consider the historical probability of succeeding as a spacecraft startup (0%), but should instead reason from first principles, such as asking what materials a rocket is made from and then building up the cost estimate from the ground up.
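That "first principles" move can be sketched as simple arithmetic: price the raw materials and build the cost floor up from there, instead of anchoring on the historical base rate of launch startups. All figures below are invented purely for illustration.

```python
# Hypothetical first-principles cost sketch. Every number here is made up;
# the point is the reasoning pattern, not the values.

material_cost_per_kg = {   # illustrative commodity prices, USD/kg
    "aluminum": 3.0,
    "titanium": 10.0,
    "carbon_fiber": 25.0,
}

mass_kg = {                # illustrative bill of materials for one rocket
    "aluminum": 20_000,
    "titanium": 2_000,
    "carbon_fiber": 5_000,
}

# Bottom-up floor: what the raw inputs alone would cost.
raw_material_floor = sum(mass_kg[m] * material_cost_per_kg[m] for m in mass_kg)
print(f"Raw-material floor: ${raw_material_floor:,.0f}")  # → $205,000
```

If the market price of a finished rocket is orders of magnitude above that floor, the gap is manufacturing and organization, which a startup can attack, regardless of what the historical success rate says.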

Comment author: Algernoq 30 November 2015 07:20:05AM 4 points [-]

You sound unhappy. Do you still hold these conclusions when you are very happy?

Comment author: diegocaleiro 30 November 2015 02:04:18PM 2 points [-]

You have correctly identified that I wrote this post while very unhappy. The comments, as you can see from their lighthearted tone, I wrote while pretty happy.

Yes, I stand by those words even now (that I am happy).

Comment author: passive_fist 30 November 2015 05:18:48AM 3 points [-]

All true points, but consider your V4 example. We have software that is gradually approaching mammalian-level ability for visual information processing (not human-level just yet, but our visual cortex is larger than most animals' entire cortices, so that's not surprising). So, as far as building AI is concerned, so what if we don't understand V4 yet, if we can produce software that is that good at image processing?

Comment author: diegocaleiro 30 November 2015 06:13:39AM *  1 point [-]

I am more confident that we can produce software that can classify images, music and faces correctly than I am that we can integrate the multimodal aspects of these modules into a coherent being that thinks it has a self, goals, and identity, and that can reason about morality. That's what I tried to address in my FLI grant proposal, which was rejected (correctly so, by the way; it needed the latest improvements, and clearly, if they actually needed it, AI money should reach Nick, Paul and Stuart before our team). We'll be presenting it in Oxford, tomorrow?? Shhh, don't tell anyone; here, just between us, you get it before the Oxford professors ;) https://docs.google.com/document/d/1D67pMbhOQKUWCQ6FdhYbyXSndonk9LumFZ-6K6Y73zo/edit

Comment author: passive_fist 30 November 2015 03:40:55AM 2 points [-]

> Our knowledge about the brain, given our goals about the brain, is at the level of knowledge of physics of someone who found out that spraying water on a sunny day causes the rainbow. It’s not even physics yet.

I'd tend to disagree with this; we have a pretty good idea of how some areas of the brain work (the V1 cortex), we are making good progress in understanding how other parts work (cortical microcircuits, etc.), and we haven't seen anything to indicate that other areas of the brain work on principles extremely far-fetched and alien to what we already know.

But I always considered 'understanding the brain' to be a bit overrated, as the brain is an evolutionary hodge-podge, a big snowball of accumulated junk that's been rolling down the slope for 500 million years. In the future we're eventually going to understand the brain for sentimental reasons, but I'd give only 1% probability that understanding it is necessary for the intelligence explosion to occur. Already we have machines that are capable of doing tasks corresponding to areas of the brain that we have no idea of how they work. In fact we aren't even sure how our machines work either! We just know they do. We're far more likely to stumble upon AI than to create it through a forced effort of brain emulation.

Comment author: diegocaleiro 30 November 2015 04:21:12AM 2 points [-]

We have unconfirmed, simplified hypotheses with nice drawings for how microcircuits in the brain work. They ignore more than a million things (literally: they have to ignore specific synapses, the multiplicity of synaptic connections, etc., and if you sum those up and look at the model, I would say it ignores about that many things). I'm fine with simplifying assumptions, but the cortical microcircuit models are a butterfly flying in a hurricane.

The only reason we understand V1 is that it is a retinotopic inverted map that has been through very few non-linear transformations (the same goes for the tonotopic auditory areas). By V4, we are already completely lost. (For those who don't know: the brain has between 100 and 500 areas depending on how you count, and we have a medium-confidence guess at a simplified model that applies well to two of them, and moderately well to some 10-25.) And even if you could say which functions V4 participates in most, that would not tell you how it does it.

Comment author: gjm 30 November 2015 12:43:45AM 0 points [-]

What happened? That sounds very weird.

Comment author: diegocaleiro 30 November 2015 02:29:14AM 1 point [-]

Oh, so boring..... It was actually me myself screwing up a link I think :(

Skill: being censored by people who hate censorship. Status: not yet accomplished.

Comment author: diegocaleiro 29 November 2015 10:43:53PM *  0 points [-]

Basically because I never cared much for cryonics, even with the movie being made about me and it. Trailer:

https://www.youtube.com/watch?v=w-7KAOOvhAk

For me cryonics is like soap bubbles and contact improv: I like it, but you don't need to waste your time learning about it.

But since you asked: I've tried to get rich people in contact with Robert McIntyre, because he is doing a great job and someone should throw money at him.

And me, for that matter. All my donors stopped earning to give, so I have no donor cashflow now; I might have to "retire" soon - the Brazilian economy collapsed and they may cut my below-living-cost scholarship. EDIT: Yes, my scholarship was just suspended :( So I won't just be losing money, I'll basically be out of it, unfortunately. I also remind people that donating to individuals is much cheaper than donating to institutions - yes, I think so even now that I'm launching another institution. The truth doesn't change, even if it becomes disadvantageous to me.

Comment author: diegocaleiro 29 November 2015 11:11:58PM *  0 points [-]

Wow, that's so cool! My message was censored and altered.

Lesswrong is growing an intelligentsia of its own.

(To be fair on the censoring part, the message contained a link directly to my Patreon, which could count as advertising? Anyway, the alteration was interesting: it just made the message more formal. Maybe I should write books here, and they'll sound as formal as the ones I read!)

Also fascinating that it was near instantaneous.
