Comment author: Pentashagon 28 November 2015 11:56:38PM 0 points [-]

Is it reasonable to say that what really matters is whether there's a fast or slow takeoff? A slow takeoff or no takeoff may limit us to EA for the indefinite future, and a fast takeoff means transhumanism and immortality are probably conditional on, and subsequent to, threading the narrow eye of the FAI needle.

Comment author: diegocaleiro 29 November 2015 10:29:49PM 0 points [-]

See the link with a flowchart on 12.

Comment author: V_V 29 November 2015 02:15:28PM *  3 points [-]

> Not really. My understanding of AI is far from deep; I know less about it than about my own fields (Philo, BioAnthro). I've merely read all of FHI, most of MIRI, half of AIMA, Paul's blog, maybe four popular and two technical books on related issues, and at most 60 papers on AGI per se; I don't code, and I have only a coarse-grained understanding of it. But in the little research and time I have had to look into it, I saw no convincing evidence for a cap on the level of sophistication that a system's cognitive abilities can achieve. I have also not seen very robust evidence that would support the hypothesis of a fast takeoff.

Beware the Dunning–Kruger effect.

Looking at the big picture, you could also say that there is no convincing evidence for a cap on the lifespan of a biological organism. Heck, some trees have been alive for over 10,000 years! Yet, once you look at the nitty-gritty details of biomedical research, it becomes clear that even adding just a few decades to the human lifespan is a very hard problem and researchers still largely don't know how to solve it.

It's the same for AGI. Maybe truly super-human AGI is physically impossible due to complexity reasons, but even if it is possible, developing it is a very hard problem and researchers still largely don't know how to solve it.

Comment author: diegocaleiro 29 November 2015 10:23:45PM 0 points [-]

I think you mistook my claim for sarcasm. I actually think I don't know much about AI (not nearly enough to make a robust assessment).

Comment author: [deleted] 28 November 2015 04:20:11PM 6 points [-]

> We need a community that at once understands probability theory, doesn't play reference class tennis, and doesn't lose motivation by considering the base rates of other people trying to do something, because the other people were cooks, not chefs, and also because sometimes you actually need to try a one in ten thousand chance. But people are too proud of their command of Bayes to let go of the easy chance of showing off their ability to find mathematically sound reasons not to try.

Are you saying "don't think probabilistically" here? I'd love a specific post just on your thoughts about this.
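(Editor's aside: a minimal, purely illustrative expected-value sketch of the "one in ten thousand chance" point quoted above; all of the numbers below are assumptions for illustration, not figures from the post.)

    # Purely illustrative, assumed numbers: a long shot can still have positive
    # expected value even when the base rate of success is very low.
    p_success = 1e-4          # assumed probability that the long-shot project works
    value_if_success = 1e9    # assumed payoff if it works, in arbitrary utility units
    cost_of_trying = 1e4      # assumed cost of the attempt, in the same units

    expected_value = p_success * value_if_success - cost_of_trying
    print(expected_value)     # 90000.0 -> positive despite the 1-in-10,000 odds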

Comment author: diegocaleiro 29 November 2015 11:04:21AM 4 points [-]

Yes I am.

Step 1: Learn Bayes

Step 2: Learn reference class

Step 3: Read Zero to One

Step 4: Read The Cook and the Chef

Step 5: Reason about why the billionaires are saying that the people who do it wrong are basically reasoning probabilistically

Step 6: Find the connection between that and reasoning from first principles, or the gears hypothesis, or whichever other term you have for when you use the inside view and actually think technically about a problem, from scratch, without looking at how anyone else did it.

Step 7: Talk to Michael Valentine about it; he has recently been reasoning about this and about how to impart it at CFAR workshops.

Step 8: Find someone who can give you a recording of Geoff Anders' presentation at EAGlobal.

Step 9: Notice how all those steps above were connected, become a Chef, set out to save the world. Good luck!

Comment author: Raemon 28 November 2015 09:36:19PM 0 points [-]

He says at the end that he's still a transhumanist. I think the point was that, in practice, it seemed difficult to work directly towards transhumanism/immortalism (and perhaps less likely that such a thing will be achieved in our lifetimes, although I'm less sure about that).

(Diego, curious if my model of you is accurate here)

Comment author: diegocaleiro 29 November 2015 10:53:50AM 1 point [-]

I am particularly skeptical of transhumanism when it is described as changing the human condition, where the human condition is taken to be the mental condition of humans as seen from the human point of view.

We can make the rainbow, but we can't do the physics yet. We can glimpse where minds can go, but we have no idea how to precisely engineer them to get there.

We also know that happiness seems tightly connected to an area of the brain called the NAcc (nucleus accumbens), but evolution doesn't want you to hack happiness, so it put the damn NAcc right in the medial, slightly frontal part of the brain, deep inside, where fMRI is really bad and where you can't insert electrodes correctly. Also, evolution made sure that each person's NAcc develops epigenetically into different target areas, making it very, very hard to tamper with it to make you smile. And boy, do I want to make you smile.

Comment author: CellBioGuy 29 November 2015 04:50:58AM *  4 points [-]

> although he was once very excited about the prospect of defeating the mechanisms of ageing, back when less than 300 thousand dollars were directly invested in it, he is now, with billions pledged against ageing, confident that the problem is substantially harder to surmount than the number of man-hours left to be invested in the problem, at least during my lifetime, or before the Intelligence Explosion.

So, after all this learning about all the niggling details that keep frustrating all these grand designs, you still think an intelligence explosion is something that matters / is likely? Why? Isn't it just as much of a deus ex machina as the rest of the things you have fallen away from after learning more about them?

Comment author: diegocaleiro 29 November 2015 10:47:10AM *  0 points [-]

Not really. My understanding of AI is far from deep; I know less about it than about my own fields (Philo, BioAnthro). I've merely read all of FHI, most of MIRI, half of AIMA, Paul's blog, maybe four popular and two technical books on related issues, and at most 60 papers on AGI per se; I don't code, and I have only a coarse-grained understanding of it. But in the little research and time I have had to look into it, I saw no convincing evidence for a cap on the level of sophistication that a system's cognitive abilities can achieve. I have also not seen very robust evidence that would support the hypothesis of a fast takeoff.

The fact that we have not fully conceptually disentangled the dimensions of which intelligence is composed is mildly embarrassing, though, and it may be that AGI is a deus ex machina because, more along the lines of Minsky or Goertzel than of MIRI or LessWrong, general intelligence will turn out to be a plethora of abilities with no common denominator, often superimposed in a robust way.

But for now, nobody who is publishing seems to know for sure.

Comment author: RichardKennaway 29 November 2015 09:37:10AM 1 point [-]

In the section on EA, you include discussion of AGI, existential risk, and the existential risk of an AGI, which seem to me different subjects. Can you clarify what you see as the relation between these things and EA?

My picture of EA is distributing anti-malarial bed nets, or trying to improve clean water supplies. While some in the EA movement may judge existential risk or AGI to be the area they should direct their vocation towards (whether because of their rating of the risk itself or their own comparative advantage), those causes are not listed among, for example, GiveWell's recommended charities.

Comment author: diegocaleiro 29 November 2015 10:33:34AM *  0 points [-]

EA is an intensional movement.

http://effective-altruism.com/ea/j7/effective_altruism_as_an_intensional_movement/

I concur with many other people that when you start off from a wide sample of aggregative consequentialist values and try to do the most good, you bump into AI pretty soon. As I told Stuart Russell a while ago, to explain why a philosopher-anthropologist was auditing his course:

My PhD will likely be a book on altruism, and any respectable altruist these days is worried about AI at least 30% of his waking life.

That's how I see it, anyway. Most of the arguments for it are in "Superintelligence"; if you disagree with that, then you probably do disagree with me.

Comment author: gjm 28 November 2015 06:38:39PM 2 points [-]

This is entirely peripheral to any point you're actually making, but: In what possible sense is it true that Marvin Minsky "invented the computer"?

Comment author: diegocaleiro 29 November 2015 07:06:06AM 0 points [-]

Very sorry about that; I thought he held the patent for some aspect of computers that had become widespread, in the same way Wozniak holds the patent for personal computers. This was incorrect. I'll fix it.

Comment author: Vaniver 28 November 2015 09:33:54PM 1 point [-]

I'd recheck your links to the EA forum; this one was a LW link, for example.

Comment author: diegocaleiro 29 November 2015 07:01:07AM 0 points [-]

The text is also posted at the EA forum here; there, all the links work.

Comment author: diegocaleiro 23 October 2015 07:40:47PM 2 points [-]

I'm looking for a sidekick, if someone feels that would be an appropriate role for them. This is me, for those who don't know me:

https://docs.google.com/document/d/14pvS8GxVlRALCV0xIlHhwV0g38_CTpuFyX52_RmpBVo/edit

And this is my flowchart/autobiography of my life over the last few years:

https://drive.google.com/file/d/0BxADVDGSaIVZVmdCSE1tSktneFU/view

Nice to meet you! :)

Polymathwannabe asked: What would be your sidekick's mission?

A: It feels to me like that would depend A LOT on the person, the personality, our physical distance, availability, and interaction type. I feel that any response I gave would only filter valuable people away, which obviously I don't want to do. That said, I have had good experiences with people a little older than me, with a general interest in EA and the far future, and who have more than a single undergrad degree as academic background, mostly because I interact with academia all the time and many activities and ways of being are academia-specific.

Comment author: OrphanWilde 28 August 2015 03:10:45PM *  2 points [-]

I experience the same phenomenon in spite of not experiencing anxiety. (That's not 100% true; I did experience completely dissociated anxiety once.)

The most interesting case is that I spent about five months writing a new tabletop game after the beta of the current version of D&D annoyed me. (It was when they started phasing feats out, eliminating yet another chunk of character customization.)

Five months and a novel's worth of writing in, I started planning ahead. As soon as I set goals for myself, I stopped enjoying working on it. I pushed through writing 250 spells over two months, and progress has been sporadic since then.

I don't think anxiety is the issue. I think it's something related to goal-oriented behaviors: the short view and the long view fighting each other.

ETA: Thinking about it, I experience exactly the same thing WRT my daily work. If I receive an e-mail with something to do, I'll immediately hop on it, and wrap the task up. If I have a long-term project, I'll procrastinate. A task that enters my immediate list of things to do carries little or no internal resistance; the same task, attached to any kind of prior planning ahead on my part, requires substantial effort to undertake.

Comment author: diegocaleiro 28 August 2015 03:27:47PM 0 points [-]

See my comment.
