Comment author: buybuydandavis 17 July 2012 02:55:23AM -1 points [-]

I like my chances with homo sapiens better than an alien intelligence designed by us.

Comment author: MatthewBaker 18 July 2012 04:44:45PM *  0 points [-]

If it's an alien intelligence and doesn't have a global association table like Starmap-AI, we are already doomed.

Comment author: [deleted] 14 July 2012 03:04:55AM 1 point [-]

I took about 7 mg of it.

As an aside, I used modafinil to go without sleep last night, and it was AMAZING! I'll write more on it later, after it's over.

In response to comment by [deleted] on Group rationality diary, 7/9/12
Comment author: MatthewBaker 18 July 2012 04:36:23PM 0 points [-]

I'm glad that you like it! I feel the same way about it and its enantiopure cousin R-modafinil (armodafinil), which doesn't have much of an uncomfortable body high even at higher doses like 300mg.

Comment author: Eliezer_Yudkowsky 18 July 2012 02:12:12PM 27 points [-]

So first a quick note: I wasn't trying to say that the difficulties of AIXI are universal and everything goes analogously to AIXI, I was just stating why AIXI couldn't represent the suggestion you were trying to make. The general lesson to be learned is not that everything else works like AIXI, but that you need to look a lot harder at an equation before thinking that it does what you want.

On a procedural level, I worry a bit that the discussion is trying to proceed by analogy to Google Maps. Let it first be noted that Google Maps simply is not playing in the same league as, say, the human brain, in terms of complexity; and that if we were to look at the winning "algorithm" of the million-dollar Netflix Prize competition, which was in fact a blend of 107 different algorithms, you would have a considerably harder time figuring out why it claimed anything it claimed.
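To make that opacity concrete, here is a minimal sketch (hypothetical stand-in models and weights, not the actual Netflix Prize code): the blend's output is just a weighted sum over a hundred-odd black boxes, so the most detailed "explanation" available for any recommendation is a list of numeric contributions with no human-readable meaning.

```python
import random

NUM_MODELS = 107  # the winning Netflix Prize entry blended 107 algorithms

# Stand-ins for the component predictors; each behaves as an opaque black box
# mapping a (user, movie) pair to a rating in roughly [1.0, 5.0).
models = [lambda user, movie, i=i: 1.0 + (hash((user, movie, i)) % 400) / 100.0
          for i in range(NUM_MODELS)]
raw = [random.random() for _ in range(NUM_MODELS)]
weights = [r / sum(raw) for r in raw]  # illustrative blend weights, normalized

def blended_rating(user, movie):
    """Predict a rating as a weighted blend of all 107 component models."""
    return sum(w * m(user, movie) for w, m in zip(weights, models))

def explain(user, movie):
    """The best available 'explanation': 107 numeric contributions, none of
    which corresponds to a humanly meaningful reason for the rating."""
    return [w * m(user, movie) for w, m in zip(weights, models)]

print(blended_rating("alice", "Metropolis"))
print(explain("alice", "Metropolis")[:5])  # first 5 of 107 opaque terms
```

Asking such a system "why" it recommended a movie gets you nothing better than the output of `explain`: a vector of weighted sub-predictions, each of which is itself the output of an uninterpreted learner.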

But to return to the meta-point, I worry about conversations that go into "But X is like Y, which does Z, so X should do reinterpreted-Z". Usually, in my experience, that goes into what I call "reference class tennis" or "I'm taking my reference class and going home". The trouble is that there's an unlimited number of possible analogies and reference classes, and everyone has a different one. I was just browsing old LW posts today (to find a URL of a quick summary of why group-selection arguments don't work in mammals) and ran across a quotation from Perry Metzger to the effect that so long as the laws of physics apply, there will always be evolution, hence nature red in tooth and claw will continue into the future - to him, the obvious analogy for the advent of AI was "nature red in tooth and claw", and people who see things this way tend to want to cling to that analogy even if you delve into some basic evolutionary biology with math to show how much it isn't like intelligent design. For Robin Hanson, the one true analogy is to the industrial revolution and farming revolutions, meaning that there will be lots of AIs in a highly competitive economic situation with standards of living tending toward the bare minimum, and this is so absolutely inevitable and consonant with The Way Things Should Be as to not be worth fighting at all. That's his one true analogy and I've never been able to persuade him otherwise. For Kurzweil, the fact that many different things proceed at a Moore's Law rate to the benefit of humanity means that all these things are destined to continue and converge into the future, also to the benefit of humanity. For him, "things that go by Moore's Law" is his favorite reference class.

I can have a back-and-forth conversation with Nick Bostrom, who looks much more favorably on Oracle AI in general than I do, because we're not playing reference class tennis with "But surely that will be just like all the previous X-in-my-favorite-reference-class", nor saying, "But surely this is the inevitable trend of technology"; instead we lay out particular proposals, "Suppose we do this?", and try to discuss how they will work, without any added language about how surely anyone will do it that way, or how it's got to be like Z because all previous Y were like Z, etcetera.

My own FAI development plans call for trying to maintain programmer-understandability of some parts of the AI during development. I expect this to be a huge headache, possibly 30% of total headache, possibly the critical point on which my plans fail, because it doesn't happen naturally. Go look at the source code of the human brain and try to figure out what a gene does. Go ask the Netflix Prize winner for a movie recommendation and try to figure out "why" it thinks you'll like watching it. Go train a neural network and then ask why it classified something as positive or negative. Try to keep track of all the memory allocations inside your operating system - that part is humanly understandable, but it flies past so fast you can only monitor a tiny fraction of what goes on, and if you want to look at just the most "significant" parts, you would need an automated algorithm to tell you what's significant. Most AI algorithms are not humanly understandable. Part of Bayesianism's appeal in AI is that Bayesian programs tend to be more understandable than non-Bayesian AI algorithms. I have hopeful plans to try and constrain early FAI content to humanly comprehensible ontologies, prefer algorithms with humanly comprehensible reasons-for-outputs, carefully weigh up which parts of the AI can safely be less comprehensible, monitor significant events, slow down the AI so that this monitoring can occur, and so on. That's all Friendly AI stuff, and I'm talking about it because I'm an FAI guy. I don't think I've ever heard any other AGI project express such plans; and in mainstream AI, human-comprehensibility is considered a nice feature, but rarely a necessary one.
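For a concrete illustration of what "humanly comprehensible reasons-for-outputs" can look like, here is a minimal sketch (a toy naive Bayes spam filter with made-up probabilities; my own example, not anything from an actual FAI design): because the model is a transparent product of likelihoods, every verdict decomposes into per-feature evidence, in log-odds, that a human can audit.

```python
import math

# Hypothetical toy likelihoods: P(word | class) for spam classification.
p_word_given_spam = {"viagra": 0.30, "meeting": 0.02, "free": 0.25}
p_word_given_ham  = {"viagra": 0.01, "meeting": 0.20, "free": 0.05}
prior_log_odds = math.log(0.5 / 0.5)  # assume equal class priors

def classify_with_reasons(words):
    """Return the spam/ham verdict plus each word's evidence in log-odds,
    so the 'reason' for the output is directly inspectable by a human."""
    contributions = {}
    log_odds = prior_log_odds
    for w in words:
        if w in p_word_given_spam:
            evidence = math.log(p_word_given_spam[w] / p_word_given_ham[w])
            contributions[w] = evidence  # human-readable evidence term
            log_odds += evidence
    verdict = "spam" if log_odds > 0 else "ham"
    return verdict, log_odds, contributions

verdict, total, reasons = classify_with_reasons(["free", "viagra", "meeting"])
print(verdict, round(total, 2))
for word, ev in reasons.items():
    print(f"  {word}: {ev:+.2f} log-odds")  # each output comes with its reasons
```

Contrast this with the Netflix blend above: here the internal quantities are probabilities over a fixed, named ontology, so "why did you call it spam?" has a short, checkable answer.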

It should finally be noted that AI famously does not result from generalizing normal software development. If you start with a map-route program and then try to program it to plan more and more things until it becomes an AI... you're doomed, and all the experienced people know you're doomed. I think there's an entry or two in the old Jargon File aka Hacker's Dictionary to this effect. There's a qualitative jump to writing a different sort of software - from normal programming where you create a program conjugate to the problem you're trying to solve, to AI where you try to solve cognitive-science problems so the AI can solve the object-level problem. I've personally met a programmer or two who've generalized their code in interesting ways, and who feel like they ought to be able to generalize it even further until it becomes intelligent. This is a famous illusion among aspiring young brilliant hackers who haven't studied AI. Machine learning is a separate discipline and involves algorithms and problems that look quite different from "normal" programming.

Comment author: MatthewBaker 18 July 2012 04:27:27PM *  -1 points [-]

Your prospective AI plans for programmer-understandability seem very close to Starmap-AI, by which I mean:

It's called the Global Association Table. The points or stars represent concepts, and the lines are the links between them.

The best story I've read about a not-so-failed utopia involves this kind of accountability over the FAI. While I hate to generalize from fictional evidence, it definitely seems like a necessary step toward not becoming a galaxy that tiles over the aliens with happy faces instead of just freezing them in place to prevent human harm.

Comment author: David_Gerard 13 July 2012 08:44:06AM 7 points [-]

In general - never earmark donations. It's a stupendous pain in the arse to deal with. If you trust an organisation enough to donate to them, trust them enough to use the money for whatever they see a need for. Contrapositive: If you don't trust them enough to use the money for whatever they see a need for, don't donate to them.

Comment author: MatthewBaker 13 July 2012 06:01:02PM 2 points [-]

I never have before, but this CPA audit seemed like a logical thing that would encourage my wealthy parents to donate :)

Comment author: hamnox 12 July 2012 09:14:58PM 1 point [-]

Last time I wrote about completing one rationality checklist item a day. That... has not panned out. Some of the habits are reactive and can't be sought out very readily, and I've also just been forgetting to do it while I'm at work and school.

Comment author: MatthewBaker 12 July 2012 11:36:48PM 0 points [-]

Keep Trying!

Comment author: woodside 10 July 2012 07:11:05AM 1 point [-]

Well, I agree with you that I should buy cryonics at very high prices and I plan on doing so. For the last few years I've spent the majority of my time in places where being signed up for cryonics wouldn't make a difference (9 months out of the year on a submarine, and now overseas in a place where there aren't any cryonics companies set up).

You should probably still upvote, because the less than a quarter of my time spent in situations where it would matter still more than justifies it. I should also never eat an ice cream Snickers again. I'll be the first to admit I don't behave perfectly rationally. :)

Comment author: MatthewBaker 11 July 2012 11:22:36PM 0 points [-]

More people have died from cryocrastinating than from cryonics ;)

Comment author: lukeprog 11 July 2012 12:36:49AM 9 points [-]

If I earmark my donations for "HPMOR Finale or CPA Audit whichever comes first" would that act as positive or negative pressure towards Eliezer's fiction creation complex?

I think the issue is that we need a successful SPARC and an "Open Problems in Friendly AI" sequence more urgently than we need an HPMOR finale.

Comment author: MatthewBaker 11 July 2012 10:21:43PM 0 points [-]

I think our values are positively maximized by delaying the HPMOR finale as long as possible; my post was more out of curiosity to see what would be most helpful to Eliezer.

Comment author: [deleted] 11 July 2012 08:21:14AM 1 point [-]

I was fed up with my insomnia, so I started taking melatonin (5mg pills about an hour before going to bed). It doesn't seem to work for me.

In response to comment by [deleted] on Group rationality diary, 7/9/12
Comment author: MatthewBaker 11 July 2012 09:39:30PM 2 points [-]

Melatonin has a U-shaped dose-response curve. I have found that lower doses work better, all the way down to below 1mg, with no tolerance buildup.

Comment author: [deleted] 11 July 2012 06:19:48PM 3 points [-]

I tried adderall. It was terrible. My reaction to it was the complete opposite of what it should have been. It put me in a negative emotional state. I felt depressed and stressed, like there was something constricting my chest and throat. I felt on the verge of crying/screaming for large portions of the evening.

It made it harder to concentrate on a single task. I was trying to do reading, and I had to also be singing along with music as well (I normally work with headphones on, but singing along is NOT a norm.) or I would get antsy. I was reading something that I would normally find really interesting, but on adderall it couldn't hold my attention at all. It (and everything else) seemed boring and like an undesirable task. My motivation to do things like "take the dog outside," or "do ANYTHING" also plummeted. (Don't worry about my pup. Of course I DID take him outside. It just took more effort.)

My reaction was weird enough that I double-checked that the pill wasn't a placebo, and that my dosage was strong enough that it shouldn't be explainable by "I just happened to have a really bad day that day."

So, adderall....never again.

In response to comment by [deleted] on Group rationality diary, 7/9/12
Comment author: MatthewBaker 11 July 2012 09:34:23PM *  2 points [-]

My advice is to try a safer stimulant like modafinil. Mymodafinil.com is a good source that will also help you try the enantiopure version, which has been known to produce fewer of the body-high-like effects you described above.

Comment author: MatthewBaker 11 July 2012 12:14:33AM 5 points [-]

If I earmark my donations for "HPMOR Finale or CPA Audit whichever comes first" would that act as positive or negative pressure towards Eliezer's fiction creation complex? (I only ask because bugging him for an update has been previously suggested to reduce update speed)

Furthermore, Oracle AI/Nanny AI both seem to fail the heuristic of "the other country is about to beat us in a war; should we remove the safety programming?" that I use quite often when debating AI with nearly everyone outside the LW community. Thank you both for writing such concise yet detailed responses; they helped me understand the problem areas of Tool AI better.
