Comment author: reup 10 June 2012 09:30:56PM 8 points [-]

I think that some of the issue is that while Eliezer's conception of these issues has continued to evolve, we continue to both point and be pointed back to posts that he only partially agrees with. We might chart a more accurate position by winding through a thousand comments, but that's a difficult thing to do.

To pick one example from a recent thread, here he adjusts (or flags for adjustment) his thinking on Oracle AI, but someone who missed that would have no idea from reading older articles.

It seems like our local SI representatives recognize the need for an up-to-date summary document to point people to. Until then, our current refrain of "read the sequences" will grow increasingly misleading as more and more updates and revisions are spread across years of comments (that said, I still think people should read the sequences :) ).

Comment author: Kaj_Sotala 08 June 2012 06:36:35AM 4 points [-]

On the other hand, SI might get taken more seriously if it is able to demonstrate that it actually does know something about AGI design and isn't just a bunch of outsiders to the field doing idle philosophizing.

Of course, this requires that SI is ready to publish part of its AGI research.

Comment author: reup 08 June 2012 09:39:19AM *  9 points [-]

I agree, but as I've understood it, they're explicitly saying they won't release any AGI advances they make. What will it do to their credibility to be funding a "secret" AI project?

I honestly worry that this could kill funding for the organization which doesn't seem optimal in any scenario.

Potential Donor: I've been impressed with your work on AI risk. Now, I hear you're also trying to build an AI yourselves. Who do you have working on your team?

SI: Well, we decided to train high schoolers since we couldn't find any researchers we could trust.

PD: Hm, so what about the project lead?

SI: Well, he's done brilliant work on rationality training and wrote a really fantastic Harry Potter fanfic that helped us recruit the high schoolers.

PD: Huh. So, how has the work gone so far?

SI: That's the best part, we're keeping it all secret so that our advances don't fall into the wrong hands. You wouldn't want that, would you?

PD: [backing away slowly] No, of course not... Well, I need to do a little more reading about your organization, but this sounds, um, good...

Comment author: radical_negative_one 08 June 2012 05:32:16AM 1 point [-]

I remember reading, on the topic of optimal charity, that it's only rational to select a single cause to donate to... until the point of giving enough money to noticeably change the marginal utility of each additional dollar. (Thiel has that much money, of course.) This information-gathering strategy could be a new reason for spreading donations at large scales, if it hasn't been discussed before.

Comment author: reup 08 June 2012 06:41:46AM 1 point [-]

I remember reading and enjoying that article (this one, I think).

I would think that the same argument would apply regardless of the scale of the donations (assuming there are no fixed transaction costs, which might not be a valid assumption). My read is that it comes down to the question of risk versus uncertainty. Under genuine uncertainty, investing widely might make sense if you believe that those investments will provide useful information to clarify the actual problem structure, so that you can accurately target future giving.
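The single-cause argument above can be sketched numerically. A greedy allocator that always funds the cause with the highest current marginal utility reproduces both behaviors: a small donor's whole budget goes to one cause, while a large donor eventually splits once marginal utilities equalize. The two causes and their log-shaped utility curves here are hypothetical, purely for illustration.

```python
import math

def best_allocation(utilities, budget, step=1.0):
    """Greedily give `budget` away in `step`-sized chunks, each time to
    whichever cause currently offers the highest marginal utility."""
    given = [0.0] * len(utilities)
    for _ in range(int(budget / step)):
        # marginal utility of one more chunk for each cause
        gains = [u(g + step) - u(g) for u, g in zip(utilities, given)]
        i = max(range(len(gains)), key=gains.__getitem__)
        given[i] += step
    return given

# Two hypothetical causes with diminishing returns (log utility);
# cause A is four times as effective per dollar at the start.
a = lambda x: 4 * math.log1p(x)
b = lambda x: math.log1p(x)

# A small donor's entire budget goes to the better cause;
# a large donor splits once A's marginal utility falls to B's level.
small = best_allocation([a, b], budget=5)
large = best_allocation([a, b], budget=100)
```

With concave utilities the large-budget split settles where the two marginal utilities are equal, which is exactly the "until marginal utility changes" threshold described above.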

Comment author: ChristianKl 07 June 2012 10:54:10PM 12 points [-]

Deeply committed to AI risk reduction. (It would be risky to have people who could be pulled off the team—with all their potentially dangerous knowledge—by offers from hedge funds or Google.)

To me this seems naive. Having someone who actually worked at SI on FAI go to Google might be a good thing. It creates a connection between Google and SI. If he sees major issues inside Google that invalidate your work on FAI, he might be able to alert you. And if Google does something that's dangerous according to the SI consensus, then he's around to tell them about the danger.

Being open is a good thing.

Comment author: reup 08 June 2012 03:31:43AM 1 point [-]

And, if they're relying on perfect secrecy/commitment over a group of even a half-dozen researchers as the key to their safety strategy, then by their own standards they should not be trying to build an FAI.

Comment author: Dr_Manhattan 08 June 2012 01:43:05AM 0 points [-]

Only if he thinks he can only weakly affect outcomes, or can exert a large amount of control as the evidence starts coming in.

Comment author: reup 08 June 2012 03:27:33AM 2 points [-]

Remember he's playing an iterated game. So, if we assume that right now he has very little information about which area is the most important to invest in, or which areas are most likely to produce the best return, then playing a wider distribution to gain information that will maximize the utility of later rounds of donations/investments seems rational.
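This is the classic explore/exploit tradeoff from multi-armed bandit problems, and even the simplest strategy captures it: mostly fund the cause with the best observed track record, but keep sampling the others to refine your information. A minimal epsilon-greedy sketch, with entirely made-up noisy "impact" distributions:

```python
import random

def donate_rounds(causes, rounds, eps=0.2, seed=0):
    """Epsilon-greedy over donation targets: usually fund the cause with
    the best observed average impact, but explore the others eps of the time."""
    rng = random.Random(seed)
    totals = {name: 0.0 for name in causes}
    counts = {name: 0 for name in causes}
    for _ in range(rounds):
        if rng.random() < eps or not any(counts.values()):
            name = rng.choice(sorted(causes))  # explore a random cause
        else:
            # exploit: best average observed impact so far
            name = max(counts, key=lambda n: totals[n] / counts[n] if counts[n] else 0.0)
        totals[name] += causes[name]()  # noisy observed impact this round
        counts[name] += 1
    return counts

# Hypothetical noisy "impact per dollar" observations for two causes.
payoff_rng = random.Random(1)
causes = {
    "cause_a": lambda: payoff_rng.gauss(1.0, 0.3),
    "cause_b": lambda: payoff_rng.gauss(0.4, 0.3),
}
counts = donate_rounds(causes, rounds=200)
# Early rounds spread funding around; later rounds concentrate on the
# cause whose observed impact is higher.
```

The point isn't the particular algorithm; it's that spreading donations early is a rational information-gathering move precisely because it sharpens later rounds of giving.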

Comment author: reup 08 June 2012 01:13:23AM 1 point [-]

Is there a post on the relative strengths/weaknesses of UDT and TDT? I've searched but haven't found one.

Comment author: Slackson 05 June 2012 11:11:51PM *  2 points [-]

I'm trying to learn to program. Again.

In my previous attempts I became frustrated by my slow progress, but now I've finished Learn Python the Hard Way and I'm reading through The Django Book while working on a prediction market webapp that uses points instead of real money. Not a particularly original or groundbreaking project, but it's good to actually be making something that might be useful at some point.

One example it could be useful for is gaming communities, like Minecraft servers, when it comes to prioritising the implementation of features requested by the users. I'd like to create a reddit-like structure for it, but if it ends up being something I actually launch with user-controlled sub-communities, whoever has control over judging the outcome of a bet will have a controversy-causing amount of control.

I'm useless at HTML so I'm either going to have to learn that properly too for the front end or I'll have to enlist my already overly-busy friend to help with that side of things. Or I'll just have something really minimalistic copied from online HTML tutorials and whichever free WYSIWYG editors I can find.
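For the points-based market itself, one standard design is Hanson's logarithmic market scoring rule (LMSR), which lets users trade against an automated market maker rather than needing a matched counterparty. A minimal sketch for a yes/no question (the class and names are my own invention, not from any particular library):

```python
import math

class LMSRMarket:
    """Minimal logarithmic market scoring rule for a yes/no question.
    b controls liquidity: a higher b makes prices move more slowly."""

    def __init__(self, b=100.0):
        self.b = b
        self.shares = [0.0, 0.0]  # [yes, no] shares outstanding

    def _cost(self, shares):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(s / self.b) for s in shares))

    def price(self, outcome):
        """Current implied probability of the outcome (0=yes, 1=no)."""
        exps = [math.exp(s / self.b) for s in self.shares]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, n):
        """Buy n shares of the outcome; returns the cost in points."""
        before = self._cost(self.shares)
        self.shares[outcome] += n
        return self._cost(self.shares) - before

m = LMSRMarket(b=100.0)
# The market starts at 50/50; buying "yes" pushes the probability up,
# and the trade costs slightly more than 50 * 0.5 points because the
# price rises as you buy.
cost = m.buy(0, 50)
```

A nice property for a Django app: the market maker always quotes a price, so the only hard design problem left is the one noted above, namely who judges outcomes.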

Comment author: reup 07 June 2012 11:40:33PM 1 point [-]

On the html side, grab a free template (quite a few sites out there offer nice ones). I find that it's easier to keep working when my project at least looks decent. Also, at least for me, I feel more comfortable showing it to friends for advice when there's some superficial polish.

Also, when you see something (a button, control or effect) on a site, open the source. A decent percent of the time you'll find it's actually open source already (lots of js frameworks out there) and you can just copy directly. If not, you'll still learn how it's done.

Good luck!

Comment author: John_Maxwell_IV 07 June 2012 08:46:51PM *  6 points [-]

I'm going to open my clueless mouth again: many of the problems associated with FAI haven't been defined that well yet. Maybe solving them will require new math, but it seems possible that existing math already provides the necessary tools. Perhaps it would be a good idea to have a generalist who has at least passing familiarity with a large variety of mathematical tools and can direct the team towards existing ones that might solve their problem. See the section called "The Right Way To Learn Math" in this post for more:

http://steve-yegge.blogspot.com/2006/03/math-for-programmers.html

And a meta-level comment: presumably folks at SI are discussing these issues independently of the discussion on Less Wrong; they don't seem to be posting here much. I'm curious why this is considered optimal. It seems to me that posting your arguments on the Internet is a good way to get falsifying evidence for them. If the box does not contain a diamond, I wish to believe the box does not contain a diamond, and whatnot.

Comment author: reup 07 June 2012 11:28:30PM 0 points [-]

Maybe solving them will require new math, but it seems possible that existing math already provides the necessary tools.

There seems to be far more commitment to a particular approach than is justified by the evidence (at least what they've publicly revealed).

Comment author: jsteinhardt 07 June 2012 10:44:28PM 4 points [-]

Are you sure about this? I don't know of that many people who did super-well in contests as a result of being tutored from an early age (although I would agree that many who do well in contests took advanced math classes at an early age; others did not). Many top scorers train on their own or in local communities. Now that there are websites like AoPS, it is easier to do well even without a local community, although I agree that being in a better socioeconomic situation is likely to help.

Comment author: reup 07 June 2012 11:12:38PM 0 points [-]

I think we can safely stipulate that there is no universal route to contest success or Luke's other example of 800 math SATs.

But I can answer your question: yes, I'm sure that at least some of the students are receiving supplemental tutoring. Not necessarily contest-focused, but still.

Anecdotally: the two friends I had from undergrad who were IMO medalists (about 10 years ago) had both gone through early math tutoring programs (and both had a parent who was a math professor). All of my undergrad friends who had an 800 math SAT had either received tutoring or had their parents buy them study materials (most of them did not look back fondly on the experience).

Remember, for any of these tests, there's a point where even a small amount of training to the test overwhelms a good deal of talent. Familiarity with problem types, patterns, etc. can vastly improve performance.

I have no way to evaluate the scope of your restrictions on doing "super-well" or the requirement that the tutoring start at an "early age" (although at least one of the anecdotal IMO cases did a Kumon-type program that started in pre-school).

Are there some people who don't follow that route? Certainly. However, I do think that it's important to be aware of other factors that may be present.

Comment author: Dr_Manhattan 07 June 2012 01:22:09PM 0 points [-]

In light of the discussion he seems to be hedging his bets in this area. I'm not sure that's the right strategy from the x-risk perspective. At the very least it seems inconsistent.

Comment author: reup 07 June 2012 09:32:29PM 3 points [-]

I think it could be consistent if you treat his efforts as designed to gather information.
