I agree, but as I've understood it, they're explicitly saying they won't release any AGI advances they make. What will it do to their credibility to be funding a "secret" AI project?
I honestly worry that this could kill funding for the organization, which doesn't seem optimal in any scenario.
Potential Donor: I've been impressed with your work on AI risk. Now, I hear you're also trying to build an AI yourselves. Who do you have working on your team?
SI: Well, we decided to train high schoolers since we couldn't find any researchers we could trust.
PD...
I remember reading and enjoying that article (this one, I think).
I would think that the same argument would apply regardless of the scale of the donations (assuming there aren't fixed transaction costs, which might not be valid). My read is that it comes down to the question of risk versus uncertainty. If there is genuine uncertainty, investing widely might make sense if you believe those investments will provide useful information to clarify the actual problem structure, so that you can accurately target future giving.
And, if they're relying on perfect secrecy/commitment over a group of even a half-dozen researchers as the key to their safety strategy, then by their own standards they should not be trying to build an FAI.
Remember, he's playing an iterated game. So, if we assume that right now he has very little information about which area is the most important to invest in or which areas are most likely to produce the best return, then playing a wider distribution to gain information, and so maximize the utility of later rounds of donations/investments, seems rational.
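To make that value-of-information intuition concrete, here's a minimal sketch (in Python, with entirely made-up areas and payoff numbers) that treats the cause areas as bandit arms: donating widely at first buys information that makes later, concentrated giving more effective. Nothing here is from the original discussion; the "explore then exploit" split is just one simple strategy among many.

```python
import random

random.seed(0)

# Hypothetical cause areas with true payoffs unknown to the donor.
true_payoffs = {"area_a": 0.3, "area_b": 0.7, "area_c": 0.5}

def donate(area):
    """One unit donated; the observed return is noisy around the true payoff."""
    return random.gauss(true_payoffs[area], 0.2)

def run(explore_rounds, total_rounds):
    """Spread donations widely at first, then concentrate on the best estimate."""
    observations = {area: [] for area in true_payoffs}
    total = 0.0
    for t in range(total_rounds):
        if t < explore_rounds:
            # Wide distribution: each round buys information about an area.
            area = random.choice(list(true_payoffs))
        else:
            # Concentrate on the area with the best observed mean so far.
            area = max(observations,
                       key=lambda a: sum(observations[a]) / max(len(observations[a]), 1))
        result = donate(area)
        observations[area].append(result)
        total += result
    return total

for explore in (0, 30):
    totals = [run(explore, 100) for _ in range(200)]
    print(f"explore_rounds={explore}: mean total return {sum(totals)/len(totals):.1f}")
```

In this toy setup, a donor with no exploration phase tends to lock onto whichever area it happened to try first, while a modest exploration phase usually identifies the higher-payoff area and does substantially better over the full run, which is the point about later rounds above.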
Is there a post on the relative strengths/weaknesses of UDT and TDT? I've searched but haven't found one.
On the HTML side, grab a free template (quite a few sites out there offer nice ones). I find that it's easier to keep working when my project at least looks decent. Also, at least for me, I feel more comfortable showing it to friends for advice when there's some superficial polish.
Also, when you see something (a button, control, or effect) on a site, open the source. A decent percentage of the time you'll find it's actually open source already (there are lots of JS frameworks out there) and you can just copy directly. If not, you'll still learn how it's done.
Good luck!
Maybe solving them will require new math, but it seems possible that existing math already provides the necessary tools.
There seems to be far more commitment to a particular approach than is justified by the evidence (at least the evidence they've publicly revealed).
I think we can safely stipulate that there is no universal route to contest success, or to Luke's other example of 800 math SATs.
But I can answer your question: yes, I'm sure that at least some of the students are receiving supplemental tutoring. Not necessarily contest-focused, but still.
Anecdotally: the two friends I had from undergrad who were IMO medalists (about 10 years ago) had both gone through early math tutoring programs (and both had a parent who was a math professor). All of my undergrad friends who had 800 math SAT had either received tutori...
I think it could be consistent if you treat his efforts as designed to gather information.
Another version of this is to offer to go talk with a priest/pastor yourself. One thing this does is buy you time while your mom adjusts. If you find a decent one to talk with (if your church has one, youth pastors are sometimes a bit more open), the conversation won't be too unpleasant (don't view it as convincing them; just lay out your reasoning).
Your mom may be pleased that someone "higher up" is dealing with you. Also, when they fail to convince you, it helps her to let go of the idea that there was something more she could have done.
This. It comes off as amateurish, as if they don't know which details are important to include. But hopefully these semi-formal discussions help refine the pitch and presentation before they're standing in front of potential donors.
Either way, I think that building toward an FAI team is good for AI risk reduction, even if we decide (later) that an SI-hosted FAI team is not the best thing to do.
I question this assumption. I think that building an FAI team may damage your overall goal of AI risk reduction for several reasons:
By setting yourself up as a competitor to other AGI research efforts, you strongly decrease the chance that they will listen to you. It will be far easier for them to write off your calls for consideration of friendliness issues as self-serving.
You risk unde
The fact that you are looking for "raw" math ability seems questionable. If their most recent achievements are IMO/SAT, you're looking at high schoolers or early undergrads (Putnam winners have their tickets punched at top grad schools and will be very hard to recruit). Given that, you'll face at least a 5-10 year lag while they learn enough to do basic research.
One somewhat related quote that came to mind (from lukeprog's article on philosophy):
Second, if you want to contribute to cutting-edge problems, even ones that seem philosophical, it's far more productive to study math and science than it is to study philosophy. You'll learn more in math and science, and your learning will be of a higher quality.
One other issue is that a near-precondition for IMO-type recognition is coming from at least a middle-class family and having either an immediate family member or an early teacher able to recognize and direct that talent. Worse, as these competitions have grown in stature, an increasing number of the students are pushed by parents and given regular tutoring and preparation. Those sorts of hothouse personalities would seem to be among the riskier to put on an FAI team.
I think that some of the issue is that while Eliezer's conception of these issues has continued to evolve, we continue both to point and to be pointed back to posts that he only partially agrees with. We might chart a more accurate position by winding through a thousand comments, but that's a difficult thing to do.
To pick one example from a recent thread, here he adjusts (or flags for adjustment) his thinking on Oracle AI, but someone who missed that would have no idea from reading older articles.
It seems like our local SI representatives recognize the need f...