All of reup's Comments + Replies

reup

I think part of the issue is that while Eliezer's conception of these issues has continued to evolve, we continue both to point and to be pointed back to posts that he only partially agrees with. We might chart a more accurate position by winding through a thousand comments, but that's a difficult thing to do.

To pick one example from a recent thread, here he adjusts (or flags for adjustment) his thinking on Oracle AI, but someone who missed that would have no idea from reading older articles.

It seems like our local SI representatives recognize the need f... (read more)

witzvo
Maybe this is what you're implying is already in progress, but if the main issue is that parts of the sequence are out of date, maybe Eliezer could commission a set of people who've been following the discussion all along to write review pieces, drawing on all the best comments, that describe how they themselves would "rediscover" the conclusions of the part of the sequence they're responsible for (with links back to the original discussion). Ideally these reviewers would work out between themselves how to make a clean and succinct narrative without lots of repetition; e.g. how to collapse issues that get revisited later, or that crosscut, into a clear narrative. Then Eliezer and the rest of us could comment on those summaries, as a peer review. Of course, it's fine if he wants to write the new material himself, but frankly I want to know what's going to happen in HPMOR. :)
reup

I agree, but as I've understood it, they're explicitly saying they won't release any AGI advances they make. What will it do to their credibility to be funding a "secret" AI project?

I honestly worry that this could kill funding for the organization, which doesn't seem optimal in any scenario.

Potential Donor: I've been impressed with your work on AI risk. Now, I hear you're also trying to build an AI yourselves. Who do you have working on your team?

SI: Well, we decided to train high schoolers since we couldn't find any researchers we could trust.

PD... (read more)

Kaj_Sotala
Indeed.
reup

I remember reading and enjoying that article (this one, I think).

I would think that the same argument would apply regardless of the scale of the donations (assuming there aren't fixed transaction costs, which might not be valid). My read would be that it comes down to the question of risk versus uncertainty. If there is genuine uncertainty, investing widely might make sense if you believe those investments will provide useful information to clarify the actual problem structure, so that you can accurately target future giving.

reup

And if they're relying on perfect secrecy and commitment across a group of even half a dozen researchers as the key to their safety strategy, then by their own standards they should not be trying to build an FAI.

reup

Remember, he's playing an iterated game. So, if we assume that right now he has very little information about which area is the most important to invest in or which areas are most likely to produce the best return, spreading his bets more widely in order to gain information that maximizes the utility of later rounds of donations/investments seems rational.
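To make that explore-then-exploit intuition concrete, here's a minimal sketch (a toy model of my own, not anything Thiel or SI has actually proposed): treat each round of giving as pulling an arm of a bandit whose per-dollar payoff is unknown, sample broadly while the estimates are noisy, and shift weight toward the best-looking cause as evidence accumulates. The cause names and payoff numbers below are made up for illustration.

```python
import random

# Toy epsilon-greedy model of "spread donations while uncertain, concentrate later".
# Everything here is hypothetical: the cause names and hidden per-dollar payoff
# rates are invented for illustration, not estimates of any real organization.

CAUSES = {"cause_a": 0.3, "cause_b": 0.7, "cause_c": 0.5}  # hidden mean payoffs


def observe_impact(cause: str) -> float:
    """Noisy feedback on a cause's per-dollar impact after one round of giving."""
    return CAUSES[cause] + random.gauss(0.0, 0.2)


def allocate(rounds: int = 200, epsilon: float = 0.3) -> dict:
    totals = {c: 0.0 for c in CAUSES}   # summed observed impact per cause
    counts = {c: 0 for c in CAUSES}     # number of donations per cause
    for t in range(rounds):
        # Explore widely early on to learn which cause performs best,
        # then exploit the current best estimate as information accumulates.
        explore = random.random() < epsilon * (1 - t / rounds)
        untried = [c for c, n in counts.items() if n == 0]
        if explore or untried:
            cause = random.choice(untried or list(CAUSES))
        else:
            cause = max(CAUSES, key=lambda c: totals[c] / counts[c])
        totals[cause] += observe_impact(cause)
        counts[cause] += 1
    return counts


if __name__ == "__main__":
    # Later rounds end up concentrated on the empirically best cause.
    print(allocate())
```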

radical_negative_one
I remember reading, on the topic of optimal charity, that it's only rational to select a single cause to donate to... until the point of giving enough money to noticeably change the marginal utility of each additional dollar. (Thiel has that much money, of course.) This information-gathering strategy could be a new reason for spreading donations at the large-donation scale, if it hasn't been discussed before.
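For what it's worth, here is a minimal formal version of that marginal-utility argument (my own sketch, not taken from the article in question): with a budget B and per-cause utility functions u_i, the donor solves

\[
\max_{x_1,\dots,x_n \ge 0} \; \sum_i u_i(x_i) \quad \text{subject to} \quad \sum_i x_i = B .
\]

If each u_i is effectively linear over the amounts in question, u_i(x_i) \approx c_i x_i, the entire budget goes to the single cause with the largest c_i; splitting only becomes optimal once B is large enough that the marginal utilities u_i'(x_i) of the leading causes are driven down to a common level.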
reup

Is there a post on the relative strengths/weaknesses of UDT and TDT? I've searched but haven't found one.

reup

On the HTML side, grab a free template (quite a few sites out there offer nice ones). I find it's easier to keep working when my project at least looks decent. Also, at least for me, I feel more comfortable showing it to friends for advice when there's some superficial polish.

Also, when you see something (a button, control, or effect) on a site, open the source. A decent percentage of the time you'll find it's actually open source already (there are lots of JS frameworks out there) and you can just copy it directly. If not, you'll still learn how it's done.

Good luck!

reup

Maybe solving them will require new math, but it seems possible that existing math already provides the necessary tools.

There seems to be far more commitment to a particular approach than is justified by the evidence (at least by what they've publicly revealed).

reup

I think we can safely stipulate that there is no universal route to contest success, or to Luke's other example of an 800 math SAT.

But I can answer your question: yes, I'm sure that at least some of the students are receiving supplemental tutoring. Not necessarily contest-focused, but still.

Anecdotally: the two friends I had from undergrad who were IMO medalists (about 10 years ago) had both gone through early math tutoring programs (and both had a parent who was a math professor). All of my undergrad friends who had 800 math SAT had either received tutori... (read more)

reup

I think it could be consistent if you treat his efforts as designed to gather information.

Dr_Manhattan
Only if he thinks he can only weakly affect outcomes, or can exert a large amount of control as the evidence starts coming in.
reup

Another version of this is to offer to go talk with a priest/pastor yourself. One thing this does is buy you time while your mom adjusts. If you find a decent one to talk with (if your church has one, youth pastors are sometimes a bit more open), the conversation won't be too unpleasant (don't view it as convincing them; just lay out your reasoning).

Your mom may be pleased that someone "higher up" is dealing with you. Also, when they fail to convince you, it helps her to let go of the idea that there was something more she could have done.

reup

This. Not knowing which details are important to include comes off as amateurish. But hopefully these semi-informal discussions help with refining the pitch and presentation before they're standing in front of potential donors.

reup

Either way, I think that building toward an FAI team is good for AI risk reduction, even if we decide (later) that an SI-hosted FAI team is not the best thing to do.

I question this assumption. I think that building an FAI team may damage your overall goal of AI risk reduction for several reasons:

  1. By setting yourself up as a competitor to other AGI research efforts, you strongly decrease the chance that they will listen to you. It will be far easier for them to write off your calls for consideration of friendliness issues as self-serving.

  2. You risk unde

... (read more)
Kaj_Sotala
On the other hand, SI might get taken more seriously if it is able to demonstrate that it actually does know something about AGI design and isn't just a bunch of outsiders to the field doing idle philosophizing. Of course, this requires that SI is ready to publish part of its AGI research.
reup

The fact that you are looking for "raw" math ability seems questionable. If their most recent achievements are IMO/SAT, you're looking at high schoolers or early undergrads (Putnam winners have their tickets punched at top grad schools and will be very hard to recruit). Given that, you'll have at least a 5-10 year lag while they continue learning enough to do basic research.

Qiaochu_Yuan
Yes. So? During that time, you can get them interested in rationality and x-risk.
reup

One somewhat relevant quote that came to mind (from lukeprog's article on philosophy):

Second, if you want to contribute to cutting-edge problems, even ones that seem philosophical, it's far more productive to study math and science than it is to study philosophy. You'll learn more in math and science, and your learning will be of a higher quality.

Wei Dai
My view is that if you take someone with philosophical talents and interests (presumably inherited or caused by the environment in a hard-to-control manner), you can make a better philosopher out of them by having them study more math and science than the typical education for a philosopher includes. But if you take someone with little philosophical talent and interest and do the same, they'll just become a mathematician or scientist. I think this is probably similar to the views of SIAI people, and your quote doesn't contradict my understanding.
private_messaging
Unfortunately, none of the core SingInst guys seem to have any interesting accomplishments in math or to have actually studied that math in depth; it is a very insightful remark by Luke, but it'd be great if they had applied it to themselves; otherwise it just looks like the Dunning-Kruger effect. I don't see any reason to think that the elite-math references are anything but lame signaling of the kind usually done by those who don't know math well enough to properly signal the knowledge (by actually doing something new in math). Sadly, it works: if you use jargon and say something like what Luke said, then some of the people who can't independently evaluate your math skills will assume they must be very high. Meanwhile, I will assume them to be rather low, because those with genuinely high skill will signal it in a different way.
reup

One other issue is that a near-precondition for IMO-type recognition is coming from at least a middle-class family and having either an immediate family member or an early teacher able to recognize and direct that talent. Worse, as these competitions have increased in stature, you have an increasing number of students being pushed by their parents and given regular tutoring and preparation. Those sorts of hothouse personalities would seem to be among the riskier to put on an FAI team.

jsteinhardt
Are you sure about this? I don't know of that many people who did super well in contests as a result of being tutored from an early age (although I would agree that many who do well in contests took advanced math classes at an early age; others did not). Many top scorers train on their own or in local communities. Now that there are websites like AoPS, it is easier to do well even without a local community, although I agree that being in a better socioeconomic situation is likely to help.