A tricky thing about feedback on LW (or maybe just human nature or webforum nature):
Basically, if you try to actually do a thing, or be particularly specific/concrete, then you are held to a much higher standard.
There are some counterexamples. And LW is better than lots of sites.
Nonetheless, I feel like here I get a warm welcome to talk bullshit around the water cooler but angry stares when I try to mortar a few bricks.
I feel like this is almost a good site for getting your hands dirty and getting feedback and such. Just a more positive culture towards actual shots on target would be sufficient, I think. I'm not sure how that could be achieved.
Maybe this is like publication culture vs workshop culture or something.
Unpolished first thoughts:
It's not perfect, but one approach I saw on here and liked a lot was @turntrout's MATS team's approach for some of the initial shard theory work: they made an initial post outlining the problem and soliciting predictions on a set of concrete questions (which gave a nice affordance for engagement, namely "make predictions and maybe comment on your predictions"), and then they made a follow-up post with their actual results. It seemed to get quite good engagement.
A confounding factor, though, was that it was also an unusually impressive bit of research.
At least as far as safety research goes, concrete empirical safety research is often well received.
I think you're directionally correct and would like to see LessWrong reward concrete work more. But I think your analysis suffers from survivorship bias. Lots of "look at the target" posts die on the vine, so you never see their low karma, and decent arrow-shot posts tend to get more like 50 karma even when the comments section is empty.
I don't have a witty, insightful, neutral-sounding way to say this. The grantmakers should let the money flow. There are thousands of talented young safety researchers with decent ideas and exceptional minds, but they probably can't prove it to you. They only need one thing and it is money.
They will be 10x less productive in a big nonprofit and they certainly won't find the next big breakthrough there.
(Meanwhile, ever-better ways to make money are emerging that don't involve any good deeds at all.)
My friends were a good deal sharper and more motivated at 18 than now at 25. None of them had any chance at getting grants back then, but they have an ok shot now. At 35, their resumes will be much better and their minds much duller. And it will be too late to shape AGI at all.
I can't find a good LW voice for this point but I feel this is incredibly important. Managers will find all the big nonprofits and eat their gooey centers and leave behind empty husks. They will do this quickly, within a couple years of each nonprofit being founded. The founders themselves will not be spared. Look how the writing of Altman or Demis changed over the years.
The funding situation needs to change very much and very quickly. If a man has an idea just give him money and don't ask questions. (No, I don't mean me.)
I think I disagree. This is a bandit problem, and grantmakers have tried pulling that lever a bunch of times. There hasn't been any field-changing research (yet). They knew it had a low chance of success so it's not a big update. But it is a small update.
Probably the optimal move isn't cutting early-career support entirely, but having a higher bar seems correct. There are other levers that are worth trying, and we don't have the resources to try every lever.
Also there are more grifters now that the word is out, so the EV is also declining that way.
(I feel bad saying this as someone who benefited a lot from early-career financial support).
"grantmakers have tried pulling that lever a bunch of times"
What do you mean by this? I can think of lots of things in some broad class of lever-pulling that kinda looks like this, but most of the ones I'm aware of fall far short of being an appropriate attempt to leverage smart young creative motivated would-be AGI alignment insight-havers. So the update should be much smaller (or there's a bunch of stuff I'm not aware of).
"upskilling or career transition grants, especially from LTFF, in the last couple of years"
Interesting; I'm less aware of these.
How are they falling short?
I'll answer as though I know what's going on in various private processes, but I don't, and therefore could easily be wrong. I assume some of these are sort of done somewhere, but not enough and not together enough.
Just wanted to flag quickly that Open Philanthropy's GCR Capacity Building team (where I work) has a career development and transition funding program.
The program aims to provide support—in the form of funding for graduate study, unpaid internships, self-study, career transition and exploration periods, and other activities relevant to building career capital—for individuals at any career stage who want to pursue careers that could help reduce global catastrophic risks (esp. AI risks). It’s open globally and operates on a rolling basis.
I realize that this is quite different from what lemonhope is advocating for here, but nevertheless thought it would be useful context for this discussion (and potential applicants).
I would mostly advise people against making large career transitions on the basis of Open Phil funding, or, if you do, to be very conservative about it. Like, don't quit your job because of a promise of one year of funding, because it is quite possible your second year will only be given conditional on your aligning with the political priorities of OP funders or OP reputational management, and career transitions usually take longer than a year. To be clear, I think it often makes sense to accept funding from almost anyone, but in the case of OP it is funding with unusually hard-to-notice strings attached that might bite you when you are particularly weak-willed or vulnerable.
Also, if OP staff tell you they will give you future grants, or guarantee you some kind of "exit grant", I would largely discount that, at least at the moment. This is true for many, if not most, funders, but my sense is people tend to be particularly miscalibrated about OP (who aren't more or less trustworthy in their forecasts than random foundations and philanthropists, but I think are often perceived as much more so).
Of course, people's risk appetites differ, and mileage may vary, but if you can, I would try to negotiate for a 2-3 year grant, or find another funder to backstop you for another year or two, even if OP has said they would keep funding you, before pursuing some kind of substantial career pivot.
Yeah, I was thinking of PhD programs as one of the most common longer-term grants.
Agree that it's reasonable for a lot of this funding to be shorter. But given the shifting funding landscape, where most research that is good by my lights can no longer get funding, I would be quite hesitant for people to substantially sacrifice career capital in the hope of getting funding later. More concretely, I think the right choice is for people to pick a path where they end up with a lot of slack to think about what directions to pursue, instead of being particularly vulnerable to economic incentives while trying to orient within the very high-stakes-feeling and difficult-to-navigate existential risk reduction landscape; that vulnerability tends to result in the best people predictably working for big capability companies.
This includes the constraints of "finding paid positions after <1 year", where the set of organizations that have funding to sponsor good work is also very small these days (though I do think that has a decent chance of changing again within a year or two, so it's not a crazy bet to make).
Given these recent shifts and the harsher economic incentives of transitioning into the space, I think it would make sense for people to negotiate with OP for longer grants than OP has historically given (which I believe OP staff also consider sensible, based on conversations I've had).
Yes, I do not believe OP's funding constraints are well described as limited to grants to the "rationality community" or to "conservative/republican-coded activities".
Just as an illustration: if you start directing your thinking or your career towards making sure we don't torture AI systems despite their possibly having moral value, that is also a domain OP has withdrawn funding from. Same if you want to work on any wild animal or invertebrate suffering. I also know of multiple other grantees who cannot receive funding despite not straightforwardly falling into any domain OP has announced it is withdrawing from.[1]
I think the best description for predicting what OP is avoiding funding right now, and will continue to avoid funding into the future is broadly "things that might make Dustin or OP look weird, and are not in a very small set of domains where OP is OK with taking reputational hits or defending people who want to be open about their beliefs, or might otherwise cost them political capital with potential allies (which includes but is not exclusive to the democratic party, AI capability companies, various US government departments, and a vague conc...
"My friends were a good deal sharper and more motivated at 18 than now at 25."
How do you tell that they were sharper back then?
It sounds pretty implausible to me; intellectual productivity usually peaks from the mid-20s to mid-30s (for high fluid-intelligence fields like math and physics).
Where has the "rights of the living vs rights of the unborn" debate already been had? In the context of longevity. (Presuming that at some point an exponentially increasing population consumes its cubically increasing resources.)
I wonder how many recent trans people tried or considered doubling down on their assigned sex (e.g. males taking more testosterone) first instead. Maybe (for some people) either end of the gender spectrum is comfortable and being in the middle feels bad? Anybody know? I don't want to ask my friends because this question will certainly anger them.
Is there a good like uh "intro to China" book or YouTube channel? Like something that teaches me (possibly indirectly) what things are valued, how people think and act, extremely basic history, how politics works, how factories get put up, etc etc. Could be about government, industry, the common person, or whatever. I wish I could be asking for something more specific, but I honestly do not even know the basics.
All I've read is Shenzhen: A Travelogue from China which was quite good although very obsolete. Also it is a comic book.
I'm not much of a reader ...
What is the current popular (or ideally wise) wisdom wrt publishing demos of scary/spooky AI capabilities? I've heard the argument that moderately scary demos drive capability development into secrecy. Maybe it's just all in the details of who you show what when and what you say. But has someone written a good post about this question?
It's hard to grasp just how good backprop is. Normally in science you estimate the effect of 1-3 variables on 1-3 outcomes. With backprop you can estimate the effect of a trillion variables on an outcome. You don't even need more samples! Around 100 is typical for both (n in an experiment vs. batch_size in training).
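A minimal sketch of that asymmetry (using JAX here as the autodiff framework; any of them behaves the same way, and the sizes below are stand-ins rather than anything from the claim above):

```python
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)

n_params, batch_size = 100_000, 100        # stand-ins for "a trillion variables" and n ~ 100
params = jax.random.normal(k1, (n_params,))
x = jax.random.normal(k2, (batch_size, n_params))
y = jax.random.normal(k3, (batch_size,))

def loss(params, x, y):
    preds = x @ params                     # simple linear model, one scalar outcome
    return jnp.mean((preds - y) ** 2)

# One backward pass yields the sensitivity of the outcome to every
# parameter at once; the sample count doesn't grow with the parameter count.
grads = jax.grad(loss)(params, x, y)
print(grads.shape)                         # (100000,)
```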
I wonder how a workshop that teaches participants how to love easy victory and despise hard-fought battles could work.
I wonder if a chat loop like this would be effective at shortcutting years of confused effort, maybe in research and/or engineering. (The AI just asks the questions and the person answers.)
Questions like that can be surprisingly easy to answer. Just hard to remember to ask.
I notice I strong-upvote a lot more on LW mobile than on desktop, because double-tap is more natural than long-click. Maybe mobile should require a minimum delay between the two taps?
Is it rude to make a new tag without also tagging a handful of posts for it? A few tags I kinda want:
Zettelkasten in five seconds with no tooling
Have one big textfile with every thought you ever have. Number the thoughts and don't make each thought too long. Reference thoughts with a pound sign (e.g. #456) for easy search.
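A tiny hypothetical stretch of such a file (made-up entries, just to show the convention):

```
454. Diagrams beat prose for most explanations.
455. Counterpoint to #454: diagrams are slow to produce.
456. Maybe pay someone to diagram high-karma posts (see #454).
```

Searching the file for "#454" then finds both the thought and every later reference to it.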
I can only find capabilities jobs right now. I would be interested in starting a tiny applied research org or something. How hard is it to get funding for that? I don't have a strong relevant public record, but I did quite a lot of work at METR and elsewhere.
LW mods, please pay somebody to turn every post with 20+ karma into a diagram. Diagrams are just so vastly superior to words.
maybe you die young so you don't get your descendants sick
I've always wondered why evolution didn't select for longer lifespans more strongly. Like, surely a mouse that lives twice as long would have more kids and better knowledge of safe food sources. (And could lead its descendants to the same food sources.) I have googled for an explanation a few times but haven't found one yet.
I thought of a potential explanation the other day. The older you get, the more pathogens you take on. (Especially if you're a mouse.) If you share a den with your grandkids then you mig...
I wonder how well a water-cooled stovetop thermoelectric backup generator could work.
This one is only 30W, but air-cooled: https://www.tegmart.com/thermoelectric-generators/wood-stove-air-cooled-30w-teg
You could use a fish-tank water pump to bring water to and from the sink. Just fill up a bowl of water with the faucet and stick the tube in it. Leave the faucet running. Put a filter on the bowl. Add a float switch to detect low water, and run its wire along with the water tube. (Rough flow numbers in the sketch below.)
A normal natural gas generator is like $5k-10k, and you have to be a homeowner.
I think really wide kettle with coily ...
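A quick sanity check on the water-pump idea, with all the numbers being my own assumptions rather than anything from the product page:

```python
# Back-of-envelope: can a small aquarium pump carry away the waste heat on
# the cold side of a stovetop TEG? All figures below are assumptions.
heat_to_dump_w = 500.0        # assumed cold-side heat load (TEGs are only ~5% efficient)
water_heat_capacity = 4186.0  # J/(kg*K)
allowed_temp_rise_k = 20.0    # assumed temperature rise of the cooling water

flow_kg_per_s = heat_to_dump_w / (water_heat_capacity * allowed_temp_rise_k)
print(flow_kg_per_s * 3600)   # ~21 kg/hour, i.e. ~21 L/hour of flow
```

~21 L/hour is well within the range of even the cheapest aquarium pumps, so the cooling loop itself doesn't look like the hard part.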
(Quoting my recent comment)
Apparently in the US we are too ashamed to say we have "worms" or "parasites", so instead we say we have "helminths". Using this keyword makes Google work. This article estimates that at least 5 million people (possibly far more) in the US have one of the 6 considered parasites. Other parasites may also be around. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7847297/ (table 1)
This is way more infections than I thought!!
Note the weird symptoms. Blurry vision, headache, respiratory illness, blindness, impaired cognition, fever... Not j...
I was working on this cute math notation the other day. Curious if anybody knows a better way or if I am overcomplicating this.
Say you have . And you want to be some particular value.
Sometimes you can control , sometimes you can control , and you can always easily measure . So you might use these forms of the equation:
It's kind of confusing that seems proportional to both and . So here's where the notation comes in. Can write above like
...
Seems it is easier / more streamlined / more googlable now for a teenage male to get testosterone blockers than testosterone. The latter is very frowned upon, I guess because it is cheating in sports. Try googling e.g. "get testosterone prescription high school reddit -trans -ftm". The results are exclusively people shaming the cheaters. Whereas of course googling "get testosterone blockers high school reddit" gives tons of love & support & practical advice.
Females, however, retain easy access to hormones via birth control.
I wonder what experiments physicists have dreamed up to find floating point errors in physics. Anybody know? Or can you run physics with large ints? Would you need like int256?
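On the int256 question, a back-of-envelope answer (my numbers, just for scale): covering the observable universe at Planck-length resolution takes about 205 bits per coordinate, so int256 would have room to spare.

```python
import math

# How many bits of integer position cover the observable universe at
# Planck resolution? Rough figures, not from the post.
universe_diameter_m = 8.8e26  # approximate diameter of the observable universe
planck_length_m = 1.616e-35

distinct_positions = universe_diameter_m / planck_length_m
print(math.log2(distinct_positions))  # ~205 -> int256 per coordinate suffices
```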
I wonder how much testosterone during puberty lowers IQ. Most of my high school math/CS friends seemed low-T and 3/4 of them transitioned since high school. They still seem smart as shit. The higher-T among us seem significantly brain damaged since high school (myself included). I wonder what the mechanism would be here...
Like 40% of my math/cs Twitter is trans women and another 30% is scrawny nerds and only like 9% big bald men.