Hey, can anyone help me find this article (likely LW, but it could be from the diaspora), especially if you might have read it too?
My vague memory: it was talking about (among other things?) some potential ways of extending point-estimate probability predictions and calibration curves to situations where making a prediction itself affects the outcome, e.g. when a mind-reader or accurate simulator is involved that bases its actions on your prediction. In such a case, a two-dimensional probability estimate might be more appropriate: If 40% is p...
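To gesture at what I mean a bit more concretely, here is a tiny toy sketch I just made up (my own construction, not from the article; the adversarial response function is purely hypothetical): when the outcome reacts to the announced probability, no single calibrated number need exist, while a pair of conditional estimates remains well-defined.

```python
# Toy sketch (my own, not the article's): an accurate simulator conditions
# its behaviour on the announced probability, so no single announced number
# can be calibrated, while a two-dimensional (conditional) report still is.

def outcome_probability(announced: float) -> float:
    """Hypothetical adversarial 'mind-reader': the event happens with
    probability 0.9 if you announce below 50%, and 0.1 otherwise."""
    return 0.9 if announced < 0.5 else 0.1

# Look for a calibrated point estimate, i.e. p with outcome_probability(p) == p.
fixed_points = [p / 100 for p in range(101)
                if abs(outcome_probability(p / 100) - p / 100) < 1e-9]
print("calibrated single estimates:", fixed_points)  # -> [] (none exist)

# A two-dimensional report stays meaningful: the outcome probability
# conditional on each kind of announcement.
report = (outcome_probability(0.0), outcome_probability(1.0))
print("(P(event | announce low), P(event | announce high)) =", report)
```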
I have found it! This was the one:
https://www.lesswrong.com/posts/qvNrmTqywWqYY8rsP/solutions-to-problems-with-bayesianism
Seems to have gotten a better reception at: https://forum.effectivealtruism.org/posts/3z9acGc5sspAdKenr/solutions-to-problems-with-bayesianism
The winning search strategy was quite interesting as well, I think:
I took my history of roughly all the LW articles I have ever read; I had easy access to the titles and URLs, but not the article contents. I fed them one by one into a 7B LLM, asking it to rate, based on the title alone, how likely the unseen article was to be the one I was remembering...
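For anyone curious to try something similar, here is a minimal sketch of the approach; the local endpoint, model name, prompt wording, and history-file format below are illustrative placeholders rather than my exact setup. The top-ranked titles can then be skimmed by hand.

```python
# Minimal sketch of the title-screening idea: score every previously read
# article title against a description of the half-remembered post using a
# small local LLM, then review the top candidates by hand.
# Assumes an OpenAI-compatible chat endpoint (llama.cpp, Ollama, vLLM, ...)
# at LLM_URL; the URL, model name, prompt, and file format are placeholders.

import json
import re

import requests

LLM_URL = "http://localhost:8080/v1/chat/completions"  # placeholder
MODEL = "local-7b"                                      # placeholder

DESCRIPTION = (
    "A post about extending point-estimate probability predictions and "
    "calibration curves to cases where the prediction itself influences "
    "the outcome, e.g. a mind-reader or accurate simulator reacting to it."
)


def score_title(title: str) -> float:
    """Ask the LLM for a 0-100 likelihood that this title matches DESCRIPTION."""
    prompt = (
        f"I am looking for an article matching this description:\n"
        f"{DESCRIPTION}\n\n"
        f'Based on the title alone, rate from 0 to 100 how likely it is that '
        f'this article matches: "{title}".\nAnswer with a single number.'
    )
    resp = requests.post(
        LLM_URL,
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0,
        },
        timeout=60,
    )
    text = resp.json()["choices"][0]["message"]["content"]
    match = re.search(r"\d+(\.\d+)?", text)
    return float(match.group()) if match else 0.0


def rank_history(history: list[tuple[str, str]]) -> list[tuple[float, str, str]]:
    """history: (title, url) pairs; returns them sorted by descending score."""
    return sorted(
        ((score_title(title), title, url) for title, url in history),
        reverse=True,
    )


if __name__ == "__main__":
    # read_history.json: a list of [title, url] pairs (placeholder format).
    with open("read_history.json") as f:
        history = [tuple(pair) for pair in json.load(f)]
    for score, title, url in rank_history(history)[:20]:
        print(f"{score:5.1f}  {title}  {url}")
```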
I'm interested in variants of this from both sides. Feel free to shoot me a DM and let's see if we can set something up.
I haven't had a good label to put on things like this, but I've gravitated towards similar ways of working over the last 10-20 years, and I've very often found strong performance-boosting effects, especially where compatibility and trust could be achieved.
If anyone reading this feels like they missed out, or this sparked their curiosity, or they are bummed that they might have to wait 11 months for a chance at something similar, or they feel like so many cool things happen in North America and so few in Europe (all preceding "or"s are inclusive), then I can heartily recommend coming to LessWrong Community Weekend 2024 [Applications Open] in Berlin, in about 2 months, over the weekend of 13 September. Applications are open as of now.
I've attended it a couple of times so far, and I quite liked it...
In a not-too-fast, and therefore necessarily stealthy, ASI takeover scenario, if the intelligence explosion is not too steep, this could be a main meta-method by which the system gains increasing influence and power while remaining fully under the radar, avoiding detection until it is reasonably sure that it can no longer be opposed. This could be happening without anyone knowing, or maybe even being able to know. Frightening.
The employees of the RAND corporation, in charge of nuclear strategic planning, famously did not contribute to their retirement accounts because they did not expect to live long enough to need them.
Any sources for this? I've tried searching around, so far to no avail, which is surprising if this is indeed famously known.
I expect that until I find a satisfactory resolution to this topic, I might come back to it a few times, and potentially keep a bit of a log here of what I find in case it does add up to something. So far this is one of the things I found:
https://www.lesswrong.com/posts/JnDEAmNhSpBRpjD8L/resolutions-to-the-challenge-of-resolving-forecasts
This seems very relevant to a part of what I was pondering, but I'm not sure yet how actionable the takeaways are.
I strong-upvoted this, but I fear you won't see a lot of traction on this forum for this idea.
I have a vague understanding of why, but I don't think I've heard compelling enough reasons from other LWers yet. If someone has some, I'd be happy to read them or be pointed towards them.
I value empiricism highly, i.e. putting ideas into action to be tested against the universe; but I think I've read EY state somewhere that a superintelligence would need to perform very few or even zero experiments to find out a lot (or even most? all?) true things about our universe that we humans need painstaking effort and experiments for...
I think I've read EY state somewhere that a superintelligence would need to perform very few or even zero experiments to find out a lot (or even most? all?) true things about our universe that we humans need painstaking effort and experiments for.
EY is probably wrong. While more intelligence allows deeper analysis, which can sometimes extract the independent variables of a complex problem, or find the right action from less data, there are limits. When there are thousands of variables and only finite, noisy data (like most medical data)...
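A quick toy illustration of the kind of limit I mean (my own sketch, nothing medical about it): with far more candidate variables than noisy observations, wildly different "ground truths" fit the observed data identically, so only new data from an experiment, not deeper analysis, can separate them.

```python
# Toy sketch: with p >> n and noise, observationally equivalent hypotheses
# can differ arbitrarily, so no amount of analytic depth resolves them.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 1000                      # 50 observations, 1000 candidate variables

X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:5] = [2.0, -1.5, 1.0, 0.5, -0.5]        # only 5 variables matter
y = X @ beta_true + rng.normal(scale=1.0, size=n)  # noisy outcomes

# Build a very different coefficient vector that yields *exactly* the same
# predictions on the observed data: add a vector from the null space of X
# (which is (p - n)-dimensional here).
_, _, Vt = np.linalg.svd(X)          # rows of Vt beyond rank n span null(X)
beta_alt = beta_true + 10.0 * Vt[-1]

print("max prediction difference:", np.abs(X @ (beta_true - beta_alt)).max())
print("identical residuals:", np.allclose(y - X @ beta_true, y - X @ beta_alt))
print("parameter difference norm:", np.linalg.norm(beta_true - beta_alt))
# The fits are numerically identical, yet the two "theories" differ wildly;
# telling them apart needs new data from an experiment, not more intelligence.
```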
I think this is a nice write-up; let me add some nuance in two directions:
Indeed, these are quick-and-dirty heuristics that can be subpar, but you may or may not be surprised by just how often decisions don't reach even this bar. In my work, when we are about to make a decision, I sometimes explicitly have to ask: do we have even a single reason to pick the option we were about to pick over one or more of the others? And I find myself pointing out that (one of) those other options actually has reason(s) for us to pick it; after all, I didn't bring up the question for nothing ...