You can publish it, including the output of a standard hash function applied to the secret password: "Any real note will contain a preimage of this hash."
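To make the scheme concrete, here is a minimal sketch in Python of publishing only the digest and later checking a claimed preimage against it; the choice of SHA-256, the example password, and the function names are my own assumptions for illustration, not part of the original proposal.

```python
import hashlib

# Commit to the secret password by publishing only its hash.
# (Illustrative sketch: the password and hash choice are assumptions.)
secret_password = "correct horse battery staple"  # known only to the note's author
published_hash = hashlib.sha256(secret_password.encode("utf-8")).hexdigest()
print("Publish this commitment:", published_hash)

def note_is_real(claimed_preimage: str) -> bool:
    """Accept a note as real iff it contains a preimage of the published hash."""
    return hashlib.sha256(claimed_preimage.encode("utf-8")).hexdigest() == published_hash

# Later, anyone can verify a note's claimed preimage against the public commitment,
# without the secret ever having been revealed in advance.
print(note_is_real("correct horse battery staple"))  # True
print(note_is_real("some guess"))                    # False
```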
Let me re-ask a subset of the question that doesn't use the word "lie". When he convinced you not to mention Olivia, if you had known that he had also been trying to keep information about Olivia's involvement in related events siloed away (from whoever), would that have raised a red flag for you, like "hey, maybe something group-epistemically anti-truth-seeking is happening here"? Such that, e.g., it might have tilted you toward a different decision. I ask because it seems like relevant debugging info.
I admit this was a biased omission, though I don't think it was a lie
Would you acknowledge that if JDP did this a couple times, then this is a lie-by-proxy, i.e. JDP lied through you?
That's a big question, like asking a doctor "how do you make people healthy?", except I'm not a doctor and, metaphorically, there's basically no medical science. My literal answer is "make smarter babies" https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods , but I assume you mean augmenting adults using computer software. For the latter: the only thing I think I know is that you'd have to do all of the following steps, in order:
Yes, but this also happens within one person over time, and the habit (of either investing, or not, in long-term costly high-quality efforts) can gain steam in that one person.
If you keep updating such that you always "think AGI is <10 years away", then you will never work on things that take longer than 15 years to help. This is absolutely a mistake, and it should at least be corrected after the first round of "let's not work on things that take too long because AGI is coming in the next 10 years". I will definitely be collecting my Bayes points: https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce
I have been very critical of cover-ups on LessWrong. I'm not going to name names, and maybe you don't trust me, but I have observed all of this directly.
Can you give whatever more information you can, e.g. to help people know whether you're referring to the same or different events that they already know about? E.g., are you talking about things that have already been mentioned on the public internet? What time period(s) did the events you're talking about happen in?
In theory, possibly, but it's not clear how to save the world given such restricted access. See e.g. https://www.lesswrong.com/posts/NojipcrFFMzNx6Grc/sudo-s-shortform?commentId=onKfTrunn2Q2Gc4Pw
In practice, no, because you can't deal with a superintelligence safely. E.g.
Less concerned about PR risks than most funders
Just so it's said somewhere, LTFF is probably still too concerned with PR. (I don't necessarily mean that people working at LTFF are doing something wrong / making a mistake. I don't have enough information to make a guess like that. E.g., they may be constrained by other people, etc. Also, I don't claim there's another major grant maker that's less constrained like this.) What I mean is, there are probably projects that are feasibly-knowably good but that LTFF can't/won't fund because of PR. So for funders with especially high tolerance for PR risk, and/or the ability and interest to investigate PR risks that seem bad from far away, I would recommend against LTFF, in favor of making more specific use of that special status, unless you truly don't have the bandwidth to do so, even by delegating.
FWIW I agree that personality traits are important. A clear case is that you'd want to avoid combining very low conscientiousness with very high disagreeableness, because that combination is something like antisocial personality disorder. But you don't want to just select against those traits, because weaker forms might be associated with creative achievement. However, IQ, and more broadly cognitive capacity / problem-solving ability, will not become much less valuable soon.