TsviBT

Comments

TsviBT83

FWIW I agree that personality traits are important. A clear case is that you'd want to avoid combining very low conscientiousness with very high disagreeableness, because that combination is something like antisocial personality disorder. But you don't want to just select against those traits, because weaker forms might be associated with creative achievement. However, IQ, and more broadly cognitive capacity / problem-solving ability, will not become much less valuable soon.

TsviBT140

You can publish it, including the output of a standard hash function applied to the secret password. "Any real note will contain a preimage of this hash."
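A minimal sketch of that commitment scheme in Python, using SHA-256 as the "standard hash function" (the secret passphrase and note text below are made-up placeholders):

```python
import hashlib

# Secret passphrase known only to the author (placeholder value).
SECRET = "hunter2-example-passphrase"

# Publish this hash alongside the statement "any real note will contain
# a preimage of this hash"; the hash itself reveals nothing about SECRET.
COMMITMENT = hashlib.sha256(SECRET.encode("utf-8")).hexdigest()
print("published hash:", COMMITMENT)

def verify(claimed_preimage: str) -> bool:
    """Check whether a string included in a later note hashes to the published commitment."""
    return hashlib.sha256(claimed_preimage.encode("utf-8")).hexdigest() == COMMITMENT

print(verify("hunter2-example-passphrase"))  # True: the note contains the preimage
print(verify("some other string"))           # False: a forgery can't match the hash
```

Since only the hash is published in advance, a forger has no feasible way to produce a matching preimage, while the real author can always include it later.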

TsviBT41

Let me re-ask a subset of the question that doesn't use the word "lie". When he convinced you not to mention Olivia, if you had known that he had also been trying to keep information about Olivia's involvement in related events siloed away (from whoever), would that have raised a red flag for you, like "hey, maybe something group-epistemically anti-truth-seeking is happening here"? Such that, e.g., it might have tilted you toward a different decision. I ask because it seems like relevant debugging info.

TsviBT61

"I admit this was a biased omission, though I don't think it was a lie"

Would you acknowledge that if JDP did this a couple times, then this is a lie-by-proxy, i.e. JDP lied through you?

TsviBT60

That's a big question, like asking a doctor "how do you make people healthy", except I'm not a doctor and there's basically no medical science, metaphorically. My literal answer is "make smarter babies" https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods , but I assume you mean augmenting adults using computer software. For the latter: the only thing I think I know is that you'd have to do all of the following steps, in order:

  0. Become really good at watching your own thinking processes, including/especially the murky / inexplicit / difficult / pretheoretic / learning-based parts.
  1. Become really really good at thinking. Like, publish technical research that many people acknowledge is high quality, or something like that (maybe without the acknowledgement, but good luck self-grading). Apply 0.
  2. Figure out what key processes from 1. could have been accelerated with software.

TsviBT51

Yes, but this also happens within one person over time, and the habit (of either investing, or not, in long-term costly high-quality efforts) can gain steam in the one person.

TsviBT60

If you keep updating such that you always "think AGI is <10 years away", then you will never work on things that take longer than 15 years to help. This is absolutely a mistake, and it should at least be corrected after the first round of "let's not work on things that take too long because AGI is coming in the next 10 years". I will definitely be collecting my Bayes points: https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce
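A toy sketch of that failure mode in Python, with all the year numbers made up for illustration:

```python
# Toy model: every year you believe "AGI is < 10 years away", and you refuse
# to start any project whose payoff takes longer than your current timeline.
BELIEVED_TIMELINE = 10   # years until AGI, as believed every single year
PROJECT_DURATION = 15    # years a long-horizon project needs to pay off
ACTUAL_AGI_YEAR = 30     # when AGI actually arrives in this toy world

started = []
for year in range(ACTUAL_AGI_YEAR):
    if PROJECT_DURATION <= BELIEVED_TIMELINE:
        started.append(year)

# The project would have finished with years to spare (15 < 30), but under
# the perpetually short timeline it is never started.
print("long-horizon projects started:", started)  # -> []
```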

TsviBT50

"I have been very critical of cover ups in lesswrong. I'm not going to name names and maybe you don't trust me. But I have observed this all directly"

Can you give whatever more information you can, e.g. to help people know whether you're referring to the same or different events that they already know about? E.g., are you talking about things that have already been mentioned on the public internet? What time period/s did the events you're talking about happen in?

Answer by TsviBT70

In theory, possibly, but it's not clear how to save the world given such restricted access. See e.g. https://www.lesswrong.com/posts/NojipcrFFMzNx6Grc/sudo-s-shortform?commentId=onKfTrunn2Q2Gc4Pw

In practice no, because you can't deal with a superintelligence safely. E.g.

  • You can't build a computer system that's robust to auto-exfiltration. I mean, maybe you can, but you're taking on a whole bunch more cost, and also hoping you didn't screw up.
  • You can't develop this tech without other people stealing it and running it unsafely.
  • You can't develop this tech safely at all, because in order to develop it you have to do a lot more than just get a few outputs, you have to, like, debug your code and stuff.
  • And so forth. Mainly and so forth.

TsviBT119

"Less concerned about PR risks than most funders"

Just so it's said somewhere, LTFF is probably still too concerned with PR. (I don't necessarily mean that people working at LTFF are doing something wrong / making a mistake. I don't have enough information to make a guess like that. E.g., they may be constrained by other people, etc. Also, I don't claim there's another major grantmaker that's less constrained like this.) What I mean is, there are probably projects that are feasibly-knowably good but that LTFF can't/won't fund because of PR. So for funders with especially high tolerance for PR risk and/or ability / interest in investigating PR risks that seem bad from far away, I would recommend against LTFF, in favor of making more specific use of that special status, unless you truly don't have the bandwidth to do so, even by delegating.
