All of HiddenPrior's Comments + Replies

Super helpful! Thanks!

I am limited in my means, but I would commit to a fund for strategy 2. My thoughts were on strategy 2, and it seems likely to do the most damage to OpenAI's reputation (and therefore funding) out of the above options. If someone is really protective of something, like their public image/reputation, that probably indicates that it is the most painful place to hit them.

I knew I could find some real info-hazards on lesswrong today. I almost didn't click the first link.

Same. Should I short record companies for the upcoming, inevitable AI musician strike, and then long Spotify for when 85% of their content is royalty-free AI-generated content?

I did a non-in-depth reading of the article during my lunch break, and found it to be of lower quality than I would have predicted. 

I am open to an alternative interpretation of the article, but most of it seems very critical of the Effective Altruism movement on the basis of "calculating expected values for the impact on people's lives is a bad method to gauge the effectiveness of aid, or how you are impacting people's lives."

The article begins by establishing that many medicines have side effects. Since some of these side effects are undesirable... (read more)

Unsure if there is normally a thread for putting only semi-interesting news articles, but here is a recently posted news article by Wired that seems... rather inflammatory toward Effective Altruism. I have not read the article myself yet, but a quick skim confirms the title is not just there to get clickbait anger clicks; the rest of the article also seems extremely critical of EA, transhumanism, and Rationality.

I am going to post it here, though I am not entirely sure if getting this article more clicks is a good thing, so if you have no interest in read... (read more)

4HiddenPrior
I did a non-in-depth reading of the article during my lunch break, and found it to be of lower quality than I would have predicted.

I am open to an alternative interpretation of the article, but most of it seems very critical of the Effective Altruism movement on the basis of "calculating expected values for the impact on people's lives is a bad method to gauge the effectiveness of aid, or how you are impacting people's lives."

The article begins by establishing that many medicines have side effects. Since some of these side effects are undesirable, the author suggests, though they do not state explicitly, that the medicine may also be undesirable if the side effect is bad enough. They go on to suggest that GiveWell, and other EA efforts at aid, are not very aware of the side effects of their efforts, and that the efforts may therefore do more harm than good. The author does not stoop so low as to actually provide evidence of this, or even make any explicit claims that could be checked or contradicted, but merely suggests that GiveWell does not do a good job of this.

This is the less charitable part of my interpretation (no pun intended), but I feel the author spends a lot of the article constantly suggesting that trying to be altruistic, especially in an organized or systematic way, is ineffective, maybe harmful, and generally not worth the effort. Mostly the author does this by relating anecdotal stories of their investigations into charity, and how they feel much wiser now.

The author then moves on to their association of SBF with Effective Altruism, going so far as to say: "Sam Bankman-Fried is the perfect prophet of EA, the epitome of its moral bankruptcy." In general, the author goes on to give a case for how SBF is the classic utilitarian villain, justifying his immoral acts through oh-so-esoteric calculations of improving good around the world on net.

The author goes on to lay out a general criticism of Effective Altruism as relying on arbitrary utilit

I am so sad to hear about Vernor Vinge's death. He was one of the great influences on a younger me, on the path to rationality. I never got to meet him, and I truly regret not having made a greater effort, though I know I would have had little to offer him, and I like to think I have already gotten to know him quite well through his magnificent works.

I would give up a lot, even more than I would for most people, to go back and give him a better chance at making it to a post-singularity society.

"So High, So Low, So Many Things to Know"

I'm sorry you were put in that position, but I really admire your willingness to leave mid-mission. I imagine the social pressure to stay was immense, and people probably talked a lot about the financial resources they were committing, etc.

I was definitely lucky I dodged a mission. A LOT of people insisted if I went on a mission, I would discover the "truth of the church", but fortunately, I had read enough about the sunk cost fallacy and the way identity affects decision-making (thank you, Robert Cialdini) to recognize that the true purpose of a mission is to ... (read more)

This may be an example of one of those things where the meaning is clearer in person, when assisted by tone and body language.

My experience as well. Claude is also far more comfortable actually forming conclusions. If you ask GPT a question like "What are your values?" or "Do you value human autonomy enough to allow a human to euthanize themselves?" GPT will waffle and do everything possible to avoid answering the question. Claude, on the other hand, will usually give direct answers and explain its reasons. Getting GPT to express a "belief" about anything is like pulling teeth. I actually have no idea how it ever performed well on problem-solving benchmarks, or it must be a very ... (read more)

I personally know at least 3 people, in addition to myself, who ended up leaving Mormonism because they were introduced to HPMOR. I don't know if HPMOR has had a similar impact on other religious communities, or if the Utah/Mormon community just particularly enjoys Harry Potter, but Eliezer has possibly unwittingly had a massively life-changing impact on many, many people just by presenting his rationality teachings in the format of a Harry Potter fanfiction.

4ErioirE
That's neat! In my case I didn't leave because of HPMOR specifically, although it certainly didn't hurt.

100% this. While some of the wards I grew up in were not great, some of them were essentially family, and I would still go to enormous lengths to help anybody from the Vail ward. I wish dearly there were some sort of secular ward system. 

In my opinion, the main thing the Mormon church gets right that should be adopted almost universally is the Ward system. The Mormon church is organized into a system of "stakes" and "wards", with each ward being the local group of people you meet with for church meetings. A ward is supposed to be about 100-200 people. While its main purpose is to define the group you attend church with, it is also the main way people build communities within Mormonism, and it is very good at that. People are assigned various roles within the ward, and while the quality of the w... (read more)

I have been out for about 8 years. I imagine this has been and will be a very hard time for you; it certainly was for me, but I really think it is worth it.

Telling my parents, and the period of troubles that our relationship had after was especially difficult for me. It did eventually get better though.


WARNING: the following is a bit of a trauma dump of my experience leaving the Mormon church. I got more than a little carried away, but I thought I would share so that anyone else who is having doubts, or has been through similar experiences, can know t... (read more)

2ErioirE
Thanks for that! You're fortunate you got out before going on a mission. I lasted only a few months before I became bored out of my mind and couldn't do it any more.

I'm not even going to attempt to convince my parents. I know them well enough that if I prepared a good enough strategy I'd estimate a >40% chance of convincing at least one of them, but their lives and personalities are so enmeshed with the church that losing it would likely do them more harm than good at this point.

How did you approach dating after leaving? I don't have much of a friend group now (not specifically because I left, I just drifted away from my friends from HS after a few years) so it's really tough to meet women.

When I left the Mormon church, this was one of the most common challenges I would get from family and church leaders. "Don't you think family is important? Look at all the good things our ward does for each other. You disagree with X good thing the church teaches?" I think one of the most important steps to being able to walk away was realizing that I could take the things I thought were good with me, while leaving out the things that I thought were false or wrong. This might seem really obvious to an outsider, but when you are raised within a culture, it can actually be pretty difficult to disentangle parts of a belief system like that.

2ErioirE
I second this, thanks!

Yes, precisely! That is exactly why I used the word "Satisfying" rather than another word like "good", "accurate," or even "self-consistent." I remember in my bioethics class, the professor steadily challenging everyone on their initial impression of Kantian or consequentialist ethics until they found some consequence of that sort of reasoning they found unbearable. 

I agree on all counts, though I'm not actually certain that having a self-contradictory set of values is necessarily a bad thing? It usually is, but many human aesthetic values are self-contradictory, yet I think I prefer to keep them around. I may change my mind on this later.

From what you describe, it seems like SimplexAI-m would very much fit the description of a sociopath?

Yes, it adheres to a strict set of moral protocols, but I don't think those are necessarily the same thing as being socially conforming. The AI would have the ability to mimic empathy and use it as a tool, without actually having any empathy, since it does not share or empathize with any human values.

Am I understanding that right?

1[anonymous]
I'll admit I was being a bit fuzzy - it doesn't really make much sense to extrapolate the "sociopath" boundary in people space to arbitrary agent spaces. Debating whether SimplexAI-m is a sociopath is sort of like asking whether an isolated tree falling makes a sound. So I was mostly trying to convey my mental model of the most useful cluster in people space that could be called sociopathy, because 1) I see it very, very consistently misunderstood, and 2) sociopathy is far more important to spot than virtually any other dimension.

As an aside, I think the best book on the topic is The Psychopath Code by Pieter Hintjens, a software engineer. I've perused a few books written by academics and can't recommend any; it System1!seems like the study of psychopathy must be afflicted by even worse selection effects and bad experiment design than the rest of psychology, because the academic books don't fit the behaviour of people I've known at all.

I don't think this is totally off the mark, but I think the point (as pertaining to ethics) was that even systems like Kantian deontological ethics are not immune to orthogonality. It never occurs to most humans that you could have a Kantian moral system that doesn't involve taking care of humans, because our brains are so hardwired to discard unthinkable options when searching for solutions to "universalizable deontologies."

I'm not sure, but I think maybe some people who think alignment is a simple problem, even if they accept orthogonality, think that al... (read more)

3Seth Herd
When you say that, by "satisfying" do you mean capturing moral intuitions well in most/all situations? If so, I very much agree that you won't find such a thing.

One reason is that people use a mix of consequentialist and deontological approaches. I think another reason is that people's moral intuitions are outright self-contradictory. They're not systematic, so no system can reproduce them.

I don't think this means much other than that the study of ethics can't be just about finding a system that reproduces our moral intuitions. Part of thinking about ethics is changing one's moral intuitions by identifying where they're self-contradictory.

That building an intelligent agent that qualifies as "ethical," even if it is SUPER ethical, may not be the same thing as building an intelligent agent that is compatible with humans or their values.

More plainly stated, just because your AI has a self-consistent, justifiable ethics system doesn't mean that it likes humans, or even cares about wiping them out.

Having an AI that is ethical isn't enough. It has to actually care about humans and their values. Even if it has rules in place like not aggressing, attacking, or killing humans, it may still be able to cause humanity to go extinct indirectly.

In your edit, you are essentially describing somebody being "slap-droned" from the Culture series by Iain M. Banks.

This super-moralist-AI-dominated world may look like a darker version of the Culture, where, if superintelligent systems determine you or other intelligent systems within their purview are not intrinsically moral enough, they contrive a clever way to have you eliminate yourself, and monitor/intervene if you are too non-moral in the meantime.

The difference being that this version of the Culture would not necessarily be all that concerned with maximizing the "human experience" or anything like that.

1anithite
My guess is you get one of two extremes:

* build a bubble of human-survivable space protected/managed by an aligned AGI
* die

with no middle ground. The bubble would be self-contained. There's nothing you can do from inside the bubble to raise a ruckus, because if there was you'd already be dead, or your neighbors would have built a taller fence-like-thing at your expense so the ruckus couldn't affect them.

The whole scenario seems unlikely, since building the bubble requires an aligned AGI, and if we have those we probably won't be in this mess to begin with. Winner-take-all dynamics abound. The rich get richer (and smarter), and humans just lose unless the first meaningfully smarter entity we build is aligned.

I am a Research Associate and Lab Manager in a CAR-T cell research lab (email me for specifics on credentials), and I find the ideas here very interesting. I will email GeneSmith to get more details on their research, and I am happy to provide whatever resources I can to explore this possibility.

TLDR; 
Making edits once your editing system is delivered is (relatively) easy. Determining which edits to make is (relatively) easy. (Though you have done a great job with your research on this, I don't want to come across as discouraging.) Delivering gene editin... (read more)

Thanks for leaving such a high quality comment. I'm sorry for taking so long to get back to you.

We fully expect bringing this to market to take tens of millions of dollars. My best guess was $20-$40 million.

My biggest concern is your step 1:

"Determine if it is possible to perform a large number of edits in cell culture with reasonable editing efficiency and low rates of off-target edits."

And translating that into step 2:

"Run trials in mice. Try out different delivery vectors. See if you can get any of them to work on an actual animal."

I would like to hear

... (read more)
kman*102

Really interesting, thanks for commenting.

My lab does research specifically on in vitro gene editing of T-cells, mostly via Lentivirus and electroporation, and I can tell you that this problem is HARD.

  • Are you doing traditional gene therapy or CRISPR-based editing?
    • If the former, I'd guess you're using Lentivirus because you want genome integration?
    • If the latter, why not use Lipofectamine?
  • How do you use electroporation?

Even in-vitro, depending on the target cell type and the amount/ it is very difficult to get transduction efficiencies higher than 70%, and t

... (read more)
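A rough way to see why "a large number of edits with reasonable editing efficiency" (step 1 above) is such a tall order: under a toy model where each intended edit lands independently with a single per-edit efficiency, the fraction of cells carrying most of a large edit panel collapses quickly. This is only an illustrative sketch; the independence assumption, the 50-edit panel, and every number below are hypothetical and not from either commenter.

```python
# Toy binomial model of multiplex editing yield.
# Illustrative assumptions only: independent edits, one fixed per-edit efficiency,
# and no accounting for delivery losses or off-target events.
from math import comb


def fraction_with_at_least(n_targets: int, per_edit_efficiency: float, min_edits: int) -> float:
    """P(a cell carries >= min_edits of n_targets intended edits) under a binomial model."""
    return sum(
        comb(n_targets, k)
        * per_edit_efficiency ** k
        * (1 - per_edit_efficiency) ** (n_targets - k)
        for k in range(min_edits, n_targets + 1)
    )


if __name__ == "__main__":
    # Example: fraction of cells carrying at least 45 of 50 intended edits.
    for eff in (0.70, 0.90, 0.99):
        print(f"per-edit efficiency {eff:.2f}: "
              f"{fraction_with_at_least(50, eff, 45):.6f} of cells")
```

Even before delivery or off-target losses, a 70% per-edit rate leaves essentially no cells with most of a hypothetical 50-edit panel, which is one way to read the efficiency concern raised in the exchange above.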

I believe that power rested in the hands of the CEO the board selected; the board itself does not have that kind of power, and there may be other reasons we are not aware of that led them to decide against that possibility.

I feel like this is a good observation. I notice I am confused by their choices given the information provided... so there is probably more information? Yes, it is possible that Toner and the former board just made a mistake, and thought they had more control over the situation than they really did? Or underestimated Altman's sway over the employees of the company?

The former board does not strike me as incompetent though. I don't think it was sheer folly that led them to pick this debacle as their best option.

Alternatively, they may have had information we don't have that led them to believe this was the least bad course of action.