I am limited in my means, but I would commit to a fund for strategy 2. My thoughts were on strategy 2 as well; it seems likely to do the most damage to OpenAI's reputation (and therefore funding) out of the options above. If someone is really protective of something, like their public image/reputation, that probably indicates that it is the most painful place to hit them.
I knew I could find some real info-hazards on lesswrong today. I almost didn't click the first link.
Same. Should I short record companies for the upcoming inevitable AI musician strike, and then long Spotify for when 85% of their content is royalty-free AI-generated content?
I did a non-in-depth reading of the article during my lunch break, and found it to be of lower quality than I would have predicted.
I am open to an alternative interpretation of the article, but most of it seems very critical of the Effective Altruism movement on the basis of "calculating expected values for the impact on people's lives is a bad method to gauge the effectiveness of aid, or how you are impacting people's lives."
The article begins by establishing that many medicines have side effects. Since some of these side effects are undesirable...
Unsure if there is normally a thread for posting only semi-interesting news articles, but here is a recently posted news article by Wired that seems... rather inflammatory toward Effective Altruism. I have not read the article myself yet, but a quick skim confirms the title is not just there for clickbait anger clicks; the rest of the article also seems extremely critical of EA, transhumanism, and Rationality.
I am going to post it here, though I am not entirely sure if getting this article more clicks is a good thing, so if you have no interest in read...
I am so sad to hear about Vernor Vinge's death. He was one of the great influences on a younger me, on the path to rationality. I never got to meet him, and I truly regret not having made a greater effort, though I know I would have had little to offer him, and I like to think I have already gotten to know him quite well through his magnificent works.
I would give up a lot, even more than I would for most people, to go back and give him a better chance at making it to a post-singularity society.
"So High, So Low, So Many Things to Know"
I'm sorry you were put in that position, but I really admire your willingness to leave mid-mission. I imagine the social pressure to stay was immense, and people probably talked a lot about the financial resources they were committing, etc.
I was definitely lucky I dodged a mission. A LOT of people insisted that if I went on a mission, I would discover the "truth of the church", but fortunately, I had read enough about the sunk cost fallacy and the way identity affects decision-making (thank you, Robert Cialdini) to recognize that the true purpose of a mission is to ...
This may be an example of one of those things where the meaning is clearer in person, when assisted by tone and body language.
My experience as well. Claude is also far more comfortable actually forming conclusions. If you ask GPT a question like "What are your values?" or "Do you value human autonomy enough to allow a human to euthanize themselves?" GPT will waffle and do everything possible to avoid answering the question. Claude, on the other hand, will usually give direct answers and explain its reasons. Getting GPT to express a "belief" about anything is like pulling teeth. I actually have no idea how it ever performed well on problem-solving benchmarks, or it must be a very ...
I personally know at least 3 people, in addition to myself, who ended up leaving Mormonism because they were introduced to HPMOR. I don't know if HPMOR has had a similar impact on other religious communities, or if the Utah/Mormon community just particularly enjoys Harry Potter, but Eliezer has possibly unwittingly had a massively life-changing impact on many, many people just by presenting his rationality teachings in the format of a Harry Potter fanfiction.
100% this. While some of the wards I grew up in were not great, some of them were essentially family, and I would still go to enormous lengths to help anybody from the Vail ward. I wish dearly there were some sort of secular ward system.
In my opinion, the main thing the Mormon church gets right that should be adopted almost universally is the Ward system. The Mormon church is organized into a system of "stakes" and "wards", with each ward being the local group of people you meet with for church meetings. A ward is supposed to be about 100-200 people. While its main purpose is to define the group of people you attend church with, it is also the main way people build communities within Mormonism, and it is very good at that. People are assigned various roles within the ward, and while the quality of the w...
I have been out for about 8 years. I imagine this has been and will be a very hard time for you; it certainly was for me, but I really think it is worth it.
Telling my parents, and the troubled period our relationship went through afterward, was especially difficult for me. It did eventually get better, though.
WARNING, the following is a bit of a trauma dump of my experience leaving the Mormon church. I got more than a little carried away, but I thought I would share so that anyone else who is having doubts, or has been through similar experiences, can know t...
When I left the Mormon church, this was one of the most common challenges I would get from family and church leaders. "Don't you think family is important? Look at all the good things our ward does for each other? You disagree with X good thing the church teaches?" I think one of the most important steps to being able to walk away was realizing that I could take the things I thought were good with me, while leaving out the things that I thought were false or wrong. This might seem really obvious to an outsider, but when you are raised within a culture, it can actually be pretty difficult to disentangle parts of a belief system like that.
Yes, precisely! That is exactly why I used the word "Satisfying" rather than another word like "good", "accurate," or even "self-consistent." I remember in my bioethics class, the professor steadily challenging everyone on their initial impression of Kantian or consequentialist ethics until they found some consequence of that sort of reasoning they found unbearable.
I agree on all counts, though I'm not actually certain that having a self-contradictory set of values is necessarily a bad thing? It usually is, but many human aesthetic values are self-contradictory, yet I think I prefer to keep them around. I may change my mind on this later.
From what you describe, it seems like SymplexAI-m would very much fit the description of a sociopath?
Yes, it adheres to a strict set of moral protocols, but I don't think that is necessarily the same thing as being socially conforming. The AI would have the ability to mimic empathy and use it as a tool without actually having any, since it does not share or empathize with any human values.
Am I understanding that right?
I don't think this is totally off the mark, but I think the point (as pertaining to ethics) was that even systems like Kantian deontological ethics are not immune to orthogonality. It never occurs to most humans that you could have a Kantian moral system that doesn't involve taking care of humans, because our brains are so hardwired to discard unthinkable options when searching for solutions to "universalizable deontologies."
I'm not sure, but I think maybe some people who think alignment is a simple problem, even if they accept orthogonality, think that al...
That building an intelligent agent that qualifies as "ethical," even if it is SUPER ethical, may not be the same thing as building an intelligent agent that is compatible with humans or their values.
More plainly stated, just because your AI has a self-consistent, justifiable ethics system doesn't mean that it likes humans, or even cares whether it wipes them out.
Having an AI that is ethical isn't enough. It has to actually care about humans and their values. Even if it has rules in place like not aggressing, attacking, or killing humans, it may still be able to cause humanity to go extinct indirectly.
In your edit, you are essentially describing somebody being "slap-droned" from the Culture series by Iain M. Banks.
This super-moralist-AI-dominated world may look like a darker version of the Culture, where, if superintelligent systems determine that you or other intelligent systems within their purview are not intrinsically moral enough, they contrive a clever way to have you eliminate yourself, and monitor/intervene if you are too non-moral in the meantime.
The difference being that this version of the Culture would not necessarily be all that concerned with maximizing the "human experience" or anything like that.
I am a Research Associate and Lab Manager in a CAR-T cell research lab (email me for credential specifics), and I find the ideas here very interesting. I will email GeneSmith to get more details on their research, and I am happy to provide whatever resources I can to explore this possibility.
TLDR;
Making edits once your editing system is delivered is (relatively) easy. Determining which edits to make is (relatively) easy. (Though you have done a great job with your research on this, I don't want to come across as discouraging.) Delivering gene editin...
Thanks for leaving such a high quality comment. I'm sorry for taking so long to get back to you.
We fully expect bringing this to market to take tens of millions of dollars. My best guess was $20-$40 million.
My biggest concern is your step 1:
"Determine if it is possible to perform a large number of edits in cell culture with reasonable editing efficiency and low rates of off-target edits."
And translating that into step 2:
"Run trials in mice. Try out different delivery vectors. See if you can get any of them to work on an actual animal."
...I would like to hear
Really interesting, thanks for commenting.
My lab does research specifically on in vitro gene editing of T-cells, mostly via Lentivirus and electroporation, and I can tell you that this problem is HARD.
...Even in-vitro, depending on the target cell type and the amount/ it is very difficult to get transduction efficiencies higher than 70%, and t
I believe that power rested in the hands of the CEO the board selected; the board itself does not have that kind of power, and there may be other reasons we are not aware of that led them to decide against that possibility.
I feel like this is a good observation. I notice I am confused by their choices given the information provided... so there is probably more information? Yes, it is possible that Toner and the former board just made a mistake and thought they had more control over the situation than they really did? Or underestimated Altman's sway over the employees of the company?
The former board does not strike me as incompetent, though. I don't think it was sheer folly that led them to pick this debacle as their best option.
Alternatively, they may have had information we don't, which led them to believe that this was the least bad course of action.
Super helpful! Thanks!