Our experience has been that talking to Congressional staffers about a ban or pause on superintelligence research tends to result in blank stares and a rapid end to the meeting. [...] A global moratorium [....] we don’t see anything that we can do to help make that happen.
Ok. Thank you for the info. Would you speculate a bit about what might change this, that other people might be able to do? E.g. what number of call-ins to their offices from constituents, or how many protests, or what industry testimony, or how much campaign funding, etc.
Fair question -- although it would be speculation on my part, since we haven't been actively studying attitudes toward a moratorium. You might do better to ask Pause AI. That said, I would think you'd need something on the scale of the Vietnam War protests to get a blanket unilateral moratorium on advanced AI inside the Overton window -- protests large enough to generate major headlines on most days that continue for months on end.
If you can get very credible evidence that the UK, the EU, and China would all agree to join such a moratorium, then a sm...
I mean, I'm not familiar with the whole variety of different ways and reasons that people attack other people as "racist". I'm just saying that only saying true statements is not conclusive evidence that you're not a racist, or that you're not having the effect of supporting racist coalitions. I guess this furthermore implies that it can be justified to attack Bob even if Bob only says true statements, assuming it's sometimes justified to attack people for racist action-stances, apart from any propositional statements they make--but yeah, in that case you'd have to attack Bob for something other than "Bob says false statements", e.g. "Bob implicitly argues for false statements via emphasis" or "Bob has bad action-stances".
The term "racist" usually carries the implication or implicature of an attitude that is merely based on an irrational prejudice, not an empirical hypothesis with reference to a significant amount of statistical and other evidence.
It is also possible that Bob is racist in the sense of successfully working to cause unjust ethnic conflict of some kind, but also Bob only says true things. Bob could selectively emphasize some true propositions and deemphasize others. The richer the area, the more you can pick and choose, and paint a more and more outrage-ind...
It is also possible that Bob is racist in the sense of successfully working to cause unjust ethnic conflict of some kind, but also Bob only says true things. Bob could selectively emphasize some true propositions and deemphasize others.
Sure, though this is equally possible for the opposite: When Alice is shunning or shaming or cancelling people for expressing or defending a taboo hypothesis, without her explicitly arguing that the hypothesis is false or disfavored by the evidence. In fact, this is usually much easier to do than the former, since defendi...
See Jessica's comment. Yeah, it's primitive recursive assuming that your deductive process is primitive recursive. (Also assuming that your traders are primitive recursive; e.g. if they are polytime as in the paper.) There are probably some other parameters not necessarily set in the implementation described in the paper, e.g. the enumerator of trader-machines, but you can make those primrec.
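For concreteness, here's a minimal toy sketch (my own illustration, not anything from the paper) of what I mean by making such an enumerator primrec: dovetail over (trader index, step budget) pairs using only bounded loops, assuming the traders themselves are given by some primitive recursive indexing.

```python
# Toy sketch: a dovetailing schedule over (trader index, step budget) pairs,
# written with bounded for-loops only (no unbounded search), which is the usual
# way to keep an enumeration primitive recursive. The traders themselves are
# assumed to come from some primrec indexing i -> trader_i elsewhere.

def trader_schedule(n):
    """Return the first n (trader_index, step_budget) pairs, walked by diagonals."""
    schedule = []
    for diagonal in range(n):          # diagonals i + b = 0, 1, 2, ...
        for i in range(diagonal + 1):
            if len(schedule) < n:
                schedule.append((i, diagonal - i))
    return schedule

# e.g. trader_schedule(5) == [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1)]
```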
I wish more people were interested in lexicogenesis as a serious/shared craft. See:
The possible shared Craft of deliberate Lexicogenesis: https://tsvibt.blogspot.com/2023/05/the-possible-shared-craft-of-deliberate.html (lengthy meditation--recommend skipping around; maybe specifically look at https://tsvibt.blogspot.com/2023/05/the-possible-shared-craft-of-deliberate.html#seeds-of-the-shared-craft)
Sidespeak: https://tsvibt.github.io/theory/pages/bl_25_04_25_23_19_30_300996.html
Tiny community: https://lexicogenesis.zulipchat.com/ Maybe it should be a discor...
These arguments are so nonsensical that I don't know how to respond to them without further clarification, and so far the people I've talked to about them haven't provided that clarification. "Programming" is not a type of cognitive activity any more than "moving your left hand in some manner" is. You could try writing out the reasoning, trying to avoid enthymemes, and then I could critique it / ask followup questions. Or we could have a conversation that we record and publish.
For its performances, current AI can pick up to 2 of 3 from:
AlphaFold's outputs are interesting and superhuman, but not general. Likewise other Alphas.
LLM outputs are a mix. There's a large swath of things that LLMs can do superhumanly, e.g. generating sentences really fast or various kinds of search. Search is, we could say, weakly novel in a sense; LLMs are superhumanly fast at doing a ...
I'd like to see more intellectual scenes that seriously think about AGI and its implications. There are surely holes in our existing frameworks, and it can be hard for people operating within them to spot. Creating new spaces with different sets of shared assumptions seems like it could help.
Absolutely not, no, we need much better discovery mechanisms for niche ideas that only isolated people talk about, so that the correct ideas can be formed.
Hm. I super like the notion and would like to see it implemented well. The very first example was bad enough to make me lose interest: https://russellconjugations.com/conj/1eaace137d74861f123219595a275f82 (Text from https://www.thenewatlantis.com/publications/the-anti-theology-of-the-body)
So I tried the same thing but with more surrounding text... and it was much better!... though not actually for the subset I'd already tried above. https://russellconjugations.com/conj/3a749159e066ebc4119a3871721f24fc
Yes, and this also applies to your version! For difficult or subtle thoughts, short sentences have to come strictly after the long sentences. If you're having enough such thoughts, it doesn't make sense to restrict long sentences out of communication channels; how else are you supposed to have the thoughts?
On second/third thought, I think you're making a good point, though also I think you're missing a different important point. And I'm not sure what the right answers are. Thanks for your engagement... If you'd be interested in thinking through this stuff in a more exploratory way on a recorded call to be maybe published, hopefully I'll be set up for that in a week or two, LMK.
On the "self-governing" model, it might be that the blind community would want to disallow propagating blindness, while the deaf community would not disallow it:
https://pmc.ncbi.nlm.nih.gov/articles/PMC4059844/
Judy: And how’s… I mean, I know we’re talking about the blind community now, but in a DEAF (person’s own emphasis) community, some deaf couples are actually disappointed when they have an able bodied… child.
William: I believe that’s right.
Paul: I think the majority are.
Judy: Yes. Because then …
Margaret: Do they?
Judy: Oh, yes! It’s well known down at ...
you are influencing them at the stage of being an embryo
I'm mainly talking about engineering that happens before the embryo stage.
That's just not a morally coherent distinction, nor is it one the law makes
Of course it's one the law makes. IIUC it's not even illegal for a pregnant woman to drink alcohol.
If you want to start a campaign to legalize the blinding of children, well, we have a free speech clause, you are entitled to do that. Have you considered maybe doing it separately from the genetic engineering thing?
I can't tell if you're strawman...
Whether you do it by genetic engineering or surgically or through some other means is entirely beside the point. Genetic engineering isn't special.
I'm not especially distinguishing the methods, I'm mainly distinguishing whether it's being done to a living person. See my comment upthread https://www.lesswrong.com/posts/rxcGvPrQsqoCHndwG/the-principle-of-genomic-liberty?commentId=qnafba5dx6gwoFX4a
...We get genetic engineering by showing people that it is just another technology, and we can use it to do good and not evil, applying the same notions of good a
i think i'm in the wrong universe, can someone at tech support reboot the servers or something? it's not reasonable for you to screw up something as simple as putting paying customers in the right simulation. and then you're like "here's some picolightcones". if you actually cared it would be micro or at least nano
Yeah, cognitive diversity is one of those aspects that could be subject to some collapse. Anomaly et al.[1] discuss this, though ultimately suggest regulatory parsimony, which I'd take even further and enshrine as a right to genomic liberty.
I feel only sort-of worried about this, though. There's a few reasons (note: this is a biased list where I only list reasons I'm less worried; a better treatment would make the opposite case too, think about bad outcomes, investigate determinative facts, and then make judgements, etc.):
So I guess one direction this line of thinking could go is how we can get the society-level benefits of a cognitive diversity of minds without necessarily having cognitively-uneven kids grow up in pain.
Absolutely, yeah. A sort of drop-dead basic thing, which I suppose is hard to implement for some reason, is just not putting so much pressure on kids--or more precisely, not acting as though everything ought to be easy for every kid. Better would be skill at teaching individual kids by paying attention to the individual's shape of cognition. That's diffic...
IDK what to say... I guess I'm glad you're not in charge? @JuliaHP I've updated a little bit that AGI aligned to one person would be bad in practice lol.
I am the Law, the Night Watchman State, the protector of innocents who cannot protect themselves. Your children cannot prevent you from editing their genes in a way that harms them, but the law can and should.
I do think this is an interesting and important consideration here; possibly the crux is quite simply trust in the state, but maybe that's not a crux for me, not sure.
if we had a highly competent government that could be trusted to reasonably interpret the rules,
Yeah, if this is the sort of thing you're imagining, we're just making very different background assumptions here.
I don't think we have enough evidence to determine that removing the emotion of fear is "unambiguous net harm", but it would be prohibited under your "no removing a core aspect of humanity" exception.
Yeah, on a methodological level, you're trying to do a naive straightforward utilitarian consequentialist thing, maybe? And I'm like, this isn't ...
I'm genuinely unsure whether or not they would. Would be interesting to know.
One example, from "ASAN Statement on Genetic Research and Autism" https://autisticadvocacy.org/wp-content/uploads/2022/03/genetic-statement-recommendations.pdf :
...ASAN opposes germline gene editing in all cases. Germline gene editing is editing a person’s genes that they pass down to their children. We do not think scientists should be able to make gene edits that can be passed down to a person’s children. The practice could prevent future generations of people with any gene-relat
Yes, blind people are the experts here. If 95% of blind people wish they weren't blind, then (unless there is good reason to believe that a specific child will be in the 5%) gene editing for blindness should be illegal.
This is absolutely not what I'm suggesting. I'm suggesting (something in the genre of) the possibility that if 95% of blind people decide that gene editing for blindness should be illegal, then gene editing for blindness should be illegal. It's their autonomy that's at issue here.
Oh, I guess, why haven't I said this already: If you would, consider some trait that:
I'll go first:
In my case, besides being Jewish lol, I'm maybe a little schizoid, meaning I have trouble forming connections / I tend to not maintain friendships / I tend to keep people at a distance, in a systematic / intentional way, somewhat to my detriment. (If this is right, it's sort of mild or doesn't fully fit the wiki page, but still.) So let's say I'm...
I think "edited children will wish the edits had not been made" should be added to the list of exceptions.
So to be clear, your proposal is for people who aren't blind to decide what hypothetical future blind children will think of their parents' decisions, and that judgement should override the judgement of the blind parents themselves? This seems wild to me.
... Ok possibly I could see some sort of scheme where all the blind people get to decide whether to regulate genomic choice to make blind children? Haven't thought about this, but it seems pretty m...
I would prefer a "do no harm" principle
I'm still unclear how much we're talking past each other. In this part, are you suggesting this as law enforced by the state? Note that this is NOT the same as
For instance, humans near-universally value intelligence, happiness, and health. If an intervention decreases these, without corresponding benefits to other things humans value, then the intervention is unambiguously bad.
because you could have an intervention that does result in less happiness on average, but also has some other real benefit; but isn't th...
The topic has drifted from my initial point, which is that there exist some unambiguous "good" directions for genomes to go. After reading your proposed policy it looks like you concede this point, since you are happy to ban gene editing that causes severe mental disability, major depression, etc. Therefore, you seem to agree that going from "chronic severe depression" to "typical happiness set point" is an unambiguous good change. (Correct me if I am wrong here.)
I haven't thought through the policy questions at any great length. Actually, I made up all my...
To you, "the principle of genomic liberty" is the best policy
No! Happy to hear alternatives. But I do think it's better than "prospectively ban genomic choices for traits that our cost-benefit analysis said are bad". I think that's genuinely unjust, partly because you shouldn't be the judge of whether another person's way of being should exist.
...I think the future opinion of the gene-edited children is important. Suppose 99% of genetically deafened children are happy about being deaf as adults, but only 8% of genetically blinded children are happy about
I would be happy to accept "the principle of genomic liberty" over status-quo, since it is reasonably likely that lawmakers will create far worse laws than that.
Ok. Then I'm not sure we even disagree, though we might. If we do, it would be about "ideal policies". My post about Thurston (which was about as successful as I expected at making the point, which is to say, medium at best) is trying to strike some doubt in your heart about the ideal policy, because you don't know what it's like to be other people and you don't know what sort of weird ways they...
So it seems your argument is "even if all reasonable cost-benefit analyses agree, things are still ambiguous". Is that really your position?
Yeah. Well, we're being vague about "reasonable".
If by "reasonable" you mean "in practice, no one could, given a whole month of discussion, argue me into thinking otherwise", then I think it's still ambiguous even if all reasonable CBAs agree.
If by "reasonable" you mean "anyone of sound mind doing a CBA would come to this conclusion", then no, it wouldn't be ambiguous. But I also wouldn't say that it should be prote...
Down syndrome, or Tay-Sachs disease
I'd have to learn more, but many forms of these conditions (and therefore the condition simpliciter, prospectively) would probably prevent the child from expressing their state of wellbeing, through death or unsound mind. Therefore these would fall under the recognized permanent silencing exception to the principle of genomic liberty, and wouldn't be protected forms of propagation. Further, my impression is that living to adulthood with Tay-Sachs is quite rare; most people with Tay-Sachs variants wouldn't be passing on...
But really my objection is why the fuck would you think causality works like that?
Not sure why you're saying "causality" here, but I'll try to answer: I'm trying to construct an agreement between several parties. If the agreement is bad, then it doesn't and shouldn't go through, and we don't get germline engineering (or get a clumsy, rich-person-only version, or something).
Many parties have worries that route through game-theory-ish things like slippery slopes, e.g. around eugenics. If the agreement involves a bunch of groups having their reproduction m...
Why? A human body without a meaningful nervous system inside of it isn't a morally relevant entity, and it could be used to save people who are morally relevant.
Isn't that what I just said? Not sure whether or not we disagree. I'm saying that if you just stunt the growth of the prefrontal cortex, maybe you can argue that this makes the person much less conscious or something, but that's not remotely enough for this to not be abhorrent with a nonnegligible probability; but if you prevent almost all of the CNS from growing in the first place, maybe this i...
I predict any reasonable cost-benefit analysis will find that intelligence and health and high happiness-set-point are good, and blindness and dwarfism are bad.
This is irrelevant to what I'm trying to communicate. I'm saying that you should doubt your valuations of other people's ways of being--NOT so much that you don't make choices for your own children based on your judgements about what would be good for them and for the world, or advocate for others to do similarly, but YES so much that you hesitate quite a lot (like years, or "I'd have to deeply i...
Excellent! Thank you for researching and writing up this article.
A few notes, from my discussion with Morpheus:
Thanks.
If one is going to create an organ donor, removing consciousness and self-awareness seems essential.
If you can't do it without removing almost all of the nervous system, I think it would be bad!
These are all worth doing if we can figure out how.
Possibly. I think all your examples are quite alarming in part because they remove a core aspect. Possibly we could rightly decide to do some of them, but that would require much more knowledge and deliberation. More to the point: I'm not making a strong statement like "prohibit these uses". I'm a wea...
(BTW I think you asking about entanglement sequencing caused me to realize, a few days later, that for chromosome selection you can do at least index sensing by taking 1 chromosome randomly from a cell, then sequencing/staining the remaining 22 (or 45), and seeing which index is missing. So thanks :) )
IMO a not yet fully understood but important aspect of this situation is that what someone writes is in part testimony--they're asserting something that others may or may not be able to verify themselves easily, or even at all. This is how communication usually works, and it has goods (you get independent information) and bads (people can lie/distort/troll/mislead). If a person is posting AIgen stuff, it's much less so testimony from that person. It's more correlated with other stuff that's already in the water, and it's not revealing as much about the perso...
If there are some skilled/smart/motivated/curious ML people seeing this, who want to work on something really cool and/or that could massively help the world, I hope you'll consider reaching out to Tabula.
I chatted with Michael and Ammon. This made me somewhat more hopeful about this effort, because their plan wasn't on the less-sensible end of what I uncertainly imagined from the post (e.g. they're not going to just train a big very-nonlinear map from genomes to phenotypes, which by default would make the data problem worse not better).
I have lots of (somewhat layman) question marks about the plan, but it seems exciting/worth trying. I hope that if there are some skilled/smart/motivated/curious ML people seeing this, who want to work on something really co...
(Thank you, seems valuable!)