All of [deactivated]'s Comments + Replies

I wish you would have just directly made this post about this specific thing that happened rather than try to generalize from one example. Or found more examples to show a pattern we could engage with.

There are so, so many examples of things like this that happen all the time in the world. I used two hypothetical examples in the post. I thought that would suffice.

It's also false; there were lots of replies.


There were comments on Facebook, to be sure, but I never saw anyone (except me) reply to my comment here on LessWrong, even after (what felt like) several days.

For anyone curious, you can view the original comment here.

2Gordon Seidoh Worley
Thanks. I wish you would have just directly made this post about this specific thing that happened rather than try to generalize from one example. Or found more examples to show a pattern we could engage with. Now my read on your post is you just wanted to vent, which seems fine but I don't really want to read that in a LessWrong post. Seems better for Twitter or short form.

Also your example in this post is different than the thing that actually happened, though similar I guess?

I also don't totally understand why your comment was downvoted. Lacking context it seems like it was likely just irrelevant to the discussion, but then I would have expected most people to just ignore it. But then again people get pretty annoyed with language policing, especially when there's not serious confusion about what's being discussed or clear harms being caused (here the harm, as explained, seems indirect and speculative).

You might be surprised!

You might want to add (1.5) also evaluate whether what's going on is that some group of people wants to be referred to differently, and then (2') generally don't resist in that case even if no harm is apparent, because (a) maybe there's harm you haven't noticed and (b) giving people what they want is usually good. I'd certainly be on board with that. (I suspect Scott would too.)

I think this is pretty much my argument. I think Scott wouldn't agree because he wrote:

On the other hand, the people who want to be the first person in a new cascade, like USC’s soc

... (read more)
2gjm
I don't think those paragraphs indicate that Scott wouldn't agree. (I don't know for sure that he would agree either, but I don't think those paragraphs tell us much.)

The roots of "Black" go back further than 1966. For example, here are two excerpts from Martin Luther King's "I Have a Dream" speech in 1963 (emphasis mine):

When the architects of our republic wrote the magnificent words of the Constitution and the Declaration of Independence, they were signing a promissory note to which every American was to fall heir. This note was a promise that all men — yes, Black men as well as white men — would be guaranteed the unalienable rights of life, liberty and the pursuit of happiness.


And when this happens, and when we

... (read more)
2gjm
I wrote a lengthy reply, but I find that I also want to say something briefer. The specific claims you made in the great-grandparent of this comment were that "this [sc. previously inoffensive words becoming taboo] has never actually happened in history" and that "Words become taboo because they are used offensively". And the specific thing I challenged you on is whether that is in fact why "black" became taboo. (Of course on Scott's account the final stage of the process is "because they are used offensively". What you disagree with is whether that's how it starts.) Your comment doesn't offer any evidence that the switch from "negro" to "black" happened because "negro" was being used offensively. I would be very interested in evidence that it did.
2gjm
I don't think anyone is claiming that SC/KT invented the term "black"! (It goes back much much further than MLK in 1963. The earliest citation in the OED that's clearly basically the same usage as we have now is from 1667; there are others that might be basically the same usage as we have now from centuries before.) But I think it's generally held that it was SC/KT's activism beginning in 1966 that led to the change from "negro" being the usual term to "black" being the usual term.

I agree that the relevant question is something like "which was used more for racism". More precisely, something like "for which was the (racist use) / (non-racist use) ratio higher", or maybe something harder to express that gives more weight to more severely racist uses. (Consider The Word Which Is Not Exactly "Negro"; that may actually have a rather low racist/nonracist ratio because of its use within black communities and its extreme taboo-ness outside, but when someone uses it for racism they're probably being more drastically racist than, say, someone who is slightly less inclined to hire black candidates[1].)

[1] I'm not, to be clear, saying that the less-drastic racism doesn't matter. It may well, in the aggregate, be more of a problem than the more-drastic racism, if there's enough more of it. When I say things like "more severe" I am referring to the severity of one particular instance.

Do you have communicable reasons for your guess that "black" was less often used in a racist way? I don't have strong opinions on that point myself.

It doesn't seem at all true that "black" was a label of endogenous origin. White people were using "black" to talk about black people at least as far back as the 17th century, and I don't see any reason to think that they got that usage by listening to how black people talked about one another. Of course, once SC/KT persuaded a lot of people that they should use "black", it became a label of endogenous origin, not in the sense that the word or

I think this is the crux of the matter:

Was the process in this case a bad thing overall, as we should probably expect on Scott's model? (Bad: risk of mis-classifying people as racist whose only sin was not to adjust their language quickly enough; inconvenience during the transition; awkwardness after the transition of reading material written before it. Good: morale-boosting effects on black people of feeling that they were using a term of their own choosing and taking more control of their own destiny; if SC/KT was correct about "negro" bringing along unw

... (read more)
2gjm
I don't think Scott ever claims that changing the words we use for minority groups is a bad thing overall. His post is not only about changing the words for minority groups, and he explicitly says that the sort of change he's talking about sometimes happens for excellent reasons (he gives the example of how "Jap" became offensive in the 1950s).

I commented on Scott’s blog post with a link to this post.

The post makes the claim that hyperstitious cascades are bad, where previously innocent words that no one took offense to become taboo

A major claim I’m making is that this has never actually happened in history, and certainly not in any of the examples Scott uses. Words become taboo because they are used offensively.

2gjm
As mentioned in another comment, I think it's pretty plausible that various things in Scott's account of what happened with "negro" and "black" are wrong. But it doesn't currently look plausible to me that the switch between them happened because "negro" was being used offensively. Do you disagree? I don't have strong evidence and I think I could be readily persuaded if you happen to have some. I mostly think the switch wasn't caused by widespread offensive use of "negro" because none of the things I've seen written about it says anything of the kind.

To be clear, I'm sure that many racists used the term "negro" while being racist before 1966, just as many racists use the term "black" while being racist now. But I'm not aware of any reason to think that "negro" was preferentially used by racists before the n->b shift. (It may well be that racists did preferentially use "negro" rather than "black" in the later parts of the process, as a result of the mechanism Scott describes, but obviously that wouldn't be an argument against his position. So I take it you mean that there was widespread offensive use of "negro" before the switch from "negro" to "black", rather than only once the switch was underway as predicted by Scott's analysis.)

Out of the two options, this is closer to my view:

"this is completely and horribly incorrect in approach and model"

I think Scott’s model of how changes in the words we use for minority groups happen is just factually inaccurate and unrealistic. Changes are generally slow, gradual, long-lasting, and are primarily advocated for in good faith by conscientious members of the minority group in question.

My contention is that this model of the process is basically just wrong for the examples of minority group labels that have actually caught on.

There's a difference between "labels should never change" (which you say Scott is saying, but he isn't) and "the thing where a previously harmless label becomes offensive, not because it was really offensive all along, but because someone has decided to try to make it offensive and then there's a positive feedback loop, generally does more harm than good" (which he is saying).

Per (4) in the OP, I think this process that Scott describes is simply an incorrect model of why some words for minority groups come to be seen as derogatory and why the acceptable... (read more)

2gjm
My own cursory reading mostly leaves me aware that I don't really know a lot of important things about how the process happened.

It seems clearly correct that before about 1966 pretty much everyone, of every race and political persuasion, was using "Negro" as the default term. Scott says "fifty years ago" and I think he must have meant sixty (50 years ago was 1973, by which time the process was mostly complete) but otherwise I think he's plainly right about this.

It seems generally agreed that Stokely Carmichael / Kwame Ture was the key mover in getting "Negro" toppled and "black" replacing it, and that this process started in 1966 with his famous "Black Power" speech (in which I don't think he makes any particular argument about "black" versus "Negro", but he uses "black" throughout). In his book, SC/KT claims that "there is a growing resentment" of the term "Negro". Maybe that was true and he was more a symptom than a cause of the shift in preferences. Or maybe he just said it for the same reason as Donald Trump loves to say "a lot of people are saying ...".

It's hard to tell what the actual mechanism was. It must have been some combination of (1) people being convinced by SC/KT's arguments that the term "Negro" was "the invention of our oppressor" and therefore describes "_his_ image of us", and that "black" is therefore better; (2) black people trying out "black" and, separately from any arguments about the theoretical merits, just liking it better than "negro"; (3) people just imitating other people because that's a thing people do; (4) white people switching to "black" because (in reality and/or their perception) black people preferred it; (5a) people wanting to use a term that was increasingly a signifier of being in favour of social justice and civil rights; and (5b) people not wanting to use a term that was increasingly a signifier of not being in favour of social justice and civil rights. Scott's post is about both branches of mechanism 5. It's hard f

Potential solutions to foreseeable problems with biological superintelligence include: a) only upgrading particularly moral and trustworthy humans or b) ensuring that upgrading is widely accessible, so that lots of people can do it.

3mishka
b) does not solve it without a lot of successful work on multipolar safety (it's almost an equivalent of giving nuclear weapons to lots of people, making them widely accessible; and yes, giving them gain-of-function lab equipment too).

a) is indeed very reasonable, but we should keep in mind that an upgrade is a potentially stronger impact than any psychoactive drug, a potentially stronger impact than even the most radical psychedelic experiences. Here the usual "AI alignment problem" one is normally dealing with is replaced by the problem of conservation of one's values and character. In fact these problems are closely related.

The most intractable part of AI safety is what happens when an AI ecosystem starts to rapidly recursively self-improve, perhaps with significant acceleration. We might have current members of the AI ecosystem behave in a reasonably safe and beneficial way, but would future members (or the same members after they self-improve) behave safely, or would "a sharp left turn" happen?

Here it is the same problem for a rapidly improving and changing "enhanced human": would that person continue to maintain their original character and values while undergoing radical changes and enhancements, or would drastic new realizations (potentially more radical than any psychedelic revelations) lead to unpredictable revisions of that original character and values? It might be the case that it's easier to smooth these changes for a human (compared to an AI), but success is not automatic by any means.

I guess we could say governance remains a problem with biological superintelligence? As it does with normal humans, just more so.

5mishka
Yes, nevertheless the S-risk and X-risk problems don't go away. There are humans who like causing suffering. There are humans advocating for human extinction (and some of them might act on that given the capabilities). There are humans who are ready to fight wars with weapons which might cause extinction, or would be ready to undertake projects which might cause extinction or widespread suffering.

Stepping back, we know that Eliezer was very much against anthropomorphic superintelligences in 2011. He thought we needed much higher levels of safety ("provably friendly AI", which would not be possible with something as messy as human-like systems). Since then he strongly updated towards pessimism regarding our chances to create beneficial artificial superintelligence, and he arrived at the conclusion that our chances with biological superintelligence might be higher.

But it would be good to try to articulate the reasons why our chances with biological superintelligence might be higher. One aspect is that we do have an intuition that biology-based systems are likely to self-improve slower, and thus would have more time to ponder solutions to various issues as they get smarter. So they might be not superintelligent, but just very smart for quite a while, and during that period they would decide what to do next.

Another aspect is that biology-based systems are more likely to be automatically sentient, and their sentience is more likely to be at least somewhat similar to ours, and so even if things go badly initially, the chances of having a lot of value in the future lightcone are higher, because it is more likely that there would be first-person experiencers. But it would be good to pause and think whether we are sure.

Also, speaking of these devices: they can also lead to hybrid human-AI systems, and that might be a more technologically likely route. The hybrid system becomes smarter, both because of its biological part working better, but also because of a tigh

Beautifully written! Great job! I really enjoyed reading this story. 

in comparison to a morally purified version of SimplexAI, we might be the baddies."

Did you link to the wrong thing here or is there some reference to generative grammar I'm not getting?

2jessicata
Thanks! Good catch, will fix.

Most of the writing on ethics in our society and its cultural forebears over the last few millennia has been written by moral absolutists: people who believed that there is one, true, and correct set of ethics, and were either trying to figure out what it is, or more often thought they already have, and are now trying to persuade others.


This is not my understanding of moral absolutism. One definition from a University of Texas blog:

Moral absolutism asserts that there are certain universal moral principles by which all peoples’ actions may be judged.

... (read more)
3RogerDearnaley
On doing a little reading on the terminology, I believe you are correct. I was looking for a phrase that meant the opposite of relativism, and came up with absolutism. But it appears that moral philosophers generally use that in a way slightly different than I intended, and that what I actually meant was closer to what is generally called "moral universalism" (or sometimes "moral objectivism"). I have updated my post accordingly.

It seems too long for a comment. Also, it uses markdown formatting.

Again, that's not Scott's point. Scott is concerned about deliberate attempts to rapidly make a perfectly innocent word taboo, causing bother and potential ostracism to everyone for no reason, not the natural long-term evolution of words.

I don't think such an attempt has ever happened and succeeded. I'm open to counterexamples, though.

The problem isn't that most of the black people in the USA got together and said they prefer to be called black. It's that due to a single bad actor making up a fake history for an innocent word, lots of old grandpas get ostra

... (read more)

If someone in the US uses the word Jew and they're not obviously Jewish, they sound antisemitic.

This seems not universal and highly context-dependent.

Honestly, saying his examples ("asian" and "field work") are worse than yours ("black" and "gay") is very close to strawman arguing.

Well, my examples are both real and non-fringe, whereas "Asian" and "field work" are fictional and fringe, respectively. So, I think "gay" and "Black" are more central examples.

Scott also seems annoyed by "Black", but doesn't explain why he's (seemingly) annoyed.

There's a bit more here than I can readily respond to right now, but let me know if you think I've avoided the crux of the matter and you'd like me to address it in a future comment.

This doesn't apply to more central cases like "gay" and "Black".

Fair! I should have said 1,000 years to make the point more clear-cut.

It would be much more helpful if Scott used a real example rather than a fictional one. I don't think his fictional example is very realistic.

Thanks for posting this. I am still a bit fuzzy on what exactly the Superalignment plan is, or if there even is a firm plan at this stage. Hope we can learn more soon.

3mishka
I think they had a reasonably detailed (but unfortunately unrealistic) plan for aligning superintelligence before Ilya became a co-lead of the Superalignment team. That had been published, in multiple installments. The early July text https://openai.com/blog/introducing-superalignment was the last of those installments, and most of its technical content was pre-Ilya (as far as I knew), but it also introduced Ilya as a co-lead.

But the problem with most such alignment plans, including this one, had always been that they didn't have much chance of working for a self-improving superintelligent AI or ecosystem of AIs, that is, exactly when we start really needing them to work.

I think Ilya understood this very well, and he started to revise plans and to work in new directions in this sense, and we were seeing various bits of his thoughts on that in his various interviews (in addition to what he said here, one other motif he was returning to in recent months was that it is desirable that superintelligent AIs would think about themselves as something like parents, and about us as something like their children, so one of the questions is what should we do to achieve that).

But I don't know if he would want to publish details going forward (successful AI safety research is capability research, there is no way to separate them, and the overall situation might be getting too close to the endgame). He will certainly share something, but the core novel technical stuff will more and more be produced via intellectual collaboration with cutting-edge advanced (pre-public-release in-house) AI systems, and they would probably want to at least introduce a delay before sharing something as sensitive as this.
3Mitchell_Porter
Jan Leike is head of superalignment. He blogged about a version of Eliezer's CEV. 

I think that me not wearing shoes at university is evidence that I might also disdain sports, but not evidence that I might steal.


it is not actually the case that violating one specific social norm for a specific reason is a substantial update that someone is a Breaking Social Boundaries Type Pokemon in general.

If I can attempt to synthesize these two points into a single point: don't assume weird people are evil. 

If someone walks around barefoot in an urban environment, that's a good clue they might also be weird in other ways. But weird ≠ evil. ... (read more)

5Ben Pace
Here's two sentences that I think are both probably true.

1. In order to do what is right, at some point in a person's life they will have to covertly break certain widespread social norms.
2. Most people who covertly break widespread social norms are untrustworthy people.

(As a note on my epistemic state: I assign a higher probability to the first claim being true than the second.)

One of the things I read the OP as saying is "lots of widespread social norms are very poorly justified by using extreme cases and silencing all the fine cases (and you should fix this faulty reasoning in your own mind)". I can get behind this.

I think it's also saying "Most people are actually covertly violating widespread social norms in some way". I am genuinely much more confused about this. Many of the examples in the OP are more about persistent facts about people's builds (e.g. whether they have violent impulses or whether they are homosexual) than about their active choices (e.g. whether they carry out violence or whether they had homosexual sex).

For instance I find myself sympathetic to arguments where people say that many people would prefer to receive corporal punishment than be imprisoned for a decade, but if I were to find out that one particular prison was secretly beating the prisoners and then releasing them, I would be extremely freaked out by this. (This example doesn't quite make sense because that just isn't a state of affairs that you could keep quiet, but hopefully it conveys the gist of what I mean.)

Longtermism question: has anyone ever proposed a discount rate on the moral value of future lives? By analogy to discount rates used in finance and investing.

This could account for the uncertainty in predicting the existence of future people. Or serve as a compromise between views like neartermism and longtermism, or pro-natalism and anti-natalism.

2mako yass
The kinds of discount rates that you see in finance will imply that moral value goes to about zero after like 1000 years, which we basically know couldn't be true (total life tends to grow over time, not shrink). Discount rates are a pretty crude heuristic for estimating value over time and will be inapplicable to many situations.
5Jay Bailey
Yes, this is an argument people have made. Longtermists tend to reject it.

First off, applying a discount rate on the moral value of lives in order to account for the uncertainty of the future is... not a good idea. These two things are totally different, and shouldn't be conflated like that imo. If you want to apply a discount rate to account for the uncertainty of the future, just do that directly. So, for the rest of the post I'll assume a discount rate on moral value actually applies to moral value.

So, that leaves us with the moral argument. A fairly good argument, and the one I subscribe to, is this:

* Let's say we apply a conservative discount rate, say, 1% per year, to the moral value of future lives.
* Given that, one life now is worth approximately 500 million lives two millennia from now. (0.99^2000 = approximately 2e-9)
* But would that have been reasonably true in the past? Would it have been morally correct to save a life in 0 BC at the cost of 500 million lives today?
* If the answer is "no" to that, it should also be considered "no" in the present.

This is, again, different from a discount rate on future lives based on uncertainty. It's entirely reasonable to say "If there's only a 50% chance this person ever exists, I should treat it as 50% as valuable." I think that this is a position that wouldn't be controversial among longtermists.
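(For anyone who wants to check the arithmetic in the bullet points above, here is a minimal Python sketch. The 1% rate and the 2,000-year horizon are just the illustrative numbers from the comment, not anything canonical.)

```python
# Illustrative only: how a constant annual discount rate on moral value
# shrinks the weight of a future life as the time horizon grows.

def discounted_weight(rate: float, years: int) -> float:
    """Weight of one future life relative to one present life."""
    return (1 - rate) ** years

rate = 0.01    # the comment's example: 1% per year
years = 2000   # two millennia

w = discounted_weight(rate, years)
print(f"Relative weight after {years} years: {w:.2e}")           # ~1.9e-09
print(f"Future lives equal to one present life: {1 / w:,.0f}")   # ~5.4e8, i.e. roughly 500 million
```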

Now he’s free to run for governor of California in 2026:

I was thinking about it because I think the state is in a very bad place, particularly when it comes to the cost of living and specifically the cost of housing. And if that doesn’t get fixed, I think the state is going to devolve into a very unpleasant place. Like one thing that I have really come to believe is that you cannot have social justice without economic justice, and economic justice in California feels unattainable. And I think it would take someone with no loyalties to sort of very powerful

... (read more)

William Nordhaus estimates that firms recover maybe 2% of the value they create by developing new technologies.

Isn’t this the wrong metric? 2% of the value of a new technology might be a lot of money, far in excess of the R&D cost required to create it.
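(A toy calculation to make this concrete. The 2% capture rate comes from the Nordhaus estimate quoted above; the value and R&D figures are purely hypothetical.)

```python
# Toy illustration: a small share of a large amount of created value
# can still far exceed the R&D cost of creating it. Numbers are hypothetical.

value_created = 100e9   # hypothetical total value created by a new technology, in dollars
capture_rate = 0.02     # ~2% appropriated by the innovating firm (Nordhaus-style estimate)
rd_cost = 0.5e9         # hypothetical R&D cost, in dollars

captured = value_created * capture_rate
print(f"Captured: ${captured / 1e9:.1f}B vs. R&D cost: ${rd_cost / 1e9:.1f}B "
      f"({captured / rd_cost:.1f}x the R&D cost)")
# -> Captured: $2.0B vs. R&D cost: $0.5B (4.0x the R&D cost)
```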

I think you are way overestimating your ability to tell who is trans and way underestimating the ability of trans people to pass as cis. Sometimes, you just can’t tell.

5Valentine
While your point is technically true, it's not relevant here. Bezzi's point stands even if we just talk about trans folk whom most people can readily tell are trans.

What on Earth? Why does it require being “devious” to be in the closet? If you were given a choice between lifelong celibacy and loneliness, on the one hand, or, on the other hand, seriously endangering yourself, risking being imprisoned or institutionalized, and ruining your life (economically and socially) by having relationships and disclosing them, would it make you “devious” to choose a third option and keep your relationships secret?

Were Jews who hid from the Nazis “devious”? Were people who helped them hide “devious”? Only in a sense that drains the... (read more)

4Ben Pace
Appreciate the link. I'm updating from some of the people and their stories toward it not generally correlating with a broader disregard for decency to strategically break certain strongly enforced norms. I think I'm also substantially updating about how much homosexual recognition/acceptance there was in the early 1900s — there was a very successful theater production called The Captive about a lesbian that had famous actors and ~160 showings (until it was cancelled due to its subject being scandalous). Curious quote 8 mins into the documentary about Speakeasies. I'm not sure what to make of it directionally about rule-breakers at the time and how to update about their motives.

Most 'trans-women' I see online (in videos etc.) who explicitly identify themselves as such do not, to my eye, visually pass as women, even if I try to account for the fact that knowledge of their identity could skew my perception. It would be weird if every one of them I encounter IRL fell into the minority who clearly pass.

Bezzi also said "meet", not just "see", which implies talking to the person, and I find that the number of 'trans-women' I see online who have voices that pass is even smaller than the number who visually pass. It therefore seems extremely unlikely that hundreds of the people I've met in my city who I thought were women were actually trans-women.

2Bezzi
I can concede that maybe I've walked near trans passers-by who didn't obviously look like trans people, but I'm still confident that 100% of people I interacted with verbally more than once are not trans. I suppose that homosexuals could pass as straight more easily than trans could pass as cis, but I did meet gays and lesbians nonetheless (indeed, most of them don't obviously look like homosexuals).

To the best of my knowledge, there are no LGBT organizations in my town, but there are certainly some in the bigger city where my workplace is located. I've no doubt I would find a trans person there if I went looking. My point was that, despite having interacted with hundreds of people at this point, I've never met one by chance in the same way I met gays and lesbians.

He said:

At a high level you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models

What do you think he meant by "AlphaGo-type systems"? I could be wrong, but I interpreted that as a reference to RL.

3Seth Herd
I missed that. I agree that firmly implies the use of RL.

This seems super important to the argument! Do you know if it's been discussed in detail anywhere else?

We are on track to build many superhuman AI systems. Unless something unexpectedly good happens, eventually we will build one that has a failure of inner alignment. And then it will kill us all. Does the probability of any given system failing inner alignment really matter?

Yes, because if the first superhuman AGI is aligned, and if it performs a pivotal act to prevent misaligned AGI from being created, then we will avert existential catastrophe.

If there is a 99.99% chance of that happening, then we should be quite sanguine about AI x-risk. On the other hand, if there is only a 0.01% chance, then we should be very worried.

1Tapatakt
It's hard to guess, but it happened when the only general intelligence known to us was created by a hill-climbing process.

I have a question about "AGI Ruin: A List of Lethalities".

These two sentences from Section B.2 stuck out to me as the most important in the post:

...outer optimization even on a very exact, very simple loss function doesn't produce inner optimization in that direction.


...on the current optimization paradigm there is no general idea of how to get particular inner properties into a system, or verify that they're there, rather than just observable outer ones you can run a loss function over.

My question is: supposing this is all true, what is the probabil... (read more)

2Seth Herd
I think any reasonable estimate would be based on a more detailed plan: what types of rewards (loss function) we are providing, and what type of inner alignment we want.

My intuition roughly aligns with Eliezer's on this point: I doubt this will work. When I imagine rewarding an agent for doing things humans like, as indicated by smiles, thanks, etc., I have a hard time imagining that this just generalizes to an agent that does what we want, even in very different circumstances, including when it can relatively easily gain sovereignty and do whatever it wants.

Others have a different intuition. Buried in a comment somewhere from Quintin Pope, he says something to the effect of "shard theory isn't a new theory of alignment; it's the hypothesis that we don't need one". I think he and other shard theory optimists think it's entirely plausible that rewarding stuff we like will develop inner representations and alignment that's adequate for our purposes.

While I share Eliezer's and others' pessimism about alignment through pure RL, I don't share his overall pessimism. You've seen my alternate proposals for directly setting desirable goals out of an agent's learned knowledge.
2JBlack
It's almost certain in the narrow technical sense of "some difference no matter how small", and unknown (and currently undefinable) in any more useful sense.

I don't know if anyone still reads comments on this post from over a year ago. Here goes nothing.

I am trying to understand the argument(s) as deeply and faithfully as I can. These two sentences from Section B.2 stuck out to me as the most important in the post (from the point of view of my understanding):

...outer optimization even on a very exact, very simple loss function doesn't produce inner optimization in that direction.


...on the current optimization paradigm there is no general idea of how to get particular inner properties into a system, or ve

... (read more)
2Carl Feynman
Inner alignment failure is a phenomenon that has happened in existing AI systems, weak as they are.  So we know it can happen. We are on track to build many superhuman AI systems.  Unless something unexpectedly good happens, eventually we will build one that has a failure of inner alignment.  And then it will kill us all.  Does the probability of any given system failing inner alignment really matter?

There is a strong argument that the term is bad and misleading. I will concede that.

Wolfram's article is very confusing indeed.

Most important sentence:

A reward function reshapes an agent's cognition to be more like the sort of cognition that got rewarded in the training process.

Wow. That is a tremendous insight. Thank you.

On another topic: you quote Yudkowsky in 2008 expressing skepticism of deep learning. I remember him in 2016 or 2017 still expressing skepticism, though much more mildly. Does anyone else recall this? Better yet, can you link to an example? [Edit: it might have been more like 2014 or 2015. Don’t remember exactly.]

I guess sort of the point of this post is that, in the broadest sense, the political critique of so-called “TESCREAL” lacks imagination — about the possible connections between these -isms and social justice.

I don’t think there’s anything inherently disparaging about the acronym.

3rsaarelm
There might not be, but it's not a thing in a vacuum; it was coined with political intent and it's tangled with that intent.

Thanks for your comment.

...note that indefinite life extension, reversing the aging process, etc, have never become a public priority in any polity.

Is this really strong evidence for anything? For example, the Methuselah Foundation was founded in 2001 and the SENS Research Foundation was founded in 2009. Calico was founded in 2013. Altos Labs was founded in 2021. All this to say, the science of radical life extension is extremely new. There hasn't been much time for life extension to become a political cause.

One motivation of the left is to lift up o

... (read more)
4Mitchell_Porter
The idea has been around for a long time.

Winwood Reade, 1872: "Disease will be extirpated; the causes of decay will be removed; immortality will be invented."

George Bernard Shaw, 1921: "Our program is only that the term of human life shall be extended to three hundred years."

F.M. Esfandiary, 1970: "The real revolutionaries of today fight a different battle. They want to be alive in the year 2050 and in the year 20,000 and the year 2,000,000."

A religion can make unfulfilled promises, and still be believed after a thousand years. If the human race thought differently, radical life extension could have been adopted as an ideal and a goal at any time in the history of medicine, and upheld as a goal for however many centuries it took to achieve. But for whatever reasons, the idea did not take hold, and continues to not take hold.

I certainly think that the scientific and cultural zeitgeist is more promising than ever before, but we're still talking about a minority opinion. The majority of adults are just not interested, and a significant minority will actively oppose a longevity movement.

In my own opinion, the rise of AI changes everything anyway, because it foreshadows changes far more profound than human longevity. The pre-AI world was one of untold human generations repeating the same cycle of birth and death. It made some sense to say, why settle for lives being cut off in this way? Can we break out of these limits?

The rise of AI means we now share the world with mercurial nonhuman intelligence, that can certainly assist a human or transhuman agenda if it leans that way, but which will also be capable of replacing us completely. And if the normal response to longevity activism is indifference because the ordinary lifespan is natural and OK, the normal response to AI takeover is going to be, fight the machines, turn them off, so that ordinary human life can go on.

My prediction is, any movement to stop AI completely and indefinitely will fail, beca

"Someone gives a lot of compliments to me but I don't think they're being genuine"

Au contraire. This is a common tactic of manipulation and abuse.

"I feel 'low-value'"

I think the point is that they were treated as low-value by their bosses.

2lc
...Is it?

Wish I knew why this post is getting downvoted to karma hell! :( 

0rsaarelm
Blithely adopting a term that seems to have been coined just for the purposes of doing a smear job makes you look like either a useful idiot or an enemy agent.
2harfe
Some of the downvotes were probably because of the unironic use of the term TESCREAL. This term mixes a bunch of different things together, which makes your writing less clear.
2Mitchell_Porter
I can't say why the downvotes. But regarding the topic itself...

The synthesis you want is intellectually possible, but I think it is unlikely to become popular. It's unlikely to be popular, first because the philosophies bundled disparagingly under the title "TESCREAL" seem to appeal only to a minority of people, and second for reasons specific to social justice.

In support of the first point, note that indefinite life extension, reversing the aging process, etc, have never become a public priority in any polity. There is a fundamental resistance to such ideas, that transcends differences of ideology and culture. The flip side is that the people who do become transhumanists, etc, in theory can come from any background.

In support of the second point, that there are factors specific to social justice which harden the resistance: it's simply the leveling impulse. One motivation of the left is to lift up ordinary people, but another motivation is to bring down the privileged. The second motivation is the one that easily turns against projects for transcending the human condition.

Having said all that, progressivism is not luddism, and there may be corners of it where the synthesis you want can take hold.

I meant "extract" more figuratively than literally. For example, GPT-4 seems to have acquired some ability to do moral reasoning in accordance with human values. This is one way to (very indirectly) "extract" information from the human brain.

3Steven Byrnes
GPT-4 is different from APTAMI. I'm not aware of any method that starts with movies of humans, or human-created internet text, or whatever, and then does some kind of ML, and winds up with a plausible human brain intrinsic cost function. If you have an idea for how that could work, then I'm skeptical, but you should tell me anyway. :)

Extract from the brain into, say, weights in an artificial neural network, lines of code, a natural language "constitution", or something of that nature.

2Steven Byrnes
“Extract from the brain” how? A human brain has like 100 billion neurons and 100 trillion synapses, and they’re generally very difficult to measure, right? (I do think certain neuroscience experiments would be helpful.) Or do you mean something else?

...I think the human brain’s intrinsic-cost-like-thing is probably hundreds of lines of pseudocode, or maybe low thousands, certainly not millions. (And the part that’s relevant for AGI is just a fraction of that.) Unfortunately, I also think nobody knows what those lines are. I would feel better if they did.

So, the human brain's pseudo-intrinsic cost is not intractably complex, on your view, but difficult to extract.

2Steven Byrnes
I would say “the human brain’s intrinsic-cost-like-thing is difficult to figure out”. I’m not sure what you mean by “…difficult to extract”. Extract from what?
Answer by [deactivated]10

I think this functionality already exists.

Next to “Latest Posts” on the front page there is a “Customize Feed” button. You can set the topics “AI” and “AI risk” to “Hidden” and set the topic “Rationality” to “Promoted”.

Hope that helps.
