All of Astynax's Comments + Replies

I assume this is meant to illustrate AI risk. If so, yes, I'm terrified.

Terms I don't know: inferential gaps, general intelligence factor g, object-level thing, opinion-structure. There are other terms I can figure out, but I have to stop a moment: medical grade mental differences, baseline assumptions. I think that's most of it.

At the risk of going too far, I'll paraphrase one section with hopes that it'll say the same thing and be more accessible. (Since my day job is teaching college freshmen, I think about clarity a lot!)

--

"Can't I just assume my interlocutor is intelligent?"

No.

People have different basic assumptions. People ha... (read more)

2Nicholas / Heather Kross
No worries, that makes sense!
2Nicholas / Heather Kross
No worries, this is definitely helpful!

Actually, I wonder if we might try something more formal. How about a principle that if a poster sees a comment asking "what's that term?", the poster edits the post to define it where it's used?

Apparently clarity is hard. Because although I agree that it's essential to communicate clearly, it took significant effort to wrap my head around this post and identify its thrust. I thought I had it eventually, but looking at the comments it seems I wasn't the only one who wasn't sure.

I am not saying this to be snarky. I find this to be one of the clearer posts on LessWrong; I am usually lost in jargon I don't know. (Inferential gaps? General intelligence factor g?) But despite its relative clarity, it's still a slog.

I still admire the effort, and hope everyone will listen.

3Nicholas / Heather Kross
Thank you! Out of curiosity, which parts of this post made it harder to wrap your head around? (If you're not sure, that's also fine.)
0Astynax
Actually, I wonder if we might try something more formal. How about a principle that if a poster sees a comment asking "what's that term?", the poster edits the post to define it where it's used?

Conservatives are already suspicious of AI, based on ChatGPT's political bias. AI skeptics should target the left (which has less political reason to be suspicious) and not target the right (because if they succeed, the left will reject AI skepticism as a right-wing delusion).

1dr_s
This, especially because right now the left is on a dangerous route to "AI safety is a ruse by billionaires to make us think that AI is powerful and thus make us buy into the hype by reverse psychology and distract us from the real problems... somehow". Talk about AGI doom in the language of social justice, also because it's far from inappropriate. Some rich dude in Silicon Valley tries to become God, lots of already plenty exploited and impoverished people in formerly colonised countries fucking die for his deranged dream. If that's not a social justice issue...

(IDK what most people think about just about anything, so I'll content myself with things many aren't ready to accept.)

Secularism is unstable. Partly because it gets its values from the religion it abandoned, so the values no longer have a foundation, but also empirically because it stops people from reproducing at replacement rate.

Overpopulation is at worst a temporary problem now; the tide has turned.

Identifying someone with lots of letters after his name and accepting his opinions is not following the science, but the opposite. Science takes no one's word, but ... (read more)

I speak as someone who teaches college freshmen.

On the one hand, I see AI writers as a disaster for classes involving writing. I tried ChatGPT last night and gave it an assignment like one I might assign in a general studies class; it involved two dead philosophers. I would definitely have given the paper an A. It was partly wrong, but the writing was perfect and the conclusion correct and well argued.

This isn't like Grammarly, where you write and the computer suggests ways to write better. I didn't write my paper; I wrote a query. Crafting the query took ... (read more)

I'd guess it's an overabundance of working-class workers relative to the need. But recently I'm seeing claims that the elite are overabundant too: for example, there aren't enough elite slots for the next generation, so Harvard's acceptance rate has gone from 30-odd% to around 1%; and would-be middle-class young people are having to stay with mom and dad to save on rent while working long hours. How can there be an oversupply of all the different classes of workers? If automation makes us all way too efficient, shouldn't that make us rich and leisured rather than overworked and desperate?

There is also a tremendous amount of make-work. 

My uni has added 2 new layers of management between professor and president since 1998 (was 2, now it's 4). Recently we noticed a scary budget shortfall. They decided to reorganize. After reorganization... they kept those extra 2 layers.

My doc's office joined a big corporation. The reason was the ACA (Obamacare): staying independent would have meant hiring another clerical worker to handle the extra paperwork.

This blog post is about something else, but buried in it is the number of clergy for various US denominations. Whether the denomination i... (read more)

IDK where else to say this, so I'll say it here. I find many LW articles hard to follow because they use terms I don't know. I assume everyone else knows, but I'm a newbie. Ergo I request a kindness: if your article uses a term that's not in common English use (GPT3, alignment, etc.), define it the first time you use it.

3Viliam
Not sure if this is a good idea, but some of those links could be added automatically -- then we do not need to worry about authors forgetting. All we need is to maintain a list of [keyword, canonical link] and add a hyperlink to the first occurrence of the keyword (or any of its synonyms) in the article. Perhaps these automatic links should look different from the manually added ones (I imagine a small question mark symbol after the keyword), and perhaps registered users should have an option to turn this off.

In the meanwhile, you can find most of those things in the list of tags. Looking at your examples, "GPT" (and "ChatGPT") is there. "Alignment" is apparently considered too general -- it essentially means "making the AI want exactly what you want" (as opposed to making an AI that wants to do some random thing, and maybe obeys you while it is weak, but then starts doing its own thing when it becomes strong) -- so we only have pages on "inner/outer alignment", "deceptive alignment", and the less frequently used "internal (human) alignment", "chain of thought alignment", and "general alignment properties".
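A minimal sketch of the auto-linking Viliam describes, assuming Python on the server side; the glossary entries and names here are hypothetical, and a real version would also need synonym handling and to avoid matching inside existing links or HTML attributes:

```python
import re

# Hypothetical [keyword, canonical link] table; entries are illustrative.
GLOSSARY = {
    "inferential gap": "https://www.lesswrong.com/tag/inferential-distance",
    "ChatGPT": "https://www.lesswrong.com/tag/gpt",
}

def autolink_first_occurrence(article_html: str) -> str:
    """Hyperlink only the first occurrence of each glossary keyword,
    appending a small question mark as the visual marker."""
    for keyword, url in GLOSSARY.items():
        pattern = re.compile(re.escape(keyword), re.IGNORECASE)
        article_html = pattern.sub(
            lambda m, url=url: f'<a class="auto-gloss" href="{url}">{m.group(0)}</a>?',
            article_html,
            count=1,  # first occurrence only
        )
    return article_html
```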

I miss the old forums. (LW is on the way to this, but the format is a little more social-media.) When I moved from reading novels and discussing things on threads to social media posts, my concentration was shot. Maybe coincidence, but when I dumped FB (I never did Twitter) my concentration improved slightly, as I recall. Point is that reading longer things seems to help me concentrate longer, and reading 5-second things does the opposite. FWIW.

Answer by Astynax2-3

I'm going to assume others have done an adequate job describing how to convince a rational being using reason (and I think they have). So I'll come from a different direction: how to convince a human.

What convinced me, back when, was two things:
* A long poem I found, in elementary/middle school, describing what heroin would do to your life. I think it was factually accurate. The modern equivalent might be Faces of Addiction, showing just how drugs wreck people. https://rehabs.com/explore/faces-of-addiction/ These drug users don't look healthy, or happy wit... (read more)

Answer by Astynax1-2

There is definitely a correlation! I have a handicapped child. His goals involve snacks and entertainments. His neurotypical brother's goals involve friends and getting schoolwork done (and snacks and entertainments). My goals involve making the world a better place, relating to God, loving my family, and -- snacks and entertainments. :)

And a more severely mentally handicapped person may have a goal simply of "I don't like to be upset." I'm thinking of a particular person, but I don't know her that well.

Having a handicapped family member helps me break through some ways of thinking that seem reasonable if I assume everyone's like me. 

Confused about something -- about smart people not being nicer. That fits with my theory of how the world works, but not with my observation of children and teenagers. The smart kids are (usually) way nicer to each other. My 12-y-o observed this as he went from (nasty) 2nd grade to (nice) gifted program to middle school, with the middle school being a mix of nicer smart kids and more poorly behaved, poorly performing students. This also matches my personal experience, my wife's experience, and what we see in pop culture.

Now, you could say smart kids just f... (read more)

2tailcalled
I don't know. I usually hear the opposite stereotype, of smart people being edgy and mean. I wonder to what extent people's stereotypes on this are due to noise, selection bias, or similar, but it seems hard to figure out. In this specific case, I would wonder how much the true correlation is obscured by the school environment. Schooling institutions are supposed to value learning, intellect, etc., so smart people and conformist/authority-trusting people might be hard to distinguish in schools? I don't think the USA is an outlier with respect to this; I think most differential psychology studies are done in the US.

I'm having a disconnect. I think I'm kind of selfish too. But if it came to a choice between me dying this year and humanity dying 100 years from now, I'll take my death. It's going to happen anyway, and I'm old enough I got mine, or most of it. I'm confident I'd feel the same if I didn't have children, though less intensely. What is causing the difference in these perspectives? IDK. My 90-year-old friend would snort at the question; what difference would a year or two make? The old have less to lose. But the young are usually much more willing to risk their lives. So: IDK.

1whestler
I'm in the same boat. I'm not that worried about my own life, in the general scheme of things. I fully expect I'll die, and probably earlier than I would in a world without AI development. What really cuts me up is the idea that there will be no future to speak of, that all my efforts won't contribute to something, some small influence on other people enjoying their lives at a later time. A place people feel happy and safe and fulfilled. If I had a credible offer to guarantee that future in exchange for my life, I think I'd take it. (I'm currently healthy, more than half my life left to live, assuming average life expectancy) Sometimes I try to take comfort in many-worlds, that there exist different timelines where humanity manages to regulate AI or align it with human values (whatever those are). Given that I have no capacity to influence those timelines though, it doesn't feel like they are meaningfully there.

At one point, IIRC, I thought pain, or at least exhaustion, was meritorious. My mother and grandmother sure did. They'd have contests!

Later, I saw thru that. Play is more productive than work for the same task. Go for the joy. That sort of thing.

But think about it from the perspective of someone with chronic illness, or severely overworked, or in a great deal of emotional pain (death of a spouse or child, or other reasons). You'll have heard the analogy of spoons (https://www.healthline.com/health/spoon-theory-chronic-illness-explained-like-never-before). ... (read more)

I will definitely check out the "proofs for young earth" thing. A related issue is patching a problem: SA and Africa look like they fit together, but at the current rate of drift they couldn't have separated in a mere 10K years (haven't checked this, but surely it's right), so maybe they separated 6K years back in a single day. If C14 is really low in things we think are 10M years old (I'm making this up, but it fits), maybe they're a few thousand years old and a few thousand years ago there was very little C14 around.
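FWIW, the drift arithmetic does check out on the back of an envelope. In this sketch the round figures (about 3,000 km of ocean at the narrowest, about 3 cm/year of spreading) are my own illustrative assumptions, not measurements:

```python
# Back-of-envelope: how long would the South Atlantic take to open
# at something like today's drift rate?
ocean_width_km = 3_000        # rough narrowest gap between SA and Africa
drift_cm_per_year = 3         # rough present-day spreading rate

ocean_width_cm = ocean_width_km * 1_000 * 100
years = ocean_width_cm / drift_cm_per_year
print(f"{years:,.0f} years")  # 100,000,000 -- nowhere near 10K, let alone 6K
```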

1jaspax
"SA and Africa look like they fit together" is a good example, because at first glance it looks just a dumb coincidence and not any kind of solid evidence. Indeed, it's partly for that reason that the theory of continental drift was rejected for a long time; you needed a bunch of other lines of evidence to come together before continental drift really looked like a solid theory. So using the continental drift argument requires you to not just demonstrate that the pieces fit, but include all of the other stuff that holds up the theory and then use that to argue for the age of the earth. Unfortunately I don't know of any other evidences for the age of the earth or universe that have shorter argument chains. It's genuinely hard! (And partly for that reason I wouldn't be too surprised if new evidence caused us to revise our estimates for the age of the universe by a factor of two in either direction.)

When Alice rejects a bad poem, that's a true-positive.

 

I think you meant true-negative?

1cartografie
yes, thanks v much. edited.

It's kind of an aside, but I think this about safety systems in general. Don't give me a backup system to shut down the nuclear reactor if the water stops pumping; design it so the reaction depends on the water. Don't give me great ways to dispose of a chemical that destroys your flesh if it touches you; don't make the chemical to begin with. Don't give me a super-strong set of policies to keep the gain-of-function virus in the lab; don't make gain-of-function viruses. Wish they'd listened to that last one 3 years ago.

Admittedly it may be too late in a lot of w... (read more)

To me the biggest parallel I see to existing work is program correctness. It is as hard, IMHO, to prove program correctness (as in: this program is supposed to sort records, or extract every record with inconsistent ID numbers, or whatever, and actually does) as it is to write the program correctly; actually, I think it's harder. So I never pursued it. Now we see a really good reason to pursue it. And even with conventional, non-AI programs, we have the problem of precisely defining what we want done.
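As a tiny illustration of why the spec is its own hard problem, here is a sketch using sorting; the names are mine, and sampled testing like this is of course far weaker than an actual proof of correctness:

```python
import random

def my_sort(records):
    # The program under scrutiny (here it just delegates to sorted()).
    return sorted(records)

def satisfies_sort_spec(inputs, outputs):
    """The specification is itself code we can get wrong: the output must
    be ordered AND be a permutation of the input (forgetting the second
    clause would let 'return []' count as a correct sort)."""
    ordered = all(a <= b for a, b in zip(outputs, outputs[1:]))
    permutation = sorted(inputs) == sorted(outputs)
    return ordered and permutation

# Testing samples a few thousand cases; a proof would cover every input.
for _ in range(2_000):
    xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    assert satisfies_sort_spec(xs, my_sort(xs))
```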

1tailcalled
Proving program correctness seems closer to the MIRI approach to me.

For me, the case for doing this has not sufficiently been made. I read two sets of arguments for it. On this page, essentially, "aging is the leading cause of death," which is funny -- like saying engagement is the leading cause of marriage -- but more seriously: to attempt to abolish aging is largely about fighting death. Pointing out that aging kills doesn't take us anywhere until we've shown that death needs to go.

On the linked page about the "pro-aging trance," the argument was that if I'm still asking that question, I must be in a trance, and that's not exactly sound.

I don't have c... (read more)

I think we can at least answer why we have 2 sexes rather than 3, 4, or whatever.

Assuming the benefit of sex is to mix up genes with others' (seems reasonable, as that's what it does!):

In one generation, 2 sexes mix in 50% others' genes versus 3 sexes mixing in 66%; not a huge difference. In 4 generations, it's 94% to 99%.
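The arithmetic behind those percentages, in a quick sketch (assuming each of k parents contributes genes equally, which is of course a simplification):

```python
# Fraction of one's genes coming from outside a single lineage after
# n generations, if each of k parents contributes an equal share.
def outside_fraction(k_sexes: int, n_generations: int) -> float:
    return 1 - (1 / k_sexes) ** n_generations

for k in (2, 3):
    print(f"{k} sexes: "
          f"{outside_fraction(k, 1):.0%} after 1 generation, "
          f"{outside_fraction(k, 4):.0%} after 4 generations")
# 2 sexes: 50% after 1 generation, 94% after 4 generations
# 3 sexes: 67% after 1 generation, 99% after 4 generations
```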

So the benefit of the extra sex isn't huge, but the cost of getting the third may be (just as the cost of finding one mate can be high, esp. if you're somebody's prey and need to both be noticed and not be noticed at the same time).

Oops, just thought of this: he loves slurpy noises near his ears. Shouldn't that be way too stimulating? It would be for me or anyone else I know! Seems autism is both about avoiding/muting stimuli and seeking them out.

2Steven Byrnes
I was chatting about exposure therapy in this other comment. I wonder if we should say that everyday life can sometimes provide "unintentional exposure therapy", in which case it sounds like "exposure therapy" (in that broader sense) has already been working out for y'all, which would be really neat and interesting. Reminder that I'm extremely not an expert and don't trust me on anything. :-)

Hmm, it might be an example of how that picture I drew—where valence is an inverted-U function of arousal—is a lousy model. Highly-stimulating things aren't always aversive. Like, the best and worst moments of my life were both highly-stimulating. Maybe it's like, highly-stimulating things can be very bad or very good, and not so much in between? And whether it's good or bad depends on other things going on in your brain—for example, how safe you feel. If you feel simultaneously safe and uncomfortable/scared/threatened, the two things mix together like baking soda and vinegar, and the result is…laughter! Or at least, that's what I've been figuring.

To be clear, everything I'm saying here is purely coming from introspection / folk psychology, I don't have any more insight than you or anyone else.

My autistic child used to be terrified of the lawn mower, even if he was inside. We couldn't use the mixer, the vacuum, or even the shower without him freaking out. 

He went from terror to thinking these things were cool: if I cut the grass, he comes out to play nearby with his toy mower; he loves to vacuum. And -- glory of glories -- the shower is boring.

So I think for him at least, it's a progression from WAY TOO MUCH to fascinating and fun to bo-ring. 

This leads me to wonder whether, if a stimulus can be overwhelming but not too overwhelming, exposure therapy might help rather than just making him freak out. It's worth a try.

1Astynax
Oops, just thought of this: he loves slurpy noises near his ears. Shouldn't that be way too stimulating? It would be for me or anyone else I know! Seems autism is both about avoiding/muting stimuli and seeking them out.

Definitely getting that book. I wanted something to take me past Arendt's book, which never really seemed to get to the banality of it all. Will check it out.

I never get them -- not for two decades. I have very strong teeth and everything has been fine. But I was quite confident. If I had regular cavities I would get it done. YMMV.

Likelihood of cancer -- quite low; cost of getting it -- quite high.
Likelihood of cavities -- higher; cost of getting them -- lower.
 

It's hard to figure small numbers times big numbers when you don't really have either. :)
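The "small numbers times big numbers" being weighed is just an expected-cost comparison; here's the shape of it in a sketch where all four figures are made up purely for illustration:

```python
# Expected-cost comparison with purely hypothetical figures.
p_cancer_from_xray = 1e-6      # made up: tiny probability...
cost_of_cancer     = 1_000_000 # ...of a huge cost
p_missed_cavity    = 0.05      # made up: bigger probability...
cost_of_cavity     = 500       # ...of a smaller cost

print("expected cost of skipping X-rays:", p_missed_cavity * cost_of_cavity)    # 25.0
print("expected cost of taking X-rays:  ", p_cancer_from_xray * cost_of_cancer) # 1.0
```

Which only sharpens the comment's point: the comparison is only as good as the two probabilities, and neither is really known.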

Looks like I don't know how to do spoiler tags. Can't find it on site, alas.

2Ruby
I just took a closer look at your comment, seems you tried the right syntax but it didn't mesh well with the list. I'll look into that tomorrow.
2Ruby
https://www.lesswrong.com/faq#How_do_I_insert_spoiler_protections_

>!Certain math sequences that aren't very useful, like: to get the next number, add the digits in this one. Should often get down to something stable. (See the sketch after the spoiler.)

The pre-Hadean earth as postulated: form oceans, suck up the CO2 into rock, cool down till the oceans freeze, stop sucking up CO2, and eventually volcanoes spit out enough that it melts the oceans, etc.

Social popularity of certain things like, say, socialism, individualism/conformity, bowdlerism/pornography, anything where if you get too much of it it either blows up or at least people like it less.!<
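On the first item: iterating "add the digits" really does stabilize, at a single digit (the "digital root"), since the digit sum of any multi-digit number is smaller than the number. A quick sketch:

```python
def digit_sum(n: int) -> int:
    return sum(int(d) for d in str(n))

# Repeated digit-summing always reaches a fixed point: a single digit.
n = 987_654_321
while digit_sum(n) != n:
    n = digit_sum(n)
print(n)  # 9
```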

1Astynax
Looks like I don't know how to do spoiler tags. Can't find it on site, alas.
This is from Arm in Arm: A Collection of Connections, Endless Tales, Reiterations, and Other Echolalia.  It's a children's book full of pictures suggesting logical paradox or infinite sequences.

Are the Schedule's times in Eastern Daylight Time?

1EGI
AFAIK the schedule should adjust to your time zone. Opening session is 7pm CEST.