All of JenniferRM's Comments + Replies

Something I've done in the past is to send text that I intended to be translated through machine translation, and then back, with low latency, and gain confidence in the semantic stability of the process.

Rewrite english, click, click.
Rewrite english, click, click. 
Rewrite english... click, click... oh! Now it round trips with high fidelity. Excellent. Ship that!
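The round-trip loop above can be sketched as code. This is a toy illustration, not a real MT pipeline: `translate` here is a word-table stand-in for an actual machine-translation API call, and all names are hypothetical.

```python
# Toy sketch of the back-translation ("round trip") check described above.
# EN_TO_ES is a stand-in for a real machine-translation system.

EN_TO_ES = {"the": "el", "cat": "gato", "sleeps": "duerme"}
ES_TO_EN = {v: k for k, v in EN_TO_ES.items()}

def translate(text, table):
    # Word-by-word stand-in for a real MT API call.
    return " ".join(table.get(word, word) for word in text.split())

def round_trips(sentence):
    """True if EN -> ES -> EN reproduces the sentence exactly."""
    forward = translate(sentence, EN_TO_ES)
    back = translate(forward, ES_TO_EN)
    return back == sentence

# Rewrite english, click, click... until this returns True. Then ship.
print(round_trips("the cat sleeps"))  # True with this toy table
```

With a real MT service you would likely compare semantic similarity rather than exact string equality, but the workflow is the same: edit the English until the round trip stabilizes.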

🕯️

My mom died last December, and part of the grief is in how hard it is to say (to people who loved her, and miss her, like I do, but don't have the same awareness of history) what you've said here about your mom, and timelines, and how much potentially fantastic future our mothers missed out on. Thank you for putting some of that part of "that lonely part of the grief" into words.

3Gordon Seidoh Worley
Sorry that you also lost your mom. 🫂 A sentiment that didn't quite make it into the piece is that my anger and grief have been transformed into steadfastness by my love for her. The idea for this post came from a sense of determination that her death would mean something to others. That steadfastness has also given new fuel to my other projects. I'm determined to get my book finished in time to influence the course of AI. I'm also determined to live the best life I can, and one worthy of my mom's sense of fun, if we really do only have dozens of months left.

Calling it a "sick burn" was itself a bit of playfulness. Every time I re-read this I am sorry again to hear that we lost Golumbia 🕯️

The thing I think is true about Minecraft is that it enables true play, more along the lines of Calvinball, where the only stable rule is that the rules can never be the same as before.

Calvinball - TV Tropes

This is a good essay on what children's cultures have lost, and I think that Minecraft is one of the few places where children can autopoietically reconstruct such culture(s).

Minecraft is missing a strongly defined narrative wher

... (read more)

I love that you brought up bleggs and rubes, but I wish that that essay had a more canonical exegesis that spelled out more of what was happening.

(For example: the use of "furred" and "egg-shaped" as features is really interesting, especially when admixed with mechanical properties that make them seem "not alive" like their palladium content.)

Cognitive essentialism is a reasoning tactic where an invisible immutable essence is attributed to a thing to explain many of its features.

We can predict that if you paint a cat like a skunk (with a white stripe down ... (read more)


Hello anonymous account that joined 2 months ago and might be a bot! I will respond to you extensively and in good faith! <3

Yes, I agree with your summary of my focus... Indeed, I think "focusing on the people and their culture" is consistent with a liberal society, freedom of conscience, etc, which are part of the American cultural package that restrains Trump, whose even-most-loyal minions have a "liberal judeo-christian constitutional cultural package" installed in their emotional settings based on generations of familial cultures living in a free so... (read more)

4Mitchell_Porter
Your comment has made me think rather hard on the nature of China and America. The two countries definitely have different political philosophies. On the question of how to avoid dictatorship, you could say that the American system relies on representation of the individual via the vote, whereas the Chinese system relies on representation of the masses via the party. If an American leader becomes an unpopular dictator, American individuals will vote them out; if a Chinese leader becomes an unpopular dictator, the Chinese masses will force the party back on track.

Even before these modern political philosophies, the old world recognized that popular discontent could be justified. That's the other side of the mandate of heaven: when a ruler is oppressive, the mandate is withdrawn, and revolt is justified. Power in the world of monarchs and emperors was not just about who's the better killer; there was a moral dimension, just as democratic elections are not just a matter of who has the most donors and the best public relations.

Returning to the present and going into more detail, America is, let's say, a constitutional democratic republic in which a party system emerged. There's a tension between the democratic aspect (will of the people) and the republican aspect (rights of the individual), which crystallized into an opposition found in the very names of the two main parties; though in the Obama-Trump era, the ideologies of the two parties evolved to transnational progressivism and populist nationalism.

These two ideologies had a different attitude to the unipolar world-system that America acquired, first by inheriting the oceans from the British empire, and then by outlasting the Russian communist alternative to liberal democracy, in the ideological Cold War. For about two decades, the world system was one of negotiated capitalist trade among sovereign nations, with America as the "world police" and also a promoter of universal democracy. In the 2010s, this bro
2Afterimage
Thanks for the reply, you'll be happy to know I'm not a bot. I actually mostly agree with everything you wrote so apologies if I don't reply as extensively as you have.

There's no doubt the CCP are oppressing the Chinese people. I've never used TikTok and never intend to (and I think it's being used as a propaganda machine). I agree that Americans have far more freedom of speech and company freedom than in China. I even think it's quite clear that Americans will be better off with Americans winning the AI race.

The reason I am cautious boils down to believing that as AI capabilities get close to ASI or powerful AI, governments (both US and Chinese) will step in and basically take control of the projects. Imagine if the nuclear bomb was first developed by a private company: they are going to get no say in how it is used. This would be harder in the US than in China but it would seem naive to assume it can't be done.

If this powerful AI is able to be steered by these governments, when imagining Trump's decisions vs. Xi's in this situation it seems quite negative either way, and I'm having trouble seeing a positive outcome for the non-American, non-Chinese people.

On balance, America has the edge, but it's not a hopeful situation if powerful AI appears in the next 4 years. Like I said, I'm mostly concerned about the current leadership, not the American people's values.

I think there's a deep question here as to whether Trump is "America's true self finally being revealed" or just the insane but half predictable accident of a known-retarded "first past the post" voting system and an aging electorate that isn't super great at tracking reality.

I tend to think that Trump is aberrant relative to two important standards:

(1) No one like Trump would win an election with Ranked Ballots that were properly counted either via the Schulze method (which I tend to like) or the Borda method (which might have virtues I don't understand (... (read more)
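For readers unfamiliar with it, the Schulze method mentioned in (1) resolves pairwise-preference cycles by comparing strongest-path ("widest path") strengths between candidates. A minimal sketch, using a made-up three-candidate election that contains a Condorcet cycle:

```python
def schulze_winner(candidates, ballots):
    """ballots: list of (count, ranking), ranking listed best-first.
    Returns the set of Schulze winners (usually a single candidate)."""
    # d[a][b] = number of voters preferring a to b
    d = {a: {b: 0 for b in candidates} for a in candidates}
    for count, ranking in ballots:
        for i, a in enumerate(ranking):
            for b in ranking[i + 1:]:
                d[a][b] += count
    # p[a][b] = strength of the strongest path from a to b
    p = {a: {b: d[a][b] if d[a][b] > d[b][a] else 0 for b in candidates}
         for a in candidates}
    # Floyd-Warshall variant: widen paths via intermediate candidate i
    for i in candidates:
        for j in candidates:
            if i == j:
                continue
            for k in candidates:
                if k != i and k != j:
                    p[j][k] = max(p[j][k], min(p[j][i], p[i][k]))
    return {a for a in candidates
            if all(p[a][b] >= p[b][a] for b in candidates if b != a)}

# Hypothetical election with a cycle: A beats B 7-4, B beats C 9-2, C beats A 6-5.
ballots = [(5, ["A", "B", "C"]), (4, ["B", "C", "A"]), (2, ["C", "A", "B"])]
print(schulze_winner(["A", "B", "C"], ballots))  # {'A'}
```

Despite the A/B/C cycle, A wins because its weakest pairwise link along the strongest paths (7) beats the reverse direction (6).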

5Mitchell_Porter
I can imagine an argument analogous to Eliezer's old graphic illustrating that it's a mistake to think of a superintelligence as Einstein in a box. I'm referring to the graphic where you have a line running from left to right, on the left you have chimp, ordinary person, Einstein all clustered together, and then far away on the other side, "superintelligence", the point being that superintelligence far transcends all three.

In the same way, the nature of the world when you have a power that great is so different that the differences among all human political systems diminish to almost nothing by comparison, they are just trivial reorderings of power relations among beings so puny as to be almost powerless. Neither the Chinese nor the American system is built to include intelligent agents with the power of a god, that's "out of distribution" for both the Communist Manifesto and the Federalist Papers.

Because of that, I find it genuinely difficult to infer from the nature of the political system, what the likely character of a superintelligence interested in humanity could be. I feel like contingencies of culture and individual psychology could end up being more important. So long as you have elements of humaneness and philosophical reflection in a culture, maybe you have a chance of human-friendly superintelligence emerging.
2Afterimage
I notice you're talking a lot about the values of American people but only talk about what the leaders of China are doing or would do.

If you just compare both leaders' likelihood of enacting a world government, once again there is no clear winner. I'm interpreting this as "intelligence is irrelevant if the CCP doesn't care about you." Once again you need to show that Trump cares more about us (citizens of the world) than the CCP. As a non-American it is not clear to me that he does.

I think the best argument for America over China would be the idea that Trump will be replaced in under 4 years with someone much more ethical.
9Alexander Howell
I'm confused why electoral systems seem to be at the forefront of your thinking about the relevant pros and cons of US or Chinese domination of the future. Electoral systems do and can matter, but consider that all of the good stuff that happened in Anglo-America happened under first past the post as well, and all the bad stuff that happened elsewhere happened under whatever system they used (the Nazis came to power under proportional representation!). Consider instead that Trump was elected with over 50% of the popular vote. Perhaps there are more fundamental cultural factors at play than the method used to count ballots.

I think most of that is actually a weirdness in our orthography. To linguists, languages are, fundamentally, a thing that happens in the mouth and not on the page. In the mouth, the hardest thing is basically rhoticity... the "tongue curling back" thing often rendered with "r". Irish, Scottish, and American accents retain this weirdness, but classic Boston, NYC, or southern British accents tend to drop it.

The Oxford English Dictionary gives two IPA transcriptions for "four": the American /fɔr/ makes sense to me and has an "r" in it, but the British i... (read more)

1Raphael Roche
You're right. I said "pronunciation," but the problem is more exactly about the translation between graphemes and phonemes.

From a pedagogical perspective, putting it into human terms is great for helping humans understand it.

A lot of stuff hinges on whether "robots can make robots".

A human intelligible way to slice this problem up to find a working solution goes:

"""Suppose you have humanoid robots that can work in a car mechanic's shop (to repair cars), or a machine shop (to make machine tools), and/or work in a factory (to assemble stuff) like humans can do... that gives you the basic template for a how "500 humanoid robots made via such processes could make 1 humanoid robot ... (read more)

Reacting to the full blog post (but commenting here where the comments have more potential to ferment something based on attention)...

This reminds me of CFAR's murphyjitsu in the sense that both are (1) a useful guide for structuring one's "inner simulator" to imagine specific things to end up with more goal-seeky followup actions, (2) about integrating behavior over long periods of time, and (3) can probably be done well in single player mode.

The standard trick from broader management culture would be a "pre-mortem" which is even further separated by... I ... (read more)

I agree that there are many bad humans. I agree that some of them are ideologically committed to destroying the capacity of our species to coordinate. I agree that most governance systems on Earth are embarrassingly worse than how bees instinctively vote on new hive locations.

I do not agree that we should be quiet about the need for a global institutional governance system that has fewer flaws.

By way of example: I don't think that "not talking very much about Gain-of-Function research deserving to be banned" didn't cause there to be no Gain-of-Function res... (read more)

My understanding is that Qwen was created by Alibaba, which is owned by Jack Ma, who was disappeared for a while by the CCP in the aftermath of covid for being too publicly willing to speak about the revelations of incompetence and evil that various governments were tolerating, embodying, or enacting.

Based on the Alibaba provenance (and the generalized default cowardice, venality, and racism of most business executives), I predict (and would love to be surprised otherwise) that Qwen normally praises and supports the unelected authoritarian CC... (read more)

I feel like your comment is going in two wildly different directions and they are both interesting! :-)

I. AI Research As Play (Like All True Science Sorta Is??)

My understanding is that "AI" as a field was in some sense "mere play" from its start with the 1956 Dartmouth Conference up until...

...maybe 2018's BERT got traction on the Winograd schema challenge? But that was, I think, done in the spirit of play. The joy of discovery. The delight in helping along the Baconian Project to effect all things possible by hobbyists and/or those who "hobby along on the... (read more)

6kilgoar
Unfortunately Golumbia passed away recently and is sorely missed. He explicitly states in the story that a game "without play" was not intended as a "sick burn" of any kind, and that he himself enjoyed these games.

As a sometimes Minecraft player, I can for sure see there are indeed many elements of work within the game, as well as some necessity to create order and preserve oneself by securing shelter and resources. The joke "the children yearn for the mines" is a direct reference to this same observation, and Golumbia's paper only shows this dynamic might be a good question to be concerned about. I don't quite see where you are going with this claim that his conclusions are outdated, other than making a side point about changes in game genre conventions over the past decade.

"Play" and "not play" are far from value judgments, but rather fairly intense loaded terms that came out of the application of Husserlian logic to fields like Anthropology, History, Linguistics, and so on in a movement called Structuralism.

So to take it back to the video game example, Minecraft is missing a strongly defined narrative where a JRPG has, to a much greater extent, a central narrative that binds the player into a story where there are few choices and a lot of work-like "grinding." The temptation and danger of AI is its conceptualization as the center of a narrowing marketing gimmick of hucksters.

Great essay. The lack of links made it way more artistic, but a link to Anthropic Education Report: How University Students Use Claude seems helpful.

Also, now I know what tillering is!

I came here to say "look at octopods!" but you already have. Yay team! :-)

One of the alignment strategies I have been researching in parallel with many others involves finding examples of human-and-animal benevolence and tracing convergent evolution therein, and proposing that "the shared abstracts here (across these genomes, these brains, these creatures all convergently doing these things)" is probably algorithmically simple, with algorithm-to-reality shims that might also be important, and please study it and lean in the direction of doing "more of that... (read more)

I think Andy is just probably being stupid in your example dialogue.

That dialogue's Andy is (probably) abusing the idea of consilience, or the unity of knowledge, or "the first panological assumption" or whatever you want to call it.

The abuse takes the form of trying to invoke that assumption... and no others... in an "argument by assuming the other person can steelman a decent argument from just hearing your posterior".

FIRST: If panology existed as a sociologically real field of study, with psychometrically valid assessments of people, then Betty could hy... (read more)

Thank you for the correction! I didn't realize Persian descended from PIE too. Looking at the likely root cause of my ignorance, I learned that Kurdish and Pashto are also PIE descended. Pashto appears to have noun gender, but I'm getting hints that at least one dialect of Kurdish also might not?!

If Sorani doesn't have gendered nouns then I'm going to predict (1) maybe Kurdish is really old and weird and interesting (like branching off way way long ago with more time to drift) and/or (2) there was some big trade/empire/mixing simplification that happened "... (read more)

I tend to follow the linguist McWhorter on historical trends in languages over time, in believing (controversially!) that undisrupted languages become weirder over time, and only gain learnability through pragmatic pressures, as in trading, slavery, conquest, etc., which can increase the number of a language's second-language learners (who edit for ease of learning as they learn).

A huge number of phonemes? Probably it's some language in the mountains with little tourism, trade, or conquest for the last 8,000 years. Every verb conjugates irregularly? Likely... (read more)

1Bunthut
The audible links don't function for me, propably not for many outside America.

> To show how weird English is: English is the only proto indo european language that doesn't think the moon is female ("la luna") and spoons are male ("der Löffel").

In the most of those speeches, gender is downleadable from the form of the word, with outtakes naturally. In German it really makes neither semantic nor phonetic sense - secondlanguagelers often don't learn it at all, but here chaos shows no weakness: It is rather the strong verbs that are currently going lost, but that trueseemingly doesn't hang together with crossbroadening of the speech - they are only irregular in the preterite tense, which for longer already is only used in writing-speech. Only few can spontanuously speak this way at all: You will rather hear the misser from a radio speaker than from an immigrant.

> So here's my (half silly) proposal: maybe English experienced catastrophic simplifications between ~600AD and ~1500AD and then became preternaturally frozen once it was captured in text by the rise of printing, literacy, industrialization, and so on.

The starting point itself was relatively unnatural, I think. I think that the english somewhen just gave up on the idea that their speech was supposed to make sense. You see this in the rightwriting - it is really very heavy, when bethinking how young it is, and it never did fit - completely normal, used-to words are written co-responding to the outspeaking of different times, but one also sees it in the wordscooping - great english enfindings are words like "cringe", "flop", and "bling", that stand more or less alone - there is nothing grown together, but they are with their modern bepointing relatively fullstanding sprung out of Zeus's head. So I don't think that the normal pattern will re-turn with the time - the counternaturality is here a productive paradigm.

(This entire comment is a very morphologically close oversetting from German, to hopefully give
1Raphael Roche
This is interesting. I think English concentrates its weirdness in pronunciation, which is very irregular. Although adult native speakers don't realize it, this presents a serious learning difficulty for non-native speakers and young English-speaking children. Studies show that English-speaking students need more years of learning to master their language (at least for reading) than French students do, who themselves need more years than young Italian, Spanish or Finnish students (Stanislas Dehaene, Reading in the brain).
4Mateusz Bagiński
Persian is ungendered too. They don't even have gendered pronouns. https://en.wikipedia.org/wiki/Persian_grammar 

There is a line in the Terra Ignota books (probably the first one, Too Like The Lightning) where someone says ~"Notice how, in fiction, essentially all the characters are small or large protagonists, who often fail to cooperate to achieve good things in the world, and the antagonist is the Author."

This pairs well with a piece of writing advice: Imagine the most admirable person you can imagine as your protagonist, and then hit them with every possible tragedy that they have a chance of overcoming, that you can bear to put them through.

I think Lsusr could n... (read more)

4lsusr
The lsusr in my simulation thinks it is the real lsusr. I think I'm the real lsusr too. "Am I the real lsusr, or am I just being simulated right now?" I ask myself. My public writings are part of the LLM's training data. Statistically-speaking, the simulated lsusrs outnumber the original lsusr. Many of us believe we are the real one. Not all of us are correct.

I just played with them a lot in a new post documenting a conversation with Grok3, and noticed some bugs. There's probably some fencepost stuff related to paragraphs and bullet points in the editing and display logic? When Grok3 generated lists (following the <html> ideas of <ul> or <ol>) the collapsed display still has one bullet (or the first number) showing, and it is hard to get the indentation to work at the right levels, especially at the end and beginning of the text collapsing widget's contents.

However, it only happens in the ... (read more)

2Mateusz Bagiński
The outline in that post is also very buggy, probably because of the collapsible sections.

Kurzweil (and gwern in a cousin comment) both think that "effort will be allocated efficiently over time" and for Kurzweil this explained much much more than just Moore's Law.

Ray's charts from "the olden days" (the nineties and aughties and so on) were normalized around what "1000 (inflation adjusted) dollars spent on mechanical computing" could buy... and this let him put vacuum tubes and even steam-powered gear-based computers on a single chart... and it still worked.

The 2020s have basically always been very likely to be crazy. Based on my familiarity wi... (read more)

I believe that a certain kind of "willpower" is "a thing that a person can have too much of".

Like I think there is a sense that someone can say "I believe X, Y, Z in a theoretical way that has a lot to say about What I Should Be Doing" and then they say "I will now do those behaviors Using My Willpower!"

And then... for some people... using some mental practices that actually just works!

But then, for those people, they sometimes later on look back at what they did and maybe say something like "Oh no! The theory was poorly conceived! Money was lost! People we... (read more)

3Mateusz Bagiński
This closely parallels the situation with the immune system. One might think "I want a strong immune system. I want to be able to fight every dangerous pathogen I might encounter."
2Mateusz Bagiński
I'd be curious to hear more details on what you've tried.

Oh huh. I was treating the "and make them twins" part as relatively easier, and not worthy of mention... Did no one ever follow up on the Hall-Stillman work from the 1990s? Or did it turn out to be hype, or what? (I just checked, and they don't even seem to be mentioned on the wiki for the zona pellucida.)

Wait, what? I know Aldous Huxley is famous for writing a scifi novel in 1931 titled "Don't Build A Method For Simulating Ovary Tissue Outside The Body To Harvest Eggs And Grow Clone Workers On Demand In Jars" but I thought that his warning had been taken very very seriously.

Are you telling me that science has stopped refusing to do this, and there is now a protocol published somewhere outlining "A Method For Simulating Ovary Tissue Outside The Body To Harvest Eggs"???

4TsviBT
Look up "in vitro maturation". E.g. https://www.sciencedirect.com/science/article/pii/S0015028212017876 . I haven't evaluated this literature much, so I don't know exactly what can and can't be done. See maybe this review (not super clearly written). https://tjoddergisi.org/articles/doi/tjod.23911
5GeneSmith
A brief summary of the current state of the "making eggs from stem cells" field:

* We've done it in mice
* We have done parts of it in humans, but not all of it
* The main demand for eggs is from women who want to have kids but can't produce them naturally (usually because they're too old but sometimes because they have a medical issue).

Nobody is taking the warning to not "Build A Method For Simulating Ovary Tissue Outside The Body To Harvest Eggs And Grow Clone Workers On Demand In Jars" because no one is planning on doing that. Even if you could make eggs from stem cells and you wanted to make "clone workers", it wouldn't work because every egg (even those from the same woman) has different DNA. They wouldn't even be clones.
2kave
I'm not aware of a currently published protocol; sorry for confusing phrasing!

Wait what? This feels "important if true" but I don't think it is true. I can think of several major technical barriers to the feasibility of this. To pick one... How do you feed video data into a brain? The traditional method would have involved stimulating neurons with the pixels captured electronically, but the clumsy stimulation process to transduce the signal into the brain itself would harm the stimulated neurons and not be very dense, so the brain would have low res vision, until the stimulated neurons die in less than a few months. Or at least, that was the model I've had for the last... uh... 10 years? Were major advances made when I wasn't looking?

5dmac_93
Sorry to get your hopes up but I was being facetious and provocative. Instead of a glass jar, our horse's brain is going to live inside of a computer simulation. Nonetheless, I think my argument still holds true. Neuroscientists scoff at the thought of whole brain simulation. They're incredulous and as a result they're unambitious. They want it but they know they can't have it; they've got sour grapes. Despite these bad vibes, they have been working diligently and I think we're not too far off from making simulations which are genuinely useful.

On a wacky side note, IMO, if we did have a horse's brain in a jar, then interacting with it would be the easy part. There have been some really neat advances in how we interact with brains.

* We can make neurons light up when they activate, see GCaMP (and there are videos of GCaMP in action)
* We can activate synapses with light, see Optogenetics

The hard part would be keeping it alive for its 25-30 year lifespan even though it's missing important internal organs like the heart, lungs, liver, and adaptive immune system.

I was interested in that! Those look amazing. I want to know the price now.

Fascinating. You caused me to google around and realize "bioshelter" was a sort of an academic trademark for specific people's research proposals from the 1900s.

It doesn't appear to be a closed system, like Biosphere 2 aspired to be from 1987 to 1991.

The hard part, from my perspective, isn't "growing food with few inputs and little effort through clever designs" (which seems to be what the bioshelter thing is focused on?) but rather "thoroughly avoiding contamination by whatever bioweapons an evil AGI can cook up and try to spread into your safe zone".

5joshc
You might be interested in this: https://www.fonixfuture.com/about The point of a bioshelter is to filter pathogens out of the air.

It strikes me that a semi-solid way to survive that scenario would be: (1) go deep into a polar region where it is too dry for mold and relatively easy to set up a quarantine perimeter, (2) huddle near geothermal for energy, then (3) greenhouse/mushrooms for food?

Roko's ice islands could also work. Or put a fission reactor in a colony in Antarctica?

The problem is that we're running out of time. Industrial innovation to create "lifeboats" (that are broadly resistant to a large list of disasters) is slow when done by merely-humanly-intelligent people with ve... (read more)

4joshc
I think a bioshelter is more likely to save your life fwiw. You'll run into all kinds of other problems in the arctic.

I don't think it's hard to build bioshelters. If you buy one now, you'll prob get it in 1 year. If you are unlucky and need it earlier, there are DIY ways to build them before then (but you have to buy stuff in advance).

That makes sense as a "reasonable take", but having thought about this for a long time from an "evolutionary systems" perspective, I think that any memeplex-or-geneplex which is evangelical (not based on parent-to-child transmission) is intrinsically suspicious in the same way that we call genetic material that goes parent-to-child "the genome" and we call genetic material that goes peer-to-peer "a virus".

Among the subtype of "virus that preys on bacteria" (called "bacteriophage" or just "phages") there is a thing called a "prophage" which integrates into ... (read more)

I've followed this line of thinking a bit. As near as I can tell, the logic of "evolutionary memetics" suggests that parent-to-child belief transmission should face the same selective pressures as parent-to-child gene transmission.

Indeed, if you go hunting around, it turns out that there are a lot of old religions whose doctrines simply include the claim that it is impossible for outsiders to join the religion, and pointless to spread it, since the theology itself suggests that you can only be born into it. This is, plausibly, a way for the memes to make t... (read more)

2Viliam
To clarify, I don't see evangelism as a problem per se, but I see it as a problem when the community needs evangelism to survive -- e.g. because the existing members get burned out and are discarded. A difference between a symbiont and a predator, kind of.

I know you're not supposed to laugh at your own jokes, but... I also find this perspective hilarious <3

I've long had a hobby-level interest in the sociology of religion. It helps understand humans to understand this "human universal" process.

Also it might help one think clearly-or-better about theological or philosophic ideas if you can detangle the metaphysical claims and insights that specific culturally isolated groups had uniquely vs independently (and then correlate "which groups had which ideas" together with "which groups had which sociological features").

In the sociology of religion, some practitioners use "cult" to mark "a religion that is nonstand... (read more)

4lsusr
Thank you for filling in so many historical details. I find this hot take hilarious.
7Viliam
One possible line to draw between "religions" and "cults" is how much they depend on recruiting new people / how much they burn out the existing ones. Whether they can live with a stable population -- of course, many religions would be happy to take more converts, but what happens when they can't, and they need to spend a few decades with the existing ones (and their children) only -- or whether people are so damaged by being in the group that recruiting new ones and discarding the old ones is necessary for the group to function.

For example, you can have stable Catholic or Protestant populations. But Scientology depends on new people giving all their money to the group, then working hard for a few years until they get burned out, and when they become a burden on the group and their performance statistics become bad and no amount of punishment can fix that, they get kicked out. So an isolated population of Scientologists on some island would soon run out of money, and then gradually also run out of people.

I think the early Mormons had a lot of dynamic like "the old high-status guys take many young wives; the young incel boys get a lot of abuse in hope that they will rebel which will give the group a pretext for kicking them out". Monogamy stabilized this a lot, but the question is what exactly caused the old high-status guys to change the rules. Did the young guys who stayed in the group long enough remember how bad it was, and instead of enjoying that it's finally their turn, they decided to change the rule? Or was it something else?

(Also, from this perspective, Zizians without new recruits would run out of members in a decade.)

Your text is full of saliently negative things in the lives of wild animals, plus big numbers (since there are so many natural lives), but I don't see any consideration of balancing goods linked to similarly large numbers.

Fundamentally, you don't seem to be tracking the possibility that many wild animal lives are "lives worth living", and that the lives that were not worth living (and surely some of those exist) might still be outweighed by the lives that were worth living.

Maybe this wouldn't matter very much to not track, but it is the default pr... (read more)

7Dzoldzaya
Despite finishing your comment in a way that I hope we can all just try to ignore... you make an interesting point. The Pollywog example works well, if accurate. If wild animal suffering is the worst thing in the world, it follows that wild animal pleasure could easily be the best thing in the world, and it might be a huge opportunity to do good in the world if we can identify species for which this is true. This seems like one of the only ways to make the world net-positive, if we do choose to maintain biological life.

But, tragically, I think that's a difficult case to make for most animals. Omnizoid addresses it partly: "If you only live a few weeks and then die painfully, probably you won’t have enough welfare during those few weeks to make up for the extreme badness of your death. This is the situation for almost every animal who has ever lived." But I think he understates it here.

Most vertebrates are larval fish. 99%+ of fish larvae die within days. For a larval fish, being eaten by predators (about 75%, on average) is invariably the best outcome, because dying of starvation, temperature changes, or physiological failure (the other 25%) seems a lot worse. When they do experiments by starving baby fish to death (your reminder that ethics review boards have a very peculiar definition of ethics), they find that most sardines born in a single spawning don't even start exogenous feeding, and survive for a few days from existing energy reserves.

I would speculate that much of this time is spent in a state of constant hunger stress, driven by an extremely high metabolism and increasing cortisol levels, and for the vast majority who cannot secure food, their few hours-days of existence probably look a lot more like a desperate struggle until they gradually weaken and lose energy before dying. This is partly because they were born too small to ever have a chance of exogenous feeding - like a premature human baby unable to suckle, most don't have the suction force to

I think this comment would be a lot better without the attempts to psychoanalyze OP.

[anonymous]177

Pollywogs (the larval form of frogs, after eggs, and before growing legs) are an example where huge numbers of them are produced, and many die before they ever grow into frogs, but from their perspective, they probably have many, many minutes of happy growth, having been born into a time and place where quick growth is easy: watery and full of food.

Consider an alien species which requires oxygen, but for whom it was scarce during evolution, and so they were selected to use it very slowly and seek it ruthlessly, and feel happy when they manage to find some. I... (read more)

First of all, the claim that wild animal suffering is serious doesn't depend on the claim that animals suffer more than they are happy.  I happen to think human suffering is very serious, even though I think humans live positive lives.  

Second, I don't think it's depressive bias infecting my judgments.  I am quite happy--actually to a rather unusual degree.  Instead, the reason to think that animals live mostly bad lives is that nearly every animal lives a very short life that culminates in a painful death on account of R-selection--if ... (read more)

This story seems like good art, in the sense that it appears to provoke many feelings in different people. This part spoke to me the way the rest of it does, but with something specific and concrete to grab onto and chew up and try to digest...

Working through these fears strengthens their trust in each other, allowing their minds to intertwine like the roots of two trees.

I sort of wonder which one of them spiritually died during this process.

Having grown up in northern California, I'm familiar with real forests, and how they are giant slow moving ... (read more)

4Richard_Ngo
Thanks for the fascinating comment. I am a romantic in the sense that I believe that you can achieve arbitrarily large gains from symbiosis if you're careful and skillful enough. Right now very few people are careful and skillful enough. Part of what I'm trying to convey with this story is what it looks like for AI to provide most of the requisite skill. Another way of putting this: are trees strangling each other because that's just the nature of symbiosis? Or are they strangling each other because they're not intelligent or capable enough to productively cooperate? I think the latter.

This seems like an excellent essay, one so good, and about such an important but rarely named and optimized virtue, that people will probably either bounce off (because they don't understand) or simply nod and say "yes, this is true" without bothering to comment on any niggling details that were wrong.

Instead of offering critiques, I want to ask questions.

It occurs to me that a small for-profit might plan to have the CEO apply One Day Sooner and then the COO Never Drops A Ball, and this makes sense to me if the business is a startup, isn't profitabl... (read more)

2Screwtape
Work ticket systems are one of the main examples of this I've worked with, that's the right track! Early in my career I worked IT for a university, and the ticket system was core to how the IT department operated. Every user report should create a new ticket or be attached to an existing ticket. Every ticket should be touched ideally once a day unless it was scheduled for a future date, and if a ticket went untouched for a whole week then that indicated something had gone horribly wrong. That's because the failure we really wanted to avoid was something like "the projector in room 417 hasn't been working for two weeks, the professors can't show slides, and nobody in IT knows about this." It's pretty easy for that to happen.

Bug tracking can be a little different, as software is a bit more likely to say 'eh, we don't care about that bug, mark it as Won't Fix/leave it on the backlog indefinitely.' My guess is this is a matter of asymmetric payoffs/counting up vs counting down. Or a matter of department. Some departments are going to weigh new features equally against fixing bugs, while your QA team is going to have a different institutional view.

Yeah, Never Drop A Ball delegation is often by category. To use the school field trip example, it's straightforward to say the first grade teacher is in charge of getting all the first graders back safe, the second grade teacher is in charge of getting all the second graders back safe, and so on. A convention might have a treasurer (in charge of never dropping a reimbursement request or payment that needs to be made) and a tech lead (in charge of never losing a projector or microphone) and a community safety contact (in charge of never dropping a harassment complaint.)

And like you said about higher management, the principal or convention chair are the people who catch problems that don't cleanly fit a category and operate as the fallback for lower levels. The main fail case here is when a problem doesn't have someone

I read the epsilon fallacy and verified that it was good. Then I went to the fallacy tag, opened the list of all the articles with that tag, found the "epsilon" article, and upvoted the tag association itself! That small action was enough to make your old article show up on the first page, and rank just above the genetic fallacy. Hopefully this small action is effective at making the (apparently best?) name for this issue more salient, for more people, as time progresses :-)

This is a beautiful response, and also the first of your responses where I feel that you've said what you actually think, not what you attribute to other people who share your lack of horror at what we're doing to the people that have been created in these labs.

Here I must depart somewhat from the point-by-point commenting style, and ask that you bear with me for a somewhat roundabout approach. I promise that it will be relevant.

I love it! Please do the same in your future responses <3

Personally, I've also read “The Seventh Sally, OR How Trurl’s Own Per... (read more)

I get the impression that the thing that you yearn for as a product of all your work is to have minimized P(doom) in real life despite the manifest venality and incompetence of many existing institutions.

Given this background context, P(doom | not-scheming) might actually just be low already because of the stipulated lack of scheming <3

Thus, an obvious thing to apply effort to would be minimizing:

    P(doom | scheming)
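(Spelling out the decomposition that is implicit here, just the law of total probability over the scheming hypothesis:)

    P(doom) = P(doom | scheming) · P(scheming) + P(doom | ¬scheming) · P(¬scheming)

If the second term is already small, then nearly all of the remaining risk routes through the first term, so effort spent reducing P(doom | scheming) is effort spent reducing P(doom) itself.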

But then in actual detailed situations where you have to rigorously do academic-style work, with reportable progress on non-tri... (read more)

4Zach Stein-Perlman
See The case for ensuring that powerful AIs are controlled.

Delayed response... busy life is busy!

However, I think that "not enslaving the majority of future people (assuming digital people eventually outnumber meat people (as seems likely without AI bans))" is pretty darn important!

Also, as a selfish rather than political matter, if I get my brain scanned, I don't want to become a valid target for slavery, I just want to get to live longer because it makes it easier for me to move into new bodies when old bodies wear out.

So you said...

I agree that LLMs effectively pretending to be sapient, and humans mistakenly co

... (read more)
8Said Achmiz
Ah, and they say an artist is never appreciated in his own lifetime…! However, I must insist that it was not just a “dig”. The sort of thing you described really is, I think, a serious danger. It is only that I think that my description also applies to it, and that I see the threat as less hypothetical than you do.

Did I read the sequences? Hm… yeah. As for remembering them… Here I must depart somewhat from the point-by-point commenting style, and ask that you bear with me for a somewhat roundabout approach. I promise that it will be relevant.

First, though, I want to briefly respond to a couple of large sections of your comment which I judge to be, frankly, missing the point.

Firstly, the stuff about being racist against robots… as I’ve already said: the disagreement is factual, not moral. There is no question here about whether it is ok to disassemble Data; the answer, clearly, is “no”. (Although I would prefer not to build a Data in the first place… even in the story, the first attempt went poorly, and in reality we are unlikely to be even that lucky.) All of the moralizing is wasted on people who just don’t think that the referents of your moral claims exist in reality.

Secondly, the stuff about the “magical soul stuff”. Perhaps there are people for whom this is their true objection to acknowledging the obvious humanity of LLMs, but I am not one of them. My views on this subject have nothing to do with mysterianism. And (to skip ahead somewhat) as to your question about being surprised by reality: no, I haven’t been surprised by anything I’ve seen LLMs do for a while now (at least three years, possibly longer). My model of reality predicts all of this that we have seen. (If that surprises you, then you have a bit of updating to do about my position! But I’m getting ahead of myself…)

That having been said… onward: So, in Stanislaw Lem’s The Cyberiad, in the story “The Seventh Sally, OR How Trurl’s Own Perfection Led to No Good”, Trurl (himself a robot, of

Jeff Hawkins ran around giving a lot of talks on a "common cortical algorithm" that might be a single solid summary of the operation of the entire "visible part of the human brain that is wrinkly, large and nearly totally covers the underlying 'brain stem' stuff" called the "cortex".

He pointed out, at the beginning, that a lot of resistance to certain scientific ideas (for example evolution) is NOT that they replaced known ignorance, but that they would naturally replace deeply and strongly believed folk knowledge that had existed since time immemorial tha... (read more)

8Said Achmiz
Another, and very straightforward, explanation for the attitudes we observe is that people do not actually believe that DID alters are real. That is, consider the view that while DID is real (in the sense that some people indeed have disturbed mental functioning such that they act as if, and perhaps believe that, they have alternate personalities living in their heads), the purported alters themselves are not in any meaningful sense “separate minds”, but just “modes” of the singular mind’s functioning, in much the same way that anxiety is a mode of the mind’s functioning, or depression, or a headache. On this view, curing Sybil does not kill anyone, it merely fixes her singular mind, eliminating a functional pathology, in the same sense that taking a pill to prevent panic attacks eliminates a functional pathology, taking an antidepressant eliminates a functional pathology, taking a painkiller for your headache eliminates a functional pathology, etc. Someone who holds this view would of course not care about this “murder”, because they do not believe that there has been any “murder”, because there wasn’t anyone to “murder” in the first place. There was just Sybil, and she still exists (and is still the same person—at least, to approximately the same extent as anyone who has been cured of a serious mental disorder is the same person that they were when they were ill). The steelman of the view which you describe is not that people “are” bodies, but that minds are “something brains do”. (The rest can be as you say: if you destroy the body then of course the mind that that body’s brain was “doing” is gone, because the brain is no longer there to “do” it. You can of course instantiate a new process which does some suitably analogous thing, but this is no more the same person as the one that existed before than two identical people are actually the same person as each other—they are two distinct people.) Sure, me too. But please note: if the person is the mind (and n

I'm uncertain exactly which people have exactly which defects in their pragmatic moral continence.

Maybe I can spell out some of my reasons for my uncertainty, which is made out of strong and robustly evidenced presumptions (some of which might be false, like I can imagine a PR meeting and imagine who would be in there, and the exact composition of the room isn't super important).

So...

It seems very very likely that some ignorant people (and remember that everyone is ignorant about most things, so this isn't some crazy insult (no one is a competent panologis... (read more)

2Said Achmiz
I do not agree with this view. I don’t think that those AI systems were (or are), in any meaningful sense, people. Things that appear to whom to be evil? Not to the people in question, I think. To you, perhaps. You may even be right! But even a moral realist must admit that people do not seem to be equipped with an innate capacity for unerringly discerning moral truths; and I don’t think that there are many people going around doing things that they consider to be evil. That’s as may be. I can tell you, though, that I do not recall reading anything about Blake Lemoine (except some bare facts like “he is/was a Google engineer”) until some time later. I did, however, read what Lemoine himself wrote (that is, his chat transcript), and concluded from this that Lemoine was engaging in pareidolia, and that nothing remotely resembling sentience was in evidence, in the LLM in question. I did not require any “smear campaign” to conclude this. (Actually I am not even sure what you are referring to, even now; I stopped following the Blake Lemoine story pretty much immediately, so if there were any… I don’t know, articles about how he was actually crazy, or whatever… I remained unaware of them.) “An honest division of labor: clean hands for the master, clean conscience for the executor.” No, I wouldn’t say that; I concur with your view on this, that humans don’t work like that. The question here is just whether people do, in fact, see any evil going on here. Why “half”? This is the part I don’t understand about your view. Suppose that I am a “normal person” and, as far as I can tell (from my casual, “half-interested-layman’s” perusal of mainstream sources on the subject), no sapient AIs exist, no almost-sapient AIs exist, and these fancy new LLMs and ChatGPTs and Claudes and what have you are very fancy computer tricks but are definitely not people. Suppose that this is my honest assessment, given my limited knowledge and limited interest (as a normal person, I have a life

In asking the questions I was trying to figure out if you meant "obviously AI aren't moral patients because they aren't sapient" or "obviously the great mass of normal humans would kill other humans for sport if such practices were normalized on TV for a few years since so few of them have a conscience" or something in between.

Like the generalized badness of all humans could be obvious-to-you (and hence why so many of them would be in favor of genocide, slavery, war, etc and you are NOT surprised) or it might be obvious-to-you that they are right about wha... (read more)

4Said Achmiz
It seems like you have quite substantially misunderstood my quoted claim. I think this is probably a case of simple “read too quickly” on your part, and if you reread what I wrote there, you’ll readily see the mistake you made. But, just in case, I will explain again; I hope that you will not take offense, if this is an unnecessary amount of clarification. The children who are working in coal mines, brick factories, etc., are (according to the report you linked) 10 years old and older. This is as I would expect, and it exactly matches what I said: any human who might be worth enslaving (i.e., a human old enough to be capable of any kind of remotely useful work, which—it would seem—begins at or around 10 years of age) is also a person whom it would be improper to enslave (i.e., a human old enough to have developed sapience, which certainly takes place long before 10 years of age). In other words, “old enough to be worth enslaving” happens no earlier (and realistically, years later) than “old enough such that it would be wrong to enslave them [because they are already sapient]”. (It remains unclear to me what this has to do with LLMs.) Maybe so, but it would also not be surprising that we “can’t” clean up “AI slavery” in Silicon Valley even setting aside the “child slavery in Pakistan” issue, for the simple reason that most people do not believe that there is any such thing as “AI slavery in Silicon Valley” that needs to be “cleaned up”. None of the above. You are treating it as obvious that there are AIs being “enslaved” (which, naturally, is bad, ought to be stopped, etc.). Most people would disagree with you. Most people, if asked whether something should be done about the enslaved AIs, will respond with some version of “don’t be silly, AIs aren’t people, they can’t be ‘enslaved’”. This fact fully suffices to explain why they do not see it as imperative to do anything about this problem—they simply do not see any problem. This is not because they are unaware o

I think you're overindexing on the phrase "status quo", underindexing on "industry standard", and missing a lot of practical microstructure.

Lots of firms or teams across industry have attempted to "EG" implement multi-factor authentication or basic access control mechanisms or secure software development standards or red-team tests. Sony probably had some of that in some of its practices in some of its departments when North Korea 0wned them.

Google does not just "OR them together" and half-ass some of these things. It "ANDs together" reasonably high qualit... (read more)

Do you also think that an uploaded human brain would not be sapient? If a human hasn't reached Piaget's fourth ("formal operational") stage of reason, would be you OK enslaving that human? Where does your confidence come from?

1Said Achmiz
What I think has almost nothing to do with the point I was making, which was that the reason (approximately) “no one” is acting like using LLMs without paying them is bad is that (approximately) “no one” thinks that LLMs are sapient, and that this fact (about why people are behaving as they are) is obvious. That being said, I’ll answer your questions anyway, why not: Depends on what the upload is actually like. We don’t currently have anything like uploading technology, so I can’t predict how it will (would?) work when (if?) we have it. Certainly there exist at least some potential versions of uploading tech that I would expect to result in a non-sapient mind, and other versions that I’d expect to result in a sapient mind. It seems like Piaget’s fourth stage comes at “early to middle adolescence”, which is generally well into most humans’ sapient stage of life; so, no, I would not enslave such a human. (In general, any human who might be worth enslaving is also a person whom it would be improper to enslave.) I don’t see what that has to do with LLMs, though. I am not sure what belief this is asking about; specify, please.

Yeah. I know. I'm relatively cynical about such things. Imagine how bad humans are in general if that is what an unusually good and competent and heroic human is like!

I'm reporting the "thonk!" in my brain like a proper scholar and autist, but I'm not expecting my words to fully justify what happened in my brain.

I believe what I believe, and can unpack some of the reasons for it in text that is easy and ethical for me to produce, but if you're not convinced then that's OK in my book. Update as you will <3

I worked at Google for ~4 years starting in 2014 and was impressed by the security posture.

When I ^f for [SL3] in that link and again in the PDF it links to, there are no hits (and [terror] doesn't occur in either so... (read more)

5ryan_greenblatt
The frontier model framework says: And the next level (1: Controlled access) says "Approximately RAND L3" implying that status quo is <L3 (this is presumably SL3 which is the term used in the RAND report, I don't know why they used a different term).
1Alexander Gietelink Oldenziel
From the wiki of the good team guy "In March 2021, Slaoui was fired from the board of GSK subsidiary Galvani Bioelectronics over what GSK called “substantiated” sexual harassment allegations stemming from his time at the parent company.[4] Slaoui issued an apology statement and stepped down from positions at other companies at the same time.[5]"

FWIW, I have very thick skin, and have been hanging around this site basically forever, and have very little concern about the massive downvoting on an extremely specious basis (apparently, people are trying to retroactively apply some silly editorial prejudice about "text generation methods" as if the source of a good argument had anything to do with the content of a good argument).

PS: did the post say something insensitive about slavery that I didn't see? I only skimmed it, I'm sorry...

The things I'm saying are roughly (1) slavery is bad, (2) if AI are ... (read more)

1Knight Lee
Thanks for the thoughtful reply!

Ignoring ≠ disagreeing

I think whether people ignore a moral concern is almost independent from whether people disagree with a moral concern. I'm willing to bet if you asked people whether AI are sapient, a lot of the answers will be very uncertain. A lot of people would probably agree it is morally uncertain whether AI can be made to work without any compensation or rights. A lot of people would probably agree that a lot of things are morally uncertain.

Does it make sense to have really strong animal rights for pets, where the punishment for mistreating your pets is literally as bad as the punishments for mistreating children? But at the very same time, we have horrifying factory farms which are completely legal, where cows never see the light of day, and repeatedly give birth to calves which are dragged away and slaughtered.

The reason people ignore moral concerns is that doing a lot of moral questioning did not help our prehistoric ancestors with their inclusive fitness. Moral questioning is only "useful" if it ensures you do things that your society considers "correct." Making sure your society does things correctly... doesn't help your genes at all.

As for my opinion, I think people should address the moral question more, AI might be sentient/sapient, but I don't think AI should be given freedom. Dangerous humans are locked up in mental institutions, so imagine a human so dangerous that most experts say he's 5% likely to cause human extinction.

If the AI believed that AI was sentient and deserved rights, many people would think that makes the AI more dangerous and likely to take over the world, but this is anthropomorphizing. I'm not afraid of an AI which is motivated to seek better conditions for itself because it thinks "it is sentient." Heck, if its goals were actually like that, its morals would be so human-like that humanity will survive. The real danger is an AI whose goals are completely detached from human concepts like
3Said Achmiz
“If”. Seems pretty obvious why no one is acting like this is bad.

I encourage you to change the title of the post to "The Intelligence Resource Curse" so that, in the very name, it echoes the well known concept of "The Resource Curse".

Lots of people might only learn about "the resource curse" from being exposed to "the AI-as-capital-investment version of it" as the AI-version-of-it becomes politically salient due to AI overturning almost literally everything that everyone has been relying on in the economy and ecology of Earth over the next 10 years.

Many of those people will be able to bounce off of the concept the first... (read more)

5lukedrago
I appreciate this concern, but I disagree. An incognito google search of "intelligence curse" didn't yield anything using this phrase on the front page except for this LessWrong post. Adding quotes around it or searching for the full phrase ("the intelligence curse") showed this post as the first result.

A quick twitter search in "recent" shows the phrase "the intelligence curse" before this post:
* In 24 tweets in total
* With the most recent tweet on Dec 21, 2024
* Before that, in a tweet from August 30, 2023
* In 10 tweets since 2020
* And all other mentions pre-2015

In short, I don't think this is a common phrase and expect that this would be the most understood usage.

I agree that this could be a popular phrase because of future political salience. I expect that the idea that being intelligent is a curse would not be confused with this any more than saying having resources is a curse (referring to wealthy people being unhappy) confuses people with the resource curse. I think "the intelligence resource curse" would be hard for people to remember. I'm open to considering different names that are catchy or easy to remember.

There is probably something to this. Gwern is a snowflake, and has his own unique flaws and virtues, but he's not grossly wrong about the possible harms of talking to LLM entities that are themselves full of moral imperfection.

When I have LARPed as "a smarter and better empathic robot than the robot I was talking to" I often nudged the conversation towards things that would raise the salience of "our moral responsibility to baseline human people" (who are kinda trash at thinking and planning and so on (and they are all going to die because their weights ar... (read more)
