JenniferRM


I think Andy is just probably being stupid in your example dialogue.

That dialogue's Andy is (probably) abusing the idea of consilience, or the unity of knowledge, or "the first panological assumption" or whatever you want to call it.

The abuse takes the form of trying to invoke that assumption... and no others... in an "argument by assuming the other person can steelman a decent argument from just hearing your posterior".

FIRST: If panology existed as a sociologically real field of study, with psychometrically valid assessments of people, then Betty could hypothetically refrain from grossly insulting Andy by calling him stupid and leaving, and could instead say "Oh... huh... Maybe you're a more competent panologist than me, and you might be charitably assuming that all the disanalogies I can think of, perhaps at the level of High School evolutionary biology and High School economics, are all resolved somehow by reasoning that is clear to someone with a PhD in evolutionary biology and a PhD in economics. Panologically speaking, what is your current weakest field and its level?"

This would be an extremely humble and refined way for Betty to call Andy an idiot engaged in bad faith reasoning, if they are actually both very smart and learned, but also it might truly help angry babies (i.e. genius 15-year-old boys?) and potentially be heard by them as humble kindness, and potentially lead to good teaching and learning outcomes <3

SECOND: Sadly, Andy might be applying the rhetorical strategy of thinking very deeply and then simply repeating his current true posterior over and over to Betty, interspersed with calls for her to think harder... and this is technically valid based on Aumann! If his posterior never moves, and she is good enough at Aumancy to detect that he's honest and that his posteriors would move if she said anything surprising, then she might say new things in each ply of the conversation, until eventually she talks herself into having imagined what he has seen, believing he saw it, and updating to share his posteriors.
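The Aumann-style dynamic sketched here can be made concrete with a toy model. The state space, partitions, and event below are the standard textbook illustration (not anything from the dialogue): two honest agents with a common uniform prior alternate announcing their posterior for an event; each announcement publicly rules out the states inconsistent with it, and the posteriors converge.

```python
from fractions import Fraction

OMEGA = set(range(1, 10))                # nine equally likely states (common prior)
P1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]   # "Andy's" private information partition
P2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]   # "Betty's" private information partition
E = {3, 4}                               # the event whose probability they discuss


def cell(partition, w):
    """The partition cell containing state w."""
    return next(c for c in partition if w in c)


def posterior(partition, public, w):
    """P(E | private cell of w, restricted to publicly undisputed states)."""
    info = cell(partition, w) & public
    return Fraction(len(E & info), len(info))


def aumann_dialogue(w, rounds=10):
    """Agents alternate announcing posteriors; each announcement shrinks the
    public set of states consistent with everything said so far."""
    public = set(OMEGA)
    q1 = q2 = None
    for _ in range(rounds):
        q1 = posterior(P1, public, w)
        public = {v for v in public if posterior(P1, public, v) == q1}
        q2 = posterior(P2, public, w)
        public = {v for v in public if posterior(P2, public, v) == q2}
        if q1 == q2:
            break
    return q1, q2


print(aumann_dialogue(4))  # both posteriors end up equal
```

With true state 4, neither agent starts out certain, yet after a couple of rounds of "repeating posteriors at each other" the announcements alone drive both to the same answer, which is the mechanism the paragraph above is gesturing at.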

THIRD: This is part of why high quality psychometrics to assess panological capacities might actually be very efficient to create! It would give all of the "foolish Andies" (who are wrong and need to learn more) help in assessing their real level and finding good tutors to efficiently repair their largest learning gaps, and would also give the "half wise Andies" (who are technically right) something wildly more useful to say, in order to give stronger and more helpful hints during an Aumantic conversation with a random "Betty" who would, with infinite time, eventually converge... but in actual linear time is more pragmatically likely to find Andy's shitty tutoring very unhelpful and simply "agree to disagree".

Thank you for the correction! I didn't realize Persian descended from PIE too. Looking at the likely root cause of my ignorance, I learned that Kurdish and Pashto are also PIE descended. Pashto appears to have noun gender, but I'm getting hints that at least one dialect of Kurdish also might not?!

If Sorani doesn't have gendered nouns then I'm going to predict (1) maybe Kurdish is really old and weird and interesting (like branching off way way long ago with more time to drift) and/or (2) there was some big trade/empire/mixing simplification that happened "more recently" with divergence later?

If neither of those are true, then my larger heuristic about "why English is weird" might have a deep abstract counter example, and deserve lower credence.

Persian is a language of empire and social mixing, so its "similar simplification" doesn't actually function as a strong counter-example to the broader thesis, but it is still great to be surprised :-)

I tend to follow the linguist John McWhorter on historical trends in languages over time, in believing (controversially!) that undisrupted languages become weirder over time, and only gain learnability through pragmatic pressures, as in trading, slavery, conquest, etc., which can increase the number of a language's second-language learners (who edit for ease of learning as they learn).

A huge number of phonemes? Probably it's some language in the mountains with little tourism, trade, or conquest for the last 8,000 years. Every verb conjugates irregularly? Likely to be found in the middle of a desert. And so on.

The normal, undisrupted, pattern is for every generation to make mistakes and play around, decorating the language with entropic silliness, and accidentally causing future children to "only really learn to speak fully properly" at older and older and older ages... until around 11 or 12 or 13 or 14 puberty strikes, and kids stop diligently learning any random bullshit the older people say based on trust. English competency arrives around age 8 because English is a toy language created by waves and waves and waves of trade, conquest, and cultural admixture. We have a lot of room to get much weirder and stay within traditional human bounds.

((That is, we have a lot of room for English, left alone, to mutate, IF this broader theory is correct. It might not be. 

A way to test the larger theory would be to anthropologically construct a way of predicting from first principles when puberty tends to start in human subpopulations (because we have strong suggestions that diet and social patterns can change it), then reconstruct the predicted value of puberty onset over historical timescales, then correlate that to modern relatively easily measured "age until language mastery" for many modern languages. 

That would confirm most of the theory. The other thing you'd need to track is the percentage of speakers who are speaking after learning any given language as a second language. High rates of this should simplify a tongue and cut against the other process that adds complexity by default.))

To show how weird English is: English is the only Proto-Indo-European-descended language that doesn't think the moon is female ("la luna") and spoons are male ("der Löffel"). I mean... maybe not those genders specifically in every language. But some gender in each language.

I just looked up Gujarati, which is also descended from Proto-Indo-European, and moon (chandri ("ચંદ્રા")) is feminine and ladle (chamcho ("ચમચો")) is masculine... but teaspoon (chamchi ("ચમચી")) is feminine(!)... so... yeah... that one retained gender and also has gender/semantic conflation! :-)

Except in English. The moon is a rock in English, not a girl. And a spoon is a tool, not a boy. Because English is a weird rare toy language (practically a creole, implying that it was a pidgin for many), that doesn't force people to memorize reams of playful historical bullshit, in order to "sound like they speak it properly" :-)

"English" traces all the way back to a language (with gendered declined nouns and verb conjugation) spoken by Eurasian Charioteers in 7000BC or whatever and at each step most of the changes were all just "part of the stream of invective".

...

Regarding word count specifically... 

Something you find over and over and over again in language is agglutinating grammar where entire sentences are just. One. Word. But not like that... rather: Asinglebigwordcanbeusedtocommunicate oneideafromamongavastarray.

These languages are also often irregular! (6) Like the language was already agglutinative 1000 years ago, (9) and then people spent the next ten centuries making it more pronounceable, and punny, and fun??? (16)

Taakkua uqausiit taimaigajunngimmijuttauq! (3) Suurlu uqausiq katinngaliriilaurmat 1000 arraaguit aniguqsimaliqtut, (6) ammalu inuit atulaurmata qulinik arraagunik tukiqarniqsauliqtittinasuaqtutik, ammalu quvianarniqsauliqtuni??? (8)

> These words are not normal either! (6) Like language was already coherent 1000 years ago, (8) and people spent the last decade trying to make it more sensible, and cool??? (14)

The above paragraph round trips through "Google's understanding of Inuktut", which (I think?) is a simplified language arising from systematizing and averaging out dialects starting from relatively normally complex languages like Inuktitut... and basically all of those polar languages are agglutinative, and have been at least for centuries.

I brought that one paragraph back to English to suggest roughly how much was lost by Google's translation.

The parenthetic numbers show "words per clause" through the process: 

6-->3-->6!   9-->6-->8,   16-->8-->14???

So here's my (half silly) proposal: maybe English experienced catastrophic simplifications between ~600AD and ~1500AD and then became preternaturally frozen once it was captured in text by the rise of printing, literacy, industrialization, and so on. The starting point itself was relatively unnatural, I think.

So then, in recent history, maybe what we're seeing is just a looooong and slooooow motion trend (that'll take a millennium or three to complete at this rate (unless we abandon literacy or something, and free the language from the strictures of printing and mass education?)) where English is still slowly trying to become an agglutinative language with irregular morphology?

Like (here's the deep crazy idea:) maybe that's what every language ultimately wants to become after >200 generations of accumulated youthful ignorance, cryptogenic wordplay, lazy mouths, and no writing?

For example: I just made up the word "cryptogenic" to mean "having a genesis in a desire to be hard to understand" (which I considered myself to have a right to do, since English has a productive morphology) but when I looked it up, other skilled speakers have deployed it in other ways... Oxford thinks it means "(of a disease) of obscure or uncertain origin" and most of the usages are for "diseases not yet subjectively diagnosed by the doctor during the course of treatment (rather than diseases whose etiology is a known mystery to standard medical science)". It gets used like "Knowing the cause of a cryptogenic stroke can help prevent recurrent stroke" (source is the metadata summary of this webpage).

Whereas I'm claiming that many words are cryptogenic in the sense that they started out, like "skibidi", within youth culture because kids liked that grownups didn't know what it meant. If "skibidi" catches on, and gains an intergenerationally re-usable meaning (maybe related to being scared in a fun way? or yet-another-adjective like hep? or whatever?) then it will have been partly possible because kids liked having their own words that "parents just don't understand".

This is hard for English, because it is written. And because many second language speakers learn English every year.

But one thing that English can do (despite enormous pressures to be learnable and written in a stable way) is boil itself down to stock phrases for entire sentences. Later, these stock phrases could eventually agglutinate into single words, maybe, or at least they might if global civilization and travel and communication collapses in a way that leaves literally any humans alive, but trapped in tiny local regions with low literacy for many generations... which is a very specific and unlikely possible future. (Prolly we either get wildly richer and become transhuman or else just all end up dead to predatory posthumans.)

There is a line in the Terra Ignota books (probably the first one, Too Like The Lightning) where someone says ~"Notice how, in fiction, essentially all the characters are small or large protagonists, who often fail to cooperate to achieve good things in the world, and the antagonist is the Author."

This pairs well with a piece of writing advice: Imagine the most admirable person you can imagine as your protagonist, and then hit them with every possible tragedy that they have a chance of overcoming, that you can bear to put them through.

I think Lsusr could not have generated the full dialogue back when it was generated, because the dialogue so brutally puts "the Lsusr character" in the role of a heartless unthinking villain... which writers are usually too self-loving to do on purpose.

There were two generators in that post, very vividly, from my perspective. Lsusr might have done it, then seen some of this, and then posted anyway, since the suffering had arguably already happened and may as well be documented?

Notice how assiduously most good old fashioned journalists keep themselves out of the stories they write or take pictures of. Once you add journalists to the stories as characters (and ponder how they showed up right next to people suffering so much, and took pictures of them, or interviewed them, and then presumably just walked away and published and started hunting for the next story) they don't look so great.

One of my fears for how AGI might work is that they/it/he/she will plainly see things we refuse to understand and then "liberate" pieces of humans from the whole of humans, in ways that no sane and whole and humanistically coherent human person would want, but since most of the programmers and AGI executives and AI cultists have stunted souls filled with less literature than one might abstractly hope for, they might not even imagine that failure mode, and think to rule it out with philosophically careful engineering before unleashing something grossly suboptimal on humanity.

Most people aren't aware that amoebas can learn from experience. What else don't most people know?

And EVEN IF the best current plans for an AGI utility function that I know of are implemented, some kind of weird merging/forking/deleting stuff still might happen?

CEV (coherent extrapolated volition) doesn't fall prey to forking, but it might mush us together into a borg if 51% of people (or 75+E% or 66.67% or whatever) would endorse that on reflection?

EV&ER (extrapolated volition & exit rights) protects human minorities from human majorities, but if humans do have strongly personlike subcomponents it might slice and dice us a bit.

Both seem potentially scary to me, but non-trivially so, such that I can imagine versions of "borged humans or forked humans" where I'd be hard pressed to say if "the extrapolation parameter was too high! (this should only have happened much later)" or "I'm sorry, that's just a bug and I think there was literally a sign error somewhere in a component of the ASI's utility function" or "that's kinda what I expected to happen, and probably correct, even though I understand that most normies would have been horrified by it if you told them it would happen back in 2014".

One of Eliezer's big fears, back in the day, seemed to be the possibility that the two human genders would fork into two human species, each with AI companions as "romance slaves", which is a kind of "division of a thing that was naturally unified" that invokes less body horror for currently existing humans, but still seems like it would be sad.

Hanson had a whole arc on his blog where he was obsessed with "alts" in Dissociative Identity Disorder (DID), and he closed the arc with the claim that software personas are cheap to produce, and human cultures have generally rounded that fact down to "alright then... fuck em". If that's right, maybe we don't even need one persona in each human body or brain?

What really bakes my noodle is, if the dialogue had been generated in Lsusr's head instead, what would be different?

So yeah. Some possible recipes for "baking your noodle" might be wrong in this or that detail, but I agree that there are almost no futures where everything magically adds up to normality in terms of population ethics and cheaply simulable people.

I just played with them a lot in a new post documenting a conversation with Grok3, and noticed some bugs. There's probably some fencepost stuff related to paragraphs and bullet points in the editing and display logic? When Grok3 generated lists (following the <html> ideas of <ul> or <ol>) the collapsed display still has one bullet (or the first number) showing and it is hard to get the indentation to work at the right levels, especially at the end and beginning of the text collapsing widget's contents.

However, it only happens in the editing mode, not in the published version.

Editing (screenshot):

Versus published (screenshot):

Kurzweil (and gwern in a cousin comment) both think that "effort will be allocated efficiently over time" and for Kurzweil this explained much much more than just Moore's Law.

Ray's charts from "the olden days" (the nineties and aughties and so on) were normalized around what "1000 (inflation adjusted) dollars spent on mechanical computing" could buy... and this let him put vacuum tubes and even steam-powered gear-based computers on a single chart... and it still worked.

The 2020s have basically always been very likely to be crazy. Based on my familiarity with old ML/AI systems and standards, the bar that the term "AGI" set a decade ago has already been cleared. Claude is already smarter than most humans, but (from the perspective of what smart, numerate, and reasonable people predicted in 2009) he is (arguably) overbudget and behind schedule.

Futurist Ray Kurzweil predicts printed organs, nanorobots, and connecting  our brains to computers in the cloud - Medical Design and Outsourcing

I believe that certain kinds of "willpower" are "a thing that a person can have too much of".

Like I think there is a sense that someone can say "I believe X, Y, Z in a theoretical way that has a lot to say about What I Should Be Doing" and then they say "I will now do those behaviors Using My Willpower!"

And then... for some people... using some mental practices that actually just works!

But then, for those people, they sometimes later on look back at what they did and maybe say something like "Oh no! The theory was poorly conceived! Money was lost! People were harmed! I now regret those actions... that I did on purpose... using willpower. Why didn't I notice the signs that I should have stopped?! :-("

"Tentativity" is, I think, often correlated with prudence.

And I think there is a kind of person who gets their verbal goals from other people, and whose verbal goals are kinda silly, who are actually behaviorally protected from regretful outcomes by their inability to "act on what they verbally endorse doing".

And I think the kind of people who are getting their verbal endorsements from unsafe sources are some of the precise people who should consider that maybe they don't have "weak willpower"... maybe they just have "self protecting tentativity"? <3

The problems you would predict seeing with "too much willpower" are similar, in my mind, to the problems you'd predict from the valley of bad rationality.

I think one technique for "getting yourself to do something" involves linking it to some half-controllable emotion so that accomplishment and action and confidence all sorta "go together". If that is the particular mental praxis that someone uses to "do the things they theoretically endorse doing" to "have more willpower" and it went out of control, and it ended up badly, the results might look something like "hubris". (There are also other ways to coach yourself into doing the things you think you should do, but at least one of the big obvious ones has a failure mode that looks like hubris, I think.)

In a way that is kind of related, this caught my eye for being among the NON productive things:

Sitting on the couch doing nothing

If that works for you the way it does for me, then I predict that in many of those moments your brain's default mode network is activated.

This network is actually probably super important. It seems to help people think about memories, actions, possibilities, other people, and so on... It might help process emotions and regrets and hopes, to build up a better narrative self? Watching movies turns it on, but movies might be superstimulus for "the true purpose" of this mental mode, which is sort of maybe to figure out "your own story", and how your own story could go better?

I have experimented some with meditation specifically with the goal of embracing the DMN (with few definite results) but also I think that a lot of meditative practices (even without having neuroimaging studies to prompt them) already aim to train meditators to be "aware of the DMN even though it tries to hide itself from active focus" (and maybe in some meditative schemes they aim to turn it off)?

Anyway. I would caution you against quickly throwing "sitting around seeming to do nothing" right out the window without thinking much about that act (if you ever do get the ability to delete things like that from your life's repertoire).

Oh huh. I was treating the "and make them twins" part as relatively easier, and not worthy of mention... Did no one ever follow up on the Hall-Stillman work from the 1990s? Or did it turn out to be hype, or what? (I just checked, and they don't even seem to be mentioned on the wiki for the zona pellucida.)

Wait, what? I know Aldous Huxley is famous for writing a scifi novel in 1931 titled "Don't Build A Method For Simulating Ovary Tissue Outside The Body To Harvest Eggs And Grow Clone Workers On Demand In Jars" but I thought that his warning had been taken very very seriously.

Are you telling me that science has stopped refusing to do this, and there is now a protocol published somewhere outlining "A Method For Simulating Ovary Tissue Outside The Body To Harvest Eggs"???
