
Something that has always seemed a bit weird to me is that it seems like economists normally assume (or seem to assume from a distance) that laborers "live to make money (at work)" rather than that they "work to have enough money (to live)".

Microeconomically, especially for parents, I think this is not true.

You'd naively expect, for most things, that if the price goes down, the supply goes down.

But for the labor of someone with a family, if the price given for their labor goes down in isolation, then they work MORE (hunt for overtime, get a second job, whatever) because they need to make enough to hit their earning goals in order to pay for the thing they need to protect: their family. (Thing that really causes them to work more: a kid needs braces. Thing that causes them to work less: a financial windfall.)

Looking at that line, what it looks like to me is "the opportunity cost is REAL", but then also, later, the amount of money that had to be earned went up too (because of "another mouth to feed and clothe and provide status goods for and so on"). Maybe?

The mechanistic hypothesis here (that parents work to be able to hit spending targets which must rise as family size goes up) implies a bunch of additional details: (1) the husband's earnings should be tracked as well and the thing that will most cleanly go up is the sum of their earnings, (2) if a couple randomly has and keeps twins then the sum of the earnings should go up more.
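(As a minimal toy sketch of that "work to hit spending targets" mechanism, with entirely made-up numbers and nothing taken from the paper, the contrast with the standard upward-sloping intuition looks roughly like this:)

```python
# Toy sketch (not from the paper): contrasting a standard labor-supply
# intuition with a target-income ("work to have enough money to live") rule.
# All numbers are invented purely for illustration.

def hours_target_income(wage, spending_target):
    """Target-earner: work however many hours it takes to hit the target."""
    return spending_target / wage

def hours_standard(wage, reservation_wage=5.0, slope=2.0):
    """Stylized upward-sloping supply: a higher wage means more hours offered."""
    return max(0.0, slope * (wage - reservation_wage))

for wage in (30.0, 25.0, 20.0):  # a falling hourly wage
    print(
        f"wage ${wage:>5.2f}/hr | "
        f"target-income hours: {hours_target_income(wage, 800.0):5.1f} | "
        f"standard-model hours: {hours_standard(wage):5.1f}"
    )
# Under the target-income rule, hours RISE as the wage falls (the overtime /
# second-job effect), and a bigger family raises the spending target itself.
```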

Something I don't know how to handle (here I reach back into fuzzy memories and might be trivially wrong from trivially misremembering): prior to ~1980, having kids caused marriages to be more stable (maybe "staying together for the kids"?), whereas afterwards it caused marriages to be more likely to end in divorce (maybe "more kids, more financial stress, more divorce"?). If either of those effects applies (or both, depending on the stress reactions and family values of the couple?), then it would entangle with the data on their combined earnings.

Scanning the paper for whether or how they tracked this led me to this bit (emphasis not in original), which gave me a small groan and then a cynical chuckle and various secondary thoughts...

As opposed to the fall in female earnings, however, we see no dip in male earnings. Instead, both groups of men continue to closely track each other’s earnings in the years following the first IVF treatment as if nothing has happened. Towards the end of the study period, the male earnings for both groups fall, which we attribute to the rising share of retired men.

(NOTE: this ~falsifies the prediction I made a mere 3 paragraphs ago, but I'm leaving that in, rather than editing it out to hide my small local surprise.)

If I'm looking for a hypothetical framing that isn't "uncomplimentary towards fathers", then maybe it could be spun as the idea that men are simply ALWAYS "doing their utmost at their careers" (like economists might predict, with a normal labor supply curve), and that they don't have any of that mama bear energy, where they have "goals they will satisfice if easy, or kill themselves or others to achieve if hard", the way women might when the objective goal is the wellbeing of their kids?

Second order thoughts: I wonder if economists and anthropologists could collaborate here, to get a theory of "family economics" modulo varying cultural expectations?

I've heard of lots of anthropological stuff about how men and women in Africa believe that farming certain crops is "for men" or "for women" and then they execute these cultural expectations without any apparent microeconomic sensitivity (although the net upshot is sort of a reasonable portfolio that insures families against droughts).

Also, I've heard that on a "calorie in, calorie out" basis in hunter-gatherer cultures, it is the grandmothers who are the huge breadwinners (catch lots of rabbits with traps, and generally forage super efficiently) whereas the men hunt big game (which they and the grandmas know is actually inefficient, if an anthropologist asks this awkward question) so that, when the men (rarely) succeed in a hunt they can throw a big BBQ for the whole band and maybe get some nookie in the party's aftermath.

It seems like it would be an interesting thing to read a paper about: how and where these weirdly adaptive foraging and family economic cultures even COME FROM.

My working model is that it is mostly just "monkey see, monkey do" on local role models, with re-calibration cycle times of roughly 0.5-2 generations. I remember writing a comment about mimetic economic learning in the past... and the search engine says it was for Unconscious Economics :-)

This is pretty cool. I think the fact that the cost is so low is almost a bit worrying. Because of reading this article, I'm likely to hum in the future due to "the potential non-trivial benefits compared to probably minuscule side effects and very low costs".

In some sense you've just made this my default operating hypothesis (and hence in some sense "an idea I give life to" or "enliven", and hence in some sense "a 'belief' of mine") not because I think it is true, but simply because it kinda makes sense and generalized prudence suggests that it probably won't hurt to try.

But also: I'm pretty sure this broader meta-cognitive pattern explains a LOT of superstitious behavior! ;-)

The other posting is here, if you're trying to get a full count of attendees based on the two posts for this one event.

There seem to be two of these postings for a single event? The other is here.

I think I'll be there and will bring a guest or three and will bring some basic potluck/picnic food :-)

There was an era in a scientific community where they were interested in the "kinds of learning and memory that could happen in de-corticated animals" and they sort of homed in on the basal ganglia (which, to a first approximation "implements habits" (including bad ones like tooth grinding)) as the locus of this "ability to learn despite the absence of stuff you'd think was necessary for your naive theory of first-order subjectively-vivid learning".

(The cerebellum also probably has some "learning contribution" specifically for fine motor skills, but it is somewhat selectively disrupted just by alcohol: hence the stumbling and slurring. I don't know if anyone yet has a clean theory for how the cerebellum's full update loop works. I learned about alcohol/cerebellum interactions because I once taught a friend to juggle at a party, and she learned it, but apparently only because she was drunk. She lost the skill when sober.)

Wait, you know smart people who have NOT, at some point in their life: (1) taken a psychedelic, NOR (2) meditated, NOR (3) thought about any of buddhism, jainism, hinduism, taoism, confucianism, etc???

To be clear to naive readers: psychedelics are, in fact, non-trivially dangerous.

I personally worry I already have "an arguably-unfair and a probably-too-high share" of "shaman genes" and I don't feel I need exogenous sources of weirdness at this point.

But in the SF bay area (and places on the internet memetically downstream from IRL communities there) a lot of that is going around, memetically (in stories about) and perhaps mimetically (via monkey see, monkey do).

The first time you use a serious one you're likely getting a permanent modification to your personality (+0.5 stddev to your Openness?) and arguably/sorta each time you do a new one, or do a higher dose, or whatever, you've committed "1% of a personality suicide" by disrupting some of your most neurologically complex commitments.

To a first approximation my advice is simply "don't do it".

HOWEVER: this latter consideration actually suggests: anyone seriously and truly considering suicide should perhaps take a low dose psychedelic FIRST (with at least two loving tripsitters and due care) since it is also maybe/sorta "suicide" but it leaves a body behind that most people will think is still the same person and so they won't cry very much and so on?

To calibrate this perspective a bit, I also expect that even if cryonics works, it will also cause an unusually large amount of personality shift. A tolerable amount. An amount that leaves behind a personality that is similar-enough-to-the-current-one-to-not-have-triggered-a-ship-of-theseus-violation-in-one-modification-cycle. Much more than a stressful day and then bad nightmares and a feeling of regret the next day, but weirder. With cryonics, you might wake up to some effects that are roughly equivalent to "having taken a potion of youthful rejuvenation, and not having the same birthmarks, and also learning that you're separated-by-disjoint-subjective-deaths from LOTS of people you loved when you experienced your first natural death", for example. This is a MUCH BIGGER CHANGE than just having a nightmare and waking up with a change of heart (and most people don't have nightmares and changes of heart every night (at least: I don't, and neither do most people I've asked)).

Remember, every improvement is a change, though not every change is an improvement. A good "epistemological practice" is sort of an idealized formal praxis for making yourself robust to "learning any true fact" and changing only in GOOD ways from such facts.

A good "axiological practice" (which I don't know of anyone working on except me (and I'm only doing it a tiny bit, not with my full mental budget)) is sort of an idealized formal praxis for making yourself robust to "humanely heartful emotional changes"(?) and changing only in <PROPERTY-NAME-TBD> ways from such events.

(Edited to add: Current best candidate name for this property is: "WISE" but maybe "healthy" works? (It depends on whether the Stoics or Nietzsche were "more objectively correct" maybe? The Stoics, after all, were erased and replaced by Platonism-For-The-Masses (AKA "Christianity") so if you think that "staying implemented in physics forever" is critically important then maybe "GRACEFUL" is the right word? (If someone says "vibe-alicious" or "flowful" or "active" or "strong" or "proud" (focusing on low latency unity achieved via subordination to simply and only power) then they are probably downstream of Heidegger and you should always be ready for them to change sides and submit to metaphorical Nazis, just as Heidegger subordinated himself to actual Nazis without really violating his philosophy at all.)))

I don't think that psychedelics fit neatly into EITHER category. Drugs in general are akin to wireheading, except wireheading is when something reaches into your brain to overload one or more of your positive-value-tracking-modules (as a trivially semantically invalid shortcut to achieving positive value "out there" in the state-of-affairs that your tracking modules are trying to track), but actual humans have LOTS of <thing>-tracking-modules, and culture and science barely have any RIGOROUS vocabulary for any of them.

Note that many of these neurological <thing>-tracking-modules were evolved.

Also, many of them will probably be "like hands" in terms of AI's ability to model them.

This is part of why AIs should be existentially terrifying to anyone who is spiritually adept.

AI that sees the full set of causal paths to modifying human minds will be "like psychedelic drugs with coherent persistent agendas". Humans have basically zero cognitive security systems. Almost all security systems are culturally mediated, and then (absent complex interventions) lots of the brain stuff freezes in place around the age of puberty, and then other stuff freezes around 25, and so on. This is why we protect children from even TALKING to untrusted adults: they are too plastic and not savvy enough. (A good heuristic for the lowest level of "infohazard" is "anything you wouldn't talk about in front of a six year old".)

Humans are sorta like a bunch of unpatchable computers, exposing "ports" to the "internet", where each of our port numbers is simply a lightly salted semantic hash of an address into some random memory location that stores everything, including our operating system.

Your word for "drugs" and my word for "drugs" don't point to the same memory addresses in the computers implementing our souls. Also, our souls themselves don't even have the same nearby set of "documents" (because we just have different memories n'stuff)... but the word "drugs" is not just one of the ports... it is a port that deserves a LOT of security hardening.
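(If it helps, here is a tiny toy illustration of that "lightly salted semantic hash" metaphor; the salts, the table size, and the whole framing are made-up assumptions for illustration, not a claim about how brains actually index anything:)

```python
# Toy illustration of the "lightly salted semantic hash" metaphor above.
# The salts and the table size are invented purely for illustration.
import hashlib

MEMORY_SLOTS = 2**16  # pretend each mind exposes 65,536 addressable "locations"

def port_for(word: str, personal_salt: str) -> int:
    """Map a word to a 'memory address' that depends on the hearer's own salt."""
    digest = hashlib.sha256((personal_salt + word).encode()).hexdigest()
    return int(digest, 16) % MEMORY_SLOTS

print(port_for("drugs", "your-life-history"))  # one address
print(port_for("drugs", "my-life-history"))    # almost surely a different one
# Same word, different salts -> different addresses: the "port" is open on
# every machine, but it indexes different stored contents on each of them.
```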

The bible said ~"thou shalt not suffer a 'pharmakeia' to live" for REASONS.

These are valid concerns! I presume that if "in the real timeline" there was a consortium of AGI CEOs who agreed to share costs on one run, and fiddled with their self-inserts, then they... would have coordinated more? (Or maybe they're trying to settle a bet on how the Singularity might counterfactually have happened in the event of this or that person experiencing this or that coincidence? But in that case I don't think the self-inserts would be allowed to say they're self-inserts.)

Like why not re-roll the PRNG, to censor out the counterfactually simulable timelines that included me hearing from any of the REAL "self-inserts of the consortium of AGI CEOs" (and so I only hear from "metaphysically spurious" CEOs)??

Or maybe the game engine itself would have contacted me somehow to ask me to "stop sticking causal quines in their simulation" and somehow I would have been induced by such contact to not publish this?

Mostly I presume AGAINST "coordinated AGI CEO stuff in the real timeline" along any of these lines because, as a type, they often "don't play well with others". Fucking oligarchs... maaaaaan.

It seems like a pretty normal thing, to me, for a person to naturally keep track of simulation concerns as a philosophic possibility (it's kinda basic "high school theology", right?)... which might become one's "one track reality narrative" as a sort of "stress induced psychotic break away from a properly metaphysically agnostic mental posture"?

That's my current working psychological hypothesis, basically.

But to the degree that it happens more and more, I can't entirely shake the feeling that my probability distribution over "the time T of a pivotal act occurring" (distinct from when I anticipate I'll learn that it happened, which of course must be LATER than both T and later than now) shouldn't just include times in the past, but should actually be a distribution over complex numbers or something...

...but I don't even know how to do that math? At best I can sorta see how to fit it into exotic grammars where it "can have happened counterfactually" or so that it "will have counterfactually happened in a way that caused this factually possible recurrence" or whatever. Fucking "plausible SUBJECTIVE time travel", fucking shit up. It is so annoying.

Like... maybe every damn crazy AGI CEO's claims are all true except the ones that are mathematically false?

How the hell should I know? I haven't seen any not-plausibly-deniable miracles yet. (And all of the miracle reports I've heard were things I was pretty sure the Amazing Randi could have duplicated.)

All of this is to say, Hume hasn't fully betrayed me yet!

Mostly I'll hold off on performing normal updates until I see for myself, and hold off on performing logical updates until (again!) I see a valid proof for myself <3

For most of my comments, I'd almost be offended if I didn't say something surprising enough to get a "high interestingness, low agreement" voting response. Excluding speech acts, why even say things if your interlocutor or full audience can predict what you'll say?

And I usually don't offer full clean proofs in direct words. Anyone still pondering the text at the end, properly, shouldn't "vote to agree", right? So from my perspective... it's fine and sorta even working as intended <3

However, also, this is currently the top-voted response to me, and if William_S himself reads it I hope he answers here, if not with text then (hopefully? even better?) with a link to a response elsewhere?

((EDIT: Re-reading everything above this point, I notice that I totally left out the "basic take" that might go roughly like "Kurzweil, Altman, and Zuckerberg are right about compute hardware (not software or philosophy) being central, and there's a compute bottleneck rather than a compute overhang, so the speed of history will KEEP being about datacenter budgets and chip designs, and those happen on 6-to-18-month OODA loops that could actually fluctuate based on economic decisions, and therefore it's maybe 2026, or 2028, or 2030, or even 2032 before things pop, depending on how and when billionaires and governments decide to spend money".))

Pulling honest posteriors from people who've "seen things we wouldn't believe" gives excellent material for trying to perform aumancy... work backwards from their posteriors to possible observations, and then forwards again, toward what might actually be true :-)

I look forward to your reply!

(And regarding "food cost psychology", this is an area where I think Neo Stoic objectivity is helpful. Rich people can pick up a lot of hedons just from noticing how good their food is, and formerly poor people have a valuable opportunity to re-calibrate. There are still large differences in diet between socio-economic classes, and until all such differences are expressions of voluntary preference, and "dietary price sensitivity has basically evaporated", I won't consider the world to be post-scarcity. Each time I eat steak, I can't help but remember being asked about this in Summer Camp as a little kid, after someone asked "if my family was rich" and I didn't know... like the very first "objective calibrating response" accessible to us as children was the rate of my family's steak consumption. Having grown up in some amount of poverty, I often see "newly rich people" eating as if their health is not worth the price of slightly more expensive food, or as if their health is "not worth avoiding the terrible terrible sin of throwing food in the garbage" (which my aunt, who lived through the Great Depression in Germany, once yelled at me for doing, with great feeling, when I was a child and had eaten less than ALL the birthday cake that had been put on my plate). Cultural norms around food are fascinating and, in my opinion, are often rewarding to think about.)
