
I agree with this. I'd add that some people use "autodidact" as an insult, and others use it as a compliment, and picking one or the other valence to use reliably is sometimes a shibboleth. Sometimes you want to show off autodidactic tendencies to get good treatment from a cultural system, and sometimes you want to hide such tendencies.

Both the praise and the derogation grow out of a shared awareness that the results (and the motivational structures of the people who take the different paths) are different.

The default is for people to be "allodidacts" (or perhaps "heterodidacts"?). The basic idea is that most easily observed people are in some sense TAME, while others are FERAL.

There is a unity to coherently tamed things, which comes from their tamer. If feral things have any unity, it comes from commonalities in the world itself that they all are forced to hew to because the world they autonomously explore itself contains regularities.

A really interesting boundary case is Cosma Shalizi, who started out as (and continues some of the practices of) a galaxy-brained autodidact. Look at all those interests! Look at the breadth! What a snowflake! He either coined the term "psychoceramics" or is its central popularizer!

But then somehow, in the course of becoming a tenured professor of statistics, he ended up saying stuff like "IQ is a statistical myth" as if he were some kind of normie, afraid of the big bad wolf? (At least he did it in an interesting way... I disagree with his conclusions but learned from his long and detailed justification.)

However, nowhere in that essay does he follow up the claim with any kind of logical sociological consequences. Once you've become so nihilistic about the metaphysical reality of measurable things as to deny that "intelligence is a thing", wouldn't the intellectually honest thing be to follow that up with a call to disband all social psychology departments? They are, after all, very methodologically derivative of (and even more clearly fake than) the idea, and the purveyors of the idea, that "human intelligence" is "a thing". If you say "intelligence" isn't real, then what the hell kind of ontic status (or research funding) does "grit" deserve???

The central difference between autodidacts and allodidacts is probably an approach to "working with others (especially powerful others) in an essentially trusting way".

Autodidacts in the autodidactic mode would generally not have been able to work together to complete the full classification of all the finite simple groups. A huge number of mathematicians (so many you'd probably need a spreadsheet and a plan and flashcards to keep them all in your head) worked on that project from the ~1800s to 2012, and this is not the kind of project that autodidacts would tend to do. It's more like being one of many, many stone masons working on a beautiful (artistic!) cathedral than like being Henry Darger.

1) ...a pile of prompts/heuristics/scaffolding so disgusting and unprincipled only a team of geniuses could have created it

I chuckled out loud over this. Too real.

Also, regarding that second point, how do you plan to adjudicate the bet? It is worded as "create" here, but what can actually be seen, to settle the bet, will be the effects.

There are rumors coming out of Google including names like "AlphaCode" and "Goose" that suggest they might have already created such a thing, or be near to it. Also, one of the criticisms of Devin (and Devin's likelihood of getting better fast) was that if someone really did crack the problem then they'd just keep the cow and sell the milk. Critch's "tech company singularity" scenario comes to mind.

I wrote this earlier today. I post it here as a comment because there's already a top-level post on the same topic.

Vernor Vinge, math professor at San Diego State University, hero of the science fiction community (a fan who eventually retired from his extremely good day job to write novels), science consultant, and major influence over the entire culture of the LW community, died due to Parkinson's Disease on March 20th, 2024.

David Brin's memoriam for Vinge is much better than mine, and I encourage you to read it. Vernor and David were colleagues and friends and that is a good place to start.

In 1993, Vernor published the non-fiction essay that coined the word "Singularity".

In 1992, he published "A Fire Upon The Deep", which gave us words like "godshatter", a term so taken for granted as meaning "the limits of what a god can pack into a pile of atoms shaped like a human" that the linked essay doesn't even define it.

As late as 2005 (or as early, if you are someone who thinks the current AI hype cycle came out of nowhere) Vernor was giving speeches about the Singularity, although my memory is that the timelines had slipped a bit between 1993 and 2005, so that in mid-aughties F2F interactions he would often echo the older text in his speech and say:

I'll be surprised if this event occurs before ~~2005~~ 2012 or after ~~2030~~ 2035.

Here in March 2024, I'd say that I'd be surprised if the event is publicly and visibly known to have happened before June 2024 or after ~2029.

(Von Foerster was more specific. He put the day that the GDP of Earth would theoretically become infinite on Friday, November 13, 2026. Even to me, this seems a bit much.)

Vernor Vinge will be missed with clarity now, but he was already missed by many, including me, because his last major work was Rainbows End in 2006, and by 2014 he had mostly retreated from public engagements.

He sometimes joked that many readers missed the missing apostrophe in the title, which made "Rainbows End" a sad assertion rather than a noun phrase about the place you find treasure. Each rainbow and all rainbows: end. They don't go forever.

The last time I ever met him was at a Singularity Summit, back before SIAI changed its name to MIRI, and he didn't recognize me, which I attributed to me simply being way way less important in his life than he was in mine... but I worried back then that maybe the cause was something less comforting than my own unimportance.

In Rainbows End, the protagonist, Robert Gu, awakens from a specific semi-random form of neuro-degenerative brain disease (a subtype of Alzheimer's, not a subtype of Parkinson's) that, just before the singularity really takes off, has been cured.

(It turned out, in the novel, that the AI takeoff was quite slow and broad, so that advances in computing sprinkled "treasures" on people just before things really became unpredictable. Also, as might be the case in real life, in the story it was true that neither Alzheimer's, nor aging in general, was one disease with one cause and one cure, but a complex of things going wrong, where each thing could be fixed, one specialized fix at a time. So Robert Gu awoke to "a fully working brain" (from his unique type of Alzheimer's being fixed) and also woke up more than 50% of the way to having "aging itself" cured, and so he was in a weird patchwork state of being a sort of "elderly teenager".)

Then the protagonist headed to High School, and fell into a situation where he helped Save The World, because this was a trope-alicious way for a story to go.

But also, since Vernor was aiming to write hard science fiction, where no cheat codes exist, heading to High School after being partially reborn was almost a sociologically and medically plausible therapy for an imminent-singularity-world to try on someone half-resurrected by technology (after being partially erased by a brain disease).

It makes some sense! That way they can re-integrate with society after waking up into the new and better society that could (from their perspective) reach back in time and "retroactively save them"! :-)

It was an extremely optimistic vision, really.

In that world, medicine was progressing fast, and social systems were cohesive and caring, and most of the elderly patients in America who lucked into having something that was treatable, were treated.

I have no special insight into the artistic choices here, but it wouldn't surprise me if Vernor was writing about something close to home, already, back then.

I'm planning on re-reading that novel, but I expect it to be a bit heartbreaking in various ways.

I'll be able to see it from knowing that in 2024 Vernor passed. I'll be able to see it from learning in 2020 that the American Medical System is deeply broken (possibly irreparably so (where one is tempted to scrap it, and every durable institution causally upstream of it that still endorses what's broken, so we can start over)). I'll be able to see it in light of 2016, when History Started Going Off The Rails in the direction of dystopia. And I'll be able to see Rainbows End in light of the 2024 US Presidential Election, which would be a pointless sideshow if it is not a referendum on the Singularity.

Vernor was an optimist, and I find such optimism more and more needed, lately.

I miss him, and I miss the optimism, and my missing of him blurs into missing optimism in general.

If we want literally everyone to get a happy ending, Parkinson's Disease is just one tiny part of all the things we must fix, as part of Sir Francis Bacon's Project aimed at "the effecting of all (good) things (physically) possible".

Francis, Vernor, David, you (the reader), I (the author of this memoriam), and all the children you know, and all the children of Earth who were born in the last year, and every elderly person who has begun to suspect they know exactly how the reaper will reap them... we are all headed for the same place unless something in general is done (but really unless many specific things are done, one fix at a time...) and so, in my opinion, we'd better get moving.

Since science itself is big, there are lots of ways to help!

Fixing the world is an Olympian project, in more ways than one.

First, there is the obvious: "Citius, Altius, Fortius" is the motto of the Olympics, and human improvement and its celebration is a shared communal goal, celebrated explicitly since 2021 when the motto changed to "Citius, Altius, Fortius – Communiter" or "Faster, Higher, Stronger – Together". Human excellence will hit a limit, but it is admirable to try to push our human boundaries.

Second, every Olympics starts and ends with a literal torch literally being carried. The torch's fire is symbolically the light of Prometheus, standing for spirit, knowledge, and life. At each Olympic Games the light is carried, by hand, from place to place, across the surface of the Earth, and across the generations. From those in the past, to us in the present, and then to those in the future. Hopefully it never ends. Also, we remember how it started.

Third, the Olympics is a panhuman practice that goes beyond individuals and beyond governments and aims, if it aims for any definite thing, for the top of the mountain itself, though the top of the mountain is hidden in clouds that humans can't see past, and dangerous to approach. Maybe some of us ascend, but even if not, we can imagine that the Olympians see our striving and admire it and offer us whatever help is truly helpful.

The last substantive talk I ever heard from Vernor was in a classroom on the SDSU campus in roughly 2009, with a bit over a dozen of us in the audience. He talked about trying to see to and through the Singularity, and said he had lately become more interested in fantasy tropes that might be amenable to a "hard science fiction" treatment, like demonology (as a proxy for economics?) or some such. He thought that a key thing would be telling the good entities apart from the bad ones. Normally, in theology, this is treated as nearly impossible. Sometimes you get "by their fruits ye shall know them", but that doesn't help prospectively. Some programmers nowadays advocate building the code from scratch, to do what it says on the tin, and to have the label on the tin say "this is good". In most religious contexts, you hear none of these proposals, but instead hear about leaps of faith and so on.

Vernor suggested a principle: The bad beings nearly always optimize for engagement, for pulling you ever deeper into their influence. They want to make themselves more firmly a part of your OODA loop. The good ones send you out, away from themselves in an open ended way, but better than before.

Vernor back then didn't cite the Olympics, but as I think about torches being passed, and remember his advice, I still see very little wrong with the idea that a key aspect of benevolence involves sending people who seek your aid away from you, such that they are faster, higher, stronger, and more able to learn and improve the world itself, according to their own vision, using power they now own.

Ceteris paribus, inculcating deepening dependence on oneself, in others, is bad. This isn't my "alignment" insight, but is something I got from Vernor.

I want the bulk of my words, here, to be about the bright light that was Vernor's natural life, and his art, and his early and helpful and hopeful vision of a future, and not about the tragedy that took him from this world.

However, I also think it would be good and right to talk about the bad thing that took Vernor from us, and how to fix it, and so I have moved the "effortful tribute part of this essay" (a lit review and update on possible future cures for Parkinson's Disease) to a separate follow-up post that will be longer and hopefully higher quality.

I apologize. I think the topic is very large, and inferential distances would best be bridged either by the fortuitous coincidence of us having studied similar things (like two multidisciplinary researchers with similar interests accidentally meeting at a conference), or else I'd have to create a non-trivially structured class full of pre-tests and post-tests and micro-lessons, to get someone from "the hodge-podge of high school math and history and biology and econ and civics and cognitive science and theology and computer science that might be in any random literate person's head... through various claims widely considered true in various fields, up to the active interdisciplinary research area where I know that I am confused as I try to figure out if X or not-X (or variations on X that are better formulated) is actually true". Sprawl of words like this is close to the best I can do with my limited public writing budget :-(

Public Choice Theory is a big field with lots and lots of nooks and crannies and in my surveys so far I have not found a good clean proof that benevolent government is impossible.

If you know of a good clean argument that benevolent government is mathematically impossible, it would alleviate a giant hole in my current knowledge, and help me resolve quite a few planning loops that are currently open. I would appreciate knowing the truth here for really real.

Broadly speaking, I'm pretty sure most governments over the last 10,000 years have been basically net-Evil slave empires, but the question here is sorta like: maybe this is because that's how any "government shaped economic arrangement" mathematically must be, or maybe this is because of some contingent fact that just happened to be true in general in the past...

...like most people over the last 10,000 years were illiterate savages and they didn't know any better, and that might explain the relatively "homogenously evil" character of historical governments and the way that government variation seems to be restricted to a small range of being "slightly more evil to slightly less evil".

Or perhaps the problem is that all of human history has been human history, and there has never been an AI dictator nor AI general nor AI pope nor AI mega celebrity nor AI CEO. Not once. Not ever. And so maybe if that changed then we could "buck the trend line of generalized evil" in the future? A single inhumanly saintlike immortal leader might be all that it takes!

My hope is: despite the empirical truth that governments are evil in general, perhaps this evil has been for contingent reasons (maybe many contingent reasons (like there might be 20 independent causes of a government being non-benevolent, and you have to fix every single one of them to get the benevolent result)).

So long as it is logically possible to get a win condition, I think grit is the right virtue to emphasize in the pursuit of a win condition.

It would just be nice to even have an upper bound on how much optimization pressure would be required to generate a fully benevolent government, and I currently don't even have this :-(

I grant, from my current subjective position, that it could be that it requires infinite optimization pressure... that is to say: it could be that "a benevolent government" is like "a perpetual motion machine"?

Applying grit, as a meta-programming choice applied to my own character structures, I remain forcefully hopeful that "a win condition is possible at all" despite the apparent empirical truth of some broadly Catharist summary of the evils of nearly all governments, and Darwinian evolution, and so on.

The only exceptions I'm quite certain about are the "net goodness" of sub-Dunbar social groupings among animals.

For example, a lion pride keeps a male lion around as a policy, despite the occasional mass killing of babies when a new male takes over. The cost in murdered babies is probably "worth it on net" compared to alternative policies where males are systematically driven out of a pride when they commit crimes, or females don't even congregate into social groups.

Each pride is like a little country, and evolution would probably eliminate prides from the lion behavioral repertoire if it wasn't net useful, so this is a sort of an existence proof of a limited and tiny government that is "clearly imperfect, but probably net good".

((

In that case, of course, the utility function evolution has built these "emergent lion governments" to optimize for is simply "procreation". Maybe that must be the utility function? Maybe you can't add art or happiness or the-self-actualization-of-novel-persons-in-a-vibrant-community to that utility function and still have it work?? If someone proved it for real and got an "only one possible utility function"-result, it would fulfill some quite bleak lower level sorts of Wattsian predictions. And I can't currently rigorously rule out this concern. So... yeah. Hopefully there can be benevolent governments AND these governments will have some budgetary discretion around preserving "politically useless but humanistically nice things"?

))

But in general, from beginnings like this small argument in favor of "lion government being net positive", I think that it might be possible to generate a sort of "inductive proof".

1. Base case: "Simple governments can be worth even non-trivial costs (like ~5% of babies murdered on average, in waves of murderous purges (or whatever the net-tolerable taxation process of the government looks like))" and also...

2. Inductive step (if N, then N+1): "When adding some social complexity to a 'net worth it government' (longer time rollout before deciding?) (more members in larger groups?) (deeper plies of tactical reasoning at each juncture by each agent?) the WORTH-KEEPING-IT property itself can be reliably preserved, arbitrarily, forever, using only scale-free organizing principles".
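The shape of the hoped-for induction could be loosely formalized (my notation, nothing proven) as:

```latex
% W(G) = the net "worth-keeping-it" value of a government G
% S    = a scale-free operator that adds social complexity to G
% Base case: some minimal government (e.g. a lion pride) is net positive
W(G_1) > 0
% Inductive step: scaling preserves net positivity
\forall n \in \mathbb{N}: \quad W(G_n) > 0 \;\Rightarrow\; W(S(G_n)) > 0
% Conclusion: every government in the chain G_{n+1} = S(G_n) is net positive
\forall n \in \mathbb{N}: \quad W(G_n) > 0
```

The load-bearing (and unproven) part is, of course, the inductive step: that some scale-free organizing principle exists at all.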

So I would say that's close to my current best argument for hope.

If we can start with something minimally net positive, and scale it up forever, getting better and better at including more and more concerns in fair ways, then... huzzah!

And that's why grit seems like "not an insane thing to apply" to the pursuit of a win condition where a benevolent government could exist for all of Earth.

I just don't have the details of that proof, nor the anthropological nor ethological nor historical data at hand :-(

The strong contrasting claim would be: maybe there is an upper bound. Maybe small packs of animals (or small groups of humans, or whatever) are the limit for some reason? Maybe there are strong constraints implying definite finitudes that limit the degree to which "things can be systematically Good"?

Maybe singletons can't exist indefinitely. Maybe there will always be civil wars, always be predation, always be fraud, always be abortion, always be infanticide, always be murder, always be misleading advertising, always be cannibalism, always be agents coherently and successfully pursuing unfair allocations outside of safely limited finite games... Maybe there will always be evil, woven into the very structure of governments and social processes, as has been the case since the beginning of human history.

Maybe it is like that because it MUST be like that. Maybe it's like that because of math. Maybe it is like that across the entire Tegmark IV multiverse: maybe "if persons in groups, then net evil prevails"?

I have two sketches for a proof that this might be true, because it is responsible and productive to keep sloshing back and forth between "cognitive extremes" (best and worst planning cases, true and false hypotheses, etc.) that are justified by the data and by the ongoing attempt to reconcile the data.

Procedure: Try to prove X, then try to prove not-X, and then maybe spend some time considering Goedel and Turing with respect to X. Eventually some X-related-conclusion will be produced! :-)

I think I'd prefer not to talk too much about the proof sketches for the universal inevitability of evil among men.

I might be wrong about them, but also it might convince some in the audience, and that seems like it could be an infohazard? Maybe? And this response is already too large <3

But if anyone already has a proof of the inevitability of evil government, then I'd really appreciate them letting me know that they have one (possibly in private) because I'm non-trivially likely to find the proof eventually anyway, if such proofs exist to be found, and I promise to pay you at least $1000 for the proof, if proof you have. (Offer only good to the first such person. My budget is also finite.)

I wrote 1843 words in response, but it was a bad essay.

This is a from-scratch second draft focused on linking the specifics of the FDA to the thing I actually care about, which is the platonic form of the Good, and its manifestation in the actual world.

The problem is that I'm basically an Albigensian, or Cathar, or Manichaean, in that I believe that there is a logically coherent thing called Goodness and that it is mostly not physically realized in our world and our world's history.

Most governments are very far from a "Good shape", and one of the ways that they are far from this shape is that they actively resist being put into a Good shape.

The US in 1820 was very unusually good compared to most historically available comparison objects, but that's not saying very much, since most governments, in general, are conspiracies of powerful evil men collaborating to fight with each other marginally less than they otherwise would in the absence of their traditional conflict-minimization procedures, thus forming a localized cartel that runs a regional protection racket.

The FDA is thus a locally insoluble instance of a much much larger problem.

From December 2019 to February 2022, the nearly universal failure of most governments to adequately handle the covid crisis made the "generalized evil-or-incompetent state" of nearly all worldly governments salient to the common person.

In that period, by explaining in detail how the FDA (and NIH and OSHA and CDC and so on) contributed to the catastrophe, there was a teachable moment regarding the general tragedy facing the general world.

The general problem can be explained in several ways, but one way to explain it is that neither Putin nor Hamas are that different from most governments.

They are different in magnitude and direction... they are different from other governments in who specifically they officially treat as an outgroup, and how strong they are. (All inner parties are inner parties, however.)

Since Putin and Hamas clearly would hurt you and me if they could do so profitably, but since they also obviously can't hurt you and me, it is reasonably safe for you and me to talk about "how Putin and Hamas would be overthrown and replaced with non-Bad governance for their respective communities, and how this would be Good".

From a distance, we can see that Putin is preying on the mothers and families and children of Russia, and we can see that Hamas is preying on the mothers and families and children of Palestine.

Basically, my argument is that every government is currently preying upon every group of people they rule, rather than serving those people, on net.

I'm opposed to death, I'm opposed to taxes, and I'm opposed to the FDA because the FDA is a sort of "tax" (regulations are a behavioral tax) that produces "death" (the lack of medical innovation unto a cure for death).

These are all similar and linked to me. They are vast nearly insoluble tragedies that almost no one is even willing to look at clearly and say "I cannot personally solve this right now, but if I could solve it then it would be worth solving."

Not that there aren't solutions! Logically, we haven't ruled out solutions in full generality in public discussions yet!

I'm pretty sure (though not 100%) that "science doesn't know for sure" that "benevolent government" is literally mathematically impossible. So I want to work on that! <3

However... in Palestine they don't talk much in public about how to fix the problem that "Hamas exists in the way that it does" and in Russia they don't talk much in public about how to fix that "Putin exists in the way that he does" and in China they don't talk much in public about how to fix that "the CCP exists in the way that it does", and so on...

The US, luckily, still has a modicum of "free speech" and so I'm allowed to say "All of our presidents are and have been basically evil" and I'm allowed to say "FDA delenda est" and I'm allowed to say "The Constitution legally enshrines legalized slavery for some, and that is bad, and until it changes we in the US should admit that the US is pretty darn evil. Our median voter functionally endorses slavery, and so our median voter is functionally a moral monster, and if we have any moral leaders then they are the kind of moral leader who will serve evil voters IN SPITE of the obvious evils."

I don't usually bring up "that the FDA is evil" very much anymore.

Covid is old news. The common man is forgetting and the zeitgeist has moved on.

Lately I've been falling back to the much broader and simpler idea that the US Constitution should be amended to simply remove the part of the 13th amendment that literally legalizes literal slavery.

This seems like a cleaner thing, that could easily fit within the five word limit.

And perhaps, after decades of legalistic struggle, the US could change this one bad law to finally make slavery fully illegal?

But there are millions of bad laws.

Personally, I think the entire concept of government should be rederived from first principles from scratch and rebooted, as a sort of "backup fallback government" for the entire planet, with AI and blockshit, so that all the old governments would still exist, like the way there are still torture machines in museums of torture, but we just wouldn't use any of the old governments anymore.

There's a logically possible objection from the other direction, saying that government is necessarily evil and there just shouldn't be one. I disagree with this because good institutions are incredibly important to good outcomes, empirically, and also the consent of the governed seems like a valid formula. I'm an archist and not an anarchist.

But I'd aim for a state of affairs where instead of using the old governments, we would use things like a Justice API, and Local Barter Points, and a Council of DACs, and a Polyhive Senate Of Self Defense, and Open Source Parliamentarians (AIs built to represent humans within an Open Source Governance framework like in the backstory of Lady Of Mazes), and other weird new things?

Then at some point I'd expect that if most people on Earth looked at their local violence monopoly and had the thought "hey, I'm just not using this anymore" it would lead to waves, in various places, and due to various crises, of whole regions of Earth upgrading their subscriptions to the new system (maybe taking some oaths of mutual defense and signing up for a few new DACs) and then... we'd have something much much better without the drawbacks of the old stuff.

If such "fallback governance systems" had been designed and built in 2019, then I think covid would have caused such a natural phase transition for many countries, when previous systems had visibly and clearly lost the global mandate of heaven.

And if or when such phase transitions occur, there would still be a question of whether the old system will continue to try to prey on the people voluntarily switching over to a new and better system...

And I think it is clear to me and most of my readers that no such reform plan is within any Overton Window in sight...

...and maybe you therefore don't think THIS could be a realistic way to make the FDA not exist in 2026 or 2028 or 2033 (or any other near term date)... 

...but a cautious first principles reboot of the global order to address the numerous and obvious failures of the old order is the best I can currently come up with on BOTH the (1) realism and (2) goodness axes.

And while possible replacement system(s) for the government are still being designed, the only people I think it would be worth working with on this project are people who can independently notice that the FDA is evil, and independently notice that slavery is bad and also legal in the US (and also hopefully they can do math and have security mindset).

So, I still endorse "FDA delenda est" but I don't think there's a lot of point to beating that dead horse, or talking about the precise logistics of how to move deck chairs on the titanic around such that the FDA could be doing slightly less evil things while the ship sinks.

The ship is sinking. The water is rising. Be Noah. Build new ships. And don't bother adding "an FDA" to your new ship. That part is surplus to requirements.

The video you linked to was really interesting! I got TWO big lessons from it!

First, I learned something about ambiguity of design intent in designed environments from going "from my subjective framing to the objective claims about the scene" (where I misunderstood the prompt and got a large list of wrong things and didn't notice a single change, and later realized that almost all the changes preserved the feature of misdesign that had been salient for me).

Second, I learned a lot from "trying to use the video's frame to create a subjectivity that could represent what really happened in a subjectively coherent trace" by watching over and over while doing gestalt awareness meditation... and failing at the meditation's aims... until I stopped to reverse engineer a "theory of what happened" into a "method of observation".

I shall unpack both of these a bit more.

Initially, the instructions were

...spot the items in the room that are a little "out of place".

On my very first watch through, I was proud of having noticed all the things not in parentheses: (1) the desk in the left corner (where the ball disappears, it turns out) is horribly designed and has a bent leg, (2) the ugly ceiling tiles (where two tiles entirely disappear) violate symmetry because one of the four lights has a broken cover with the reflectors showing, (3) the couch is untidy with cloth lying over the edge (what was hanging over changed), (4) the desk is messy (but the mess lost a wine bottle), (5) the coffee table has objects VERY CLOSE to the edge, where they will be very easy to bump off and cause a tragedy if someone bumps them while moving with normal lack of caution (though the cup changed from black to white and the candle changed into a bowl).

As a proud autist, I'm happy to report that these are all flaws. I followed the instructions reasonably and collected a set of things that I could have been instructed to have collected! <3

All the flaws I found persisted from the beginning to the end, and they basically count as "things out of place" in the normal reading of that concept (like to an ergonomic engineer, or a housekeeper, or whatever).

It would be interesting to design another stimulus like this video, and have the room be absolutely tidy, with flawless design and a recent cleaning and proper maintenance of the ceiling, and see if the effect replicates "as much" despite there being no "latent conceptual distraction" of a reasonable set of "room flaws" to find that had been paired with ambiguity about "what counts as a flaw" in the instructions.

On my second and third watches, I knew what changes to look for but I had not yet read the video title to understand that gradual change blindness was the key concept.

So I just queued up the set of things to be "sensitive to motion about" in my subjective attentiveness filters and waited for "the feeling of something in jerky motion, for me to resist doing an eye saccade towards" to hit my gestalt scene sense... and I got a couple of those!

However, the place they triggered was in the frame-to-frame jumps in the dithering of the "greyscale" of boring parts of the scene that weren't even "officially changing"!

Dithering is, in some sense, a cryptographic hash of a scene, and so my treating "something that jumps" as "something worthy of salience" was only detecting jumps in places that were not carefully controlled by the stimulus designers!

Ultimately, the second thing I learned was how to apply a top-down expectation of change into my observing loop.

The thing that finally got me to this place was starting with a list of things that I knew had changed, and then running a rough branch-and-bound search: mousing over the timeline, looking at the thumbnails, and seeking ANY of the changes showing up as a "jerky pop" as one thing turned into the next.

This is what proved to me, visually, that no such pops existed. Logically, then: the changes were nearly continuous.

The only "pop(!) that looks like a change" I could find came from scrubbing very fast; only the sped-up video finally gave me things that looked like a fade.

What I realized is that to get a subjective sense of what was really happening in real time, I had to buy into the idea that "motion detection will fail me" and I had to make an explicit list of features of "where the scene started" and "what the designers of the scene's shift planned to happen over the course of the shift" and keep both concepts in mind actively during all perceptual acts.

Then, moment to moment, I could flick my attention around to extract, with each saccade of my eyes, a momentary impression like:

  1. "the dithering flickered and the cup on the edge of coffee table is 10% of the way from white to black (which is part of the plan)"...
  2. "the dithering flickered and the exercise ball is 20% disappeared (which is part of the plan)"...
  3. "more flickering and now the candle/bowl on the coffee table is 30% shapeshifted (which is part of the plan)"...
  4. "the portraits on the shelves are 40% moved from low to high (which is part of the plan)"... and so on.

Like here's "the untidy couch object at a fade of ~60% white, ~40% blue", which can be seen and fitted into the expectation of the overall shift that is being consciously perpetrated against your perceptual systems by the stimulus designers:

In the frames before and after, it is slightly more or less faded, and your visual motion detectors will never see it POP(!) with a feeling of "it's like a frog jumped, or a cat's tail writhed, or a bird flew by".

It will always just seem like a locally invalid way for things to be, because it isn't something your inner mental physics simulator could ever generate as a thing that physics does... but also over time the video effect will have one plausible thing slowly be more and more ghostly until it is gone. From valid, to invalid but seemingly static, to valid again.

I think it was critical for this effect that the whole video was 53 seconds long. Auditory working memory is often about 4 seconds long, and I bet video working memory is similar.

The critical thing in making these kinds of "change-blindness-demonstrating stimuli" is probably to make the change "feel invisible" by maintaining a simple and reasonable "invariant over time".

You would want no frame-to-frame visual deltas that are both (1) easily perceptible in a side-by-side comparison (due to low-level logarithmic sensitivity processes that science has known about since ~1860) and (2) closer than about 5 seconds apart in time, which is close enough that the brain could hold detailed "before" and "after" images at once. With enough intervening frames, the visual change buffer overflows before any detector-of-change classifier actually fires and triggers a new "temporary subjective consensus block" in the brain's overall gestalt consensus summary of "the scene".
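To make this concrete, here is a toy numeric sketch (my own illustration, not anything from the video's makers; the frame rate and the "just noticeable difference" threshold are assumptions): a linear fade spread across a 53-second clip changes each frame by far less than a perceptible step.

```python
# A toy sketch of change-blindness stimulus design: fade one pixel value
# from black (0) to white (255) across the whole clip so that each
# frame-to-frame delta stays far below a rough "just noticeable
# difference" (the JND value here is an assumption for illustration).

def crossfade(start, end, n_frames):
    """Linearly interpolate start -> end across n_frames values."""
    return [start + (end - start) * i / (n_frames - 1) for i in range(n_frames)]

FPS = 24                   # assumed frame rate
DURATION_S = 53            # the video's length, per the discussion above
JND = 3                    # assumed just-noticeable per-frame delta (0-255 scale)

frames = crossfade(0, 255, FPS * DURATION_S)
deltas = [abs(b - a) for a, b in zip(frames, frames[1:])]

# A full 0 -> 255 fade over 53 s at 24 fps moves only ~0.2 per frame,
# so no single transition ever "pops" above the assumed threshold.
assert max(deltas) < JND
```

Note that scrubbing the timeline fast is equivalent to sampling every Nth frame, which multiplies the per-step delta by N and pushes it back over threshold, matching the observation above that the fade only became visible in the sped-up video.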

...

So that's really interesting! I can instantly imagine ways to transpose this tactic into PR, and management, and politics, and finance, and other domains where the goal is explicitly to gain benefits from hurting people who might have naively and implicitly trusted you to not hurt them through deception.

I bet it will also help with the design of wildly more effective slow missiles.

...

Humans are so fucked. The future is probably going to feel like Blindsight unless our AI overlords love us and want our subjective reality to make sense despite our limitations. "Daily experience as an empathically designed UI for the disabled"?

...

Defensively speaking (if there even is any possible defense and we're not just totally doomed), maybe the key principle for the design of systems of defense against the likely attacks would involve archiving obsessively and running offline change detectors on exponentially larger timescales?

It reminds me a bit of Dune "shield fighting": slow on the offense, fast on the defense... but for sense-making?

This bit might be somewhat true but I think that it actually radically understates the catastrophic harms that the FDA caused.

Every week the Covid-19 vaccines were delayed, for example, cost at least four thousand lives. Pfizer sent their final Phase 3 data to the FDA on November 20th but was not approved until 3 weeks later on December 11th. There were successful Phase I/II human trials and successful primate-challenge trials 5 months earlier in July. Billions of doses of the vaccine were ordered by September. Every week, thousands of people died while the FDA waited for more information even after we were confident that the vaccine would not hurt anybody and was likely to prevent death. The extra information that the FDA waited months to get was not worth the tens of thousands of lives it cost. Scaling back the FDA’s mandatory authority to safety and ingredient testing would correct for this deadly bias.
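The arithmetic behind those delay costs can be checked in a few lines (the 4,000-deaths-per-week figure is taken from the paragraph above as an assumption, and the week counts are approximate):

```python
# Rough check of the delay-cost arithmetic above. The deaths-per-week
# figure and the week counts are assumptions taken from the text.
DEATHS_PER_WEEK = 4_000

FINAL_REVIEW_WEEKS = 3      # Nov 20 Phase 3 data -> Dec 11 authorization
EARLY_DATA_WEEKS = 20       # ~July Phase I/II + primate data -> Dec 11

final_review_cost = FINAL_REVIEW_WEEKS * DEATHS_PER_WEEK   # 12,000
early_data_cost = EARLY_DATA_WEEKS * DEATHS_PER_WEEK       # 80,000

print(final_review_cost, early_data_cost)  # 12000 80000
```

Even the narrow three-week review window implies deaths in the five figures; the longer window back to the July data is what gets you well into the "tens of thousands".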

Something else that the FDA regulated was covid testing. By January of 2020 there were already tests for covid in many countries. I could have made one myself, and by February of 2020 I was pricing PCR machines and considering setting up "drive through covid testing" without any regulatory oversight.

Part of my "go / nogo" calculus was that I expected to get personally financially destroyed by the FDA for totally ignoring their oversight processes, but I was imagining that either (1) being destroyed by evil would be worth the good it does or (2) people would begin to realize how evil the FDA is in general and I'd be saved by some equivalent of jury nullification.

If the FAA and CDC and other authorities relevant to ports of entry had had millions of covid tests in US airports in January of 2020, then there is a possibility that nearly all covid deaths would have been prevented, by stopping community spread before covid even got into the US.

One of several reasons nothing like this was even conceivably possible is that the FDA made all covid tests (except maybe 50 per day done by hand by a couple of scientists in Atlanta, Georgia) illegal all the way up to March or April of 2020 or so (they started authorizing things irregularly after the panic started, when community spread was undeniable, but not before).

The US was proven to basically entirely lack the CONCEPT of "actual public health", where actual public health unpacks into a centralized and strategically coherent system for preventing the entry and spread of communicable diseases in the US.

The FDA is a critical part of the prevention of actual public health for every novel disease that has come along since 1962, and for everything that will come along, unless they "do correct policy by hand" by turning off their stupid policies every time those policies become OBVIOUSLY stupid in a new emergency.

If Ebola had gotten into the US in the past, the FDA would have prevented large volumes of new tests for that too. This is a fully general problem. Until we fix it structurally, we will be at the mercy of either (1) the natural evolution of new diseases or (2) the creation of new diseases by madmen in virology labs.

The US government is catastrophically stupid-to-the-point-of-evil here. It has not banned gain of function research outside of BSL5s. It has not set up a real public health system. It systematically misregulates medicine with the goal of suppressing new medicine.

Right now the US has a godawful mix of public/private "collaboration", so that we have all the charity and kindness of capitalism mixed with all the flexibility and efficiency of the Soviet empire.

We literally don't even have a private medical industry OR a public medical system and BOTH are critical for life and health.

This "worst half of each" combo we have right now should be lit on fire and two better systems should be built on their ashes.

The existing FDA is THE KEYSTONE of this vast edifice of corrupt government-based evil. Any presidential candidate will get my vote if they promise to completely reboot the entire US medical system in the direction of (1) freedom in privatized medicine and (2) huge increases in state capacity to detect and prevent terrible new diseases so that we also have good public medicine.

The CDC should go back to being part of the military. OSHA should stop regulating medical workplaces. The NIH and the residual parts of the FDA that aren't stupid-unto-evil (and I grant that the FDA isn't literally 100% evil because nothing is 100% except in math) should be put under the CDC. The efficacy mandate of the FDA should be removed. The safety mandate of the FDA should ALSO be removed. The right way to manage safety concerns for brand new drugs is tort reform for medical malpractice. Grownups OWN THEIR OWN RISK.

There should be a real right to try for people with terrible illnesses with no known reliably safe cures, who want to roll the dice and try something new that has never been tried before. Doctors in clinical practice should be able to get a signature on a risk acceptance contract, and then do crazy new medicine, and be protected in that from lawsuits.

The time to do "FDA-like oversight of the first 20 people to try a new therapy" is not PROSPECTIVELY for literally EVERY medicine. It should be done in retrospect, when it failed, and the result was sad, and the patient thinks that the sadness was not the sort of sadness they were warned about in the contract they signed when they accepted the risks of trying something new.

The existing medical system has SO MANY bad ideas and so little coherent planning about how to do actual good that a reboot with new people in a new organizational shape is strongly indicated.

The existing FDA is THE KEYSTONE of this vast edifice of corrupt government-based evil.

FDA delenda est.


I do NOT know that "the subjective feeling of being right" is an adequate approach to purge all error.

Also, I think that hypotheses are often wrong, but they motivate new careful systematic observation, and that this "useful wrongness" is often a core part of a larger OODA loop of guessing and checking ideas in the course of learning and discovery.

My claim is that "the subjective feeling of being right" is a tool whose absence works to disqualify at least some wrongnesses as "maybe true, maybe false, but not confidently and clearly known to be true in that way that feels very very hard to get wrong".

Prime numbers fall out of simple definitions, and I know in my bones that five is prime.

There are very few things that I know with as much certainty as this, but I'm pretty sure that being vividly and reliably shown to be wrong about this would require me to rebuild my metaphysics and epistemics in radical ways. I've been wrong a lot, but the things I was wrong about were not like my mental state(s) around "5 is prime".

And in science, seeking reliable generalities about the physical world, there's another sort of qualitative difference that is similar. For example, I grew up in northern California, and I've seen so many Sequoia sempervirens that I can often "just look" and "simply know" that that is the kind of tree I'm seeing.

If I visit other biomes, the feeling of "looking at a forest and NOT knowing the names of >80% of the plants I can see" is kind of pleasantly disorienting... there is so much to learn in other biomes!

(I've only ever seen one Metasequoia glyptostroboides that was planted as a specimen at the entrance to a park, and probably can't recognize them, but my understanding is that they just don't look like a coastal redwood or even grow very well where coastal redwoods naturally grow. My confidence for Sequoiadendron giganteum is in between. There could hypothetically be a fourth kind of redwood that is rare. Or it might be that half the coastal redwoods I "very confidently recognize" are male and half are female in some weird way (or maybe 10% have even weirder polyploid status than you'd naively expect?) and I just can't see the subtle distinctions (yet)? With science and the material world, in my experience, I simply can't achieve the kind of subjective feeling of confident correctness that exists in math.)

In general, subjectively, for me, "random ass guesses" (even the ones that turn out right (but by random chance you'd expect them to mostly be wrong)) feel very very different from coherently-justified, well-understood, broadly-empirically-supported, central, contextualized, confident, "correct" conclusions because they lack a subjective feeling of "confidence".

And within domains where I (and presumably other people?) are basically confident, I claim that there's a distinct feeling which shows up in one's aversions to observation or contemplation about things at the edge of awareness. This is less reliable, and attaching the feelings to Bayesian credence levels is challenging and I don't know how to teach it, and I do it imperfectly myself...

...but (1) without subjective awareness of confidence and (2) the ability to notice aversion (or lack thereof) to tangential and potentially relevant evidence...

...I wouldn't say that epistemic progress is impossible. Helicopters, peregrine falcons, F-16s, and bees show that there are many ways to fly.

But I am saying that if I had these subjective senses of confidence and confusion lesioned from my brain, I'd expect to be, mentally, a bit like a "bee with only one wing" and not expect to be able to make very much intellectual progress. I think I'd have a lot of difficulty learning math, much less being able to tutor the parts of math I'm confident about.

(I'm not sure if I'd be able to notice the lesion or not. It is an interesting question whether or how such things are neurologically organized, and whether modular parts of the brain are "relevant to declarative/verbal/measurable epistemic performance" in coherent or redundant or complementary ways. I don't know how to lesion brains in the way I propose, and maybe it isn't even possible, except as a low resolution thought experiment?)

In summary, I don't think "feeling the subjective difference between believing something true and believing something false" is necessary or sufficient for flawless epistemology, just that it is damn useful, and not something I'd want to do without.

This bit irked me because it is inconsistent with a foundational way of checking and improving my brain that might be enough by itself to recover the whole of the art:

Being wrong feels exactly like being right.

This might be true in some specific situation where a sort of Epistemic Potemkin Village is being constructed for you with the goal of making it true... but otherwise, with high reliability, I think it is wrong.

Being confident feels very similar in both cases, but being confidently right enables you to predict things at the edge of your perceptions and keep "guessing right" until you kinda just get bored, whereas being confidently wrong feels different at the edges of your perceptions, with blindness there, or an aversion to looking, or a lack of curiosity, or a certainty that it is neither interesting nor important nor good.

If you go confidently forth in an area where you are wrong, you feel surprise over and over and over (unless something is watching your mind and creating what you expect in each place you look). If you're wrong about something, you either go there and get surprised, or "just feel" like not going there, or something is generating the thing you're exploring.

I think this is part of how it is possible to be genre-savvy. In fiction, there IS an optimization process that IS laying out a world, with surprises all queued up "as if you had been wrong about an objective world that existed by accident, with all correlations caused by accident and physics iterated over time". Once you're genre-savvy, you've learned to "see past the so-called surprises to the creative optimizing author of those surprises".

There are probably theorems lurking here (not that I've seen them in Wikipedia and checked them for myself, but it makes sense) that sort of invert Aumann, and show that if the Author ever makes non-trivial choices, then an ideal Bayesian reasoner will eventually catch on.
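A minimal sketch of that intuition (my own toy model, not a worked-out theorem; the bias value and stream length are arbitrary assumptions): if an "Author" biases a binary stream even slightly away from a fair coin, the log likelihood ratio in favor of "there is an Author" grows roughly linearly with observations, so a Bayesian observer eventually becomes genre-savvy.

```python
import math
import random

# Toy "inverse Aumann" sketch: compare H0 = "blind fair coin" with
# H1 = "an Author who prefers 1s with probability AUTHOR_BIAS".
AUTHOR_BIAS = 0.7  # assumed non-trivial authorial preference

def log_odds(bits, p1=AUTHOR_BIAS):
    """Log posterior odds of H1 over H0 after seeing bits (uniform prior)."""
    total = 0.0
    for b in bits:
        p_under_h1 = p1 if b == 1 else 1 - p1
        total += math.log(p_under_h1 / 0.5)  # per-bit likelihood ratio
    return total

random.seed(0)
authored = [1 if random.random() < AUTHOR_BIAS else 0 for _ in range(500)]
fair = [1 if random.random() < 0.5 else 0 for _ in range(500)]

# Odds drift strongly positive on the authored stream and strongly
# negative on the genuinely random one (a fair coin is itself evidence
# against the Author hypothesis).
print(log_odds(authored), log_odds(fair))
```

This is the same mechanism as becoming genre-savvy about fiction: each "surprise" that fits the Author's preferences is one more small likelihood-ratio update toward "someone is optimizing this".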

If creationism was true, and our demiurge had done a big complicated thing, then eventually "doing physics" and "becoming theologically genre-savvy" would be the SAME thing.

Our failure to become theologically genre-savvy this way (while hypotheses that suppose "blind mechanism" work very well) is evidence that either (1) naive creationism is false, (2) we haven't studied physics long enough, or (3) we have a demiurge and it is a half-evil fuckhead who aims to subvert the efforts of "genre-savvy scientists" by exploiting the imperfections of our ability to update on evidence.

(A fourth hypothesis is: the "real" god (OntoGod?) is something like "math itself". Then "math" conceives of literally every universe as a logically possible data structure, including our entire spacetime and so on, oftentimes almost by accident, like how our universe is accidentally simulated as a side effect every time anyone anywhere in the multiverse runs Solomonoff Induction on a big enough computer. Sadly, this is basically just a new way of talking that is maybe a bit more rigorous than older ways of talking, at the cost of being unintelligible to most people. It doesn't help you predict coin flips or know the melting point of water any more precisely, so like: what's the point?)

But anyway... it all starts with "being confidently wrong feels different (out at the edges, where aversion and confusion can lurk) than being confidently right". If that were false, then we couldn't do math... but we can do math, so yay for that! <3
