You see either something special, or nothing special.
Table 2's caption is confusing to read. I think this is because, in most of what people write about around here, cross-context fusions are positively valenced by default, and “in the context of” doesn't quite capture the scenario. Something like “misapplying the mindset of one House while working for another” (emphasis on the changed wording) would be much clearer.
I actually think that last one just sounds straightforwardly (hah) right? Note shapes express subdivisions of duration that correspond to common rhythmic structures of music, so if jazz music often uses an uneven subdivision at one level but follows the broad structure otherwise, then skewing the meaning of that level in the note shapes is bending the map toward the logical shape of the territory.
I agree, and if the author also agrees with this or something like it, I think the post would be easier to read if something like that were described in the preface.
The “???” in the row below “Not-so-local modification process” for the corporation case should perhaps be something like “Culture and process”?
Small but repeated error: you mean “Ginkgo Bioworks”, right?
I don't think it's not describable; rather, such a description being received by someone whose initial mental state is “thinking about wanting to get better at switching away from thinking” won't (by default) play the role of effective advice, because for that to work, it needs to be empowered by the recipient processing the message using a version of the very thing it's trying to describe. If you already have the pattern for that, then seeing that part described may act as a signal to flatten the chain, as it were; if you don't, then advice in the usual sense has a high chance of falling flat starting from the mental state you're processing it in, and you might need something more directly experiential (or at least more indirect and koan-like) to get the necessary start.
If I may jump in a bit: I'm not sure ‘advice’ can actually hit the right spot here, for “getting out of the car”-style reasons—in this case, something like “trying to look up ‘how to put down the instruction manual and start operating the machine’ in the instruction manual”. That is, if “receiving advice” is a “thinking”-type activity in mental state, the framing obliterates the message in transit. So in some ways the best available answer would be something like “stop waiting for an answer to that question”, but even that is inherently corruptible once put into words, per above. And while there are plausibly more detailed structures that can be communicated around things like “how do you set up life patterns that create the preconditions for that switch more consistently”, those require a lot more shared context to be useful, and it's really easy to go down a rabbit hole of them as a way of not switching to doing, if there are emotional blocks or other self-defending inertia in the way of switching. I don't know if any of that helps.
Dear people writing in the TeX-based math notation here who want to include full-word variables: putting the word in raw leads to subtly bad formatting. If you just write “cake”, this gets typeset as though it were c times a times k times e, as in $cake$ (which admittedly doesn't actually show how awkward it can get depending on the scale). It's more coherent if you put a \mathrm{} around each word to typeset it as a single upright word, like so: $\mathrm{cake}$.
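For concreteness, here is a minimal sketch of the difference in LaTeX source (the surrounding expressions are invented for illustration):

```latex
% Bare word: each letter is typeset as a separate italic variable,
% so this renders like the product c * a * k * e.
$cake + 1$

% Wrapped in \mathrm{}, the word is set upright as a single unit.
$\mathrm{cake} + 1$

% \text{} from amsmath behaves similarly and also inherits the
% surrounding text font, which matters inside sub/superscripts:
$x_{\text{cake}}$
```

The difference is most visible in long expressions or subscripts, where a bare multi-letter word picks up spurious inter-letter spacing and italics.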
Assuming this is the important distinction, I like something like “isolated”/“integrated” better than either of those.
I don't fully agree with gears, but I think it's worth thinking about. If you're talking about “proportion of people who sincerely think that way”, and if we're in the context of outreach, I doubt that matters as much as “proportion of people who will see someone else point at you and make ‘eww another AI slop spewer’ noises, then decide out of self-preservation that they'd better not say anything positive about you or reveal that they've changed their mind about anything because of you”. Also, “creatives who feel threatened by role displacement or think generative AI is morally equivalent to super-plagiarism (whether or not this is due to inaccurate mental models of how it works)” seems like an interest group that might have disproportionate reach.
But I'm also not sure how far that pans out in importance-weighting. I expect my perception of the above to be pretty biased by bubble effects, but I also think we've (especially in the USA, but with a bunch of impact elsewhere due to American cultural-feed dominance) just gone through a period where an overarching memeplex that includes that kind of thing has had massive influence, and I expect that to have a long linger time even if the wave has somewhat crested by now.
On the whole I am pretty divided about whether actively skirting around the landmines there is a good idea, though my intuition suggests some kind of mixed strategy, split between operators, would be best.