Indeed. I imagine it'd have to happen in four steps:
As you say, investigate each cognitive function independently. They won't show the kind of independence psychometrics prefers, since there are overlaps between the different functions, but it'd be a good start.
If that one proves robust, then investigate the axes between the introverted and extraverted modes of the four basic functions. My hunch is these four axes would take the form of four bimodal distributions.
Then, if that one also proves robust, investigate the existence and distribution of stable stacks. There are 40,320 possible stacks considering all permutations of all eight functions (see the sketch after this list). My hunch is we'd find a very long-tailed distribution, with a small number of common stacks covering something like 98% of people. Maybe those are the MBTI 16, maybe not.
And then, finally, if the "stacks exist" hypothesis proves valid, study them over long periods of time to observe whether they change, and how.
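To make the combinatorics of the third step concrete, here's a minimal sketch, purely my own illustration in Python and not any established instrument, that counts all orderings of the eight functions and checks that the Myers-Briggs stacking rules discussed elsewhere in this thread leave exactly 16 "standard" tops of stack:

```python
# Toy sketch (my own illustration): count all orderings of the eight functions,
# then keep only those whose top four follow the Myers-Briggs stacking rules
# described elsewhere in this thread.
from itertools import permutations
from math import factorial

FUNCTIONS = ["Ti", "Te", "Fi", "Fe", "Si", "Se", "Ni", "Ne"]

def opposite(f):
    # Opposite function in the opposite attitude, e.g. Ti -> Fe, Ne -> Si.
    partner = {"T": "F", "F": "T", "S": "N", "N": "S"}[f[0]]
    return partner + ("e" if f[1] == "i" else "i")

def is_standard_top4(dom, aux, tert, inf):
    judging = {"T", "F"}
    return (
        (dom[0] in judging) != (aux[0] in judging)  # auxiliary crosses the T/F vs S/N divide...
        and dom[1] != aux[1]                        # ...in the opposite attitude
        and tert == opposite(aux)                   # tertiary mirrors the auxiliary
        and inf == opposite(dom)                    # inferior mirrors the dominant
    )

all_orderings = list(permutations(FUNCTIONS))
standard_top4 = {p[:4] for p in all_orderings if is_standard_top4(*p[:4])}

print(factorial(8), len(all_orderings))  # 40320 40320 possible full orderings
print(len(standard_top4))                # 16 standard stacks (identified by their top four)
```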
It then scores the answers across 4 axes
I've been reading about the MBTI for a while. Not in extreme depth, but also not via the simplifications provided by corporate heads. Deeply enough, though, to understand the basics of the Jungian psychology on which the MBTI is based. So what I will say is likely going to differ significantly from what you learned in this course.
So, the most important thing is, the (real) MBTI four letters do not represent extremes on four different axes. That they do is one such simplification.
The core of the Jungian hypothesis on personality is that there are eight distinct cognitive functions, that is, eight basic ways the mind processes and organizes external and internal information.
These eight arise from four basic functions forming two opposite pairs, Sensing vs Intuition and Thinking vs Feeling, each of which may operate in either an Extraverted or an Introverted mode. Notice that it isn't that Introversion and Extraversion form an axis, but rather that, say, "Introverted Thinking" and "Extraverted Thinking" form two very distinct modes of Thinking, to the point they cannot be considered the same cognitive process at all.
Jung considered every person to have all eight cognitive functions operating in them, but at very different weights, with one dominant. In his system, I'd be someone who uses Introverted Thinking as their default cognitive function almost 24/7, only varying this when needed under specific circumstances. So, for him, there were eight personality types, depending on which cognitive function is dominant in each person.
Myers and Briggs studied his work on the topic and found it incomplete. They hypothesized that specifying a single cognitive function as dominant wasn't enough to properly describe how a person functions. In their view, it was also necessary to take into account the cognitive function used secondarily. In my case, the secondary function is Extraverted Intuition.
Hence, for Myers and Briggs, my personality is defined as being primarily an Introverted Thinker, who uses Extraverted Intuition to fill the gaps where Introverted Thinking doesn't cut it. And that's it.
What are the four letters then?
They're a needlessly convoluted way to say the exact same thing.
In the MBTI system, the two letters in the middle inform what my two main cognitive functions are. Since I use Intuition and Thinking, they're "NT". But that doesn't say which of these is my main cognitive function and which is the secondary, nor which is Introverted and which is Extraverted. That's what the other two letters say. The "I" at the beginning informs that my main function, whether it's the Thinking or the Intuition, is of the Introverted type. And the final letter informs whether that "I" applies to the "N" or to the "T": it says which of the two middle functions is used in the Extraverted attitude ("J" points at the Thinking/Feeling one, "P" at the Sensing/Intuition one). In my case the fourth letter is "P", meaning the Intuition is my Extraverted function; since the first letter says my main function is Introverted, that main function must be the "T", which thus is the one the "I" affects.
Yes, that's completely nuts. It'd be much, much easier to use something like "IT/EN".
And this brings up another aspect of their system. They consider that the main and secondary cognitive functions always have opposite "-versions". Hence, specifying that my main function, Thinking, is of the Introverted type automatically implies that the secondary one, Intuition, is Extraverted.
There are a few more details. Basically, the third and fourth most used cognitive functions follow from the first two. In my case, my third and fourth most used cognitive functions would be, respectively, Introverted Sensing (the opposite of the second) and Extraverted Feeling (the opposite of the first). The other four fall behind at positions fifth to eighth. The full sequence is my so-called "cognitive stack".
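To illustrate the mechanics, here's a toy decoder, purely my own sketch of the encoding just described and not any official MBTI tooling, that turns the four letters back into the top of the cognitive stack:

```python
# Toy decoder (my own sketch): turn a four-letter type into the top four
# functions of the corresponding cognitive stack, per the rules above.
def opposite(f):
    # Opposite function in the opposite attitude, e.g. Ti -> Fe, Ne -> Si.
    partner = {"T": "F", "F": "T", "S": "N", "N": "S"}[f[0]]
    return partner + ("e" if f[1] == "i" else "i")

def stack_from_type(code):
    attitude, perceiving, judging, lifestyle = code  # e.g. "I", "N", "T", "P"
    # The last letter says which of the two middle functions is Extraverted:
    # "J" -> the judging one (T/F), "P" -> the perceiving one (S/N).
    if lifestyle == "J":
        extraverted, introverted = judging + "e", perceiving + "i"
    else:
        extraverted, introverted = perceiving + "e", judging + "i"
    # The first letter says whether the dominant is the Introverted or the
    # Extraverted of the two; the other one becomes the auxiliary.
    if attitude == "I":
        dominant, auxiliary = introverted, extraverted
    else:
        dominant, auxiliary = extraverted, introverted
    # Tertiary mirrors the auxiliary, inferior mirrors the dominant.
    return [dominant, auxiliary, opposite(auxiliary), opposite(dominant)]

print(stack_from_type("INTP"))  # ['Ti', 'Ne', 'Si', 'Fe']
```

Running it on my own type reproduces the Introverted Thinking, Extraverted Intuition, Introverted Sensing, Extraverted Feeling sequence described above.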
TL;DR then: the four letters are not axes; they're a very, very confusing way to say that I rank the eight cognitive functions Jung identified following this specific sequence of priorities. By default, most of the time, I use this one, and then the others with lower and lower priority, following that sequence. There are (presumably) 16 standard stacks, and maybe several non-standard pathological ones. And all the four MBTI letters tell you is which of the 16 cognitive stacks applies in my case.
This, fundamentally, is the reason why the MBTI doesn't correlate well, or at all, with the Big Five: the MBTI has no axes in a traditional psychometric sense. It's an ordinal hierarchy of preferred cognitive processes, not a cardinal set of values or a standard distribution.
And the easiest way, by far, to identify one's MBTI is to simply read the detailed descriptions of the eight cognitive functions. One of them almost always pops up as "yeah, that's how I think most of the time", with another popping up as "yeah, I also use this one a lot, not as much as that one, but still a lot", the other six being stuff one clearly rarely uses.
Now, is any of this scientific? I don't know. I have read many attempts at determining this, but all of them assume the four letters represent four axes that can then be psychometrically evaluated, which has absolutely nothing to do with what Jung was talking about. And I'm not aware of any psychological study on the validity, or lack thereof, of his hypothesis about the eight cognitive functions themselves (maybe there are some?), much less, assuming they're valid, of Myers and Briggs' specific assertion that they almost always come in 16 stacks (maybe they do, maybe they don't, maybe they vary over time, etc.).
For my own anecdotal case, I find Introverted Thinking coupled with Extraverted Intuition, as described by Jung, covers a lot of how I function. Not everything by far, but a lot. So it's useful. More than that, I cannot really say.
Hope this helps!
EDIT: Correction on my third and fourth functions and other minor clarifications.
I'd say this is the point at which one starts looking into current state-of-the-art psychology (and some non-scientific takes too) to begin understanding all the variability in human behavior and cognition, and which kinds of advantages and disadvantages each provides from different perspectives: the individual, the sociological, the evolutionary.
Much of that disappointment is solved by that. Some of it deepens. The overall effect is a net positive though.
Unfortunately, they aren't rational. I developed this theme a little bit more in another reply, but to put it simply, in the US GAI is being pursued by insane individuals. No rational argument can stop someone like that. And the other sides will try to protect themselves from them.
Admittedly, nuclear weapons are not a perfect analog for AI, for many reasons, but I think they're a reasonable one.
We've had extreme luck when it comes to nuclear weapons. We not only had several close calls that were de-escalated by particularly noble individuals doing the right thing, but also, back when the USSR had barely developed theirs and the US alone had a whole stockpile of warheads, we had the good luck of US leadership being somewhat moral and refusing to turn nukes into a regular weapon, which was followed by MAD forcing everyone to keep that restraint even when one side asked nicely whether it could bomb a third party. Were it not for that long sequence of good luck after good luck, we'd now be living in an annihilated world, or at the very least a post-apocalyptic one.
With this in mind, I wanted to ask out of curiosity, what % risk do you think there needs to be for annihilation to occur?
I have no idea, really. All I can infer is that it's unlikely any major power will stop trying to achieve GAI unless:
a) Either a massively severe accident caused by a misaligned not-quite-GAI-yet happens, one that by its sheer, absolute horror puts the Fear-Of-God in our civilian and military leaders for a few generations;
b) Or a long sequence of reasonably severe accidents happens, each new one worse than the last, with AI companies repeatedly and consistently failing at fixing the underlying cause, this in turn making military leaders deeply wary of deploying advanced AI systems, and civilian leaders enacting restrictions on what AI is allowed to touch.
Absent either of those, I doubt the pursuit of GAI will stop no matter what X-risk analysts say. Or at least, I myself cannot imagine any kind of argument that'd convince, say, the CPC to stop their research when those spearheading it on the other side are massively powerful nutjobs. And then, what argument could stop someone who believes in that? So neither will stop, which means GAI will happen. And then we'll need to count on luck again, this time with:
i) Either GAI going FOOM as Yudkowsky believes, but for some reason continuing to like humans enough not to turn us into computronium;
ii) Or Hanson being right and FOOM not happening, followed by:
ii.1) Either things being slow enough to "merely" lead to a or b, above;
ii.2) Or things being so immensely slow we can actually fix them.
I have no opinion on whether FOOM is or isn't likely. I've read the entire discussion and all I know is both sets of arguments sound reasonable to me.
I’m assuming that - and please correct me if I’m misinterpreting here - “extinguish” here means something along the lines of, “remove the ability to compete effectively for resources (e.g. customers or other planets)” not “literally annihilate”.
I wish that were the case, but my frame of reference is a paranoid M.A.D. mentality coupled with a Total War scenario unbounded by moral constraints, that is, all sides thinking all the other sides are X-risks to them.
In practice things tend not to get that bad most of the time, but sometimes they do, and much of military preparation concerns mitigating these perceived X-risks. The idea is that if "our side" becomes so powerful it can in fact annihilate the others, and in consequence the others submit without resisting, then "our side" may be magnanimous towards them, conditional on their continued subservience and submission; but if they resist to the point of becoming an X-risk towards us, then removing them from the equation entirely is the safest defense against the X-risk they pose.
A global consensus on stopping GAI development due to its X-risk for all life requires a prior global consensus, by all sides, that none of the other sides is an X-risk to any of them. Once everyone agrees on that, all of them together agreeing to deal with a global X-risk becomes feasible. Before that, it only happens if they all see that global X-risk as more urgent and immediate than the many local-to-them X-risks.
Unfortunately, those in positions of power won't listen. From their perspective it's simply absurd to suggest that a system that currently directly causes, at most, a few dozen induced suicide deaths per year may explode into the death of all life. They have no instinctive, gut feeling for exponential growth, so it doesn't exist for them. And even if they acknowledge there's a risk, their practical reasoning moves more along arms-race lines:
"If we stop and don't develop AGI before our geopolitical enemies because we're afraid of a tiny risk of an extinction, they will develop it regardless, then one of two things happen: either global extinction, or our extinction in our enemies' hands. Which is why we must develop it first. If it goes well, we extinguish them before they have a chance to do it to us. If it goes bad, it'd have gone bad anyway in their or our hands, so that case doesn't matter."
Which is to say they won't care until they see thousands or millions of people dying due to rogue GAIs. Then, and only then, would they start thinking in terms of maybe starting talks about perchance organizing an international meeting to perhaps agree on potential safeguards that might start being implemented after the proper committees are organized and the adequate personnel selected to begin defining...
But obviously, factory farm animals feel more pain than crickets. The question is just how much more.
This paper is far from a complete answer, but it may help:
This isn't a dichotomy. We can farm animals while making their lives reasonably comfortable. Their moments of pain would be few up until they reach slaughter age, and the slaughter itself can be made stress-free and painless.
Here in Brazil, for example, we have huge ranches where cattle move around freely. Cramming them all into a tiny area to maximize productivity at the cost of making their lives extremely uncomfortable, as in the US factory farm system, may happen here, but I'm not personally aware of it, that's how unusual it is. The US could do it the same way, as it isn't like the country lacks territory where cattle could roam freely, but since this isn't required by law, and factory farming is more profitable, free-roaming is rare there, with the end result that free-roaming meat is sold at a much higher premium than it should be.
Brazilian chickens, on the other hand, are typically crammed together just as in the US, unless one opts to buy eggs from small family-owned farms, which mostly let them roam freely.
Knowing the truth doesn't, by itself, provide human connection. In the Mormon church you had a community, people with whom you interacted and shared common ground, interests, and collective goals. When one breaks with such a community without having first established a new one, the result may be extreme loneliness.
The way to fix that is to find a new community. Many atheists and rationalists schedule periodic meetups to interact and talk in person, so depending on your need for connection that might suffice. If not, there are church-like organizations that require no profession of faith and welcome atheists, which is particularly effective if one was raised with church attendance and misses it. In the US, Unitarian Universalism is one of the oldest movements along those lines, keeping the form of Protestant Christianity minus the belief system, but there are others. This CBS article lists several: Inside the "secular churches" that fill a need for some nonreligious Americans.
If you're not particularly attached to atheism itself, you also have the option of exploring personal religiosity and the communities that go along with it, which basically means constructing your own religion from your own experiences, which can be induced through means ranging from meditation and self-suggestion all the way to psychedelic trips. Doing that while remaining 99% a rationalist isn't particularly difficult, the cost being embracing compartmentalization. But then, if that's what it takes for one to find enough meaning in the world that they want to continue in it, I'd say it's a price well worth paying. It's what I myself do, and it hasn't caused me any major problem, my take simply being that, if what I perceive is true, science will eventually catch up, and if it isn't, as long as I'm not trying to assert it above the perfectly legitimate skepticism of others, then shrugs.
So, my suggestion, in order, would be: meet other atheists and rationalists in real life with some regularity; if that isn't enough, try a church-like atheist/agnostic/agnostic-friendly community; and if that still isn't enough, do your own thing with others doing similarly.