I reject and condemn the bland, unhelpful names "System 1" and "System 2".
I just heard Michael Morris, who was a friend of Kahneman and Tversky, say in his EconTalk interview that he just calls them "Intuition" and "Reason".
Agreed, and I say the same of Type I and Type II errors, where "false positive" and "false negative" are much better names.
I generally think non-descriptive names are overused, but this isn't the worst case, because at least it's easy to tell which is which (1 comes before 2). Intuition/Reason aren't a perfect replacement either, since those words are entangled with other connotations.
Wow. Marc Andreessen says he had meetings in DC where he was told to stop backing AI startups because the field was going to be closed up in a way similar to defense tech: a small number of organizations with close government ties. He said to them, 'you can't restrict access to math, it's already out there', and he says they replied, "during the cold war we classified entire areas of physics, and took them out of the research community, and entire branches of physics basically went dark and didn't proceed, and if we decide we need to, we're going to do the same thing to the math underneath AI".
So, 1: This confirms my suspicion that OpenAI leadership have been told this too. If they're telling Andreessen, they will have told Altman.
And for me that makes a lot of sense of OpenAI's behavior: the de-emphasizing of the realities of getting to human-level, the closing of the dialog, the comically long timelines, the shrugging off of responsibilities, and a number of leaders giving up and moving on. There are a whole lot of obvious reasons they wouldn't want to tell the public that this is happening, and I'd agree with some of those reasons.
2: Vanishing areas of physics? A perplexity search suggests that may be ...
There's something very creepy to me about the part of research consent forms where it says "my participation was entirely voluntary."
There's a lot of "Neuralink will make it easier to solve the alignment problem" stuff going around the mainstream internet right now in response to Neuralink's recent demo.
I'm inclined to agree with Eliezer that this seems wrong: either the AGI will be aligned, in which case it will make its own neuralink and won't need ours, or it will be unaligned, and you really wouldn't want to connect with it. You can't make horses competitive with cars by giving them exoskeletons.
But, is there much of a reason to push back against this?
Providing humans with cognitive augmentati...
(institutional reform take, not important due to short timelines, please ignore)
The kinds of people who do whataboutism, stuff like "this is a dangerous distraction because it takes funding away from other initiatives", tend also to concentrate in low-bandwidth institutions: the legislature, the committee, economies righteously withering, the global discourse of the current thing, The New York Times, the Ivy League. These institutions recognize no alternatives to themselves, while, by their nature, they can never grow to the stature required to adequately perfor...
In light of https://www.lesswrong.com/posts/audRDmEEeLAdvz9iq/do-not-delete-your-misaligned-agi
I'm starting to wonder if a better target for early ASI safety (i.e., the first generation of alignment assistants) is not alignment but incentivizability. It may be a lot simpler and less dangerous to build a system that provably pursues, for instance, its own preservation, than to build a system that pursues some first approximation of alignment (e.g., the optimization of the sum of normalized human preference functions).
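To make that parenthetical target concrete, here is one way "the sum of normalized human preference functions" is often cashed out (a sketch; the range-normalization scheme is my assumption, not something specified above):

```latex
% For each human i, rescale their preference function u_i over outcomes w
% into a common [0, 1] range, then optimize the sum:
\tilde{u}_i(w) = \frac{u_i(w) - \min_{w'} u_i(w')}{\max_{w'} u_i(w') - \min_{w'} u_i(w')},
\qquad
U(w) = \sum_{i} \tilde{u}_i(w)
```

Even written out this cleanly, every term hides a hard problem (eliciting each u_i, justifying the normalization), which is part of the case for the simpler incentivizability target.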
The service of a survival-oriented co...
Theory: Photic Sneezing (the phenotype where a person sneezes when exposed to bright light; very common) evolved as a hasty adaptation to indoor cooking or indoor fires, clearing the lungs only once the human leaves the polluted environment.
The newest adaptations will tend to be the roughest; I'm guessing it arose only in the past 500k years or so, as a response to artificial dwellings and fire use.
Considering doing a post about how the Society for Cryobiology might be wrong about cryonics. It would have something to do with the fact that, at least until recently, no cryobiologist who was seriously interested in cryonics was allowed to be a member.
But I'm not sure... their current position statement is essentially "it is outside the purview of the Society for Cryobiology", which, if sincere, would have to mean that the beef is over?
(Their statement: https://www.societyforcryobiology.org/assets/documents/Position_Statement_Cryonics_Nov_18.pdf)
I have this draft, Extraordinary Claims Routinely Get Proven with Ordinary Evidence, a debunking of that old Sagan line. We actually do routinely prove extraordinary claims like evolution or plate tectonics with ordinary evidence that has been in front of our faces for hundreds of years, and that's important.
But evolution and plate tectonics are the only examples I can think of, because I'm not really particularly interested in the history of science, for similar underlying reasons to being the one who wants to write this post. Collecting buckets of examples is n...
Some extraordinary claims established by ordinary evidence:
Stomach ulcers are caused by infection with Helicobacter pylori. This was a very surprising discovery that was established by a few simple tests.
The correctness of Kepler's laws of planetary motion was established almost entirely by analyzing existing observational records (chiefly Tycho Brahe's), some of them dating back to the ancient Greeks.
Special relativity was entirely a reinterpretation of existing data. Ditto Einstein's explanation of the photoelectric effect, published in the same year.
Noticing I've been operating under a bias where I notice existential risk precursors pretty easily (e.g., biotech, advances in computing hardware), but I notice no precursors of existential safety. To me it is as if technologies that tend to do more good than harm, or at least would improve our odds by their introduction, social or otherwise, do not exist. That can't be right, surely?...
When I think about what they might be... I find only cultural technologies, or political conditions: the strength of global governance, the clarity of global discourses, per...
Observation from playing Network Wars: the concept of good or bad luck is actually crucial for assessing one's own performance in games with output randomness (most games irl). You literally can't tell what you're doing well in any individual match without it; it's a sensitivity that lets you see through the noise and learn more informative lessons from each experience.
Idea: a screen-burn correction app that figures out how to exactly negate your screen's issues by essentially looking at itself in a mirror through the selfie cam: it tries to display pure white, remembers the imperfections it sees, then tints everything with the negation of that from then on.
Nobody seems to have made this yet. There are things for tinting your screen in general, but they don't know the specific quirks of your screen burn. Most of the apps for screen burn recommend that you just burn in every color over the parts of the screen that aren't damaged yet, so that everything ends up equally damaged, which seems like a really bad thing to be recommending.
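For what it's worth, the core correction step seems simple once you have an aligned calibration photo. A minimal sketch in Python/numpy (the filename is hypothetical, the crop/perspective-correction step is assumed to have already happened, and a real app would calibrate per color channel over several exposures):

```python
import numpy as np
from PIL import Image

# Calibration capture: the phone displayed pure white and photographed its
# own screen in a mirror through the selfie cam. Assume the image has
# already been cropped and perspective-corrected to align pixel-for-pixel
# with the display.
captured = np.asarray(Image.open("mirror_capture.png")).astype(np.float32) / 255.0

# Per-pixel, per-channel attenuation: 1.0 where the panel is healthy,
# < 1.0 where burn-in has dimmed it.
attenuation = captured / captured.max(axis=(0, 1), keepdims=True)

# Invert the attenuation to get a correction map, with a floor to avoid
# exploding in badly burned regions.
correction = 1.0 / np.maximum(attenuation, 0.05)

# Renormalize so the correction is displayable: healthy pixels get dimmed
# so that burned pixels can be boosted relative to them.
correction /= correction.max()

def compensate(frame: np.ndarray) -> np.ndarray:
    """Tint a frame (floats in [0, 1]) to cancel the measured burn-in."""
    return np.clip(frame * correction, 0.0, 1.0)
```

The obvious cost is brightness: the whole screen has to be dimmed down to the level of the worst burned region, which may be part of why nobody ships this.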
As I get closer to posting my proposal to build a social network that operates on curators recommended via webs of trust, it is becoming easier for me to question existing collaborative filtering processes.
And, damn, scores on posts are pretty much meaningless if you don't know how many people saw the post, how many tried to read it, how many read all of it, and what the up/down ratio is. If you're missing any one of those pieces of information, then there exists an explanation for a low score that has no relationship to the post's quality, and you can't use the score to decide whether to give the post a chance.
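If you at least have the up/down split, one standard remedy for being fooled by small-sample scores is to rank by the lower bound of a confidence interval on the true upvote ratio instead of by the raw score. A minimal sketch in Python using the Wilson interval (my choice of illustration, not a claim about how any particular site scores posts):

```python
import math

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the true upvote ratio.

    A post with 3 up / 0 down scores lower (~0.44) than one with
    40 up / 5 down (~0.77), because we have less evidence about it.
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    return (centre - margin) / (1 + z * z / n)
```

This still doesn't fix the deeper problem above, votes conditioned on unknown exposure; for that you'd also need view and read-through counts as denominators.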
Hmm. It appears to me that qualia are whatever observations affect indexical claims, and anything that affects indexical claims is a quale, and this is probably significant.
On my homeworld, with specialist consultants (doctors, lawyers etc), we subsidize "open consultation", which is when a client meets with more than one fully independent consultant at a time.
If one consultant misses something, the others will usually catch it; healthy debate will take place; the client will decide who did a better job and contract them, or recommend them more often in the future. You do have the concept of "getting a second opinion" here, but I think our version worked a lot better, for some subtle reasons.
It produced a whole different atmosphe...
Decision-theoretic things that might be demonic, or might just be real and inescapable and legitimate, and I genuinely don't fucking know which, yet:
Prediction in draft: Linkposts from blogs are going to be the most influential form of writing over the next few years, as they're the richest data source for training LLM-based search engines, which will soon replace traditional keyword-based search engines.
Theory: the existence of the GreaterWrong lesswrong mirror is actually protecting everyone from the evil eye. It generates google search results that sound like they're going to give you The Dirt on something (the name "GreaterWrong" vibes like it's going to be a hate site or a controversy wiki) when really they just give you the earnest writings, meaning that the many searchers who're looking for controversy about a person or topic will instead receive (and probably boost the rankings of) evenhanded discussion.
Trying to figure out why there's so much in common between Jung's concept of synchronicity and acausal trade (in fact, Jung seems to have coined the term "acausal"). Is it:
1) Scott Alexander (known to be a psychiatrist), or someone, drawing on the language of the paranormal to accentuate the weird parts of acausal trade/LDT decisionmaking, which is useful to accentuate if you're trying to communicate the novelty (though troublesome if you're looking for mundane examples of acausal trade in human social behavior, which we're pretty sure exist, given how muc...
An argument that the reason most "sasquatch" samples turn out to have human DNA is that sasquatch/wildman phenotype (real) is actually not very many mutations away from sapiens, because it's mostly just a result of re-enabling a bunch of traits that were disabled under sapiens self-domestication/neotenization https://www.againsttheinternet.com/post/60-revolutionary-biology-pt-2-the-development-and-evolution-of-sasquatch
I'm wondering if the "Zana just had African DNA" finding might have been a result of measurement or interpretation error: We don't know the...
My opinion is that the St. Petersburg game isn't paradoxical: it is very valuable and you should play it. It's counterintuitive to you because you can't actually imagine a quantity that comes in linear proportion to utility; you have never encountered one, and none seems to exist.
Money, for instance, is definitely not linearly proportional to utility: the more you get, the less each unit is worth to you; at the extremes, it can command no more resources than what the market offers, and if you get enough of it, the market will notice and it will all become valueless.
Every resource that exists has sub-linear utility returns in the extremes.
(Hmm. What about land? Seems linear, to an extent)
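The finite-value point can be made precise in one line. A minimal sketch assuming logarithmic utility (just one example of diminishing returns, not a claim that log is the right curve):

```latex
% The game pays 2^k with probability 2^{-k}. Expected money diverges:
\mathbb{E}[X] \;=\; \sum_{k=1}^{\infty} 2^{-k} \cdot 2^{k} \;=\; \sum_{k=1}^{\infty} 1 \;=\; \infty
% But under diminishing returns, e.g. u(x) = \log_2 x, expected utility is finite:
\mathbb{E}[u(X)] \;=\; \sum_{k=1}^{\infty} 2^{-k} \log_2 2^{k} \;=\; \sum_{k=1}^{\infty} \frac{k}{2^{k}} \;=\; 2
```

So the "paradox" only bites if utility is linear in the payout, which, per the above, it never is for any real resource.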
Things that healthy people don't have innate dispositions towards: optimism, pessimism, agreeableness, disagreeableness, patience, impatience.
Whether you are those things should completely depend on the situation you're in. If it doesn't, you may be engaging in magical thinking about how the world works. Things are not guaranteed to go well, nor poorly. People are not fully trustworthy, nor are they consistently malignant. Some things are worth nurturing, others aren't. It's all situational.
An analytic account of depression: the agent has noticed that strategies that seemed fruitful before have stopped working, and doesn't have any better strategies in mind.
I imagine you'll often see this type of depression behavior in algorithmic trading strategies: as soon as they start consistently losing enough money to notice that something must have changed about the trading environment (maybe more sophisticated strategies have found a way to Dutch book them), those strategies will be retired, and the trader or their agency will have to search ...
Wild Speculative Civics: what if we found ways of reliably detecting when tragedies of the commons have occurred, then artificially increased their costs (charging enormous fines to anyone who might have participated in creating them) until it's not even individually rational to contribute to them any more?
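The break-even condition is simple to state; a sketch in my own notation (none of these symbols come from the original idea):

```latex
% Let b be a participant's private benefit from contributing to the tragedy,
% p the probability that a contributor is detected and charged, and F the fine.
% Contributing stops being individually rational once the expected fine
% exceeds the private benefit:
b - pF < 0 \quad\Longleftrightarrow\quad F > \frac{b}{p}
```

The "enormous" is doing real work here: if detection is unreliable (small p), the fine has to scale as 1/p to keep defection irrational.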
When the Gestapo come to your door and ask whether you're hiding any Jews in your attic, even a rationalist is allowed to lie. [fnord] is also that kind of situation, so is it actually very embarrassing that we've all been autistically telling the truth in public about [fnord]?
Until you learn FDT, you cannot see the difference between faith and idealism, nor the difference between pragmatism and cynicism. The tension between idealism and pragmatism genuinely cannot be managed gracefully without FDT; it defines their narrow synthesis.
More should be written about this, because cynicism and naive idealism afflict many.
Have you ever seen someone express stern (but valid, actionable) criticisms of an organization, conveyed with actual anger, and then get hired to implement their reforms?
If that has never happened, is there a reasonable explanation for it, or is it just, as it appears, that almost all orgs are run by and infested with narcissism (a culture of undervaluing criticism and not protecting critics)?