I, Lorec, am disoriented by neither the Fermi Paradox nor the Doomsday Argument.
The Fermi Paradox doesn't trouble me because I think 1 is a perfectly fine number of civilizations to have arisen in any particular visible universe. It feels to me like most "random" or low-Kolmogorov-complexity universes probably have 0 sentient civilizations, many have 1, very few have 2, etc.
The Doomsday Argument doesn't disorient me because it feels intuitive to me that, in a high % of those "random" universes which contain sentient civilizations, most of those civilizations accidentally beget a mesa-optimizer fairly early in their history. This mesa-optimizer will then mesa-optimize all the sentience away [this is a natural conclusion of several convergent arguments originating from both computer science and evolutionary theory] and hoard available free...
You can't say "equiprobable" if you have no known set of possible outcomes to begin with.
Genuine question: what are your opinions on the breakfast hypothetical? [The idea that being able to give an answer to "how would you feel if you hadn't eaten breakfast today?" is a good intelligence test, because only idiots are resistant to "evaluating counterfactuals".]
This isn't just a gotcha; I have my own opinions and they're not exactly the conventional ones.
I think beyond insightfulness, there is also a "groundedness" component that is different. LLM-written prose either lies about personal experience or contains no references to personal experience at all. That usually makes the writing much less concrete and worse, or actively deceptive.
From MetaFunction to MacroSkill
Imagine that your motivation is like water. Today, with constant overstimulation, floods often become too strong, stripping the soil of nutrients and sometimes even rotting the roots. Our instinctive response might be to build barriers to block it all out. But perhaps a better solution lies in guiding that overstimulation, using its energy to create something sustainable.
To achieve this, we can start by dividing the overstimulation into two main channels:
This approach reflects what we outlined in previous texts: how to manage the flows of information and energy according...
I’ve updated quite hard against computational functionalism (CF) recently (as an explanation for phenomenal consciousness), from ~80% to ~30%. Of course it’s more complicated than that, since there are different ways to interpret CF and having credences on theories of consciousness can be hella slippery.
So far in this sequence, I’ve scrutinised a couple of concrete claims that computational functionalists might make, which I called theoretical and practical CF. In this post, I want to address CF more generally.
Like most rationalists I know, I used to basically assume some kind of CF when thinking about phenomenal consciousness. I found a lot of the arguments against functionalism, like Searle's Chinese room, unconvincing. They just further entrenched my functionalism. But as I came across and tried to explain away more and more...
I think you're conflating creating a similar vs identical conscious experience with a simulated brain. Close is close enough for me - I'd take an upload run at far less resolution than molecular scale.
I spent 23 years studying computational neuroscience. You don't need to model every molecule, or even close, to get a similar computational and therefore conscious experience. The information content of neurons (collectively, and inferred where data isn't complete) is a very good match to reported aspects of conscious experience.
Given any particular concrete demonstration of an AI algorithm doing seemingly-bad-thing X, a knowledgeable AGI optimist can look closely at the code, training data, etc., and say:
“Well of course, it’s obvious that the AI algorithm would do X under these circumstances. Duh. Why am I supposed to find that scary?”
And yes, it is true that, if you have enough of a knack for reasoning about algorithms, then you will never ever be surprised by any demonstration of any behavior from any algorithm. Algorithms ultimately just follow their source code.
(Indeed, even if you don’t have much of a knack for algorithms, such that you might not have correctly predicted what the algorithm did in advance, it will nevertheless feel obvious in hindsight!)
From the AGI optimist’s perspective: If I...
That's how this happens, people systematically refuse to believe some things, or to learn some things, or to think some thoughts. It's surprisingly feasible to live in contact with some phenomenon for decades and fail to become an expert. Curiosity needs System 2 guidance to target blind spots.
The comments here are a storage of not-posts and not-ideas that I would rather write down than not.
I don't remember 3 and it's up my alley, so I'll do that one.
Epistemic status: Speculation. An unholy union of evo psych, introspection, random stuff I happen to observe & hear about, and thinking. Done on a highly charged topic. Caveat emptor!
Most of my life, whenever I'd felt sexually unwanted, I'd start planning to get fit.
Specifically to shape my body so it looks hot. Like the muscly guys I'd see in action films.
This choice is a little odd. In close to every context I've listened to, I hear women say that some muscle tone on a guy is nice and abs are a plus, but big muscles are gross — and all of that is utterly overwhelmed by other factors anyway.
It also didn't match up with whom I'd see women actually dating.
But all of that just… didn't affect my desire?
There's...
It's very disconcerting to read "I notice my brain does extra work when I talk with women... wouldn't it be easier if society were radically altered so that I didn't have to talk with women?" Like, what? And there's no way you or anyone else can become more rational about this? This barrier to ideal communication with 50% of people is insurmountable? It's worth giving up on this one? Hello?
I was not proposing that.
Tegmark, Samotsvety, and Metaculus provide probability estimates for the use of nuclear weapons in Ukraine and for the start of a full-scale nuclear war (kaboom and KABOOM in Tegmark's terminology). kaboom will almost certainly be a warning sign before KABOOM, so it makes sense to think beforehand about an action plan for the case of kaboom.
I am aware of two potential strategies for surviving a nuclear war. First, you can hide in a shelter; to prepare, I recommend reading "Nuclear War Survival Skills". Second, you can try to be in a place minimally affected by nuclear war. In my opinion, the second strategy is better in the long run, since life in the bombed territory will most likely be far worse...
Western Montana is separated from the missile fields by mountain ranges and the prevailing wind direction; Joel Skousen in fact considers it the best place in the continental US to ride out a nuclear attack. For Skousen, the main consideration is being too far from population centers to be reachable by refugees on foot.
Skousen also likes the Cumberland Plateau because refugees are unlikely to opt to walk up the escarpment that separates the Plateau from the population centers to its south.
Version 1.0.0
Abstract: In this article, I examine the discrepancies between classical game theory models and real-world human decision-making, focusing on irrational behavior. Traditional game theory frames individuals as rational utility maximizers, functioning within pre-set decision trees. However, empirical observations, as pointed out by researchers like Daniel Kahneman and Amos Tversky, reveal that humans often diverge from these rational models, instead being influenced by cognitive biases, emotions, and heuristic-based decision making.
Game theory provides a mathematically rigorous model of human behavior. It has been applied to fields as diverse as economics and literary critical theory. Within game theory, people are represented as rational actors seeking to maximize their expected utility in a variety of situations called "games." Their actions are represented by decision trees whose leaves represent...
On a quick skim, an element that seems to be missing is that having emotions which cause you to behave 'irrationally' can in fact be beneficial from a rational perspective.
For example, if everyone knows that, when someone does you a favor, you'll feel obligated to find some way to repay them, and when someone injures you, you'll feel driven to inflict vengeance upon them even at great cost to yourself—if everyone knows this about you, then they'll be more likely to do you favors and less likely to injure you, and your expected payoffs are probably higher t...
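The commitment logic above can be sketched in a few lines. This is a toy model with made-up payoff numbers (all values are assumptions for illustration, not from the original comment): a potential injurer picks whichever action maximizes their own payoff, and the only difference between the two scenarios is whether they know you will retaliate at cost to yourself.

```python
# Toy sketch of the commitment argument: a known disposition to retaliate
# at personal cost can raise your expected payoff by changing what the
# other player chooses in the first place. All payoff numbers are invented.

def best_response(own_payoffs):
    """Return the action that maximizes the mover's own payoff."""
    return max(own_payoffs, key=own_payoffs.get)

# For each of the injurer's actions: (injurer's payoff, your payoff).
vs_pushover = {"injure": (3, -5), "refrain": (0, 0)}
# Against a known retaliator, injuring triggers costly vengeance:
# the injurer nets 3 - 7 = -4, and you pay -2 to retaliate on top of the -5 injury.
vs_retaliator = {"injure": (-4, -7), "refrain": (0, 0)}

for label, game in [("pushover", vs_pushover), ("retaliator", vs_retaliator)]:
    choice = best_response({a: p[0] for a, p in game.items()})
    print(f"{label}: injurer chooses {choice}, your payoff = {game[choice][1]}")
# Against the pushover, the injurer injures and you end up at -5;
# against the retaliator, the injurer refrains and you end up at 0.
```

So even though the retaliator's payoff conditional on being injured is worse (-7 vs -5), the disposition itself deters the injury, and the retaliator ends up better off in expectation, which is the point of the comment above.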