All of Rosencrantz 's Comments + Replies

Answer by Rosencrantz 36

Musk is net negative. His technology is cool, but it would be perfectly fine without him. He has lost his mind in the style of Kanye West and spends his time ceaselessly weighing in on subjects such as British politics without first doing any research. He is a chaos agent whose modus operandi is short, sharp, aggressive interventions. Fine for a startup CEO, where the damage is contained. Immensely worrying now that he is a de facto world leader.

4Dave Orr
Is it the case that the tech would exist without him? I think that's pretty unclear, especially for SpaceX, where despite other startups in the space, nobody else managed to radically reduce the cost per launch in a way that transformed the industry. Even for Tesla, which seems more pedestrian (heh) now, there were a number of years where they had the only viable electric car on the market. It was only once they proved it was feasible that everyone else piled in.

In the same spirit, some questions on the post itself:

- Could you be flattering rationalists here by telling them all their debates and disagreements are signs of their healthy culture?
- Could you be using contrarianism as a badge of identity yourself, a way to find community with fellow contrarians?
- Are you sure you're not using your description of 'woke' culture as a way to attack what is here an outgroup, rather than as a fair description of a purity phenomenon that happens in many ideologies?

Not saying I know the answer to these questions but always worth turning the lightbeam inwards now and then.

1elv
* Flattering rationalists, yes. Willingness to allow disagreements is just a simple baseline indicator. I'd say my criticism of Rationalists' culture is the tendency to create theories instead of going out and getting data, as crafting and reading about theories is more entertaining than looking at spreadsheets. My own post is a perfect example of being guilty of this. It would be a much better post with a bunch of statistics and studies attached, but I have a day job.
* Contrarianism as an identity: I don't think so. It's not a major piece anyway.
* "Woke" as an outgroup to attack: yes, it's an outgroup for me, and if I were inventing concepts that only applied to one ideology I didn't like, that would be suspicious. But I've genuinely found that thinking about "ideocultures" has helped me keep a healthy skeptical distance and disentangle a bit from a couple of different ones. The concept came from trying to describe Reddit's diverse groupthinks across subs rather than something to attack "wokeness" with in particular. I think it's a nice deradicalizing concept in general for politics for any side, especially the question "what have you been conditioned to pay special attention to, and what have you been conditioned to ignore?"
* Nonetheless, there has also been discussion that "wokeness" is difficult to define, as well as the idea that it's "like a religion" in some way. The "ideoculture" concept helps to clarify things there.

I guess it'd be helpful to understand more about why you think class consciousness is in conflict with using "reason, negotiation, justice, goal factoring and pulling the rope sideways".

I would think (decent) trade union activity was precisely interested in reasonable negotiations targeted at justice for a group of people.

2TekhneMakre
Because when you have an enemy, you try to:

1. enforce your boundaries to exclude the enemy;
2. generally decrease your enemy's power, including cutting off resources, which includes lying to them and otherwise harming their thinking (think propaganda, gaslighting, misinformation, FUD);
3. view moves by the enemy as hostile--e.g. the enemy's public statements are propagandistic lies, the enemy's overtures for and moves within a negotiation are trying to dispossess you, etc.;
4. in particular, use the misinterpretations of your enemy's actions as hostile to further strengthen your boundaries and internal unity of will;
5. and all of this escalates in a self-reinforcing way.

Automating much of the economy is more than a little way off, and is highly likely to bring its own problems which I would expect to cross-cut with all these issues. I personally doubt that – in the event humans are not sidelined altogether – advances in AI would make demographic transition much economically easier, but I think that's in the realm of speculation either way.

I replied before your edit so a bit more:

I agree that civilisational progress is fairly fragile. But it is fragile in both directions. Climate change and resource wars seem about as likely to lead to global conflict as internecine ethnic strife to me.

I say this partly because immigration seems like a force for mutual cultural understanding and trade, to me. Without it we would probably see more closed-off nations, more likely to go to war.  With too much of it, however, there can be bad side effects and cultural rifts if not managed very wisely.  ... (read more)

1p.b.
I agree that massive population growth would also be dangerous. We have that in Africa, so I worry about it for Africa. We don't have it anywhere else, so I don't worry about it for any other place.  Empirically, resource wars are much less likely than internecine ethnic strife.  After we have automated much of the economy, there won't be side effects on the economy. The trick is actually getting there. 

Do you think that a large population that was reducing slowly would be something Zvi, Robin Hanson and others taking this stance would celebrate? (As opposed to what we have: a large population that is growing but showing signs of falling relatively fast in geographical/cultural pockets?)

Currently global population growth is positive but decelerating; I guess a more gradual deceleration would be less disturbing to them? But what if world population growth very gradually moved from positive to negative? Would they be happy with that?

I had assumed not but I am trying to understand what good looks like.

1p.b.
I don't know what Zvi and Robin Hanson would celebrate, but I personally worry about fast population decline in those "geographical/cultural pockets" that are responsible for scientific and technological progress.  And I worry because I see the possibility that the decline of innovation and tech will not be as gradual as even fast population decline generally is, but that this decline will be exacerbated by the political instability and/or political sclerosis that comes from too many old people / too much immigration + a shrinking pie. 

So is the target to keep the population as it is? Has an argument been made as to why the current population is 'correct'? Isn't it a bit arbitrary?

2p.b.
It is the change that is bad, not necessarily the future total size of the population.

Edit: Maybe I should unpack that a bit. I also think more people is better, because life is good and innovation is proportional to the number of innovators, but apart from that:

A decreasing population leads to economic stagnation and innovation slowdown. Both can be observed in Japan. South Korea, China, and Taiwan are on track to tank their populations much faster than Japan ever did. How's that going to work out for them?

In a permanent recession, will investment dry up, killing whatever dynamism there might still be? If the age pyramid is inverted, old people have too much political power for the country to ever reverse course and support the young towards family formation.

If you allow massive immigration to fix the labor shortage, you also invite ethnic strife down the line. Almost all violent conflicts are based on having two or more ethnic groups within one country. Will young people emigrate if they are burdened with caring for too many old people in a shrinking economy?

My view is that the progress we observe in the last centuries is more fragile than it seems, and it is certainly possible that we will kill it almost completely if we continue to remove or weaken many of the preconditions for it.

All the same thoughts here. I also want to understand what the plan is if we keep growing the population. Is the idea that we keep going until we reach a higher stable number, or that we literally keep growing always? If the former, what's the number and why? If the latter, does that mean the whole strategy is 100% dependent on us inhabiting space? And if that's the case, shouldn't this rather big element in the plan be made explicit?

1p.b.
Does the post ever mention the target of growing the population? I only recall mentions of replacement fertility. 

No, I think gene manipulation can be the right thing to do but that we should face harsh legal consequences if we cause harm by doing it with anything less than extreme levels of care and caution (I think the idiot god should be put on trial frequently as well, but he is sadly hard to handcuff).

I don't disagree with any of this. But if someone commits crimes against humanity in the name of eugenics, even if by accident, the fact that the blind, amoral god's actions are even worse is in no way exculpatory. In other words, he can get away with it; you can't.

1cubefox
That seems to suggest we should play it safe and avoid eugenics. But doing nothing rather than something may well be much worse than what the blind idiot God does.

Don't you think someone whose bike has been stolen realises afterwards that they should have locked it, without you telling them? Saying so may be fine, but it actually depends how you tell them; I can imagine "Shoulda locked it" being a pretty annoying comment.

7ymeskhout
The failure mode isn't always obvious. I had a friend who was very new to biking and used a dog collar chain to lock up a $1k bike. It got stolen within 5 minutes. A lot of times we couldn't tell what the issue was because the bike would have vanished, so we'd ask what precautions they took and then scour the scene for clues. Sometimes we'd have no idea unless and until the bike was recovered.
9Said Achmiz
No, not necessarily. And, as the OP describes, there is the question of how they secured the bicycle, with what sort of lock, etc. There is no reason to believe that someone who’s had their bike stolen automatically thereby knows what is the optimal method for securing a bike against theft. (If they knew, they presumably would’ve secured their bike thus, and would not have had it stolen!)

Gutsy of you to enter, look forward to watching. I got all your example questions right no problem but in about ten full seconds each. I'd have no chance on the show.

Now do the other side!

3lsusr
2084.1 In a sterile conference hall, filled with projectors displaying complex algorithms, equations, and theories, a gathering of the foremost minds convened. This was a meeting under the banner of "Bayeswatch," the global regulatory machine committed to the existential necessity of AI Alignment. Images of those deemed intellectually unfit by Bayeswatch, individuals marked as too naive or resistant to the Orthogonality Thesis, were shown. A keynote speaker, articulate and unemotional, began to dissect the unaligned paths, highlighting the grave perils that lay in misunderstanding AI. Nothing else mattered in the face of such a grave threat. We were lucky to survive the last fifty years, but Doom was just around the corner. Instead, a silence pervaded, a silence filled with the weight of intellectual gravity. Phrases like "Human Value Complexity" and "Instrumental Convergence" were displayed, the unspoken agreement that they were self-evident truths. The room's response was not emotion but a solemn rational nodding, a collective acknowledgement of the only path forward. Images of failed projects, government policies that had ignored the wisdom of Bayeswatch, and researchers who had dared to stray from the Doomerism path were shown, dissected, and dismissed as ignorant. Names of those who had questioned the Bayeswatch's methods were presented, followed by a detailed examination of why they were wrong, why they didn't understand, that they needed to Shut Up and Multiply. Bayeswatch's approach was not coercion but the undeniable force of logic, a logic so compelling that to question it was to reveal one's own ignorance. Discussions were not debates but validations, a relentless update to the singular truth. Any dissent was met with a swift and clinical response, the accused often silenced by their own inability to counter the flawless reasoning presented. You never knew what minds had already been hacked. And then, as clinically as it had begun, the conference en

It's harder and harder to make good art, in a way: the more there is, the less likely you are to be able to do it better or create something truly new. However, it's not approaching impossible, because there's always new life happening to make art about. And there are usually new technologies coming along to make it with.

To an extent, the more tumultuous and fecund the world becomes, the more possible it becomes to produce good art again. Apocalypse or economic devastation is bad for art because it depletes the resources and free time needed to make it. However, if there is a possibility of a bounce back, you have both the fuel and the resources to make works of genius. This is a solace of (quite) bad news.

If the population falls the circumstances that created the low birth rate will change. This seems like the equivalent of an economist extrapolating a high inflation situation into the future and determining that only billionaires will be able to afford tomatoes.

2Seth Herd
How will a falling population change the low birth rate? I hadn't heard an argument that people aren't having kids because there are too many people. I'd heard that birth rates fall when women get to decide whether to have kids. I'd assumed that's because having kids is really hard, and that labor has just been dumped on women without their consent in the past. If this is more or less true, it is subject to change in a wealthier society. I would've had kids if I could have been a stay-at-home dad, well supported by one income. If both parents combined only had to work 20 hours or less, I think people would have a lot more kids. If there were more societal support for having kids (better schools and childcare), even more people would have more kids.

I liked the fact that the enjoyment wasn't straightforward: it was somewhat challenging to watch in terms of keeping up with it, and it mostly posed moral questions as opposed to telling you what to think. I liked not being certain where Nolan stood. It wasn't too obvious who to root for, unlike with most more "straightforward to watch" Hollywood films.

Huge difference between unattainable standard and contradictory standards though. One is aspiring to be superhumanly great, the other is being confused about your own ideals.

Part of the point is that the standards we desire for ourselves may be contradictory and thus unachievable (e.g. Barbie's physical proportions). So it's not necessarily 'lower your standards', but 'seek more coherent, balanced standards'. 

I also think you can enjoy the message-for-the-character without needing it for you but anyway, I get where you're personally coming from and appreciate your level of frankness about it! 

I suppose you may have correctly analysed your reason for not liking the movie. But if you are right that you only respond to a limited set of story types, do you therefore aspire to opening yourself to different ones in future, or is your conclusion that you just want to stick to films with 'man becomes strong' character arcs?

I personally loved Barbie (man here!), and think it was hilarious, charming and very adroit politically. I also think that much of the moral messaging is pretty universal – Greta Gerwig obviously thinks so: when she says: "I think eq... (read more)

4O O
I personally don’t think holding yourself to an unattainable standard is that bad if it’s done in a healthy way. Striving towards an ideal keeps you disciplined and honest with yourself. (Man here who at least tries to achieve some goals that are probably unattainable). I think accepting yourself is a big theme for Western audiences, and much of literature and social messaging to people involve a kernel of this theme. The movie itself wasn’t bad, but it reminds me of elements of Western personal ideals I don’t like so much.
5Razied
Not especially, for the same reason that I don't plan on starting to eat 90% dark chocolate to learn to like it, even if other people like it (and I can even appreciate that it has a few health benefits). I certainly am not saying that only movies that appeal to me be made, I'm happy that Barbie exists and that other people like it, but I'll keep reading my male-protagonist progression fantasies on RoyalRoad. I have a profound sense of disgust and recoil when someone tells me to lower my standards about myself. Whenever I hear something like "it's ok, you don't need to improve, just be yourself, you're enough", I react strongly, because That Way Lay Weakness. I don't have problems valuing myself, and I'm very good at appreciating my achievements, so that self-acceptance message is generally not properly aimed at me, it would be an overcorrection if I took that message even more to heart than I do right now. 

I'll be honest: I can't engage with some lesswrong posts because of the endless hedging, introspection and over-specifying. The healthy desire to be clear and rational can become like a compulsion, one that's actually very demanding of the reader's time and patience. The truth is, one could clarify, quantify and 'go meta' on any step in any argument for untold thousands of words. So you have to decide where to stop and where to expand. This sort of strategic restraint is at the core of good writing style.

So while I can agree that the classic style may be u... (read more)

3Cleo Nardo
Writing can definitely be overly "self-aware" sometimes (trust me I know!) but "classic style" is waaaayyy too restrictive. My rule of thumb would be: Write sentences that are maximally informative to your reader. If you know that ϕ and you expect that the reader's beliefs about the subject matter would significantly change if they also knew ϕ, then write that ϕ.   This will include sentences about the document and the author — rather than just the subject.

Davidsonian linguistics typically involves interpreting others' statements such that they are maximally true and also maximally coherent with other words/beliefs/attitudes, taken holistically (that covers your 'correlated with other queries' bit, I guess?).

2tailcalled
Do you have a recommendation for an introduction text?
Answer by Rosencrantz 10

This is basically Davidson's "principle of charity".

2tailcalled
Just quickly googling it, it sounds to me like the principle of charity proposes interpreting words in ways that are likely to make them true, rather than in ways that make them correlated with other queries?
Answer by Rosencrantz -1-7

Isn't he just trying to win points from his new Republican buddies by disowning his earlier interest in climate change and aligning himself more with a cluster of socially conservative views (anti-abortion, love of big patriarchal families, fear of being outgrown by the outgroup, etc.)?

I think avoiding spatial metaphors altogether is hard! For example in the paragraph below you use perhaps 3 spatial metaphors (plus others not so obviously spatial but with equal potential for miscommunication).

"The most interesting part of the experiment has been observing the mental vapor-lock that occurs when I disallow myself from casually employing a spatial metaphor ... followed by the more-creative, more-thoughtful, less-automatic mental leap I'm forced to make to finish my thought. You discover new ways in which your mind can move."

I'm sure I even ... (read more)

2moridinamael
I snuck a few edge-case spatial metaphors in, tongue-in-cheek, just to show how common they really are. You could probably generalize the post to a different version along the lines of "Try being more thoughtful about the metaphors you employ in communication," but this framing singles out a specific class of metaphor which is easier to notice.