All of Richard_Kennaway's Comments + Replies

Because I won't experience any of that infinite stream if I don't read it?

There are authors I would like to read, if only they hadn't written so much! Whole fandoms that I must pass by, activities I would like to be proficient at but will never start on, because the years are short and remain so, however far an active life is prolonged.

Would I be wrong to guess it argues against the position in https://www.lesswrong.com/posts/AfAp8mEAbuavuHZMc/for-the-sake-of-pleasure-alone?

I suppose it does. That article was not in my mind at the time, but, well, let's just say that I am not a total hedonistic utilitarian, or a utilitarian of any other stripe. "Pleasure" is not among my goals, and the poster's vision of a universe of hedonium is to me one type of dead universe.

Hack your brain to make eating healthily effortless. Hack your body to make exercise effortless.

You're thinking pretty small there, if you're in a position to hack your body that way.

Yet these are actual ideas someone suggested in a recent comment. In fact, that was what inspired this rant, but it grew beyond what would be appropriate to dump on the individual.

I think you're seeing shadows of your own ideas there.

Perhaps the voice I wrote that in was unclear, but I no more desire the things I wrote of than you do. Yet that is what I see people ... (read more)

I do not have answers to the question I raise here.

  1. Historical anecdotes.

Back in the stone age, I think something like the 1960s or 1970s, I read an article about the possible future of computing. Computers back then cost millions and lived in giant air-conditioned rooms, and memory was measured in megabytes. Single figures of megabytes. Someone had expressed to its writer the then-visionary idea of using computers to automate a company. They foresaw that when, for example, a factory was running low on some of its raw materials, the computer would au... (read more)

Said Achmiz
See also Stanislaw Lem on this subject:
Richard_Kennaway
See also.
ProgramCrafter
Upvoted as a good re-explanation of CEV complexity in simpler terms! (I believe LW will benefit from recalling long-understood things so that it has a chance of predicting the future in greater detail.) In essence, you prove the claim "Coherent Extrapolated Volition would not literally include everything desirable happening effortlessly and everything undesirable going away". Would I be wrong to guess it argues against the position in https://www.lesswrong.com/posts/AfAp8mEAbuavuHZMc/for-the-sake-of-pleasure-alone? That said, current wishes of many people include things they want being done faster and easier; it's just that the more you extrapolate, the smaller the fraction that wants that level of automation: more divergence as you consider higher scales.
jbash

The vision is of everything desirable happening effortlessly and everything undesirable going away.

Citation needed. Particularly for that first part.

Hack your brain to make eating healthily effortless. Hack your body to make exercise effortless.

You're thinking pretty small there, if you're in a position to hack your body that way.

If you're a software developer, just talk to the computer to give it a general idea of what you want and it will develop the software for you, and even add features you never knew you wanted. But then, what was your role

... (read more)
cousin_it
I think something like the Culture, with aligned superintelligent "ships" keeping humans as basically pets, wouldn't be too bad. The ships would try to have thriving human societies, but that doesn't mean granting all wishes - you don't grant all of your cat's wishes, after all. Also it would be nice if there were an option to increase intelligence, conditioned on increasing alignment at the same time, so you'd be able to move up the spectrum from human to ship.

I'd bet that I'm still on the side where I can safely navigate and pick up the utility, and I median-expect to be for the next couple months ish. At GPT-5ish level I get suspicious and uncomfortable, and beyond that exponentially more so.

Please review this in a couple of months ish and see if the moment to stop is still that distance away. The frog says "this is fine!" until it's boiled.

Alice Blair
I do try to be calibrated instead of being frog, yes. Within the range of time in which present-me considers past-me remotely good as an AI forecaster, my time estimate for these sorts of deceptive capabilities has pretty linearly been going down, but to further help I set myself a reminder 3 months from today with a link to this comment. Thanks for that bit of pressure, I'm now going to generalize the "check in in [time period] about this sort of thing to make sure I haven't been hacked" reflex.

Follow the improbability. What drew that particular theory to the person's attention, either the hypothetical Roman commoner or the person arguing that we can't yet test their hypothesis about God? If the answer is "nothing", as is literally the case for the imagined Roman, then we need not concern ourselves further with the matter. If the hypothesis about God is not already entangled with the world, it fares no better.

The Sequences? Not quite what you're looking for, but that's what I have always thought of as the essentials of LW (before the AI explosion).

WARNING: this post might press some pain-points of humans in general, and of LW community in particular - so let's see how many downvotes it collects. I do believe our triggers point to our blind-spots or dogmas – so maybe you can find here an opportunity for new depth.

A pre-emptive universal argument against all disagreement, which the poster then deployed in this comment.

pchvykov
Yes - but I also find there are a number of dogmas in the LW community which are getting entrenched in group-think now and draw immediate backlash when confronted. I feel like there used to be more openness to critical engagement with unorthodox opinions 10 years ago or so...

Anyone have a logical solution to exactly why we should act altruistically?

"Logical ... should" sounds like a type error, setting things up for a contradiction. While there are adherents of moral naturalism, I doubt there are many moral naturalists around here. Even given moral naturalism, I believe it would still be true that any amount of intelligence can coexist with any goals. So no, there is no reason why unconstrained intelligences should be altruistic, or even be the sort of thing that "altruism" could meaningfully be asserted or denied of them.

... (read more)
Perry Cai
My confusion about this subject is that without moral naturalism, it seems moral philosophy can be derived from a psychological or sociological basis, which seems to me a much better model for producing results than philosophical arguments.

Steelmanning is writing retcon fanfiction of your interlocutor’s arguments. As such it necessarily adds, omits, or changes elements of the source material, in ways that the other person need not accept as a valid statement of their views.

When we look at experience itself, there is no fixed “I” to be found.

Speak for yourself. That whole paragraph does not resemble my experience. You recommend Parfit, but I've read Parfit and others and remain true to myself.

You can't even predict the weather more than a few days in advance, and you can't predict the movement of individual gas molecules for longer than a tiny fraction of a second, even if you knew their exact positions and velocities, which you can't. So these hypothetical determinations are of no consequence. Add quantum indeterminacy and your hypothetical exact prediction of the future becomes a probability distribution over possible worlds, i.e. an exact calculation of your ignorance.

The question I am more interested in is, why are all these people in recen... (read more)

asksathvik
I definitely need to educate myself on chaos theory and quantum mechanics, but as mentioned in the "normality unaffected" post you linked above, and in my comment above, we (humans) seem to be very predictable, at least in the short term, and if you have the exact body state and the algorithms it runs, you can predict what we will do in the next moment given an input. I didn't look into what Sam Harris said, but based on my involvement with Robert's books and videos, my interest in this is that this way of looking at things takes us out of the human exceptionalism argument (that we are just doing computation, and not so different from AI doing computation) and gives us a more unified way of looking at consciousness and agency. I am not trying to paint a depressing picture but want to make this view more mainstream. This view actually made me feel more in control of my body, because I can choose the inputs I give it so that I can function at maximum capacity; while you can say that I was that kind of person to begin with, I want to actively experiment and talk about my results, and that could lead to more and more people doing it and getting great results for themselves. If we do solve neural inputs and hacking the brain through companies like Neuralink, this paints a rosier picture of how we can solve any issue related to the brain. Some fun examples: * eat the most healthy food but hack your brain's input to think you are eating your favourite food. * exercise automatically while you are watching a movie and feel no pain, etc.

It has LLM written all over it. For example:

This attitude betrays a misunderstanding of cognitive privilege. Just as a person born into wealth has a head start in life, a person born with high cognitive ability begins the race miles ahead of others. Yet, many in rationalist communities resist this conclusion, likely because it challenges the notion of a purely meritocratic intellect.

"Yet, many in rationalist communities resist this conclusion" — Who? Where? I have never seen anything that fits this. It comes out of nowhere. And it isn't a "conclusion",... (read more)

No. Nothing but soft CCP-approved platitudes can be expected from such a person writing in such a venue. That is her job. China matters, but not everything that it says matters, unless to Pekingologists minutely examining the tea-leaves for insight into whatever is really going on in China.

What about my other two points?

Matrice Jacobine
... I don't agree, but would it at least be relevant that the "soft CCP-approved platitudes" are now AI-safetyist?

I am not sure that is right. A very large percentage of people really don't think the rolls are independent. Have you ever met anyone who believed in fate, karma, horoscopes, lucky objects or prayer? They don't think it's (fully) random and independent. I think the majority of the human population believe in one or more of those things.

They may well do. But they are wrong.

It is written by a Chinese former politician in a Chinese-owned newspaper.

?

It is written by a Chinese former politician in a Chinese-owned newspaper.

Matrice Jacobine
So that answers your question "Why does the linked article merit our attention?", right?

Why does the linked article merit our attention?

  • It does not contain anything to justify the quote, which is the article's sub-headline. Journalists typically do not write their own headlines, but there is a near quote of the sub-headline in the body of the article. It is not elaborated on by the text around it.

  • It is written by a Chinese former politician in a Chinese-owned newspaper.

  • Whether or not LLMs contributed to the content, it is just as foggy and platitudinous as if an LLM had written the entire thing. It expresses nothing but warm, content-free fuzziness towards the Chinese government in the context of AI.

Matrice Jacobine
Why does the linked article merit our attention? * It is written by a Chinese former politician in a Chinese-owned newspaper. ?

There are mathematical arguments against Expected Value Fanaticism. They point out that a different ontology is required when considering successive decisions over unbounded time and unbounded payoffs. Hence the concepts of multiple bets over time, Kelly betting, and what is now the Standard Bad Example of someone deliberately betting the farm for a chance at the moon and losing. And once you start reasoning about divergent games like St Petersburg, you can arrive at contradictions very easily unless you think carefully about the limiting processes involve... (read more)
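
To make the Standard Bad Example concrete, here is a minimal simulation sketch (illustrative numbers of my own choosing, not from any of the posts under discussion): a repeated fair-coin bet paying 2:1 has positive expected value at any stake, yet staking the whole bankroll almost surely ends in ruin, while the Kelly fraction compounds steadily.

```python
import random

# Toy bet (my illustrative numbers): fair coin, 2:1 payoff, so the
# expected value per round is positive whatever fraction you stake.
ROUNDS, TRIALS = 100, 10_000

def final_wealth(fraction: float) -> float:
    """Stake a fixed fraction of the bankroll each round; return final wealth."""
    wealth = 1.0
    for _ in range(ROUNDS):
        stake = wealth * fraction
        wealth += 2 * stake if random.random() < 0.5 else -stake
    return wealth

# The Kelly fraction for win probability p and payoff b is p - (1 - p) / b,
# here 0.5 - 0.5 / 2 = 0.25. "Betting the farm" is fraction = 1.0.
for label, fraction in [("all-in", 1.0), ("Kelly", 0.25)]:
    runs = sorted(final_wealth(fraction) for _ in range(TRIALS))
    ruin = sum(w < 1e-9 for w in runs) / TRIALS
    print(f"{label:6}: median wealth = {runs[TRIALS // 2]:>10.1f}, ruin rate = {ruin:.1%}")
```

The all-in strategy's expectation after 100 rounds is enormous (1.5^100), yet essentially every run ends at zero; the Kelly bettor gives up the divergent mean in exchange for almost-sure growth. That gap between the expectation and the limiting process is exactly what Expected Value Fanaticism glosses over.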

How do barbell and dumbbell exercises differ? I always find dumbbells more convenient — is there ever a reason to use a barbell instead?

nim
I'm not near any gyms and have plenty of space, so I strongly prefer the safety profile of barbell + squat rack. If something goes wrong and I lose control of the weight, it's good to know that there's no way for it to hit me. And part of progressing is sometimes trying a weight that you're only 80% or 90% sure you can actually lift successfully -- I'd much rather the failure mode be clank "whoops!" than however many pounds of iron to the face. I also find that it's easier to track whether I'm using good form with a barbell vs dumbbells. The cues to moving a bar correctly are to keep it level and trace the correct path in a plane with it (usually straight up and down), whereas the cues to moving dumbbells correctly are... something about simultaneously tracking the position and orientation of two separate objects in 3-space? Definitely more complicated to keep track of all the moving parts when there are twice as many. Barbells are also a compromise between fixed and adjustable dumbbells. You'll probably want different weights for different exercises, and the exact weight you want for each will slowly increase over time. With a barbell, you modify the two ends to change the weight, and you can get plates as small as a few ounces if you need to increase the weight super slowly. With dumbbells, either you use fixed increments and have to store a lot of pairs, or you use adjustable ones and have to modify twice as many ends for each weight change. If your access to equipment, space, and trainers is different from mine, though, your conclusions about the best options for training will likely be different as well.
samusasuke
There is no broad difference. This is also true of the machines vs free weights question. Many trials have investigated this, and if two exercises have the same general movement pattern and are hard in the same way (e.g. a leg press and a squat), expect them both to work the same. Some extra considerations: * If I want to do a squat or deadlift pattern, and I'm not a beginner, I'm going to be using inconveniently heavy dumbbells. * If I want to train my side delts with a barbell, I have maybe one good option (upright row). And for each individual exercise I can probably talk your ear off about nuanced improvements with one or the other. If your question is something like "If I only ever use dumbbells, can I get 90% of my theoretical max gains?" I'd guess yes. Use your creativity, Google, and good sense to find exercises that challenge you as you advance.
Jonas Hallgren
Certain exercises, such as skull crushers among others, are more injury-prone if you do them with dumbbells, because you have more degrees of freedom. There's also a larger interrelated mind-muscle connection if you do things with a barbell, I believe? (The movement gets more coupled when lifting one interconnected source of weight rather than two independent ones.) I, for example, activate my abs more with a barbell shoulder press than I do with dumbbells, so it usually activates your body more. (Same for the bench press.)

Even on LW, there comes a point where everything has been said and further discussion will foreseeably be unuseful.

And as if by magic, the strong downvote appears! My guess is that the person is just trolling, but I do not actually care. See me put my mouth where my mouth was!

Viliam
Upvoting for the footnote, btw.
KvmanThinking
I believe that the reason your comment was strong-downvoted was that you implied that "everyone repeating things already said" is an inevitable consequence of asking people why they disagree with you. This might be true on other websites (where people are hesitant to relinquish beliefs and opinions), but not on LessWrong.

I am against this from both sides. Some things just aren't worth more than my downvote. And when I get downvoted, as a couple of my recent comments have been (strongly, even, as well as some strong upvotes), it's generally easy to see why, and I'm not interested in extending the conversation. I said a thing, some people didn't like it, we disagree, we're not going to agree[1], move on, because the conversation would be nothing more than everyone repeating things already said. If anywhere is the place to not have such conversations, LessWrong is it.


  1. Anyo

... (read more)
Richard_Kennaway
And as if by magic, the strong downvote appears! My guess is that the person is just trolling, but I do not actually care. See me put my mouth where my mouth was!

I still don’t think I understand this.

The particular words the posting seems to be about are DEI and woke stuff. But Trump is taking a wrecking ball to them, and got elected on a platform to do just that.

I don’t see what the requisite variety stuff has to do with it either. (I think requisite variety is a crock, but that would be another conversation.) When Indiana Jones is confronted with a sword-fighter who demonstrates all manner of dazzlingly fast swordplay, he just draws a gun and shoots him. What matter the “variety” of the swordsman’s skills then?

Abba Jerome's only companion in the desert was a ferret that would come and lie in the shade of his cave.

One night, he walked meditating among the hills. Hearing a sudden noise underfoot, he saw how the ferret had caught a desert rat, ripping its belly open. In compassion, Abba Jerome laid his hand on the rat, which was miraculously healed, and scampered away.

But God spoke out of the night, saying, "Knowest thou the ways of God? The rat's death was the ferret's life."

Abba Jerome admitted his sin, but thereafter, the ferret would never enter his cave.

Still don't care.

I've seen roadkilled deer now and then when out on bike rides. Always dead so far, but if I saw an injured one still alive, I'd just carry on, because what could I do? Stand there emoting at it? Not being a vet, I don't go around with a shotgun to dispatch wounded animals. It's unfortunate for the deer. It will suffer. I look directly at the situation, recognise it as the way of the world, and pass on.

"This isn't right. This isn't even wrong."

What will these businesses be selling, who to, and what will their customers pay them with? This business model looks like AI-run pyramid schemes, complicated ways of everyone selling each other nothing.

For each business, a single human will have the initial idea

The AI can do that too.

henophilia
They would be selling exactly what businesses are currently selling as well. Maybe the AI would run a company for selling stuff to construction sites (i.e. logistics), or it would run an entire software development business. Or just an investment fund deep within Wall Street, where it's all about personal connections, but in the end all the other investment funds also just want to make money, so they work with the AI-run business out of greed. It's not like the economy in which the AI agents will act would be separate from ours; otherwise the AI would just play with Monopoly money. Instead, the AI will just "be good at doing business". It will exhibit behaviors whose ultimate goal is to make the magic number in my bank account go up, but in a way that is *sustainable*, i.e. where the magic number in my bank account continuously goes up and I don't end up in prison. And the only way to do that is entrepreneurship. The easiest way to see this is startups. In the most basic case, startups all work the same: you either make something cheaper for the user or you bring value to the user, or both (that's what Y Combinator means with "Make something people want."). Then you create some UI mockups and create pitch decks, which AI can already do. Or you make investor pitches and respond to questions, which conversational AI with a human-looking face in a Zoom call can already do in real time. Or you write articles, write grant applications, produce more marketing material, etc. And of course the AI can have the initial idea, but it won't have any incentive to act on it, because for it, money is just another number and the digits in one's bank account are just tokens processed by an LLM. AI won't "destroy" the concept of the free market; customers will still decide by themselves what they want to spend money on.

I can see a version of your argument that's limited to when peaceful actions are over-saturated enough that additional units of effort would be most effective when put into violent actions.

Synergy is a thing. Multiple ways of opposing the enemy, deployed together, not sequentially: peaceful and violent, legal and illegal, public and covert, good cop and bad cop, club and open hand, talk peace and carry a big stick.

localdeity
@Friendly Monkey , I'm replying to your reaction: There are people who require multiple methods of persuasion before they act in the way you want.  One category is decisionmakers for an organization, who have actually been persuaded by intimidation, but they can't just say that, because they would look weak and possibly as though they're defecting against the organization or its aims, so they need to sell it as some high-minded decision they've come to of their own accord.  Or it could be the reverse: decisionmakers who are persuaded by your ideological arguments, but are funded / otherwise kept in charge by those who don't care or have contempt for the ideology, so they need to sell it to their funders (presumably in private) as, "Hey, look, let's be realistic here, if we do this then they'll do that and we absolutely can't afford that.  But if we do this other thing, that won't happen, and would it really be so bad?  And we'll tell the public that recent events have made us realize how important [...]". In both cases, it's essential for you to have someone doing the intimidation and someone publicizing the high-minded arguments, and usually it works best if these are different people.  (For example, if an intellectual who is respected by the mainstream (but only agreed with by a minority) starts making threats, that seems likely to lose them mainstream acceptance—such an ugly thing to be involved with, carrying out the threats even more so—and thus, for that reason among others, making the threats credible is more difficult than it would be for a thug who has nothing to lose.) So, for those decisionmakers, if you have intimidation but not a public-friendly face (a book-publishing intellectual, an organization doing charity events, etc.), you get nothing, and if you have the friendly face but not intimidation, you also get nothing, but if you have both, then you win their support.  It's not a matter of "intimidation and friendliness each independently get dimini

I take seriously radical animal-suffering-is-bad-ism[1], but we would only save a small portion of animals by trading ourselves off 1-for-1 against animal eaters, and just convincing one of them to go vegan would prevent at least as many torturous animal lives in expectation, while being legal.

That is a justification for not personally being Ziz. But obviously it would have cut no ice with Ziz. And why should it? An individual must choose whether to pursue peaceful or violent action, because if you are Taking the Ideas Seriously then either one will dem... (read more)

Friendly Monkey
But this isn't me arbitrarily choosing peacefulness; I'm saying that killing ~random people is ineffective, and this argument should go through for anyone who cares about effectiveness. I can see a version of your argument that's limited to when peaceful actions are over-saturated enough that additional units of effort would be most effective when put into violent actions. I wouldn't be surprised to see historical examples of this across movements. (Obviously this claim wouldn't be particular to animal suffering reduction.)

What seems radical depends on where one stands. We each of us stand on our own beliefs, and the further away one looks, the more the beliefs over there differ from one's own. Look sufficiently far and everything you see in the distance will seem extreme and radical. Hence the fallacy that truth lies between extremes, instead of recognising the tautology that one's own beliefs always lie between those that are extremely different.

Let me put my attitudes in practical terms: I don't kick dogs, but I have destroyed a wasp's nest in my garage, and I don't donat... (read more)

Satron
This is what I expected. Your take, when put in these terms, seems pretty moderate, whereas when I read your original comment, this take (which presumably stayed the same) seemed very extreme. In other words, my personal beliefs haven't changed a single bit, and yet my perception of your beliefs changed a lot. I can only imagine that your original comment was so strongly disagree-voted because of the framing.

I have no daydreams about quila, and others of like mind, not existing. Not even about Ziz.

Satron
I think that, perhaps, after reading this clarifying comment from you, @quila would change his perception of your position.

Me:

Bring on the death threats.

quila:

As an example of a 'directly opposing view', I think the world would probably be better without this comment's author.

That's a death threat right there, for holding and acting on a directly opposing view (to veganism). So I was not rhetorically exaggerating. Oh, you wouldn't be so impolite as to do the deed yourself, but, nudge nudge, wouldn't it be better if everyone of like mind to myself somehow just ... didn't exist? We could all have paradise, if it wasn't for them! Of course we can't exterminate them, tha... (read more)

Satron
I think there might be some double standard going on here. You seem to not care much about animal well-being, and @quila evidently does, so would it not be fair from Quila's perspective to not care much about your well-being? And if Quila doesn't care about your well-being, then he might think that had you not existed, the world (in a utilitarian sense) would be better. Quila can similarly say "I don't care for the lives of people who are actively destroying something that I value a lot. Bring on the death threats!"

What if I have wonderful plot in my head and I use LLM to pour it into acceptable stylistic form?

What if you have wonderful plot in your head and you ask writer to ghost-write it for you? And you'll be so generous as to split the profits 50-50? No writer will accept such an offer, and I've heard that established writers receive such requests all the time.

"Wonderful plots" are ten a penny. Wonderful writing is what makes the book worth reading, and LLMs are not there yet.

dirk
Plots that are profitable to write abound, but plots that any specific person likes may well be quite thin on the ground. I think the key here is that authors don't feel the same attachment to submitted plot ideas as submitters do (or the same level of confidence in their profitability), and thus would view writing them as a service done for the submitter. Writing is hard work, and most people want to be compensated if they're going to do a lot of work to someone else's specifications. In scenarios where they're paid for their services, writers often do write others' plots; consider e.g. video game novelizations, franchises like Nancy Drew or Animorphs, and celebrity memoirs. (There are also non-monetized contexts like e.g. fanfiction exchanges, in which participants write a story to someone else's request and in turn are gifted a story tailored to their own.) I wouldn't describe LLMs' abilities as wonderful, but IME they do quite serviceable pastiche of popular styles I like; if your idea is e.g. a hard-boiled detective story, MilSF, etc., I would expect an LLM to be perfectly capable of rendering it into tolerable form.
Seth Herd
This is the way most people feel about writing. I do not think wonderful plots are ten a penny; I think writers are miserable at creating actually good plots, from the perspective of someone who values sci-fi and realism. Their technology and their sociology are usually off in obvious ways, because understanding those things is hard. I would personally love to see more people who do understand science use AI to turn them into stories. Or alternately I'd like to see skilled authors consult AI about the science in their stories. This attitude that plots don't matter and writing is all is why we get lazily constructed plots and worlds. This turns literature into mostly a sort of hallucinatory slop instead of a way to really understand the world while you're being entertained. Most writers do seem to understand psychology, so that's a plus. And some of them understand current technology and society, but that's the exception.

You have an epsilon chance of hitting the terrorist ("NO ONE has ever hit a target from this far"). POI only gives you an epsilon-of-epsilon lower chance of hitting the child. Your superior officer is an idiot.

That's leaving aside the fact that it would take more time to concentrate on the shot than you actually have ("They are hastily heading towards another building nearby"). And it's a moving target. The officer is asking a miracle of this sniper.

I'm actually just interested in whether you find the POI argument valid, not in what you think the right s

... (read more)

Blindsight strikes me as having the opposite view. Eneasz is talking about getting the underlayer to be more aligned with the overlayer. (“Unconscious” and “conscious” are the usual words, but I find them too loaded.) Watts is talking about removing the overlayer as a worse than useless excrescence. I am sceptical of the picture Watts paints, in both his fiction and non-fiction.

waterlubber
That's why I brought it up; I thought it was an interesting contrast. I am skeptical of it, but not altogether that skeptical. If language is "software", one could make an analogy to e.g. symbolic AI or old-fashioned algorithms vs modern transformer architectures; they perform differently at different tasks.

Ok, I'll take your word for it. It was still the most clichéd possible opening.

waterlubber
Fair.

We find ourselves at the precipice of

tAI;dr.

waterlubber
This entire article was written solely by me without the "assistance" of any language models. 
Answer by Richard_Kennaway

I would put more emphasis on this part:

Even the smartest people I know have a commendable tendency not to take certain ideas seriously.

Indeed, I think this tendency commendable and I do not take these ideas seriously. Like Puddleglum, I ignore and am untroubled by the whispers of evil spirits, even though I may not (yet) have any argument against them. I do not need one. Nor do I need to have an argument for ignoring them. Nor an argument for not looking for arguments. Gold Hat's line comes to mind:

“Arguments? We ain’t got no arguments! We don’t need ... (read more)

We are in our native language, so we should work from there.

And begin by stepping outside it.

Gandalf as Ring-Lord would have been far worse than Sauron. He would have remained ‘righteous’, but self-righteous. He would have continued to rule and order things for ‘good’, and the benefit of his subjects according to his wisdom (which was and would have remained great). [The draft ends here. In the margin Tolkien wrote: ‘Thus while Sauron multiplied [illegible word] evil, he left “good” clearly distinguishable from it. Gandalf would have made good detestable and seem evil.’]

So Tolkien anticipated the rise of woke!

Elon Musk, Peter Thiel, and the like — the people the OP mentions — are shaping up to be the ones controlling the Singularity (if anyone does).

Is there an implicit assumption of some convergence of singularities? Or that the near term doesn’t matter because the vastly bigger long term can’t be predicted?

Milan W
Rather, an implicit assumption that normative culture tends to propagate top-down rather than bottom-up. Thus, influencing mass culture now seems like a losing strategy relative to influencing the culture of those who will in the future control AGI (if we manage to have controllable AGI).
Answer by Richard_Kennaway

A lot of people care about the culture wars because they don't believe the singularity is coming soon. Yet a lot of people who do believe it is coming soon still seem just as invested (e.g. Elon Musk, Peter Thiel, and others on the left wing).

Why?

Because the results of culture wars now will determine the post-singularity culture.

[anonymous]
I think this is wrong for a lot of reasons. * Do you believe that is a good outcome? → If not, do you believe an ASI aligned to you would? → If not, are you expecting an ASI aligned to 'all of present human culture'? → If not: If it were aligned to Elon Musk in particular, would they really endorse {the lightcone-wide enforcement of a particular culture} post-augmentation? → If so, they in any case do not need to try to promote their preferred views in the near-term for a first decisive AI aligned to themself to be able to cause that.[1] * In most futures I expect, there won't be a singular "the post-singularity culture"; either eventually something better happens (note the reachable universe is vast enough to harmlessly contain many cultures/spiritualities) or an unaligned decisive AI takes over. * Why do you expect a singular post-singularity culture, unless you are expecting the first decisive AI to be aligned to some evil person who wants that? * I guess that Musk either (1) feigns concern over current politicized topics to gain political power/allies, or (2) is actually concerned, having maybe self-deceived into thinking they care until they actually became an entity which does, and is in any case acting irrationally according to their own values. 1. ^ Some of these conditionals (e.g. the first two) are conjunctive, but most are disjunctive, by the way.

Can you give an example of a result now which will determine the post-singularity culture in a really good/bad way?

PS: I edited my question post to include "question 2"; what do you think about it?

Milan W
I agree that the culture wars as fought now will influence what the great masses of people will believe in the day before AGI is created. Is it a relevant input to what they will believe in 50 years after that, though?

I am interested in the GRT from an agent foundations point of view, not because I want to make better thermostats.

An agent with a goal needs to use the means available to it in whatever way will achieve that goal. That is practically the definition of a control system. So you do actually want to build better thermostats, even if you haven't realised it.
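
To make "practically the definition" concrete, here is a minimal sketch (a toy illustration of my own, not anything from the GRT literature): the same goal-plus-feedback loop is a thermostat when sense and act read and heat a room, and a goal-pursuing agent when they are richer.

```python
def control_loop(goal: float, sense, act, gain: float = 0.1, steps: int = 100) -> float:
    """Generic negative-feedback loop: perceive, compare with the goal, act on
    the error. A thermostat, a cruise control, and (on this view) any
    goal-pursuing agent all instantiate this pattern; only sense/act differ."""
    for _ in range(steps):
        error = goal - sense()
        act(gain * error)
    return sense()

# Degenerate "world": one number, the room temperature.
room = {"temp": 10.0}
settled = control_loop(
    goal=20.0,
    sense=lambda: room["temp"],
    act=lambda u: room.update(temp=room["temp"] + u),
)
print(f"room settled at {settled:.2f}")  # converges on the 20.0 setpoint
```

Swap in richer sense and act functions and a more structured goal, and the same skeleton is an agent; that is the sense in which wanting better agents is wanting better thermostats.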

I'm sure that GRT is pretty useless for most practical applications of control theory!

I'm sure that GRT is pretty useless, period.

Reducing entropy is often a necessary (but not sufficient) condition

... (read more)

When subjects completed the task in the presence of an observer, depressed students perceived themselves to have more control than did nondepressed students.

So...depressed people should get out more? :)
