Would I be wrong to guess it argues against the position in https://www.lesswrong.com/posts/AfAp8mEAbuavuHZMc/for-the-sake-of-pleasure-alone?
I suppose it does. That article was not in my mind at the time, but, well, let's just say that I am not a total hedonistic utilitarian, or a utilitarian of any other stripe. "Pleasure" is not among my goals, and the poster's vision of a universe of hedonium is to me one type of dead universe.
Hack your brain to make eating healthily effortless. Hack your body to make exercise effortless.
You're thinking pretty small there, if you're in a position to hack your body that way.
Yet these are actual ideas someone suggested in a recent comment. In fact, that was what inspired this rant, but it grew beyond what would be appropriate to dump on the individual.
I think you're seeing shadows of your own ideas there.
Perhaps the voice I wrote that in was unclear, but I no more desire the things I wrote of than you do. Yet that is what I see people ...
I do not have answers to the question I raise here.
Back in the stone age — I think something like the 1960s or 1970s — I read an article about the possible future of computing. Computers back then cost millions and lived in giant air-conditioned rooms, and memory was measured in megabytes. Single figures of megabytes. Someone had expressed to the article's writer the then-visionary idea of using computers to automate a company. They foresaw that when, for example, a factory was running low on some of its raw materials, the computer would au...
The vision is of everything desirable happening effortlessly and everything undesirable going away.
Citation needed. Particularly for that first part.
...If you're a software developer, just talk to the computer to give it a general idea of what you want and it will develop the software for you, and even add features you never knew you wanted. But then, what was your role
I'd bet that I'm still on the side where I can safely navigate and pick up the utility, and I median-expect to be for the next couple months ish. At GPT-5ish level I get suspicious and uncomfortable, and beyond that exponentially more so.
Please review this in a couple of months ish and see if the moment to stop is still that distance away. The frog says "this is fine!" until it's boiled.
Follow the improbability. What drew that particular theory to the person's attention, either the hypothetical Roman commoner or the person arguing that we can't yet test their hypothesis about God? If the answer is "nothing", as is literally the case for the imagined Roman, then we need not concern ourselves further with the matter. If the hypothesis about God is not already entangled with the world, it fares no better.
The Sequences? Not quite what you're looking for, but that's what I have always thought of as the essentials of LW (before the AI explosion).
WARNING: this post might press some pain-points of humans in general, and of LW community in particular - so let's see how many downvotes it collects. I do believe our triggers point to our blind-spots or dogmas – so maybe you can find here an opportunity for new depth.
A pre-emptive universal argument against all disagreement, which the poster then deployed in this comment.
Anyone have a logical solution to exactly why we should act altruistically?
"Logical ... should" sounds like a type error, setting things up for a contradiction. While there are adherents of moral naturalism, I doubt there are many moral naturalists around here. Even given moral naturalism, I believe it would still be true that any amount of intelligence can coexist with any goals. So no, there is no reason why unconstrained intelligences should be altruistic, or even be the sort of thing that "altruism" could meaningfully be asserted or denied of them.
...Steelmanning is writing retcon fanfiction of your interlocutor’s arguments. As such it necessarily adds, omits, or changes elements of the source material, in ways that the other person need not accept as a valid statement of their views.
When we look at experience itself, there is no fixed “I” to be found.
Speak for yourself. That whole paragraph does not resemble my experience. You recommend Parfit, but I've read Parfit and others and remain true to myself.
You can't even predict the weather more than a few days in advance, and you can't predict the movement of individual gas molecules for longer than a tiny fraction of a second, even if you knew their exact positions and velocities, which you can't. So these hypothetical determinations are of no consequence. Add quantum indeterminacy and your hypothetical exact prediction of the future becomes a probability distribution over possible worlds, i.e. an exact calculation of your ignorance.
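To make the point about sensitivity concrete, here is a toy demonstration (my own sketch; the logistic map stands in for any chaotic system, it is not a weather model): two trajectories that start a hundred-billionth apart become completely uncorrelated within a few dozen steps.

```python
# Sensitive dependence on initial conditions, using the chaotic logistic
# map x -> 4x(1-x). A perturbation of 1e-10 grows to order-1 separation.

def trajectory(x, steps):
    """Iterate the logistic map, returning the whole orbit."""
    out = [x]
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        out.append(x)
    return out

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-10, 50)
max_sep = max(abs(p - q) for p, q in zip(a, b))
print(max_sep)  # the initial 1e-10 gap has blown up by many orders of magnitude
```

Exact positions and velocities would not save you either: any measurement error, however small, plays the role of the 1e-10 here.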
The question I am more interested in is, why are all these people in recen...
It has LLM written all over it. For example:
This attitude betrays a misunderstanding of cognitive privilege. Just as a person born into wealth has a head start in life, a person born with high cognitive ability begins the race miles ahead of others. Yet, many in rationalist communities resist this conclusion, likely because it challenges the notion of a purely meritocratic intellect.
"Yet, many in rationalist communities resist this conclusion" — Who? Where? I have never seen anything that fits this. It comes out of nowhere. And it isn't a "conclusion",...
No. Nothing but soft CCP-approved platitudes can be expected from such a person writing in such a venue. That is her job. China matters, but not everything that it says matters, except to Pekingologists minutely examining the tea-leaves for insight into whatever is really going on in China.
What about my other two points?
I am not sure that is right. A very large percentage of people really don't think the rolls are independent. Have you ever met anyone who believed in fate, karma, horoscopes, lucky objects, or prayer? They don't think it's (fully) random and independent. I think the majority of the human population believes in one or more of those things.
They may well do. But they are wrong.
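For what it's worth, the independence claim is trivial to check by simulation (a throwaway sketch of my own, coin flips rather than dice):

```python
import random

random.seed(0)  # deterministic run for reproducibility

# Gambler's-fallacy check: after a streak of three heads, the next flip
# is still 50/50. The streak carries no information about what comes next.
flips = [random.random() < 0.5 for _ in range(200_000)]
after_streak = [flips[i] for i in range(3, len(flips))
                if flips[i - 3] and flips[i - 2] and flips[i - 1]]
rate = sum(after_streak) / len(after_streak)
print(rate)  # ≈ 0.5, up to sampling noise
```

No amount of fate, karma, or lucky objects shows up in the tally.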
It is written by a Chinese former politician in a Chinese-owned newspaper.
?
It is written by a Chinese former politician in a Chinese-owned newspaper.
Why does the linked article merit our attention?
It does not contain anything to justify the quote, which is the article's sub-headline. Journalists typically do not write their own headlines, but there is a near quote of the sub-headline in the body of the article. It is not elaborated on by the text around it.
It is written by a Chinese former politician in a Chinese-owned newspaper.
Whether or not LLMs contributed to the content, it is just as foggy and platitudinous as if one had written the entire thing. It expresses nothing but warm, content-free fuzziness towards the Chinese government in the context of AI.
There are mathematical arguments against Expected Value Fanaticism. They point out that a different ontology is required when considering successive decisions over unbounded time and unbounded payoffs. Hence the concepts of multiple bets over time, Kelly betting, and what is now the Standard Bad Example of someone deliberately betting the farm for a chance at the moon and losing. And once you start reasoning about divergent games like St Petersburg, you can arrive at contradictions very easily unless you think carefully about the limiting processes involve...
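Since the comment leans on repeated bets over time, here is a quick simulation (my own sketch; the numbers are illustrative, not drawn from any source) contrasting per-round expected-value maximisation with Kelly betting on a favourable even-odds bet:

```python
import random

random.seed(0)  # deterministic run for reproducibility

def simulate(fraction, p=0.6, rounds=200, trials=2000):
    """Median final bankroll when betting a fixed fraction each round
    on an even-odds bet that wins with probability p."""
    finals = []
    for _ in range(trials):
        bankroll = 1.0
        for _ in range(rounds):
            stake = bankroll * fraction
            if random.random() < p:
                bankroll += stake
            else:
                bankroll -= stake
        finals.append(bankroll)
    finals.sort()
    return finals[len(finals) // 2]

kelly = simulate(2 * 0.6 - 1)   # Kelly fraction f* = 2p - 1 = 0.2
all_in = simulate(1.0)          # "bet the farm": maximises each round's EV
print(kelly, all_in)
```

Betting everything maximises each individual round's expected value, yet the typical outcome is ruin, while the Kelly bettor's median bankroll grows: the Standard Bad Example in miniature.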
How do barbell and dumbbell exercises differ? I always find dumbbells more convenient — is there ever a reason to use a barbell instead?
Even on LW, there comes a point where everything has been said and further discussion will foreseeably be unuseful.
And as if by magic, the strong downvote appears! My guess is that the person is just trolling, but I do not actually care. See me put my mouth where my mouth was!
I am against this from both sides. Some things just aren't worth more than my downvote. And when I get downvoted, as a couple of my recent comments have been (strongly, even, as well as some strong upvotes), it's generally easy to see why, and I'm not interested in extending the conversation. I said a thing, some people didn't like it, we disagree, we're not going to agree[1], move on, because the conversation would be nothing more than everyone repeating things already said. If anywhere is the place to not have such conversations, LessWrong is it.
Anyo
I still don’t think I understand this.
The particular words the posting seems to be about are DEI and woke stuff. But Trump is taking a wrecking ball to them, and got elected on a platform to do just that.
I don’t see what the requisite variety stuff has to do with it either. (I think requisite variety is a crock, but that would be another conversation.) When Indiana Jones is confronted with a sword-fighter who demonstrates all manner of dazzlingly fast swordplay, he just draws a gun and shoots him. What matter the “variety” of the swordsman’s skills then?
Abba Jerome's only companion in the desert was a ferret that would come and lie in the shade of his cave.
One night, he walked meditating among the hills. Hearing a sudden noise underfoot, he saw how the ferret had caught a desert rat, ripping its belly open. In compassion, Abba Jerome laid his hand on the rat, which was miraculously healed, and scampered away.
But God spoke out of the night, saying, "Knowest thou the ways of God? The rat's death was the ferret's life."
Abba Jerome admitted his sin, but thereafter, the ferret would never enter his cave.
Still don't care.
I've seen roadkilled deer now and then when out on bike rides. Always dead so far, but if I saw an injured one still alive, I'd just carry on, because what could I do? Stand there emoting at it? Not being a vet, I don't go around with a shotgun to dispatch wounded animals. It's unfortunate for the deer. It will suffer. I look directly at the situation, recognise it as the way of the world, and pass on.
"This isn't right. This isn't even wrong."
What will these businesses be selling, who to, and what will their customers pay them with? This business model looks like an AI-run pyramid scheme, a complicated way of everyone selling each other nothing.
For each business, a single human will have the initial idea
The AI can do that too.
I can see a version of your argument that's limited to when peaceful actions are over-saturated enough that additional units of effort would be most effective when put into violent actions.
Synergy is a thing. Multiple ways of opposing the enemy, deployed together, not sequentially: peaceful and violent, legal and illegal, public and covert, good cop and bad cop, club and open hand, talk peace and carry a big stick.
I take seriously radical animal-suffering-is-bad-ism[1], but we would only save a small portion of animals by trading ourselves off 1-for-1 against animal eaters, and just convincing one of them to go vegan would prevent at least as many torturous animal lives in expectation, while being legal.
That is a justification for not personally being Ziz. But obviously it would have cut no ice with Ziz. And why should it? An individual must choose whether to pursue peaceful or violent action, because if you are Taking the Ideas Seriously then either one will dem...
What seems radical depends on where one stands. We each of us stand on our own beliefs, and the further away one looks, the more the beliefs over there differ from one's own. Look sufficiently far and everything you see in the distance will seem extreme and radical. Hence the fallacy that truth lies between extremes, instead of recognising the tautology that one's own beliefs always lie between those that are extremely different.
Let me put my attitudes in practical terms: I don't kick dogs, but I have destroyed a wasp's nest in my garage, and I don't donat...
I have no daydreams about quila, and others of like mind, not existing. Not even about Ziz.
Me:
Bring on the death threats.
quila:
As an example of a 'directly opposing view', I think the world would probably be better without this comment's author.
That's a death threat right there, for holding and acting on a directly opposing view (to veganism). So I was not rhetorically exaggerating. Oh, you wouldn't be so impolite as to do the deed yourself, but, nudge nudge, wouldn't it be better if everyone of like mind to myself somehow just ... didn't exist? We could all have paradise, if it wasn't for them! Of course we can't exterminate them, tha...
What if I have a wonderful plot in my head and I use an LLM to pour it into acceptable stylistic form?
What if you have a wonderful plot in your head and you ask a writer to ghost-write it for you? And you'll be so generous as to split the profits 50-50? No writer will accept such an offer, and I've heard that established writers receive such requests all the time.
"Wonderful plots" are ten a penny. Wonderful writing is what makes the book worth reading, and LLMs are not there yet.
You have an epsilon chance of hitting the terrorist ("NO ONE has ever hit a target from this far"). POI only gives you an epsilon-of-epsilon lower chance of hitting the child. Your superior officer is an idiot.
That's leaving aside the fact that it would take more time to concentrate on the shot than you actually have ("They are hastily heading towards another building nearby"). And it's a moving target. The officer is asking a miracle of this sniper.
...I'm actually just interested in whether you find the POI argument valid, not in what you think the right s
Blindsight strikes me as having the opposite view. Eneasz is talking about getting the underlayer to be more aligned with the overlayer. (“Unconscious” and “conscious” are the usual words, but I find them too loaded.) Watts is talking about removing the overlayer as a worse than useless excrescence. I am sceptical of the picture Watts paints, in both his fiction and non-fiction.
Ok, I'll take your word for it. It was still the most clichéd possible opening.
We find ourselves at the precipice of
tAI;dr.
I would put more emphasis on this part:
Even the smartest people I know have a commendable tendency not to take certain ideas seriously.
Indeed, I think this tendency commendable and I do not take these ideas seriously. Like Puddleglum, I ignore and am untroubled by the whispers of evil spirits, even though I may not (yet) have any argument against them. I do not need one. Nor do I need to have an argument for ignoring them. Nor an argument for not looking for arguments. Gold Hat's line comes to mind:
“Arguments? We ain’t got no arguments! We don’t need ...
We are in our native language, so we should work from there.
And begin by stepping outside it.
Gandalf as Ring-Lord would have been far worse than Sauron. He would have remained ‘righteous’, but self-righteous. He would have continued to rule and order things for ‘good’, and the benefit of his subjects according to his wisdom (which was and would have remained great). [The draft ends here. In the margin Tolkien wrote: ‘Thus while Sauron multiplied [illegible word] evil, he left “good” clearly distinguishable from it. Gandalf would have made good detestable and seem evil.’]
So Tolkien anticipated the rise of woke!
Elon Musk, Peter Thiel, and the like — the people the OP mentions — are shaping up to be the ones controlling the Singularity (if anyone does).
Is there an implicit assumption of some convergence of singularities? Or that the near term doesn’t matter because the vastly bigger long term can’t be predicted?
A lot of people care about the culture wars because they don't believe the singularity is coming soon. Yet a lot of people who do believe it is coming soon still seem just as invested (e.g. Elon Musk, Peter Thiel, and others on the left wing).
Why?
Because the results of culture wars now will determine the post-singularity culture.
Can you give an example of a result now which will determine the post-singularity culture in a really good/bad way?
PS: I edited my question post to include "question 2," what do you think about it?
I am interested in the GRT from an agent foundations point of view, not because I want to make better thermostats.
An agent with a goal needs to use the means available to it in whatever way will achieve that goal. That is practically the definition of a control system. So you do actually want to build better thermostats, even if you haven't realised it.
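A minimal sketch of the point, in thermostat form (my own toy example, not anything from the GRT literature): a goal (the setpoint) plus means of acting on the world (the heater) already constitutes a control system.

```python
# Bang-bang thermostat: the simplest goal-directed agent.

def thermostat_step(temp, heater_on, setpoint=20.0, hysteresis=0.5):
    """One control decision: switch the heater to hold temp near the setpoint."""
    if temp < setpoint - hysteresis:
        return True
    if temp > setpoint + hysteresis:
        return False
    return heater_on  # inside the dead band, leave the heater as it is

def simulate(steps=200, temp=10.0, outside=5.0, heater_power=2.0, leak=0.1):
    """Run the control loop against a toy heat-balance model of the room."""
    heater_on = False
    for _ in range(steps):
        heater_on = thermostat_step(temp, heater_on)
        # leakage toward the outside temperature, plus heater input
        temp += leak * (outside - temp) + (heater_power if heater_on else 0.0)
    return temp

final = simulate()
print(final)  # settles into a small oscillation around the 20-degree goal
```

Swap "temperature" for any state variable and "heater" for any available means, and the same loop describes an agent pursuing a goal.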
I'm sure that GRT is pretty useless for most practical applications of control theory!
I'm sure that GRT is pretty useless, period.
...Reducing entropy is often a necessary (but not sufficient) condition
When subjects completed the task in the presence of an observer, depressed students perceived themselves to have more control than did nondepressed students.
So...depressed people should get out more? :)
There are authors I would like to read, if only they hadn't written so much! Whole fandoms that I must pass by, activities I would like to be proficient at but will never start on, because the years are short and remain so, however far an active life is prolonged.