I learned a lot from him and I STILL have a bad vibe about him. People can be correct, useful, and also unsafe. (Primarily, I suspect him to be high on scales of narcissism, to which I'm very sensitive. Haven't met the guy personally, but his text reeks of it. Doesn't negate his genius; just negates my will to engage with him in any other dimension.)
I am not the best at writing thorough comments because I am more of a Redditor than a LessWronger, but I just want you to know that I read the entire post over the course of ~2.5 hours and I support you wholeheartedly and think you're doing something very important. I've never been part of the rationalist "community" and don't want to be (I am not a rationalist, I am a person who strives weakly for rationality, among many other strivings), particularly after reading all this, but I definitely expected better out of it than I've seen lately. But perhaps I s...
Well, as I attempted to express in the original comment, I am not a Shaiva, but rather I had mystical experiences and things as a teen that led me to invent my own religion from scratch which has similarities with various other belief systems, and Kashmir Shaivism is one of them. For the most part however it's just a kind of background element of my existence, part of my ontology, and not something I put much attention towards actively anymore. In practice I'm effectively an atheist physicalist like everyone else here. It's just... there's also something t...
I think you're wrong that Level 4 is rare. It describes everyday reality in the upper strata of any cult - the psychopathic leadership class who make up shit for everyone else to believe (or to claim to believe for status, and so on), and who compete for status among those followers. And there are a LOT of cults in modern society, including organizations not traditionally perceived as cults, such as political ideologies.
I think people have "criticized" Minecraft for being unclear what the point is, and being more of a toy or sandbox than a "game."
Myself included. I can't play Minecraft; it's far too open-ended, and makes me feel anxious and overwhelmed. "Wtf am I supposed to do??" I want a game to give me two or three choices max at every decision point, not an infinite vector space of possibilities through which I cannot sort or prioritize.
This post though is about one of my big obsessions: trying to figure out how to design a game (computer or tabletop or both) which ma...
If I were Bob I'd have told her to fuck off long ago and stopped letting some random person berate me for being lazy just like my parents always have. This is basically guilt-tripping, not a beneficial way of approaching any kind of motivation, and it is absolutely guaranteed to produce pushback. But then, I'm probably not your target audience, am I?
Btw just to be clear, I think Said Achmiz explained my reaction better than I, who habitually post short reddit-tier responses, can. My specific issue is that Alice seems to be acting as if it's any of her busi...
I really enjoy this sequence, but there's a sticking point that's making me unable to continue until I figure it out. It seems to me rather obvious that... utility functions are not shift-invariant! If I denominate option A at 1 utilon and option B at 2 utilons, that means I am indifferent between a certain outcome of A and a 50% probability of B (with nothing otherwise) - and this is no longer the case if I shift my utility function even slightly. Ratios of utilities mean something concrete and are destroyed by translation. Since your entire argument seems to rest on that inexplicably not being the case, I can't see how any of this is useful.
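To spell out the arithmetic behind my worry - note this assumes the "nothing otherwise" branch is a status-quo outcome pinned at 0 utilons that the shift does not touch; everything hinges on that zero point:

```latex
% Setup: u(A)=1, u(B)=2, and the lottery "50% of B" pays B with
% probability 1/2 and a fixed status-quo outcome worth 0 otherwise.
u(A) = \tfrac{1}{2}\,u(B) + \tfrac{1}{2}\cdot 0 = 1
% Shift only the named options by a constant c (status quo stays at 0):
u'(A) = 1 + c, \qquad
\tfrac{1}{2}\,u'(B) + \tfrac{1}{2}\cdot 0 = 1 + \tfrac{c}{2}
% These agree only when c = 0, so the indifference is broken.
```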
I understand all this logically, but my emotional brain asks, "Yeah, but why should I care about any of that? I want what I want. I don't want to grow, or improve myself, or learn new perspectives, or bring others joy. I want to feel good all the time with minimal effort."
When wireheading - real wireheading, not the creepy electrode in the brain sort that few people would actually accept - is presented to you, it is very hard to reject it, particularly if you have a background of trauma or neurodivergence that makes coping with "real life" difficult to beg...
Neither of these really describes what childhood is for. Both of them are inventions of modern WEIRD society. I'd suggest you read "The Anthropology of Childhood: Cherubs, Chattels, Changelings" for a wider view of the subject... it's pretty bleak, though. The very idea that there is such a thing as an optimal childhood that parents ought to strive to provide their children... is also a modern, Western, extremely unusual idea, and throughout most of history, in most cultures, they were just... little creatures that would eventually be adults and till then either...
To be honest, I look forward to AI partners. I have a hard time seeing the point of striving to have a "real" relationship with another person, given that no two people are really perfectly compatible, no one can give enough of their time and attention to really satisfy a neverending desire for connection, etc. I expect AIs to soon enough be better romantic companions - better companions in all ways - than humans are. Why shouldn't I prefer them?
Those stories are surprisingly coherent and compelling. They were actually fun to read!
I'm not sure how useful the concept of boundary placement rebellion is, though. It certainly is a thing, but it's also something basically everyone engages in. I pretty much constantly do it... though maybe that says more about me than anything...
I'm never really sure what there's any point in saying. My main interests have nothing to do with AI alignment, which seems to be the primary thing people talk about here. And a lot of my thoughts require the already existing context of my previous thoughts. Honestly, it's difficult for me to communicate what's going on in my head to anyone.
No, it's called "lying". The text that he produces as a result of these social pressures does not reflect his actual thought processes. You can't judge a belief on the basis of a bunch of ex post facto arguments people make up to rationalize it - the method by which they came to hold the belief is much more informative. For those of us with very roundabout styles of thinking (such as myself), being forced into this self-censorship and modification of our thought patterns into something "coherent" and easy to read actually destroys all the evidence of how we actually came to the idea, and thus destroys much of your ability to effectively examine its validity!
Thinking and coming to good ideas is one thing.
Communicating a good idea is another thing.
Communicating how you came to an idea you think is good is a third thing.
All three are great, none of them are lying, and skipping the "communicating a good idea" one in hopes that you'll get it for free when you communicate how you came to the idea is worse (but easier!) than also, separately, figuring out how to communicate the good idea.
(Here "communicate" refers to whatever gets the idea from your head into someone else's, and, for instance, someone beginning to r...
I feel the same as Adrian and Cato. I am very much the opposite of a rigorous thinker - in fact, I am probably not capable of rigor - and I would like to be the person who spews loads of interesting off-the-wall ideas for others to parse through and expand upon those which are useful. But that kind of role doesn't seem to exist here, and I feel very intimidated even writing comments, much less actual posts - which is why I rarely do. The feeling that I have to put tremendous labor into making a Proper Essay full of citations and links to sequences and detailed arguments and so on - it's just too much work and not worth the effort for something I don't even know anyone will care about.
This makes me wonder if some proportion of "masculine" gay men are actually transwomen (of the early onset type) with autoandrophilia. I may even fit into that category myself. I didn't care about masculinity and in fact found it somewhat abhorrent and not-me-ish until I started getting off to more masculine looking guys in porn. (When I first saw porn when I was 12 I mainly focused on twinks and wanted to look like them, and there's still a part of me that feels that way, which wars with the part that wants to bulk up because masc dudes are also hot - and...
I feel like consequentialists are more likely to go crazy due to not being grounded in deontological or virtue-ethical norms of proper behavior. It's easy to think that if you're on track to saving the world, you should be able to do whatever is necessary, however heinous, to achieve that goal. I didn't learn to stop seeing people as objects until I leaned away from consequentialism and toward the anarchist principle of unity of means and ends (which is probably related to the categorical imperative). E.g. I want to live in a world where people are respect...
Ooh! I don't know much about the theory of reinforcement learning, could you explain that more / point me to references? (Also, this feels like it relates to the real reason for the time-value of money: money you supposedly will get in the future always has a less than 100% chance of actually reaching you, and is thus less valuable than money you have now.)
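To make the money analogy concrete (hedged - this is me reconstructing the connection from intuition, not quoting a textbook): if each period the money only keeps flowing with probability γ, then a reward t periods out is worth γ^t of its face value in expectation, which is exactly geometric discounting. A quick Monte Carlo sanity check, with made-up numbers:

```python
import random

def discounted_return(r, gamma, horizon):
    """Standard geometric discounting: r * (1 + gamma + gamma^2 + ...)."""
    return sum(r * gamma**t for t in range(horizon))

def surviving_return(r, gamma, horizon, n_trials, seed=0):
    """Undiscounted rewards, but each period the process only continues
    (i.e. the money actually reaches you again) with probability gamma."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        for _t in range(horizon):
            total += r                  # reward received this period
            if rng.random() > gamma:
                break                   # ruin: no further rewards arrive
    return total / n_trials

analytic = discounted_return(1.0, 0.9, 200)            # close to 1/(1-0.9) = 10
simulated = surviving_return(1.0, 0.9, 200, 100_000)   # should land near 10 too
```

The two numbers agree up to sampling noise, which is the sense in which "risk of never receiving it" and "time discounting" are the same bookkeeping.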
It seems to me that the optimal schedule by which to use up your slack / resources is based on risk. When planning for the future, there's always the possibility that some unknown unknown interferes. When maximizing the total Intrinsically Good Stuff you get to do, you have to take into account timelines where all the ants' planning is for nought and the grasshopper actually has the right idea. It doesn't seem right to ever have zero credence of this (as that means being totally certain that the project of saving up resources for cosmic winter will go perf...
1. Who are the customers actually buying all these products so that the auto-corporations can profit? They cannot keep their soulless economy going without someone to sell to, and if it's other AIs, why are those AIs buying when they can't actually use the products themselves?
2. What happened to the largest industry in developed countries, the service industry, which fundamentally relies on having an actual sophont customer to serve? (And again, if it's AIs, who the hell created AIs that exist solely to receive services they cannot actually enjoy, and how ...
I've never had a job in my life - yes really, I've had a rather strange life so far, it's complicated - but I've been reading and thinking about topics which I now know are related to operations for years, trying to design (in my head...) a system for distributing the work of managing a complex organization across a totally decentralized group so that no one is in charge, with the aid of AI and a social media esque interface. (I've never actually made the thing, because I keep finding new things I need to know, and I'm not a software engineer, just a desig...
I don't know what to think about all that. I don't know how to determine what the line is between having qualia and not. I just feel certain that any organism with a brain sufficiently similar to those of humans - certainly all mammals, birds, reptiles, fish, cephalopods, and arthropods - has some sort of internal experience. I'm less sure about things like jellyfish and the like. I suppose the intuition probably comes from the fact that the entities I mentioned seem to actively orient themselves in the world, but it's hard to say.
I don't feel comfortable ...
I don't know anything about Colab, other than that the Colab notebooks I've found online take a ridiculously long time to load, often have mysterious errors, and annoy the hell out of me. I don't know enough AI-related coding stuff to use it on my own. I just want something plug-and-play, which is why I mainly rely on KoboldAI, Open Assistant, etc.
We're not talking about sapience though, we're talking about sentience. Why does the ability to think have any moral relevance? Only possessing qualia, being able to suffer or have joy, is relevant, and most animals likely possess that. I don't understand the distinctions you're making in your other comment. There is one, binary distinction that matters: is there something it is like to be this thing, or is there not? If yes, its life is sacred, if no, it is an inanimate object. The line seems absolutely clear to me. Eating fish or shrimp is bad for the sa...
Just to be That Guy, I'd like to also remind everyone that animal sentience means that vegetarianism, at the very least (and because of the intertwined nature of the dairy, egg, and meat industries, most likely veganism), is a moral imperative, to the extent that your ethical values incorporate sentience at all. I'd go further and say that uplifting to sophonce those animals that we can, once we can at some future time, is also a moral imperative, but that relies on reasoning and values I hold that may not be self-evident to others, such as that increasing the agency of an entity that isn't drastically misaligned with other entities is fundamentally good.
Welcome! And yes, this is a thing people have talked about a lot, particularly in the context of outer versus inner alignment (the outer optimizer, evolution, designed an inner optimizer, humans, who optimize for different things, like pleasure etc, than evolution does, but ended up effectively becoming a "singularity" from its point of view). It's cool that you noticed this on your own!
Has anyone tried training LLMs with some kind of "curriculum" like this? With a simple dataset that starts with basic grammar and simple concepts (like TinyStories), and gradually moves on to more advanced/abstract concepts, building on what's been provided so far? I wonder if that could also lead to more interpretable models?
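To sketch what I mean by the scheduling part (treat this as illustration, not an established recipe - the difficulty proxy here is made up, and real curricula would need something better than average word length):

```python
def difficulty(text):
    # Crude proxy: longer average word length ~ more advanced vocabulary.
    words = text.split()
    return sum(len(w) for w in words) / max(len(words), 1)

def curriculum_stages(corpus, n_stages):
    # Rank texts by the proxy and split into equal-sized stages, easiest first;
    # a training loop would then consume stage 0, then stage 1, and so on.
    ranked = sorted(corpus, key=difficulty)
    stage_size = -(-len(ranked) // n_stages)  # ceiling division
    return [ranked[i:i + stage_size] for i in range(0, len(ranked), stage_size)]

corpus = [
    "the cat sat on the mat",
    "a dog ran to the park",
    "thermodynamic equilibrium constrains macroscopic observables",
    "gradient descent minimizes a differentiable loss function",
]
stages = curriculum_stages(corpus, 2)  # simple sentences first, abstract ones later
```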
This is a fantastically good point. I've often seen this failure mode and not had a name for it, such as when someone I know complains about his political opponents having a self-contradictory ideology - I always have to correct him that in fact, different people in roughly the same camp are contradicting one another, but each individual perspective is self-consistent. Now I have a name for that phenomenon!
That paper about economic drivers of biological complexity is fascinating! In particular I am amazed I never noticed that lekking is an auction. The paper lends some credence to my intuition that capitalism is actually isomorphic to the natural state. Are you the Phelps that was involved in writing it?
Also: I wonder if you'd be interested in my vague notion that genes trade with one another using mutability as a currency.
This reminds me strongly of Wittgenstein's notion of "family resemblances" as a more reasonable replacement for definitions. The way mental illnesses are diagnosed in the DSM is similar - if you have X out of N possible symptoms, then you have the disease. Maybe womanhood (forgive my comparison with a disease!) is similar nowadays.
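The DSM-style rule is easy to state formally: you're in the category if you meet at least k of the n criteria, so no single criterion is individually necessary. A toy version (criteria names are invented for illustration, not quoted from the DSM):

```python
def polythetic_match(features, criteria, k):
    # Family-resemblance / DSM-style membership: a case belongs to the
    # category if it satisfies at least k of the n criteria, even though
    # no one criterion is required on its own.
    return sum(1 for c in criteria if c in features) >= k

criteria = {"low_mood", "anhedonia", "sleep_change", "fatigue", "poor_concentration"}
case_a = {"low_mood", "anhedonia", "fatigue"}   # 3 of 5 -> matches at k=3
case_b = {"fatigue"}                            # 1 of 5 -> does not match
```

Note that with k ≤ n/2, two members of the category can share no criteria at all - which is exactly Wittgenstein's point about family resemblance.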
Eliezer, or somebody better at talking to humans than him, needs to go on conservative talk shows - like, Fox News kind of stuff - use conservative styles of language, and explain AI safety there. Conservatives are intrinsically more likely to care about this stuff, and to get the arguments why inviting alien immigrants from other realms of mindspace into our reality - which will also take all of our jobs - is a bad idea. Talk up the fact that the AGI arms race is basically as bad as a second cold war only this time the "bombs" could destroy all of human c...
Vibes tend to be based on pattern matching, and are prone to bucket errors, so it's important to watch out for that - particularly for people with trauma. For instance, I tend to automatically dislike anyone who has even one mannerism in common with either of my parents, and it takes me quite a while to identify exactly what it is that's causing it. It usually isn't their fault and they're quite nice people, but the most annoying part is it doesn't go away just because I know that. This drastically reduces the range of people I can feel comfortable around.