All of MSRayne's Comments + Replies

Vibes tend to be based on pattern matching, and are prone to bucket errors, so it's important to watch out for that - particularly for people with trauma. For instance, I tend to automatically dislike anyone who has even one mannerism in common with either of my parents, and it takes me quite a while to identify exactly what it is that's causing it. It usually isn't their fault and they're quite nice people, but the most annoying part is it doesn't go away just because I know that. This drastically reduces the range of people I can feel comfortable around.

I learned a lot from him and I STILL have a bad vibe about him. People can be correct, useful, and also unsafe. (Primarily, I suspect him to be high on scales of narcissism, to which I'm very sensitive. Haven't met the guy personally, but his text reeks of it. Doesn't negate his genius; just negates my will to engage with him in any other dimension.)

I am not the best at writing thorough comments because I am more of a Redditor than a LessWronger, but I just want you to know that I read the entire post over the course of ~2.5 hours and I support you wholeheartedly and think you're doing something very important. I've never been part of the rationalist "community" and don't want to be (I am not a rationalist, I am a person who strives weakly for rationality, among many other strivings), particularly after reading all this, but I definitely expected better out of it than I've seen lately. But perhaps I s... (read more)

Well, as I attempted to express in the original comment, I am not a Shaiva, but rather I had mystical experiences and things as a teen that led me to invent my own religion from scratch which has similarities with various other belief systems, and Kashmir Shaivism is one of them. For the most part however it's just a kind of background element of my existence, part of my ontology, and not something I put much attention towards actively anymore. In practice I'm effectively an atheist physicalist like everyone else here. It's just... there's also something t... (read more)

I think you're wrong that Level 4 is rare. It describes everyday reality in the upper strata of any cult - the psychopathic leadership class who make up shit for everyone else to believe, or claim to believe for status, or etc, and compete for status among those followers. And there are a LOT of cults in modern society, including organizations not traditionally perceived as cults, such as political ideologies.

7Jay
I propose a test - if apologizing for or clarifying a controversial position is obviously a bad move, you're dealing with Level 4 actors.  In such cases, your critics don't care about what you believe.  Their narrative calls for a villain, and they've chosen you.

I'm sure you've already thought of this, and I know nothing about this area of biology, but isn't it possible that the genes coding for intelligence more accurately code for the developmental trajectory of the brain, and that changing them would not in fact affect an adult brain?

I think people have "criticized" Minecraft for being unclear what the point is, and being more of a toy or sandbox than a "game."

Myself included. I can't play Minecraft; it's far too open-ended, and makes me feel anxious and overwhelmed. "Wtf am I supposed to do??" I want a game to give me two or three choices max at every decision point, not an infinite vector space of possibilities through which I cannot sort or prioritize.

This post though is about one of my big obsessions: trying to figure out how to design a game (computer or tabletop or both) which ma... (read more)

I never actually said that all these notions are constructed and fake, only that some are. Clearly some aren't. There are false positives and false negatives. I feel as if you're arguing against a straw man here.

If I were Bob I'd have told her to fuck off long ago and stopped letting some random person berate me for being lazy just like my parents always have. This is basically guilt-tripping, not a beneficial way of approaching any kind of motivation, and it is absolutely guaranteed to produce pushback. But then, I'm probably not your target audience, am I?

Btw just to be clear, I think Said Achmiz explained my reaction better than I, who habitually post short reddit-tier responses, can. My specific issue is that Alice seems to be acting as if it's any of her busi... (read more)

Holy heck I have been enlightened. And by contemplating nothingness too! Thanks for the clarification, it all makes sense now.

I really enjoy this sequence but there's a sticking point that's making me unable to continue until I figure it out. It seems to me rather obvious that... utility functions are not shift-invariant! If I denominate option A at 1 utilon and option B at 2 utilons, that means I am indifferent between a certain outcome of A and a 50% probability of B - and this is no longer the case if I shift my utility function even slightly. Ratios of utilities mean something concrete and are destroyed by translation. Since your entire argument seems to rest on that inexplicably not being the case, I can't see how any of this is useful.

7MinusGix
Utility functions are shift/scale invariant. If you have U0(A)=1 and U0(B)=2, then if we shift by some constant c to get a new utility function, Uc(A)=1+c and Uc(B)=2+c, we still get the same result. Looking at the expected utilities:

Certainty of A:
* 1.0∗U0(A) = U0(A) = 1
* 1.0∗Uc(A) = Uc(A) = 1+c

50% chance of B, 50% chance of nothing:
* 0.5∗U0(B) = 0.5∗2 = 1 (so you are indifferent between certainty of A and a 50% chance of B by U0)
* 0.5∗Uc(B) = 0.5∗(2+c) = 1+0.5∗c

I think this might be where you got confused: now the expected values are different for any nonzero c! The issue is that this calculation ignores the implicit zero. The real second pair of equations is:
* 0.5∗U0(B) + 0.5∗U0(∅) = 0.5∗U0(B) + 0 = 1
* 0.5∗Uc(B) + 0.5∗Uc(∅) = 0.5∗(2+c) + 0.5∗c = 1 + 0.5∗c + 0.5∗c = 1+c

which results in the same preference ordering.
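MinusGix's arithmetic is easy to check mechanically. Here is a minimal Python sketch of the same example; the utility values and the two lotteries are taken from the comment above, and the whole trick is treating "nothing" as an explicit outcome that also gets shifted:

```python
# Affine shifts of a utility function preserve preference orderings over
# lotteries -- provided *every* outcome, including "nothing", is shifted.

def expected_utility(lottery, u):
    """lottery: list of (probability, outcome) pairs; u: outcome -> utility."""
    return sum(p * u[outcome] for p, outcome in lottery)

u0 = {"A": 1.0, "B": 2.0, "nothing": 0.0}
c = 5.0
uc = {k: v + c for k, v in u0.items()}  # shifted utility function

certain_a = [(1.0, "A")]
coinflip_b = [(0.5, "B"), (0.5, "nothing")]

# Under u0: both lotteries have expected utility 1 -> indifference.
assert expected_utility(certain_a, u0) == expected_utility(coinflip_b, u0)
# Under uc: both have expected utility 1 + c -> still indifferent.
assert expected_utility(certain_a, uc) == expected_utility(coinflip_b, uc)
```

Dropping the `"nothing"` entry from the shifted function reproduces the apparent paradox in the original comment: the coinflip would then come out at 1 + 0.5c instead of 1 + c.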

I understand all this logically, but my emotional brain asks, "Yeah, but why should I care about any of that? I want what I want. I don't want to grow, or improve myself, or learn new perspectives, or bring others joy. I want to feel good all the time with minimal effort."

When wireheading - real wireheading, not the creepy electrode in the brain sort that few people would actually accept - is presented to you, it is very hard to reject it, particularly if you have a background of trauma or neurodivergence that makes coping with "real life" difficult to beg... (read more)

That's a temporary problem. Robot bodies will eventually be good enough. And I've been a virgin for nearly 26 years, I can wait a decade or two longer till there's something worth downloading an AI companion into if need be.

Neither of these really describes what childhood is for. Both of them are inventions of the modern WEIRD society. I'd suggest you read "Anthropology of Childhood: Cherubs, Chattels, Changelings" for a wider view on the subject... it's pretty bleak though. The very idea that there is such a thing as an optimal childhood parents ought to strive to provide their children... is also a modern, Western, extremely unusual idea, and throughout most of history, in most cultures, they were just... little creatures that would eventually be adults and till then either... (read more)

To be honest, I look forward to AI partners. I have a hard time seeing the point of striving to have a "real" relationship with another person, given that no two people are really perfectly compatible, no one can give enough of their time and attention to really satisfy a neverending desire for connection, etc. I expect AIs to soon enough be better romantic companions - better companions in all ways - than humans are. Why shouldn't I prefer them?

1Sterrs
Human relationships should be challenging. Refusing to be challenged by those around you is what creates the echo chambers we see online, where your own opinions get fed back to you, only reassuring you of what you already believe. These were created by AI recommendation algorithms whose only goal was to maximise engagement. Why would an AI boyfriend or girlfriend be any different? They would not help you develop as a person, they would only exist to serve your desires, not to push you to improve who you are, not to teach you new perspectives, not to give you opportunities to bring others joy.
3Roman Leventov
From a hedonistic and individualistic perspective, sure, AI partners will be better for individuals. That's the point that I'm making. People will find human relationships frustrating and boring in comparison. But people also usually don't exclusively care for themselves; part of them also cares about society and their family lineage, and the idea that they didn't contribute to these super-systems will in itself poison many people's experience of their hedonistic lives.

Then, if people don't get to experience AI relationships in the first place (experiences which they may not be able to "forget"), but decide to settle for human relationships that are inferior in some ways but which produce a more wholesome experience overall, their total life satisfaction may also be higher. I'm not claiming this will be true for all or even most people; for example, child-free people are probably less likely to find their AI relationships incomplete. But this may be true for a noticeable proportion of people, even from Western individualistic cultures, and perhaps even more so from Eastern cultures.

Also, obviously, the post is not written from the individualistic perspective. The title says "AI partners will harm society", not that they will harm individuals. From the societal perspective, there could be a tragedy of the commons dynamic where everybody takes the maximally individualistic perspective and then the whole society collapses (either in terms of the population, or epistemics, or culture).
1Bezzi
Because your AI partner does not exist in the physical world? I mean, of course an advanced chatbot could be both a better conversationalist and a better lover than most humans, but it is still an AI chatbot. Just to give an example, I would totally feel like an idiot should I ever find myself asking chatbots about their favorite dish.

Great, apparently I'm in just the right place... I'm always alone and have few friends who might influence me to give up my wacky ideas! Wonderful.....

crickets

Those stories are surprisingly coherent and compelling. They were actually fun to read!

I'm not sure how useful the concept of boundary placement rebellion is, though. It certainly is a thing, but it's also something basically everyone engages in. I pretty much constantly do it... though maybe that says more about me than anything...

"Thou strivest ever. Even in thy yielding, thou strivest to yield; and lo! thou yieldest not. Go thou into the outermost places, and subdue all things. Subdue thy fear and thy distrust. And then - YIELD." - Aleister Crowley

I'm never really sure what there's any point in saying. My main interests have nothing to do with AI alignment, which seems to be the primary thing people talk about here. And a lot of my thoughts require the already existing context of my previous thoughts. Honestly, it's difficult for me to communicate what's going on in my head to anyone.

No, it's called "lying". The text that he produces as a result of these social pressures does not reflect his actual thought processes. You can't judge a belief on the basis of a bunch of ex post facto arguments people make up to rationalize it - the method by which they came to hold the belief is much more informative, and for those of us with very roundabout styles of thinking (such as myself) being forced into this self-censorship and modification of our thought patterns into something "coherent" and easy to read actually destroys all the evidence of how we actually came to the idea, and thus destroys much of your ability to effectively examine its validity!

4Nathan Helm-Burger
If you've got a written description of the thought process by which you came to the idea, keep that! But the thing that should be published should be the thing that is that plus supporting evidence like citations and logical reasoning describing how such a thing could have come to be the case. Simply don't destroy the evidence, and what you've got is pure improvement. If the non-rational hunch and analogy part seems hard to fit in to the cited polished product, then keep them as separate docs with links to each other.

Thinking and coming to good ideas is one thing.

Communicating a good idea is another thing.

Communicating how you came to an idea you think is good is a third thing.

All three are great, none of them are lying, and skipping the "communicating a good idea" one in hopes that you'll get it for free when you communicate how you came to the idea is worse (but easier!) than also, separately, figuring out how to communicate the good idea.

(Here "communicate" refers to whatever gets the idea from your head into someone else's, and, for instance, someone beginning to r... (read more)

6Richard_Ngo
Disagree. It's valuable to flag the causal process generating an idea, but it's also valuable to provide legible argumentation, because most people can't describe the factors which led them to their beliefs in sufficient detail to actually be compelling. Indeed, this is specifically why science works so well: people stopped arguing about intuitions, and started arguing about evidence. And the lack of this is why LW is so bad at arguing about AI risk: people are uninterested in generating legible evidence, and instead focus on presenting intuitions that are typically too fuzzy to examine or evaluate.

I feel the same as Adrian and Cato. I am very much the opposite of a rigorous thinker - in fact, I am probably not capable of rigor - and I would like to be the person who spews loads of interesting off the wall ideas for others to parse through and expand upon those which are useful. But that kind of role doesn't seem to exist here and I feel very intimidated even writing comments, much less actual posts - which is why I rarely do. The feeling that I have to put tremendous labor into making a Proper Essay full of citations and links to sequences and detailed arguments and so on - it's just too much work and not worth the effort for something I don't even know anyone will care about.

2Said Achmiz
Such a role is useful only if a substantial proportion of those “off the wall ideas” turn out to be not just useful/correct/good, but also original. Otherwise it is useless. Weird ideas are all over the internet. For example, take the Adrian vignette: he wants to discuss whether there’s “something like a global consciousness”. Well, first, that’s not a new idea. Second, the answer is “no”. Discussion complete. Does Adrian have anything new to say about this? (Does he know what has already been said on the matter, even?) If not, then his contribution is nil.
3Erich_Grunewald
Have you considered writing (more) shortforms instead? If not, this comment is a modest nudge for you to consider doing so.

This makes me wonder if some proportion of "masculine" gay men are actually transwomen (of the early onset type) with autoandrophilia. I may even fit into that category myself. I didn't care about masculinity and in fact found it somewhat abhorrent and not-me-ish until I started getting off to more masculine looking guys in porn. (When I first saw porn when I was 12 I mainly focused on twinks and wanted to look like them, and there's still a part of me that feels that way, which wars with the part that wants to bulk up because masc dudes are also hot - and... (read more)

4tailcalled
It seems theoretically logical that autoandrophilia would play a role for some gay men, but I have reasonably comprehensive data on it, and I think I didn't find a huge effect. I can ping you with the results once I have written up a more comprehensive analysis on it - maybe I will find something while doing robustness checks.

This is interesting, and imo dystopian and dreadful, but it doesn't belong on Lesswrong. I downvoted.

I feel like consequentialists are more likely to go crazy due to not being grounded in deontological or virtue-ethical norms of proper behavior. It's easy to think that if you're on track to saving the world, you should be able to do whatever is necessary, however heinous, to achieve that goal. I didn't learn to stop seeing people as objects until I leaned away from consequentialism and toward the anarchist principle of unity of means and ends (which is probably related to the categorical imperative). E.g. I want to live in a world where people are respect... (read more)

2Viliam
In consequentialism, if you make a conclusion consisting of a dozen steps, and one of those steps is wrong, the entire conclusion is wrong. It does not matter whether the remaining steps are right. In theory, this could be fixed by assigning probabilities to individual steps, and then calculating the probability of the entire plan. But of course people usually don't do that. Otherwise they would notice that a plan with a dozen steps, even if they are 95% sure about each of them individually, is not very reliable.
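Viliam's reliability point is a one-line calculation; here is the arithmetic in Python, using the dozen-step, 95%-per-step numbers from the comment above:

```python
# A 12-step plan where each step is independently 95% likely to be
# right succeeds only about 54% of the time -- barely better than a
# coin flip, despite high confidence in every individual step.
steps = 12
p_step = 0.95
p_plan = p_step ** steps
print(f"{p_plan:.2f}")  # roughly 0.54
```

The same calculation at 20 steps drops below 36%, which is why long chains of "pretty sure" reasoning are so much weaker than they feel.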

I was about to mention Piaget, but you referred to him at the end of the post. Definitely seems relevant, since we noticed the possible connection independently.

This reminds me strongly of the anarchist principle of unity of means and ends, which is why anarchists aren't into violent revolution anymore - you can't end coercion by coercive means.

Ooh! I don't know much about the theory of reinforcement learning, could you explain that more / point me to references? (Also, this feels like it relates to the real reason for the time-value of money: money you supposedly will get in the future always has a less than 100% chance of actually reaching you, and is thus less valuable than money you have now.)

It seems to me that the optimal schedule by which to use up your slack / resources is based on risk. When planning for the future, there's always the possibility that some unknown unknown interferes. When maximizing the total Intrinsically Good Stuff you get to do, you have to take into account timelines where all the ants' planning is for nought and the grasshopper actually has the right idea. It doesn't seem right to ever have zero credence of this (as that means being totally certain that the project of saving up resources for cosmic winter will go perf... (read more)

2Ben
I remember reading something about the Great Leap Forward in China (it may have been the Cultural Revolution, but I think it was the Great Leap Forward) where some communist party official recognised that the policy had killed a lot of people and ruined the lives of nearly an entire generation, but argued it was still a net good because it would enrich future generations of people in China. As an individual, you weigh up the risks and rewards of deferring your resources for the future. But, as a society, asking individuals to give up a lot of potential utility for unborn future generations is a harder sell. It requires coercion.
2ErickBall
The math doesn't necessarily work out that way. If you value the good stuff linearly, the optimal course of action will either be to spend all your resources right away (because the high discount rate makes the future too risky) or to save everything for later (because you can get such a high return on investment that spending any now would be wasteful). Even in a more realistic case where utility is logarithmic with, for example, computation, anticipation of much higher efficiency in the far future could lead to the optimal choice being to use essentially the bare minimum right now. I think there are reasonable arguments for putting some resources toward a good life in the present, but they mostly involve not being able to realistically pull off total self-deprivation for an extended period of time. So finding the right balance is difficult, because our thinking is naturally biased to want to enjoy ourselves right now. How do you "cancel out" this bias while still accounting for the limits of your ability to maintain motivation? Seems like a tall order to achieve just by introspection.
2beren
Exactly this. This is the relationship in RL between the discount factor and the probability of transitioning into an absorbing state (death).
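For readers unfamiliar with the RL framing beren mentions, here is a small illustrative Python sketch; the constant reward stream and gamma = 0.9 are arbitrary choices for illustration:

```python
# In RL, a discount factor gamma can be read as a per-step survival
# probability: if there is a (1 - gamma) chance each step of falling
# into an absorbing "death" state, the probability of still being
# alive at step t is gamma**t, so the *undiscounted* expected reward
# equals the ordinary discounted return.

def discounted_return(reward, gamma, horizon):
    """Standard discounted return for a constant reward stream."""
    return sum(reward * gamma**t for t in range(horizon))

def expected_return_with_death(reward, p_death, horizon):
    """Undiscounted return, weighted by the probability of being alive."""
    p_alive = 1.0
    total = 0.0
    for _ in range(horizon):
        total += p_alive * reward
        p_alive *= 1.0 - p_death  # chance of surviving this step
    return total

gamma = 0.9
a = discounted_return(1.0, gamma, 1000)
b = expected_return_with_death(1.0, 1.0 - gamma, 1000)
assert abs(a - b) < 1e-9  # the two framings coincide term by term
# Both approach reward / (1 - gamma) = 10 for long horizons.
```

This is also exactly the time-value-of-money intuition from the comment above: future reward is worth less precisely because there is some chance it never reaches you.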

computers have no consciousness

Um... citation please?

1. Who are the customers actually buying all these products so that the auto-corporations can profit? They cannot keep their soulless economy going without someone to sell to, and if it's other AIs, why are those AIs buying when they can't actually use the products themselves?

2. What happened to the largest industry in developed countries, the service industry, which fundamentally relies on having an actual sophont customer to serve? (And again, if it's AIs, who the hell created AIs that exist solely to receive services they cannot actually enjoy, and how ... (read more)

8L Rudolf L
These are good questions!

1. The customers are other AIs (often acting for auto-corporations). For example, a furniture manufacturer (run by AIs trained to build, sell, and ship furniture) sells to a furniture retailer (run by AIs trained to buy furniture, stock it somewhere, and sell it forward), which sells to various customers (e.g. companies run by AIs that were once trained to do things like make sure offices were well-stocked). This requires that (1) the AIs ended up with goals that involve mimicking a lot of individual things humans wanted them to do (including general things like maximising profits as well as more specific things like keeping offices stocked and caring about the existence of lots of different products), and (2) there are closed loops in the resulting AI economy. Point 2 gets harder when humans stop being around (e.g. it's not obvious who buys the plushy toys), but a lot of the AIs will want to keep doing their thing even once the actions of other AIs start reducing human demand and population, creating optimisation pressure for finding some closed loop for them to be part of; at the same time there will be selection effects where the systems that are willing to goodhart further are more likely to remain in the economy. Also, not every AI motive has to be about profit; an AI or auto-corp may earn money in some distinct way, and then choose to use the profits in the service of e.g. some company slogan it was once trained with that says to make fun toys. In general, given an economy consisting of a lot of AIs with lots of different types of goals and with a self-supporting technological base, it definitely seems plausible that the AIs would find a bunch of self-sustaining economic cycles that do not pass through humans. The ones in this story were chosen for simplicity, diversity, and storytelling value, rather than economic reasoning about which such loops are most likely.

2. Presumably a lot of services are happening virtually on the cloud, b

I've never had a job in my life - yes really, I've had a rather strange life so far, it's complicated - but I've been reading and thinking about topics which I now know are related to operations for years, trying to design (in my head...) a system for distributing the work of managing a complex organization across a totally decentralized group so that no one is in charge, with the aid of AI and a social media esque interface. (I've never actually made the thing, because I keep finding new things I need to know, and I'm not a software engineer, just a desig... (read more)

1Alexandra Bos
Hi, I'd encourage you to apply if you recognize yourself in the About you section! When in doubt always apply is my motto personally

I don't know what to think about all that. I don't know how to determine what the line is between having qualia and not. I just feel certain that any organism with a brain sufficiently similar to those of humans - certainly all mammals, birds, reptiles, fish, cephalopods, and arthropods - has some sort of internal experience. I'm less sure about things like jellyfish and the like. I suppose the intuition probably comes from the fact that the entities I mentioned seem to actively orient themselves in the world, but it's hard to say.

I don't feel comfortable ... (read more)

I don't know anything about colab, other than that the colab notebooks I've found online take a ridiculously long time to load, often have mysterious errors, and annoy the hell out of me. I don't know enough AI-related coding stuff to use it on my own. I just want something plug and play, which is why I mainly rely on KoboldAI, Open Assistant, etc.

We're not talking about sapience though, we're talking about sentience. Why does the ability to think have any moral relevance? Only possessing qualia, being able to suffer or have joy, is relevant, and most animals likely possess that. I don't understand the distinctions you're making in your other comment. There is one, binary distinction that matters: is there something it is like to be this thing, or is there not? If yes, its life is sacred, if no, it is an inanimate object. The line seems absolutely clear to me. Eating fish or shrimp is bad for the sa... (read more)

2Nathan Helm-Burger
That is a very different moral position than the one I hold. I'm curious what your moral intuitions about the qualia of reinforcement learning systems say to you. Have you considered that many machine learning systems seem to have systems which would compute qualia much like a nervous system, and that such systems are indeed more complex than the nervous systems of many living creatures like jellyfish? 

Just to be That Guy I'd like to also remind everyone that animal sentience means vegetarianism, at the very least (and because of the intertwined nature of the dairy, egg, and meat industries, most likely veganism) is a moral imperative, to the extent that your ethical values incorporate sentience at all. Also, I'd go further to say that uplifting to sophonce those animals that we can, once we can at some future time, is also a moral imperative, but that relies on reasoning and values I hold that may not be self-evident to others, such as that increasing the agency of an entity that isn't drastically misaligned with other entities is fundamentally good.

2Nathan Helm-Burger
I disagree, for the reasons I describe in this comment: https://www.lesswrong.com/posts/Htu55gzoiYHS6TREB/sentience-matters?commentId=wusCgxN9qK8HzLAiw  I do admit to having quite a bit of uncertainty around some of the lines I draw. What if I'm wrong and cows do have a very primitive sort of sapience? That implies we should not raise cows for meat (but I still think it'd be fine to keep them as pets and then eat them after they've died of natural causes). I don't have so much uncertainty about this that I'd say there is any reasonable chance that fish are sapient though, so I still think that even if you're worried about cows you should feel fine about eating fish (if you agree with the moral distinctions I make in my other comment).

Most Wikipedia readers spend less than a minute on a page?? I always read pages all the way through... even if they're about something that doesn't interest me much...

2AnthonyC
Depends why I'm on the page, for me. Pretty often I'm looking for something like "How many counties are there in [state] again?" or "What was [Author's] third book in [series] called?" and it's a quick wiki search + ctrl+f, close the tab a few seconds later.  
6Garrett Baker
Often when I need a wikipedia article I'm using only the first paragraph to refresh my memory, or to catch the general strokes of something I encountered in a piece of media. The average use case is wondering, like, what the Burj Khalifa is, going to Wikipedia, and immediately learning it's the tallest skyscraper in Dubai. After that, I don't really care too much, especially if I needed the information due to setting cues in some story.
3M. Y. Zuo
Yeah, I'm surprised by that figure too; it would imply most Wikipedia readers aren't even reading in any substantive way, just skimming and randomly stopping a few times at some keywords their brains happen to recognize. But then again, GPT-4's writings are more coherent than a lot of high school and college undergrad essays, so maybe I shouldn't be surprised that average human reading patterns are likewise incoherent...

Welcome! And yes, this is a thing people have talked about a lot, particularly in the context of outer versus inner alignment (the outer optimizer, evolution, designed an inner optimizer, humans, who optimize for different things, like pleasure etc, than evolution does, but ended up effectively becoming a "singularity" from its point of view). It's cool that you noticed this on your own!

3[anonymous]
thanks for the reply btw, i'd upvote you but the site won't let me yet :p    eta: now i can :3

This is my thought exactly. I would try it, but I am poor and don't even have a GPU lol. This is something I'd love to see tested.

0Martin Fell
Hah yeah I'm not exactly loaded either, it's pretty much all colab notebooks for me (but you can get access to free GPUs through colab, in case you don't know).

So basically... LMs have to learn language in the exact same way human children do: start by grasping the essentials and then work upward to complex meanings and information structures.

Has anyone tried training LLMs with some kind of "curriculum" like this? With a simple dataset that starts with basic grammar and simple concepts (like TinyStories), and gradually moves on to more advanced/abstract concepts, building on what's been provided so far? I wonder if that could also lead to more interpretable models?
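To make the curriculum idea concrete, here is a toy Python sketch. The sentences, the word-count difficulty proxy, and the `train_step` placeholder are all hypothetical illustrations, not a real training pipeline; actual curriculum-learning work uses richer difficulty measures (vocabulary rarity, parse depth, model loss):

```python
# Minimal curriculum-ordering sketch: sort training examples by a crude
# difficulty proxy and feed them to the model easiest-first.

corpus = [
    "The sun is hot.",
    "Cats sleep a lot during the day.",
    "Economic incentives shape the long-run structure of institutions.",
    "Dogs bark.",
]

def difficulty(text):
    # Crude proxy: longer sentence = harder. A real curriculum would
    # use something more principled.
    return len(text.split())

curriculum = sorted(corpus, key=difficulty)

for example in curriculum:
    # train_step(model, example)  # hypothetical training call
    pass

print(curriculum[0])  # easiest example first: "Dogs bark."
```

The interesting empirical question is whether models trained this way develop more interpretable internal structure than models trained on shuffled data, which the sketch obviously cannot answer.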

This is a fantastically good point. I've often seen this failure mode and not had a name for it, such as when someone I know complains about his political opponents having a self-contradictory ideology - I always have to correct him that in fact, different people in roughly the same camp are contradicting one another, but each individual perspective is self-consistent. Now I have a name for that phenomenon!

I'm not a biologist either. This post is me handwaving. Thanks for the reference!

Humans are not eusocial. That was Edward O. Wilson being dramatic. We don't have a biological caste distinction between reproducers and non-reproducers.

That's really weird. Why do you identify so strongly with a sequence of nucleotides? Isn't it more important that a child inherits your memes?

3Razied
I could ask just the same why you'd identify so strongly with a mere pattern of neural activation that make up the memes in the child's mind. This preference of mine is getting close to the bedrock of my preference ordering, I want my child to share my genes because that's just kind of what I want, I don't know how to explain that in terms of any more fundamental desire of mine. But like I said, I'd be fine with CRISPR to change a small fraction of the genes which have an out-sized impact on success, what I don't want is to change (or worse, take from someone else) the large number of genes which don't particularly influence success or intelligence, but which make me who I am.

That paper about economic drivers of biological complexity is fascinating! In particular I am amazed I never noticed that lekking is an auction. The paper lends some credence to my intuition that capitalism is actually isomorphic to the natural state. Are you the Phelps that was involved in writing it?

Also: I wonder if you'd be interested in my vague notion that genes trade with one another using mutability as a currency.

5phelps-sg
Yes, that is me (sorry, I should have put a disclaimer). Feel free to get in touch if you want to discuss 1-1. Thanks for the pointer re mutability-trading; I will take a look, but full disclaimer: I am not a biologist by training.

I've long thought that a "humans+AIs hive mind" would end up being the superintelligence in control of the future - not a purely AI one - so this is a great question and I'm glad to see people researching this!

The only counterintuitive thing about this post is that you expect the readers to find it counterintuitive! It's pretty obvious to those of us who remember our childhoods and have enough self-awareness to notice ourselves reliving them over and over...

To a limited extent, I have begun to the past month or so. Staying on a friend's organic farm. It's not much, but it's something. I definitely feel healthier, physically and emotionally. But less time to think. So it's a tradeoff.

This reminds me strongly of Wittgenstein's notion of "family resemblances" as a more reasonable replacement for definitions. The way mental illnesses are diagnosed in the DSM is similar - if you have X out of N possible symptoms, then you have the disease. Maybe womanhood (forgive my comparison with a disease!) is similar nowadays.

Eliezer, or somebody better at talking to humans than him, needs to go on conservative talk shows - like, Fox News kind of stuff - use conservative styles of language, and explain AI safety there. Conservatives are intrinsically more likely to care about this stuff, and to get the arguments why inviting alien immigrants from other realms of mindspace into our reality - which will also take all of our jobs - is a bad idea. Talk up the fact that the AGI arms race is basically as bad as a second cold war only this time the "bombs" could destroy all of human c... (read more)

1irving
It might be almost literally impossible for any issue at all to not get politicized right down the middle when it gets big, but if any issue could avoid that fate one would expect it to be the imminent extinction of life. If it's not possible, I tend to think the left side would be preferable since they pretty much get everything they ever want. I tentatively lean towards just focusing on getting the left and letting the right be reactionary, but this is a question that deserves a ton of discussion.

I guess I feel at the moment that winning over the left is likely more important and it could make sense to go on conservative talk shows, but mainly if it seems like the debate might start to get polarised.

2TekhneMakre
Seems like a fairly weak argument; you're treating it like a logical reason-exchange, but it's a political game, if that's what you're after. In the political game you're supposed to talk about how the techbros have gone crazy because of Silicon Valley techbro culture and are destroying the world to satisfy their male ego.
6Astynax
Conservatives are already suspicious of AI, based on ChatGPT3's political bias. AI skeptics should target the left (which has less political reason to be suspicious) and not target the right (because if they succeed, the left will reject AI skepticism as a right-wing delusion).
1Gurkenglas
I think avoiding polarization is a fool's game. Polarization gets half the population in your favor, and might well set up a win upon next year's news. And we've seen how many days are a year, these days.

11. I like asking LLMs to write me lists of interesting things, please add more training data for that.
