StartAtTheEnd

Nobody special, nor any desire to be. Just sharing my ideas when I appear to know better than the person I'm responding to, or when I believe I have something interesting to share/add. I'm neither a serious nor a formal person, and if you're more knowledgeable than intelligent, you probably won't like me, as I lack academic rigor.

Feel free to correct me when I make mistakes. I'm too certain of myself, as my ideas are rarely challenged. Crocker's rules are fine! When playing the intellectual (as I do on here) I find that social things only get in the way, and when I socialize I find that intellectual things get in the way, so I separate the two.

Finally, beliefs don't seem to be a measure of knowledge and intelligence alone, but a result of experiences and personality. Those who have had similar experiences and thoughts already will recognize what I say, and those who haven't will mostly perceive noise.


I think the new communication systems could be a catalyst, but that stopping at this conclusion obscures the actual cause of cancel culture. I think the answer is something like what Kaczynski said about oversocialization, and that social media somehow worsens the social dynamics responsible. I think it's an interesting question how exactly these dynamics work socially and psychologically, so for me, "it's the new communication systems" is not a conclusion but a possible first step in finding the answer.

My own expectation is that limitations result in creativity. Writer's block is usually a result of having too many possibilities/choices. If I tell you "You can write a story about anything", it's likely harder for you to think of anything than if I tell you "Write a story about an orange cat". In the latter situation, you're more limited, but you also have something to work with.

I'm not sure if it's as true for computers as it is for humans (that would imply information-theoretic factors), but there are plenty of factors in humans, like analysis paralysis and the "See also" section of that page.

If that is really his view, Sam Harris didn't think things through very deeply at all.

Qualia is created by the brain, not by anything external. Touching a hot stove feels bad because we are more likely to survive when we feel this way. There's no reason why it can't feel pleasurable to damage yourself, it just seems like a bad design choice. The brain uses qualia to reward and punish us so that we end up surviving and reproducing. Our defense mechanisms are basically just toying with us because it helps us in the end (it's merely the means to survival), and our brains somewhat resist our attempts at hacking our own reward mechanisms because those who could do that likely ended up dying more often.

You could use Harris's arguments to imply that objective beauty exists, too. That is of course also not correct.

The argument also implies that all life or all consciousness can feel positive and negative qualia, but that's not necessarily true. He should have written "made our corner of the universe suck less, for us, according to us". (What if a change feels good to us but causes great suffering to some alien race?)

Lastly, if these philosophers experienced actual, severe suffering for long periods of time, they would likely realize that the issue isn't suffering itself, but suffering that one feels is meaningless. Meaningful pain is not bothersome at all, and it doesn't even need to reduce further pain. Has Harris never read "Man's Search for Meaning" or other works which explain this?

Thank you! Writing is not my strong suit, but I'm quite confident about the ideas. I've written a lot, so it's alright if you don't want to engage with all of it. No pressure!

I should explain the thing about suffering better:
We don't suffer from the state of the world, but from how we think about it. This is crucial. When people try to improve other people's happiness, they talk about making changes to reality, but that's the least effective way they could go about it.
I believe this is even sufficient: we can enjoy life as it is now, without making any changes to it, simply by adopting a better perspective on things.

For example, inequality is a part of life, likely an unavoidable one (The Pareto principle seems to apply in every society no matter its type). And even under inequality, people have been happy, so it's not even an issue in itself. But now we're teaching people in lower positions that they're suffering from injustice, that they're pitiful, that they're victims, and we're teaching everyone else that life could be a paradise, if only evil and immoral influences weren't preventing it. But this is a sure way to make people unhappy with their existence. To make them imagine how much better things could be, and make comparisons between a naive ideal and reality. Comparison is the thief of joy, and most people are happy with their lot unless you teach them not to be.
Teaching people about suffering doesn't cause it per se, but if you make people look for suffering, they will find it. If you condition your perception to notice something unpleasant, you will see it everywhere. Training yourself to notice suffering may have side-effects. I have a bit of tinnitus, and I got over it by not paying it any attention. Only by doing that will my mind start to filter it away, so that I can forget about it.

The marketing perspective

I don't think you need pain to motivate people to change; the carrot is as good as the stick. But you need one of the two at minimum (curiosity and other such drives make you act naturally, but do so by making it uncomfortable not to act and rewarding to act).
I don't think that suffering is bearable because of reward itself, but because of perceived value and meaning. Birth is really painful, but the event is so meaningful that the pain becomes secondary. The same goes for people who compete in the Olympics: they have found something meaningful enough that a bit of physical pain is a non-issue.
You can teach this to people, but it's hard to apply. It's better to help them avoid the sort of nihilism which makes them question whether things are worth it. I think one of the causes of modern nihilism is a lack of aesthetics. 

My 2nd perspective

I don't think understanding translates directly into power. It's a common problem to think "I know what I should be doing, but I can't bring myself to do it". If understanding something granted you power over it, I'd practically be a wizard by now.
You can shift the problem that people attack, but if they have actual problems which put them in danger, I think their focus should remain on these. You can always create dissatisfaction by luring them towards better futures, in a way which benefits both them and others at the same time.

I'm never motivated by moral arguments, but some self-help books are alluring to me because they prey on my selfishness in a healthy manner which also demands responsibility and hard work.

As for the third possibility, that sounds a bit pessimistic. But I don't think it would be a worthless outcome as long as the image of what could be isn't a dangerous delusion. Other proposed roads to happiness include "Destroy your ego", "Be content with nothing", "Eat SSRIs forever", and various self-help which asks you to "hustle" and overwork.

who want to prevent human extinction

I see! That's something deeper than preventing suffering. I even think that there are some conflicts between the two goals. But motivating people towards this should be easier, since they're preventing their own destruction as well, and not just helping other people.

it is difficult to know what the far-reaching consequences of this hypothetical world would be

It really is. But it's interesting to me how neither of us has used this information to decrease our own suffering. It's like I can't value things if they come too easily, and like I want to find something which is worth my suffering.
But we can agree that wasted suffering is a thing. That state of indecision, being unable to either die or live, yield or fight back, fix the cause of suffering or come to terms with it.
The scarcity mindset is definitely a problem, but many resources are limited. I think a more complex problem would be that people tend to look for bad actions to avoid, rather than positive actions to adopt. It's all "we need to stop doing X" and "Y is bad" and "Z is evil". It's all about reduction, restrictions, avoidance. It simply chokes us. Many good people trap themselves with excessive limitations and become unable to move freely. Simply using positives like "You should be brave", "You should stand up for what you believe in", and "You should accept people for who they are" would likely help improve this problem.

there are certain important aspects of human psychology that I'm still unsure about

I think pain and such are thresholds between competing things. If I'm tired and hungry, whether or not I will cook some food depends on which of the two causes the greatest discomfort.
When procrastinating I've also found that deadlines helped me. Once I was backed into a corner and had to take action, I suddenly did. I ran away for as long as I could. The stress from deadlines might also result in dopamine and adrenaline, which help in the short term.
"Acceptance of suffering" is a bit ambigious. Accepting something usually reduces the suffering it causes, and accepting suffering lessens it too. But one can get too used to suffering, which makes them wait too long before they change anything, like the "This is fine" meme or the boiling frog that I mentioned earlier

Spread logical decisionmaking

Logic can defend against mistakes caused by logic, but we did not destroy ourselves in the past when we were less logical than now. I also don't think that logic reduces suffering. Many philosophers have been unhappy, and many people with Down syndrome are all smiles. Less intelligent people often have a sort of wisdom about them, often called "street smarts" when observed, but I think that their lack of knowledge leads them to make fewer map-territory errors. They're nearer to reality because they have less knowledge which can mislead them.

I personally think that intellect past a certain level gives humans the ability to deliberately manipulate their suffering

I don't think any human being is intelligent enough to do this (Buddha managed, but the method was crude, reducing more than just suffering). What we can do is manipulate our reward systems. But this leaves us feeling empty, as we cannot fake meaning. Religion basically tells us to live a good life according to a fixed structure, and while most people don't like this lack of freedom, it probably leads to more happiness in the long run (for the same reason that neuroticism and conscientiousness are inversely correlated).

since human meaning is constructed by our psychological systems

Yes, the philosophical question of meaning and the psychology of meaning are different. To solve meaninglessness by proving external meaning (this is impossible, but let's assume you could) is like curing depression by arguing that one should be happy. Meaning is basically investment, engagement, and involvement in something which feels like it has substance.

I recommend just considering humanity as a set of axioms. Like with mathematical axioms, this gives us a foundation. Like with mathematics, it doesn't matter that this foundation is arbitrary, for no "absolute" foundation can exist (in other words, no set of axioms is more correct than any other; objectivity does not exist, even in mathematics, everything is inherently relative).
Since attempting to prove axioms is silly, considering human nature (or yourself) as a set of axioms allows you not to worry about meaning and values anymore. If you want humanity to survive, you no longer have to justify this preference.

Maybe you can point me to a source of information that will help me see your perspective on this? 

That would be difficult as it's my own conclusion. But do you know this quote by Taleb?
"I am, at the Fed level, libertarian;
at the state level, Republican;
at the local level, Democrat;
and at the family and friends level, a socialist."
The smaller the scope, the better. The reason stupid people are happier than smart people is that their scope of consideration is smaller. Being a big fish in a small pond feels good, but increase your scope of comparison to an entire country, and you become a nobody. Politics makes people miserable because the scope is too big; it's feeding your brain with problems that you have no possibility of solving by yourself. "Community" is essential to human well-being because it's cohesion on a local level. "Family values" are important for the same reason. There's more crime in bigger cities than smaller ones. Smaller communities have less crazy behaviour; they're more down-to-earth. A lot of terrible things emerge when you increase the scale of things.
Multiple things on a smaller scale do not seem to have a cost. One family can have great coherence. You can have 100 families living side by side, still great. But force them all to live together in one big house, and you will notice the cost of centralization. You will need hierarchies, coordination, and more rules. This is similar to urbanization. It's also similar to how the internet went from being millions of websites to becoming a few hundred popular websites. It's even similar to companies merging into giants that most people consider evil.
An important antidote is isolation (gatekeeping, borders, personal boundaries, independence, separation of powers, the single-responsibility principle, live-and-let-live philosophies, privacy and other rights, preservation).
I wish it was just "reduced efficiency" which was the problem. And sadly, it seems that the optimal way to increase the efficiency between many things is simply to force them towards similarity. For society, this means the destruction of different cultures, the destruction of different ways of thinking, the destruction of different moralities and different social norms.

I presume you're referring to management claiming credit

It's much more abstract than that. The number of countries, brands, languages, accents, standards, websites, communities, religions, animals, etc. is decreasing. Everything is slowly tending towards one thing having a monopoly, with that one thing being the average of what was merged.

Don't worry if you don't get the last few points. I've tried to explain them before, but I have yet to be understood.

I wonder about what basis/set of information you're using to make these 3 claims?

Once a Moloch problem has been started, you "either join or die", like you said. But we can prevent Moloch problems from occurring in the first place, by preventing the world from becoming legible enough for them to arise. For this idea, I was inspired by "Seeing Like a State" and this

There are many prisoner's-dilemma-like situations in society which do not cause problems simply because people don't have enough information to see them. If enough people cannot see them, then the games are only played by a few people. But that's the only solution to Moloch: collectively agree not to play (or, I suppose, never start playing in the first place). The number of Moloch-like problems has increased as a side-effect of the increased accessibility of information. Dating apps ruined dating by making it more legible. As information became more visible, and people had more choices and could make more informed decisions, they became less happy. The hidden information in traditional dating made it more "human", and less materialistic as well. Since rationalists, academics and intellectuals in general want to increase the openness of information and seem rather naive about the consequences, I don't want to become one of them.
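
As a toy illustration of the legibility point (all payoff numbers are invented): in a prisoner's dilemma, defection only becomes the obvious move for a player who can actually see the payoff structure.

```python
# Toy prisoner's dilemma; payoffs are invented for illustration.
# Each entry maps (my_move, opponent_move) to (my_payoff, opponent_payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move: str) -> str:
    """What a player computes once the payoff structure is fully legible."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# With full information, defection dominates regardless of the other player:
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# A player who cannot see PAYOFFS has no dominant strategy to compute,
# which is the sense in which illegibility keeps the game from being played.
```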

I agree with the factors leading to human extinction. My solution is "go back". This may not be possible, and like you say, we need to use intelligence and technology to go forwards instead. But like the alignment problem, this is rather difficult. I haven't even taught myself high-level mathematics; I've noticed all this through intuition alone.
I think letting small disasters happen naturally could help us prevent black-swan like events. Just like burning small patches of trees can prevent large forest fires. Humanity is doing the opposite. By putting all its eggs in one basket and making things "too big to fail", we make sure that once a disaster happens, it hits hard.

Related to all of this: https://slatestarcodex.com/2017/03/16/book-review-seeing-like-a-state/ (the page mentions black swan risks, Taleb, Ribbonfarm, legibility and centralization). I actually had most of these thoughts before I knew about this page, so that gives me some confidence that I'm not just connecting unrelated concepts like a schizophrenic.

My argumentation is a little messy, but I don't want to invest my life in understanding this issue or anything. Kaczynski's books have a few arguments overlapping with mine, and the other books I know are even more crazy, so I can't recommend them.

But maybe I'm just worrying over nothing. I'm extrapolating things as linear or exponential, but they may be s-shaped or self-correcting cycles. And any partial collapse of society will probably go back to normal or even bring improvements with it in the long run. A lot of people have ruined themselves worrying over things which turned out just fine in the end.

I like this post, but I have some problems with it. Don't take it too hard, as I'm not the average LW reader. I think your post is quite in line with what most people here believe (but you're quite ambitious in the tasks you give yourself, so you might get downvoted as a result of minor mistakes and incompleteness resulting from that). I'm just an anomaly who happened to read your post.

By bringing attention to tactical/emotionally pulling patterns of suffering, people will recognize it in their own life, and we will create an unfulfilled desire that only we have the solution for.

I think this might make suffering worse. Suffering is subjective, so if you make people believe that they should be suffering, or that suffering is justified, they may suffer needlessly. For example, poverty doesn't make people as dissatisfied with life as relative poverty does. It's when people compare themselves to others and realize that they could have it better that they start disliking what they have at the moment. If you create ideals, then people will work towards achieving them, but they will also suffer from the gap between the current state and the ideal. You may argue "the reward redeems the suffering and makes it bearable", and yes, but only as long as people believe that they're getting closer to the goal. Most positive emotion we experience is a result of feeling ourselves moving towards our goals.

Personal concurrent life-satisfaction is possible in-spite of punishment/suffering when punishment/suffering is perceived as a necessary sacrifice for an impending reward.

Yes, which is why one should not reduce "suffering" but "the causes of unproductive suffering". Just like one shouldn't avoid "pain", but "actions which are painful and without benefit". The conclusion of "Man's Search for Meaning" was that suffering is bearable as long as it has meaning, and that only meaningless suffering is unbearable. I've personally felt this as well. One of the times I was the most happy, I was also the most depressed. But that might just have been a mixed episode as is known from bipolar disorder.
I'm nitpicking, but I believe it's important to state that "suffering" isn't a fundamental issue. If I touch a flame and burn my hand, then the flame is the issue, not the pain. In fact, the pain is protecting me from touching the flame again. Suffering is good for survival, for the same reason that pain is good for survival. The proof is that evolution made us suffer, that those who didn't suffer didn't pass on their genes.

We are products of EA

I'm not sure this is true? EA seems to be the opposite of Darwinism, and survival of the fittest has been the standard until recently (everyone suddenly cares about reducing negative emotions and unfairness, to an almost pathological degree). But even if various forces helped me avoid suffering, would that really be a good thing?

I personally grew the most as a person as a result of suffering. You're probably right that you were the least productive when you didn't eat, but suffering is merely a signal that change is necessary, and when you experience great suffering, you become open to the idea of change. It's not uncommon that somebody hits rock bottom and turns their life around for the better as a result. But while suffering is bearable, we can continue enduring, until we suffer the death of a thousand papercuts (or the death of the boiling frog, by our own hands).
That said, growth is usually a result of internal pressure, in which an inconsistency inside oneself finally snaps, so that one can focus on a single direction with determination. It's like a fever - the body almost kills itself, so that something harmful to it can die sooner.

We are still in trouble if the average human is as stupid as I am.

Are you sure suffering is caused by a lack of intelligence, and not by too much intelligence? ('Forbidden fruit' argument) And that we suffer from a lack of tech rather than from an abundance of tech? (As Ted Kaczynski and the Amish seem to think)
Many animals are thriving despite their lack of intelligence. Any problem more complicated than "Get water, food and shelter. Find a mate, and reproduce" is a fabricated problem. It's because we're more intelligent than animals that we fabricate more difficult problems. And if something were within our ability, we'd not consider it a problem, which is why we always fabricate problems which are beyond our current capacities, which is how we trick ourselves into growth and improvement. Growth and improvement which somehow resulted in us being so powerful that we can destroy ourselves. Horseshoe crabs seem content with themselves, and even after 400 million years they just do their own thing. Some of them seem endangered now, but that's because of us?

Bureaucracy

Caused by too much centralization, I think. Merging structures into fewer, bigger structures causes an overhead which doesn't seem to be worth it. Decentralizing everything may actually save the world, or at least decrease the feedback loop which causes a few entities to hog all the resources.

Moloch

Caused by too much information and optimization, and therefore unlikely to be solved with information and optimization. My take here is the same as with intelligence and tech. Why hasn't Moloch killed us sooner? I believe it's because the conditions for Moloch weren't yet reached (optimal strategies weren't visible, as the world wasn't legible and transparent enough), in which case going back might be better than going forwards.

The tools you wish to use to solve human extinction are, from my perspective, what is currently leading us towards extinction. You can add AGI to this list of things if you want.

Great post!

It's a habit of mine to think in very high levels of abstraction (I haven't looked much into category theory though, admittedly), and while it's fun, it's rarely very useful. I think it's because of a width-depth trade-off. Concrete real-world problems have a lot of information specific to that problem, you might even say that the unique information is the problem. An abstract idea which applies to all of mathematics is way too general to help much with a specific problem, it can just help a tiny bit with a million different problems.

I also doubt the need for things which are so complicated that you need a team of people to make sense of them. I think that's likely a result of bad design. If a beginner programmer made a slot machine game, the code would likely be convoluted and unintuitive, but you could probably design the program in a way that all of it fits in your working memory at once. Something like "A slot machine is a function from the cartesian product of wheels to a set of rewards". An understanding which would simplify the problem so that you could write it much shorter and simpler than the beginner could. What I mean is that there may exist simple designs for most problems in the world, with complicated designs being due to a lack of understanding.
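
A minimal sketch of that framing, assuming an invented three-wheel machine and payout table:

```python
from itertools import product
from typing import Dict, Tuple

# "A slot machine is a function from the cartesian product of wheels to
# a set of rewards." The wheels and payouts below are made up for the sketch.
WHEELS = [("cherry", "bell", "seven")] * 3  # three identical wheels

PAYOUTS: Dict[Tuple[str, ...], int] = {
    ("seven", "seven", "seven"): 100,
    ("bell", "bell", "bell"): 20,
    ("cherry", "cherry", "cherry"): 5,
}

def payout(outcome: Tuple[str, ...]) -> int:
    """The entire game is this one mapping; everything else is presentation."""
    return PAYOUTS.get(outcome, 0)

# The function's domain really is the cartesian product of the wheels:
all_outcomes = list(product(*WHEELS))  # 3**3 = 27 possible outcomes
assert payout(("seven", "seven", "seven")) == 100
assert payout(("cherry", "bell", "seven")) == 0
```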

The real world values the practical way more than the theoretical, and the practical is often quite sloppy and imperfect, and made to fit with other sloppy and imperfect things.

The best things in society are obscure by statistical necessity, and it's painful to see people at the tail ends doubt themselves at the inevitable lack of recognition and reward.

I think there's a problem with the entire idea of terminal goals, and that AI alignment is difficult because of it.

"What terminal state does you want?" is off-putting because I specifically don't want a terminal state. Any goal I come up with has to be unachievable, or at least cover my entire life, otherwise I would just be answering "What needs to happen before you'd be okay with dying?"

An AI does not have a goal, but a utility function. Goals have terminal states; once you achieve them you're done, and the program can shut down. A utility function goes on forever. But generally, wanting just one thing so badly that you'd sacrifice everything else for it... seems like a bad idea. Such a bad idea that no person has ever been able to define a utility function which wouldn't destroy the universe when fed to a sufficiently strong AI.

I don't wish to achieve a state, I want to remain in a state. There's actually a large space of states that I would be happy with, so it's a region that I try to stay within. The space of good states forms a finite region, meaning that you'd have to stay within this region indefinitely, sustaining it. But something which optimizes seeks to head towards a "better state"; it does not want to stagnate. Yet this is precisely what makes it unsustainable, and something unsustainable is finite, and something finite must eventually end, and something which optimizes towards an end is just racing to die. A human would likely realize this if they had enough power, but because life offers enough resistance, none of us ever win all our battles. The problem with AGIs is that they don't have this resistance.
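
A toy sketch of the difference, with all states and thresholds invented: a terminal goal is a halting condition, while "remaining in a region" is maintenance that never terminates.

```python
# Illustrative only; the numbers stand in for "states of the world".

def run_goal_agent(state: float) -> float:
    """A goal has a terminal state: once reached, the program is done."""
    while state < 10:   # "what needs to happen before you'd be okay with dying?"
        state += 1      # optimize towards the end...
    return state        # ...and then there is nothing left to do

def run_homeostatic_agent(state: float, steps: int) -> float:
    """No terminal state: just stay inside the region of acceptable states."""
    low, high = 3.0, 7.0    # the (invented) region I'd be happy to remain in
    for _ in range(steps):
        if state < low:
            state += 1.0    # drift back up into the region
        elif state > high:
            state -= 1.0    # drift back down into the region
        # inside the region there is nothing to maximize, only to sustain
    return state
```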

The after-lives we have created so far are either sustainable or the wish to die. Escaping samsara means disappearing, heaven is eternal life (stagnation) and Valhalla is an infinite battlefield (a process which never ends). We wish for continuance. It's the journey which has value, not the goal. But I don't wish to journey faster.

I meant that they were functionally booleans, as a single condition is fulfilled: "is rich", "has anvil", "AGI achieved". In the anvil example, any number past 1 corresponds to true. In programming, casting a non-negative integer to a boolean results in "true" for all positive numbers and "false" for zero, just like in the anvil example. The intuition carries over too well for me to ignore.

The first example which came to mind for me when reading the post was confidence, which is often treated as a boolean "Does he have confidence? yes/no". So you don't need any countable objects, only a condition/threshold which is either reached or not, with anything past "yes" still being "yes".

A function where everything past a threshold maps to true, and anything before it maps to false, is similar to the anvil example, and to a function like "is positive" (since a more positive number is still positive). But for the threshold to be exactly 1 unit, you need to choose a unit which is large enough. $1 is not rich, and having one water droplet on you is not "wet", but with the appropriate unit (exactly the size of the threshold/condition) these should be functionally similar.
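
A small Python sketch of the analogy (the dollar threshold is an arbitrary invented unit):

```python
# Everything past the threshold maps to True, everything below to False.
def is_rich(dollars: int, threshold: int = 1_000_000) -> bool:
    return dollars >= threshold

# Python's bool() on non-negative integers is the same shape of function,
# with a threshold of one: zero is False, any positive count is True.
assert bool(0) is False
assert all(bool(n) for n in (1, 5, 1_000))

# Measured in units of exactly one threshold, the two functions coincide:
assert is_rich(2_500_000) == bool(2_500_000 // 1_000_000)  # 2 units -> True
assert is_rich(500_000) == bool(500_000 // 1_000_000)      # 0 units -> False
```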

I'm hoping there is simple and intuitive mathematics for generalizing this class of problems. And now that I think about it, most of these things (the ones which can be used for making more of themselves) are catalysts (something used but not consumed in the process of making something). Using money to make more money, anvils to make more anvils, breeding more of a species before it goes extinct.

This probably makes more sense if you view it as a boolean type: you either "have an anvil" or you don't, and you either have access to fire or you don't. We view a lot of things as booleans (if your clothes get wet, then "wet" is a boolean). This might be helpful? It connects what might seem like a sort of edge case to something familiar.

But "something that relies on itself" and "something which is usually hard to get, but easy to get more of once you have a bit of it" are a bit more special I suppose. "Catalyst" is a sort of similar yet different idea. You could graph these concepts as dependency relations and try out all permutations to see if more types of problems exists

The short version is that I'm not sold on rationality, and while I haven't read 100% of the Sequences, it's also not like my understanding is 0%. I'd have read more if they weren't so long. And while an intelligent person can come up with intelligent ways of thinking, I'm not sure this is reversible. I'm also mostly interested in tail-end knowledge. For some posts, I can guess the content by the title, which is boring. Finally, teaching people what not to do is really inefficient, since the space of possible mistakes is really big.

Your last link needs an s before the dot.

Anyway, I respect your decision, and I understand the purpose of this site a lot better now (though there's still a small, misleading difference between the explanation of rationality and how users actually behave. Even the name of the website gave the wrong impression).
