StartAtTheEnd

Nobody special, nor any desire to be. Just sharing my ideas when I appear to know better than the person I'm responding to, or when I believe I have something interesting to add. I'm neither a serious nor a formal person, and if you're more knowledgeable than intelligent, you probably won't like me, as I lack academic rigor.

Feel free to correct me when I make mistakes. I'm too certain of myself as my ideas are rarely challenged. Crocker's rules are fine! When playing intellectual (I do on here) I find that social things only get in the way, and when I socialize I find that intellectual things get in the way, so I separate them.

Finally, beliefs don't seem to be a measure of knowledge and intelligence alone, but a result of experiences and personality. Those who have had similar experiences and thoughts already will recognize what I say, and those who don't will mostly perceive noise.


Well, we somehow changed smoking from being cool to being a stupid, expensive and unhealthy addiction. I think the method is about the same here. But the steps an individual can take are very limited. In politics, you have millions of people trying to convert other people to their own ideology, so if it were easy for an individual to change the values of society, we'd have extremists all over.

Anyway, you'd probably need to start a YouTube channel or something. By combining competence and simplicity, you could make content that most people can understand, and become popular doing that. "Hoe math" comes to mind as an example. Jordan Peterson and other such people are a little more intellectual, but there's also a large number of people who don't understand them. Plus, if you don't run the account anonymously, you'd take risks to your reputation proportional to how controversial your message is.

People in web3 often understand that deteriorating user privacy means more money than protecting it

That's a shame. Why are they in web3 in the first place, then? The only difference is the design, and from what I've seen, the designs give power to the users rather than to some centralized mega-corporation.

Why does cybersecurity favour offence over defence?

I think this is due to attack-defense asymmetry. Attackers have to find just one vulnerability, defenders have to stop all attacks. I do however agree that very few people ask these questions.

I think Tor would scale no problem if more people used it, but it has the same problem that 8chan and the privacy-focused products and websites have: all the bad people (and those who were banned on most other sites) flock there first, and they create a scary environment or reputation, which makes normal people not want to go there or use the service. Many privacy-oriented apps have the reputation of being used by criminals and pedophiles.

This problem would go away if there were more places where privacy was valued, since the density of "bad people" would go down as the thing in question became more popular.

But I've noticed that everything gets worse over time. In order to have good products, we need new ones to be made. Skype sucked, then people jumped to Discord. Now Discord sucks, so people might soon jump to something new. It's both "enshittification" and incentives.

Taxes go up over time. We get more laws, more rules, more regulations, more advertisement, more ads. The more power a structure has, the worse it seems to treat those inside of it, and the less fair it becomes. Check out this 1999 ad for Google. It's a process similar to corruption, and the only solution seems to be revolutions or collective agreements to seek out alternatives when things get bad enough. Replacing things is less costly than fixing them, which is probably why deaths and births exist. Nature just starts over in cycles, with the length of each cycle being proportional to the size of the structure (the average life span of companies in America seems to be 15 years, the average life span of nations seems to be about 150 years, and the average life span of a civilization seems to be 336 years).

So, in my mental model of the world, corruption and DNA damage are the same thing, enshittification is similar to cancer, and nothing lives forever because bloat/complexity/damage accumulates until the structure dies. But I can only explain how things are; coming up with solutions is much more difficult.

If you know of any high-leverage ways

This seems like a problem of infinite regress. 

"Solving it is easy, just do X"

"The problem is that people don't do X, how do we make them?"

"Just do Y"

"The problem is that people don't do Y, how do we make them?"

"Just do Z"

...

To name some powerful upstream factors, I'd say "Increase the social value of growth and maturity". I guess this is what we did in the past, actually. Then people started complaining that our standards were harsh because they made losers low-value, and then they gave power and benefits to the status of victim, and then people started competing at playing the victim rather than at improving their character into something worthy of respect.

By the way, another powerful influence in the worsening of society seems to be large companies which play on social norms, personal needs, and social perception in order to make money. "Real men do ___", "___ is pretentious", "Doing ___ is cringe". Statements like these influence how people behave and what they strive for, since the vast majority of people want to appear in a way that others approve of. We must have fallen a long way as a society, for the only positive pressure I can think of is neo-nazis who encourage others to improve themselves (to read old books and lift weights).

Let's see... People are doing away with core family values, claiming that they get in the way of their freedom (but I think it's an immature dislike of responsibility and obligation, with a dash of narcissism which makes people avoid actions that don't benefit them personally). Family bonds also seem to be weakening because of politics; some families split apart over disagreements on who to vote for, and this is a new problem to me, I don't recall hearing of such things before 2016.

Another factor making things worse is that the media reports on the absolutely stupidest people that they can find, in order to make the "political enemy" look as bad as possible. But this has the side-effect of people overestimating themselves. If somebody felt they were a math genius for knowing basic trig functions, they'd walk around feeling smug, never pushing themselves into university-level maths.

Here's a quote from a book from 2005 (it's a book on dating by the way):

"TO GIVE you an impression of how much things have been dumbed down, consider the Lord of the Rings. Today, people treat it as an epic adult story that is a bit 'too long'. When it was published, it was a simple children's story. A simple children's story is now an adult epic! And is Alice in Wonderland now considered 'literature'? Perish the thought."

YouTube videos are not a bad idea, by the way!

The incentives are not in favour of it

That's a shame. When I search "web 3.0", the results seem to hint that people understand the problem they're trying to fix, and fixing the problem leads to structures which are resistant against giant companies, and this must improve privacy (if it doesn't, then the design will be the same as what it's replacing, just with somebody else in charge, so over time corruption will kick in and we'll be back where we started; the structure itself must be corruption-resistant).

There are people in the world who enjoy privacy and freedom and such, and it's not just criminals. But their products are not as mainstream as they used to be; the only privacy-oriented one I frequently hear about is Protonmail. Mega.io also claims to be pro-privacy... but somehow piracy is against its rules? If it can detect that I upload copyrighted content to my private storage, then it's not private storage. I'm not sure how that works. Many services which claim to be secure and pro-privacy seem to be lying, or at least using these words loosely, in a relative rather than absolute sense.

All good! I wrote a long response after all.

But what future do you value? Personally, I don't want to decrease the variances of life, but I do want to increase the stability. 

In either case, I think my answer is "Invest in the growth and maturation of the individual, not in the external structures that we crudely use to keep people in check" 

Can you convince all people who have surveillance powers to not use them

No, but we can create systems in which surveillance is impossible from an information-theoretic perspective. Web 3.0 will likely do this unless somebody stops it, and there are ways to stop it too (you could, for instance, argue that whoever creates these systems is aiding criminals and terrorists).

Anxiety seems to be why individual people prefer transparency of information, but it's not why the system prefers it. The system merely exploits the weakness of the population to legitimize its own growth and to further its control of society.

Converting everyone to a single value system is not easy. But we can improve the average person and thus improve society that way, or we can start teaching people various important things so that they don't have to learn them the hard way. One thing I'd like to see improved in society is parenting; it seems to have gotten worse lately, and it's leading to a deterioration of the average person and thus a general worsening of society.

A society of weak people leads to fear, and fear leads to mistrust, which leads to low-trust societies. By weak, I mean people who run away from trauma rather than overcoming it. You just need to process uncomfortable information successfully to grow; it's not even that difficult, it just requires a bunch of courage. We're all going to die sometime, but not all of us suffer from this idea and seek to run away by drinking or distracting ourselves with entertainment. Sometimes it's even possible to turn unpleasant realities into optimism and hope, and this is basically what maturity and development are.

I think this effect already happened, just not because of AI.

Nietzsche already warned against the possible future of us turning into "The last man", and the meme "Good times create weak men" is already a common criticism/explanation of newer cultures. There are also memes going around calling people "soy", and increases in cuckolding and other traits which seem to indicate falling testosterone levels (this is not the only cause, but I find it hard to put a name on the other causes, as they're more abstract).

We're being domesticated by society/"the system". We've built a world where cunning is rewarded over physical aggression, in which standing out in any way is associated with danger, and in which we praise the suppression of human nature, calling it "virtue". Even LW is quite harsh on natural biases.

It's a common saying that modern society and human nature are a poor fit, and that this leads to various psychological problems. But the average man has nowhere to aim his frustrations, and he has no way to fight back. The enemy of the average person is not anything concrete; they're being harassed by things which are downstream consequences of decisions made far away from them, by people who will never hear what their victims think about their ideas. I think this leads to a generation of "broken men". This is unlikely to change the genetics of society, though, unless the most wolf-like of us fight back and get punished for it, or unless those who suffer the least from these changes are the least wolf-like (which I think may be the case).

Dogs survive much better than wolves in our current society, and I think it's fair to say that social and timid people survive better than aggressive people who stand up to that which offends them, and more so now than in the past (one can still direct their aggression at the correct targets, but this requires a lot more intelligence than aggressive people tend to have)

I think this is likely to continue, though, by which I mean to say that you don't seem incorrect. Did you use AI to write this article? If so, that would explain the downvotes you got. And a personal nitpick with the "Would this even be Bad?" section: "Mood stabilizing" is a misleading term, it actually means mood-reducing. Our "medical solutions" to people suffering in society are basically minor lobotomies. By making people less human, they become a better fit for our inhuman system. If you enjoy the thought of being domesticated, you're probably low on testosterone, or otherwise a piece of evidence that human beings have already been strongly weakened.

Predict and control... I'm not sure about that, actually. The world seems to be a complex system, which means that naive attempts at manipulating it often fail. I don't think we're using technology to control others in the sense that we can choose their actions for them, but we are decreasing the diversity of actions that one can take (for instance, anything which can be misunderstood seems to be a no-go now, as strangers will jump in to make sure that nothing bad is going on, as if it were their business to get involved in other people's affairs). So our range of motion is reduced, but it's not locked to a specific direction which results in virtue or something.

I don't think that the world can be controlled, but I also think that attempts at controlling it by force are mistaken, as there are more upstream factors which influence most of society. For instance, if your population is Buddhist, they will believe that treating others well is the best thing to do, which I think is a superior solution to placing CCTVs everywhere. The best solutions don't need force, and the ones which use force never seem optimal (consider the war on drugs, the taboo on sexuality, attempts at stopping piracy, etc.). I think the correct set of values is enough (but again, the receiver needs to agree that they're correct voluntarily). If everyone can agree on what's good, they will do what's good, even if you don't pressure them into doing so.

I'm also keeping extinction events in mind and trying to combat them, I just do so from a value perspective instead. I'm opposed to creating AGIs, and we wouldn't have them if everyone else were opposed as well. Some people naively believe that AGIs will solve all their problems, and many don't place any special value on humanity (meaning that they don't resist being replaced by robots). But there's also many people like me who enjoy humanity itself, even in its imperfection.

I mean you as the owner of your machine can audit what packets are entering or exiting it

This is likely possible, yeah. But you can design things in such a way that they're simply secure, as it's impossible for them not to be. How do you prevent a lock from being hacked? You keep it mechanical rather than digital. I don't trust websites which promise to keep my password safe, but I trust websites which don't store my password in the first place (they could run it through a one-way hash). Great design makes failure impossible (e.g. atomic operations in banking transfers).
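The one-way-hash idea can be sketched in a few lines (a minimal illustration under my own assumptions, not any particular site's implementation; the function names are made up for the example):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a one-way hash; a site following this scheme stores only (salt, digest)."""
    if salt is None:
        salt = os.urandom(16)  # random salt, so identical passwords get different digests
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash from the stored salt and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True: the right password reproduces the digest
print(verify_password("wrong", salt, digest))    # False: the digest can only be compared, not reversed
```

Even if the stored (salt, digest) pair leaks, the password itself was never stored, which is the "failure is impossible by design" point.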

I’m curious about your thoughts on that. 

This would likely result in security, but it comes at a huge cost as well. I feel like there are better solutions, and not just for a specific organization, but for everyone. You could speak freely on the internet just 20 years ago (freely enough that you could tell the nuclear launch codes to strangers if you wanted to), so such a state is still near in a sense. Not only was it harder to spy on people back then, fewer people even wanted to do such a thing, and this change in mentality is important as well. I'm not trying to solve the problem in our current environment; I want to manipulate our environment into one in which the problem doesn't exist in the first place.

We just have to resist the urge to collect and record everything (this collection is mainly done by malicious actors anyway, and mainly because they want to advertise to you so that you buy their products). You could go on vacation in a country which considers it bad taste to pry into others' affairs and be more or less immune thanks to that alone, so you don't even need to learn opsec, you just need to be around people who don't know what that word means. You could also use VPNs which keep no logs (if they're not lying, of course), as nothing can be leaked if nothing is recorded.

Sadly, the same forces which destroyed privacy are trying to destroy these methods. It's the common belief that we need to be safe, and that in order to be safe we need certainty and control. I don't even think this is purely ideology; I think it's a psychological consequence of anxiety (consider 'control freaks' in relationships as well). Society is dealing with a lot of problems right now which didn't exist in the past, not because they didn't happen, but because they weren't considered problems. And if we don't consider things to be problems, then we don't suffer from them, so the people who are responsible for creating the most suffering in life are those who point at imperfections (like discrimination and strict beauty standards) and convince everyone that life is not worth living until they're fixed.

Finally, people can leak information, but human memory is not perfect, and people tend to paraphrase each other, so "he said she said" situations are inherently difficult to judge. You have plausible deniability since nobody can prove what was actually said. I think all ambiguity translates into deniability, which is also why you can sometimes get away with threatening people: "It would be terrible if something bad happened to your family" is a threat, but you haven't actually shown any intent to break the law. Ambiguity is actually what makes flirting fun (and perhaps even possible), but systematizers and people in the autism cluster tend to dislike ambiguity; it never occurs to them that both ambiguity and certainty have pros and cons.

I mean politically

Politics is a terrible game. If possible, I'd like to return society to the state it had before everyone cared too much about political issues. Since this is not an area where reasonable ideas work, I suggest just telling people that dictators love surveillance (depending on the ideology of the person you're talking to, make up an argument for how surveillance is harmful). The consensus on things like censorship and surveillance seems to depend on the ideology one perceives them to support. Some people will say "We need to get rid of anonymity so that we can shame all these nazis!" but that same sort of person was strongly against censorship 13 years ago, because back then censorship was thought to be what the evil elite used to oppress the common man. So the desire to protect the weak resulted in both "censorship is bad" and "censorship is good" being common beliefs, and it's quite easy for the media to force a new interpretation since people are easily manipulated.

By the way, I think "culture war" topics are against the rules, so I can only talk about them in a superficial and detached manner. Vigilantes in the UK are destroying cameras meant to automate the fining of people, and as long as mentalities/attitudes like this dominate (rather than the belief that total surveillance somehow benefits us and makes us safe), I think we'll be alright. But thanks to technological development, I expect us to lose our privacy in the long run, for the simple reason that people will beg the government to take away their rights.

Sorry in advance for the wordy reply. 

Can you identify the specific arguments from ISAIF that you find persuasive

Here's my version (which might be the same; I take responsibility for any errors, but no credit for any overlap with Ted's argument):

1: New technologies seem good at first/on the surface.

2: Now that something good is available, you need to adopt it (or else you're putting yourself or others at a disadvantage, which social forces will punish you for).

3: Now that the new technology is starting to become common, people find ways to exploit/abuse it. This is because technology is neutral; it can always be used for both good and bad things, you cannot separate the two.

4: In order to stop abuse of said technology, you need to monitor its use, restrict access with proof of identity, regulate it, or create new and even stronger technology.

5: Now that you're able to regulate the new technology, you must do so. If you can read peoples private emails, and you choose not to, you will be accused of aiding pedophiles and terrorists (since you could arguably have caught them if you did not respect their privacy)

This dynamic has a lot of really bad consequences, which Ted also writes about. For instance, once gene editing is possible, why would we not remove genes which result in "bad traits"? If you do not take actions which make society safer, you will be accused of making society worse. So we might be forced to sanitize even human nature, making everyone into inoffensive and lukewarm drones (as the traits which can result in great people and in terrible people are the same; the good and the bad cannot be separated. This is why new games and movies are barely making any money, and it's why Reddit is dying. They removed the good together with the bad).

 

I’m curious how long you think you will be able to slow it down and what your ideas for doing so are

I can slow it down for myself by not engaging with these new technologies (IoT, subscription-based technology, modern social media, etc.), by using fringe privacy-based technologies, or simply by not making noise. (If nothing you say escapes the environment in which you said it, you're likely safe. If what you said is not stored for longer periods of time, you're likely safe. If the environment you're in is sufficiently illegible, information is lost and you cannot be held accountable.)

I'm also doing what I can to teach people that:

1: Good and Bad cannot be separated. You can only have both of them or none of them. I think this is axiomatically true, which suggests that the Waluigi Effect occurs naturally (just like intrusive thoughts).

2: You cannot have your cake and eat it too. You can have privacy OR safety, you cannot have both. You cannot have a backdoor that only "the good guys" can access. You cannot have a space where vulnerable groups can speak out, without also having a space where terrorists can discuss their plans. You cannot have freedom of speech and an environment in which nothing offensive is said.

Most people in the web3 space are not taking internet anonymity as seriously as it needs to be

This is possibly true, but the very design of web3 (decentralization, encryption) makes privacy possible. If your design makes it so that large corporations cannot control your community, it also makes it so that the government is powerless against it, as these are equal on a higher level of abstraction.

That can audit every single internet packet entering and exiting a machine

This sounds like more surveillance rather than less, and I don't think it's an optimal solution. We need to create something in which no person is really in charge, if we want actual privacy. The result will look like the Tor network, and it will have the same consequences (like illegal drug trade). If a platform is not a safe place to sell drugs, it's also not a safe place to speak out against totalitarianism or corruption, and it's also not a safe place to be a minority, and it's also not a safe place to be a criminal. I think these are equivalent; you cannot separate good and bad.

I like talking in real life, as no records are kept. What did I say, what did I do? Nobody knows, and nobody will ever know. I don't have to rely on trust or probability here. Like with encryption, I have mathematical certainty, and that's the only kind of certainty which means anything in my eyes.

Self-destructing messages are safe as well, as is talking on online forums which will cease to exist in the future, taking all the information with them (what did I say on the voice chat of Counter Strike 1.4? Nobody knows)

Communities like LW have cognitive preferences for legibility, explicitness, and systematizing, but I think the reason Moloch did not bother humanity before the 1800s is that it couldn't exist. It seems like game-theoretic problems are less likely to occur when players don't have access to enough information to optimize. This all suggests one thing: that information (and openness of information) is not purely good. It's sometimes a destructive force. The solution is simple to me: minimize the storage and distribution of information.

edit: Fixed a few typos

For about 20 years now, I have considered automated mass-surveillance likely to occur, and have tried to prevent it. It bothers me that so many people don't have enough self-respect to feel insulted by the infringement of their privacy, and that many people are so naive that they think surveillance is for the sake of their safety.

Privacy has already been harmed greatly, and surveillance is already excessive. And let me remind you that the safety we were promised in return didn't arrive.  

The last good argument against mass-surveillance was "They cannot keep an eye on all of us" but I think modern automation and data processing has defeated that argument (people have just forgotten to update their cached stance on the issue).

Enough ranting. The Unabomber argued that increases in technology would necessarily lead to reduced freedom, and I think his argument is sound from a game-theory perspective. Looking at the world, it's also trivial to observe this effect, while it's difficult to find instances in which the number of laws has decreased, or in which privacy has been won back (the same applies to regulations and taxes; many things have a worrying one-way tendency). The end-game can be predicted with simple extrapolation, but if you need an argument, it's that technology is a power-modifier, and that there's an asymmetry between attack and defense (the ability to attack grows faster, which I believe caused the MAD stalemate).

I don't think it's difficult to make a case for "1", but I personally wouldn't bother much with "2" - I don't want to prepare myself for something when I can help slow it down. Hopefully web 3.0 will make smaller communities possible, resisting the pathological urge to connect absolutely everything together. Then we can get separation back, so that I can spend my time around like-minded people rather than being moderated to the extent that no group in existence is unhappy with my behaviour. This would work out well unless encryption gets banned.

The maximization of functions leads to the death of humanity (literally or figuratively), but so does minimization (I'm arguing that pro-surveillance arguments are moral in origin and that they make a virtue out of death).

I have reasons for believing that Mensa members are below 130, but also reasons for believing that they're above.

Below: Most online IQ tests are similar enough to the Mensa IQ test that the practice effect applies. And most people who obsess about their IQ scores probably take a lot of online IQ tests, memorizing most patterns (there's a limit to the practice effect, but it can still give you at least 10 points)

Above: Mensa tests for pattern recognition abilities, which in my experience correlate worse with academic performance than verbal abilities do. Pattern recognition tests also select for people with autism (they tend to score about 20 points higher on RPM-like pattern recognition tests (matrices) than on other subtests). These people will be smarter than they sound, because their low verbal abilities make them appear stupid, even though their pattern recognition might be 2 standard deviations higher. So you get intelligent people with poor social skills, who sound much dumber than they are, and who tend to have more diagnoses than just autism. It's no wonder that these people go to forums like Mensa's, or that they're less successful in life than their IQ would suggest. These people are also incredibly easy targets for the kind of people who go to r/iamverysmart, so it's easy to build the public consensus that they're actually stupid, even when it isn't true.

However, in order for high intelligence to shine (and produce worthy insights) even without formal education, IQs above 150 are likely needed. For in order to generate your own ideas and still be able to compete with the consensus (which is largely based on the theories of geniuses like Tesla, Einstein, Neumann, Turing, Pavlov, etc.), you need to discover similar things yourself, independently.

I think many rationalists are above 130. I don't like rationalist mentalities very much, though. They seem to think that everything needs to have a source or a proof (a projected lack of confidence in their own discernment). They also tend to overestimate the value of knowledge (sometimes even using it as a synonym for intelligence). If somebody's IQ is, say, 110, I don't think they will ever have any great takes (even with years of study) which a 140-IQ person couldn't run circles around given a week or two of thought. Ever seen somebody invest their whole life into something that you could dismantle or do better in 5 minutes? You could look at this and go "Rapid feedback is better because you approximate reality and update your beliefs faster, makes sense, but why overcompl- right, it's to make mone- to legitimize the only position in which they are thought to have value - because agile coaches are selling ideas/theory and rely on the illusion of substance, of course"

People tend to get suspicious if you claim IQs above 125, and start analyzing data and looking for reasons to believe that the actual numbers are lower. But I feel like such people really overestimate what an IQ in the 120s or 130s looks like. If you go on the Mensa forums, you will likely find that most of the comments seem rather dumb, and that the community generally appears dumber than LW.

A large number of people who report scoring in the 130s on IQ tests are not lying. If the number seems off but isn't, then what needs updating is the impression of what an IQ in the 130s looks like.
I suppose people dislike that some high-IQ people just aren't doing very well in life, and prefer to think that they're lying about their scores.

That's basically the exact same idea I came up with!

Your link says popularity ≈ beauty + substance; that's no different from my example of "success of a species = quality of offspring + quantity of offspring". I just generalized to a higher number of dimensions, such that for a space of N dimensions, the success of a person is the area spanned. So it's like those stat circles, but n-dimensional, where n is the number of traits of the person in question. I don't know whether traits are best judged when multiplied or added together, but one could play around with either idea.
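The multiplied-vs-added question can be played with in a short sketch (the trait scores below are made-up numbers, purely for illustration):

```python
import math

# Hypothetical trait scores on a 0-10 scale (illustrative only).
specialist = [9, 9, 1, 1]  # strong in two dimensions, weak in two
generalist = [5, 5, 5, 5]  # even across the board

def additive(traits):
    """Success as a plain sum of traits."""
    return sum(traits)

def multiplicative(traits):
    """Success as the n-dimensional 'area spanned' by the traits."""
    return math.prod(traits)

# Both profiles look identical under the additive model...
print(additive(specialist), additive(generalist))              # 20 20
# ...but the multiplicative model heavily punishes weak dimensions.
print(multiplicative(specialist), multiplicative(generalist))  # 81 625
```

Under the multiplicative model, raising your weakest dimension matters far more than pushing your strongest one higher, so the two interpretations give genuinely different advice about where to improve.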

I'm not sure my insights say anything that you haven't already, but what I wanted to share is that you might be able to improve yourself by observing unsuccessful people and copying their traits in the dimension where you're lacking (this was voted 'wrong' above, but I'm not sure why). And if you want success, mimicking the actions of somebody who is ugly should be more effective, which is rather unintuitive and amusing.

I also think it would be an advantage for an attractive person to experience what it's like not to be attractive for a while, get used to it, and then become attractive again. Since he would have to make up for a deficit (he's forced to improve himself), when the advantage comes back he'd be further along than if he had never had the period of being unattractive. As is often the case with intelligent people, I never really had to study in school, but this made me unable to develop proper study habits. If I learned how below-average people made it through university, this would likely help me more than observing even the best-performing student in my class.

A related insight is that if you want a good solution, you have to solve a worse problem. Want a good jacket for the cold weather? Find out what brands they use on Greenland, those are good, they have to be. Want to get rid of a headache? Don't Google "headache reliefs", instead, find out what people with migraines and cluster-headaches do, for they're highly motivated to find good solutions.

Anyway, I swear I came up with these ideas before you wrote your post; the similarity is a coincidence, though it looks like I just wrote a worse version of your post. I was partly inspired by I Ching hexagram 42, which says something like "When the superior man perceives good, he imitates it; when he perceives faults, he eliminates them in himself".
