halinaeth

found LW as a child, then found it again 10 years later through a fortunate series of coincidences :)

say hi from LW: @halina_eth on X

Ah, good point! Over a long enough time period, never promising anything denies you the opportunity to showcase that you have a low "breaking promises" rate. I hadn't factored that into the false negative/positive scenarios.

I see, these are great examples of "destruction paths", thank you! What I'm hearing is essentially:

- in communities that gain prestige, infighting that causes collapse
- members dying out over time

I think these are different than what I'm observing in my community. Thinking about it, two patterns jump to mind:

- as our community gained prestige, members started tearing down or attacking "rival" communities to gain in-group points. But this gives us a bad reputation & deters new members from wanting to join, so the community doesn't gain "new blood" and calcifies. (seems parallel to the prestige > infighting problem you described!)
- our community has clearly delineated founders, and since it's a financially based community (crypto), people who criticize the founders' choices are ridiculed and ostracized for creating "FUD". Now no one wants to criticize publicly for fear of being eaten alive, and I only hear people express discontent 1:1, never in public. (only once the community had performed much worse financially did more people start expressing discontent publicly, but by then it was too late to give the founders actionable feedback, as they'd already invested significant resources)

I wonder if financially based communities like crypto communities tend to fall into the "tribalism > bad reputation > no newcomers" and "attack anyone who criticizes leadership" patterns more often? For example, is this failure mode more common in startups too?

Would love to know if anyone's written on dynamics like this; any links appreciated.

Yes to both! The lying model is great to have, especially on the internet where everyone trolls for fun. But to Nathan's point, especially as the cost of intellectual labor goes to zero, the net benefit of investigating these cases would keep increasing. Seems worth a try to find some obscure low-hanging fruit!

True or not, wouldn't you say the idea it illustrates is sound? However small the percentage, a nonzero number of people claiming ridiculous things are telling the truth (they're just framing it in a ridiculous way, with the wrong correlations).

If as a society we investigated these cases more often instead of dismissing them, would it lead to a net positive for humanity? For example, if everyone had heard "drinking mud soup in this specific part of the world consistently cures X affliction" and dismissed it, wouldn't many pharmaceutical companies have missed the star compounds behind their bestselling drugs?

To be clear, I agree that the majority of these wild tales lead nowhere, but I wonder if it's worth investigating anyway for the minority of cases that lead somewhere unexpected.

Love this example! 

Reminds me of the "haunted apartment" case in Korea, where dogs kept going insane near a certain spot by the entrance of the apartment complex, and eventually investigators realized there was a malfunction that caused an electric current on the entrance floor, which the dogs' paws could feel but humans with shoes couldn't detect.

I wonder what other phenomena we're too quick to dismiss because they're framed in a way that sounds absurd.

How to Poison the Water?

I think we've all heard the saying about the fish and the water (the joke goes, an old fish asks the young fish about the water, and the young fish reply, "what's water?").

I'm curious about the key failure modes or methods that tend to "poison the water", i.e. destroy or negatively alter an organization's or scene's culture or norms. Are there major patterns that communities tend to fall into as they self-destruct?

Would love for anyone to share resources or general reflections on this. I'm currently part of an (unrelated) community where I see this happening, but am having a hard time putting into words exactly what's wrong.

An example of the type of helpful framework that I'd be looking for:
- Geeks, Mops, and Sociopaths

This example is super helpful! When people might take your information and act on it as an assurance, aka a "promise", you should stick to purely "information"-style phrasing, or stay vague, to avoid "promising".

Can you think of any instance where a "false negative" has been an issue, i.e. where people took an assurance as mere information and that caused problems? Or is the main failure mode to look out for the "false positive"?

This is a super helpful framework, thank you!

How often would you say to stare at the abyss regarding job/career trajectory in general? Is annually too often? And how can you tell if your failure mode is staring too often vs not enough (staying somewhere too long vs not investing enough time to succeed)?

In general, if you're not happy with your level of success or achievement in life thus far and have tried several paths (about a year each), would you say one should keep pivoting each year? In other words, if you're not happy with the velocity or trajectory of success on your current path after ~1 year of sustained effort, is pivoting usually the right answer?

Been wrestling with this question a ton myself: pivot and start over, or keep working on something that doesn't give results as strong as I'd like.

Makes sense, thanks for the new vocab term!

Hi! New to the forums and excited to keep reading. 

Bit of a meta-question: given the proliferation of LLM-powered bots on social media like Twitter, do the LW mods/team have any concerns about AI-generated content becoming an issue here in a more targeted way?

For a more benign example, say one wanted to create multiple "personas" here to test how others react. They could create three accounts and always respond to posts with all three: one with a "disagreeable" persona, one neutral, and one "agreeable".

A malicious example: someone hates an idea or person, X, on the forums. They could use GPT-4o to brainstorm avenues of attack on X, then create any number of accounts that always flag posts about X to criticize and challenge. They could thus bias readers both by creating a false "majority opinion" and through sheer exposure & chance (someone skimming the comments might only see the criticizing & skeptical ones).

Thanks for entertaining my random hypotheticals!
