by [anonymous]
1 min read · 27th Jun 2016 · 79 comments


If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


Rationality lessons from Overwatch, a multiplayer first-person shooter:

1) Learning when you're wrong: The killcam, which shows how I died from the viewpoint of the person who killed me, often corrects my misconception of how I died. Real life needs a killcam that shows you the actual causes of your mistakes. Too bad that telling someone why they are wrong is usually considered impolite.

2) You get what you measure: Overwatch's post-game scoring gives medals for teamwork activities such as healing and shots blocked, and this contributes to players' willingness to help their teammates.

3) Living in someone else's shoes: The game has several different classes of characters that have different strengths and weaknesses. Even if you rarely play a certain class, you get a lot from occasionally playing it to gain insight into how to cooperate with and defeat members of this class.

Addressing 1) "Learning when you're wrong" (in a more general sense):

Absolutely a good thing to do, but the problem is that you're still losing time making the mistakes. We're rationalists; we can do better.

I can't remember what book I read it in, but I read about a practice used in projects called a "pre-mortem." In contrast to a post-mortem, in which the cause of death is found after the death, a pre-mortem assumes that the project/effort/whatever has already failed, and forces the people involved to think about why.

Taking it as a given that the project has failed forces people to be realistic about the possible causes of failures. I think.

In any case, this struck me as a really good idea.

Overwatch example: If you know the enemy team is running a McCree, stay away from him to begin with. That flashbang is dangerous.

Real life example: Assume that you haven't met your goal of writing x pages or amassing y wealth or reaching z people with your message. Why didn't you?

5 · Strangeattractor · 8y
I read about pre-mortem-like questions in a book called Decisive: How to Make Better Choices in Life and Work by Chip Heath and Dan Heath.
3 · Sable · 8y
That's probably it; I read it recently. Thanks!
2 · Lumifer · 8y
Goes into the "shit LW people say" bin :-D On a tiny bit more serious note, I'm not sure the killcam is as useful as you say. It shows you how you died, but not necessarily why. The "why" reasons look like "lost tactical awareness", "lingered a bit too long in a sniper's field of view", "dived in without team support", etc. and on that level you should know why you died even without a killcam. Other lessons from Overwatch: if a cute small British girl blinks past you, shoot her in the face first :-D
4 · WalterL · 8y
"Other lessons from Overwatch: if a cute small British girl blinks past you, shoot her in the face first :-D" Pfft Rationalists play Reaper. Shoot EVERYONE IN ALL THE FACES.
2 · Lumifer · 8y
Pfft Rationalists play whatever class at the moment is convenient for shooting everyone in the face in the most speedy and efficient manner :-P
1 · WalterL · 8y
So...Reaper.
1 · James_Miller · 8y
Reaper gets relatively little value from cooperating with teammates so I hope that rationalists don't find Reaper to be the best for them.
2 · Lumifer · 8y
Cooperation is not a terminal goal. Winning the game is. If I don't see my team's Reaper (or Tracer) ever, but the rear ranks of the enemy team mysteriously drop dead on a regular basis, that's perfectly fine.
1 · James_Miller · 8y
Agreed, but if a virtue and comparative advantage of rationalists is cooperating, then our path to victory won't often involve us using Reaper or Tracer. Do you play on the Xbox?
2 · Lumifer · 8y
I'm a bit mystified by how cooperation became a "virtue and comparative advantage of rationalists". I understand why culturally, but if you start from the first principles, it doesn't follow. In a consequentialist framework there is no such thing as virtue, the concept just doesn't exist. And cooperation should theoretically be just one of the many tools of a rationalist who is trying to win. In situations where it's advantageous she'll cooperate and where it isn't she won't. Nope, I play on a PC.
3 · James_Miller · 8y
Rationality is systematized winning. If failure to cooperate keeps people like us from winning then we should make cooperation a virtue and practice it when we can. (I'm literally playing Overwatch while I answer this.)
1 · Lumifer · 8y
The situation is symmetrical: if eagerness to cooperate keeps people like us from winning then we should make non-cooperation a virtue and practice it when we can. My multitasking isn't as good :-)
0 · James_Miller · 8y
I guess it comes down to what has a higher marginal benefit, learning to cooperate or learning to succeed without cooperation.
0 · Lumifer · 8y
Why are you phrasing this as either-or? We don't need to decide whether a hammer or a screwdriver has a "higher marginal benefit", we use both as appropriate. Cooperating is conditional on it being useful, sometimes it's a good idea and sometimes it's not. Getting back to Overwatch, there are cases where you need to grab an assassin and go hunting for the enemy sniper, and there are cases where you need to be a healbot and just stand behind your tank...
0 · James_Miller · 8y
I was wrong. Reaper and Mei can greatly benefit from cooperation.
0 · RomeoStevens · 8y
I really enjoyed Blacklight: Retribution for the instant rationality training. There is literally an "update" button that lets you wallhack for a second or so. This makes you vulnerable as well, so the information has a cost. You must keep choosing between gathering information and taking actions based on your current model.

I am trying to outline the main trends in AI safety this year. May I ask for advice on what I should add to or remove from the following list?

1. Elon Musk became the main player in the AI field with his OpenAI program. But the idea of AI openness is now opposed by his mentor Nick Bostrom, who is writing an article questioning the safety of openness in the field of AI. http://www.nickbostrom.com/papers/openness.pdf Personally, I think that here we see an example of the arrogance of a billionaire. He intuitively came to an idea which looks nice, appealing, and may work... (read more)

1 · Manfred · 8y
OpenAI is significantly more nuanced than you might expect. E.g. look at interviews with Ilya Sutskever where he discusses AI safety, or consider that Paul Christiano is (briefly) working for them. Also, where did you get the description of Bostrom as "Elon Musk's mentor"?
-1 · turchin · 8y
Musk seems to be using many ideas from Bostrom: he tweets about his book on AI, and he mentions his simulation idea. I think there is a difference between the idea of open AI as Musk suggested it in the beginning and the actual work of the organisation named "OpenAI". The latter seems to be more balanced.
1 · ChristianKl · 8y
The public understanding you get from reading a few blog posts might not give a good overview of the reasons for which OpenAI was started. I think looking at the actual actions might be a better way to try to understand what the project is supposed to do.
-1 · turchin · 8y
I read that you joined OpenAI, and I think it is a good project now, but the idea of "openness of AI" was fairly criticised by Bostrom in his new article. It seems, though, that the organisation named "OpenAI" will do much more than promote openness. There is a little confusion between the name of the organisation and the idea of letting everybody run their own AI code. I joked that in the same way we could create an Open Nuke project which would deliver reactors to every household, probably resulting in a very balanced world where every household could annihilate any other household, so everybody is very polite and crime is almost extinct.
0 · ChristianKl · 8y
I have no affiliation with OpenAI. In this case I'm driven by "don't judge a book by its cover" motivations, especially in high-stakes situations. I think taking the name of an organisation as the ultimate authority on what the organisation is about is a bit near-sighted. Making good strategic decisions is complicated. It requires looking at where a move is likely to lead in the future.

The Einstein Toolkit Consortium is developing and supporting open software for relativistic astrophysics

This is a core product that you can attach modules to for the specific models you want to run. It is able to handle GR on a cosmological scale!

http://einsteintoolkit.org/

0 · MrMind · 8y
I tried to follow the link, but the whole framework (ETK + Cactus + Loni and so on...) is so scattered and so poorly documented that it discouraged me. My impression is that only those who already use Cactus intensively will know how to use the toolkit.

Say you are a strong believer in and advocate for Silicon Valley startup culture, but you want to be able to pass an Ideological Turing Test to show that you are not irrational or biased. In other words, you need to write some essays along the lines of "Startups are Dumb" or "Why You Should Stay at Your Big Company Job". What kind of arguments would you use?

7 · Daniel_Burfoot · 8y
This comment got 6+ responses, but none that actually attempted to answer the question. My goal of Socratically prompting contrarian thinking, without being explicitly contrarian myself, apparently failed. So here is my version:
* Most startups are gimmicky and derivative, even or especially the ones that get funded.
* Working for a startup is like buying a lottery ticket: a small chance of a big payoff. But since humans are by nature risk-averse, this is a bad strategy from a utility standpoint (see the toy calculation after this list).
* Startups typically do not create new technology; instead they create new technology-dependent business models.
* Even if startups are a good idea in theory, currently they are massively overhyped, so on the margin people should be encouraged to avoid them.
* Early startup employees (not founders) don't make more than large-company employees.
* The vast majority of value from startups comes from the top 1% of firms, like Facebook, Amazon, Google, Microsoft, and Apple. All of those firms were founded by young white males in their early 20s. VCs are driven by the goal of funding the next Facebook, and they know about the demographic skew, even if they don't talk about it. So if you don't fit the profile of a megahit founder, you probably won't get much attention from the VC world.
* There is a group of people (called VCs) whose livelihood depends on having a supply of bright young people who want to jump into the startup world. These people act as professional activists in favor of startup culture. This would be fine, except there is no countervailing force of professional critics. This creates a bias in our collective evaluation of the culture.
4 · Manfred · 8y
Argument thread! You should probably stay at your big company job because the people who are currently startup founders are self-selected for, on average, different things than you're selecting yourself for by trying to jump on a popular trend, and so their success is only a weak predictor of your success. Startups often cash out by generating hype and getting bought for ridiculous amounts of money by a big company. But they are very, very often, in more sober analysis, not worth this money. From a societal perspective this is bad because it's not properly aligning incentives with wealth creation, and from a new-entrant perspective this is bad because you likely fail if the bubble pops before you can sell.
1 · ChristianKl · 8y
Likely because the answers called for an ITT but provided no questions for the ITT. Both of those seem to me like failing the Ideological Turing Test. I would have a hard time thinking that the average person who works at a big company would make those arguments.
0 · Douglas_Knight · 8y
You never explained what you mean by "startup culture," nor "good." One can infer something from your arguments. But different arguments definitely appeal to different definitions of "good." In particular: good for the founder, good for the startup employee, good for the VC, and good for society. There is no reason to believe that it should be good for all of them. In particular, a belief that equity is valuable to startup employees is good for founders and VCs, but if it is false, it is bad for startup employees. If startups are good for society, it may be good for society for the employees to be deceived. But if startups are good for society, it may be a largely win-win for startups to be considered virtuous and everyone involved in startups to receive status. Isn't that the kind of thing "culture" does, rather than promulgate specific beliefs? By "startup culture" you seem to mean anything that promotes startups. Do these form a natural category? If they are all VC propaganda, then I guess that's a natural category, but it probably isn't a coherent culture. Perhaps there is a pro-startup culture that confabulates specific claims when asked. But are the details actually motivating people, or is it really the amorphous sense of virtue or status? Sometimes I see people using "startup culture" in a completely different way. They endorse the claim that startups are good for society, but condemn the current culture as unproductive.
6 · Lumifer · 8y
What exactly is the thesis in question? "Startup culture is a valuable piece of a large economy", for example, is not the same thing as "I should go and create a startup, it's gonna be great!".
5 · Douglas_Knight · 8y
Not to disagree with this exercise, but I think that the name ITT is overused and should not be applied here. Why not just ask "What are some good arguments against startups?" If you want a LW buzzword for this exercise, how about hypothetical apostasy or premortem? I think that ITT should be reserved for the narrow situation where there is a specific set of opponents and you want to prove that you are paying attention to their arguments. Even when the conventional wisdom is correct, it is quite common that the majority has no idea what the minority is saying and falsely claims to have rebutted their arguments. ITT is a way of testing this.
0 · ChristianKl · 8y
That's a different question. A good argument against startups might be that VC as an asset class doesn't outperform the stock market. On the other hand, it's unlikely that the average person working at a company would make that argument, so arguing it would fail the Ideological Turing Test.
5 · gjm · 8y
The question seems like it has more levels of indirection in it than necessary. I mean, to pass an ITT is to behave/speak/write just like someone with the views you're pretending to have. So how is "Say you believe X and want to pass an ITT by arguing not-X. What would you say?" different from "Say you believe not-X and want to defend it. What would you say?" or, even, just "What are the best arguments for not-X?"?
2 · Jiro · 8y
Being a believer in X inherently means, for a rationalist, that you think there are no good arguments against X. So this should be impossible, except by deliberately including arguments that are, to the best of your knowledge, flawed. I might be able to imitate a homeopath, but I can't imitate a rational, educated, homeopath, because if I thought there was such a thing I would be a homeopath. Yes, a lot of people extoll the virtues of doing this. But a lot of people aren't rational, and don't believe X on the basis of arguments in the first place. If so, then producing good arguments against X are logically possible, and may even be helpful. (There's another possibility: where you are weighing things and the other side weighs them differently from you. But that's technically just a subcase--you still think the other side's weights are incorrect--and I still couldn't use it to imitate a creationist or flat-earther.)

Being a believer in X inherently means, for a rationalist, that you think there are no good arguments against X.

Huh? You are proposing a very stark, black-and-white, all-or-nothing position. Recall that for a rationalist a belief has a probability associated with it. It doesn't have to be anywhere near 1. Moreover, a rationalist can "believe" (say, with probability > 90%) something against which good arguments exist. It just so happens that the arguments pro are better and more numerous than the arguments con. That does not mean that the arguments con are not good or do not exist.

And, of course, you should not think yourself omniscient. One of the benefits of steelmanning is that it acquaints you with the counterarguments. Would you know what they are if you didn't look?
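A toy log-odds calculation makes this concrete. The likelihood ratios below are invented for illustration; the point is only that a belief can sit above 90% while a genuinely good counterargument exists:

    import math

    # Invented likelihood ratios: three arguments pro, one good argument con.
    # In log-odds form, independent pieces of evidence simply add.
    pro = [4.0, 3.0, 2.0]  # each pro argument multiplies the odds by this factor
    con = [0.5]            # a real, good counterargument (2:1 against)

    log_odds = 0.0  # start from a 50/50 prior (log-odds of zero)
    for ratio in pro + con:
        log_odds += math.log(ratio)

    p = 1 / (1 + math.exp(-log_odds))
    print(f"posterior: {p:.2f}")  # odds 4*3*2*0.5 = 12:1, i.e. ~0.92

The counterargument is good (it halves the odds), yet the belief still ends up above 90% - exactly the situation described above.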

8 · Viliam · 8y
Great point! I guess the point of ITT is that even when you disagree with your opponents, you have the ability to see their (wrong) model of the world exactly as they have it, as opposed to a strawman. For example, if your opponent believes that 2+2=5, you pass ITT by saying "2+2=5", but you fail it by saying "2+2=7". From your perspective, both results are "equally wrong", but from their perspective, the former is correct, while the latter is plainly wrong. In other words, the goal of ITT isn't to develop a "different, but equally correct" map of the territory (because if you would believe in correctness of the opponent's map, it would also become your map), but to develop a correct map of your opponent's map (as opposed to an incorrect map of your opponent's map). So, on some level, while you pass an ITT, you know you are saying something false or misleading; even if just by taking correct arguments and assigning incorrect weights to them. But the goal isn't to derive a correct "alternative truth"; it is to have a good model of your opponent's mind.
3 · TimS · 8y
No good arguments, or the weight of the arguments for X are greater than the weight of the arguments against X?
-1 · Jiro · 8y
You know, I did mention weighing arguments in my post.
1 · ChristianKl · 8y
No: http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/ In high-level debating at the world debating championship, participants are generally able to give good arguments for both sides of every issue.
2 · [anonymous] · 8y
(Not that I know a thing about the subject, but are you sure this angle is exactly how an 'unbiased re: startups" person would think about it? Why not something more like, "Startups are simply irrelevant, if we get down to it"?)

I didn't realize that the biggest supporter of UBI in the US is the ex-leader of the Service Employees Union. Guess I will have to read that book next. Have Agar's 'Humanity's End' to tackle next...

http://www.alternet.org/economy/universal-basic-income-solves-robots-taking-jobs

and a write-up on why the elites don't get the Brexit drama right...

http://www.bloomberg.com/view/articles/2016-06-24/-citizens-of-the-world-nice-thought-but

0 · MrMind · 8y
That seems to be way off. Prediction markets reflect the opinion of those who enter the market. AFAIK there's no barrier to the lower-income strata of the population. Polls also failed to predict the result, so I would say that it was not a structural failure of the markets.
2 · Lumifer · 8y
The thing is, the markets reflect committed-capital-weighted opinions of market participants. This is not an egalitarian democracy.
0 · ChristianKl · 8y
Given that market participants insure against risks with the prediction market, and the event of Brexit does carry risk to some businesses, I'm not sure that's empirically the case.
0 · MrMind · 8y
Possibly we (meaning I vs. Epicurean Dealmaker) have a very different notion of 'elite'. I imagine the elite as the 10% (or 5%, or 1%, depending on your Pareto distribution) which has enough capital to hedge against market fluctuations (or enough to create them entirely); as far as I understand, ED instead means by 'elite' anyone who has enough money to invest in a market.
2 · Lumifer · 8y
I don't think this is the issue. If you invest $10m into some market position, your "opinion" literally has one million times the impact of someone who invested $10. It's not just "people who invest" vs "people who do not invest". Even among those who invest, the more capital you apply, the more your opinion matters. Markets are inherently capital-weighted and their opinion necessarily reflects the positions of the rich to a much greater degree.
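A tiny sketch of that capital-weighting effect, with invented positions (real prediction-market pricing works through trades rather than a simple average, but the weighting it produces is the same in spirit):

    # One $10m participant vs. a thousand $10 participants, as (capital, belief).
    positions = [(10_000_000, 0.30)] + [(10, 0.70)] * 1000

    capital = sum(c for c, _ in positions)
    weighted = sum(c * p for c, p in positions) / capital
    by_headcount = sum(p for _, p in positions) / len(positions)

    print(f"capital-weighted belief: {weighted:.3f}")      # ~0.300 - the whale dominates
    print(f"head-count belief:       {by_headcount:.3f}")  # ~0.700 - the crowd's view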

Are the EU regulations on algorithmic decision-making and a "right to explanation" positive for our future? Do they make a world with UFAI less likely?

[-] [anonymous] · 8y · 0 · 0

Room for improvement in Australia’s overseas development aid

Poor countries typically receive aid from many donors. In Vietnam, Australia is one of 51 multilateral and bilateral donors (Vietnam Ministry of Planning 2010). Interactions between a large number of donors and a single recipient government can have a cumulative and damaging impact. For example, in 2005, the Tanzanian government produced about 2,400 reports for the more than 50 donors operating in the country (TASOET 2005: 1). In the Pacific Islands, some senior government officials are so busy

... (read more)

In the quest to optimize my sleep, I have found over the last few days that I relax a lot more than usual. I sleep on my side, but I put a cushion between my back and the wall so that part of my weight rests on my back and part rests on the mattress of the bed.

Are there any real reasons why standard beds are flat? Or is it just a cultural custom like our standard toilet design that exists for stupid reasons?

Not that I know of. Various suggestions of sleeping with a body pillow exist. Hammocks exist. Plenty of people take naps on couches or in reclining chairs.

I wonder if it has anything to do with ease of manufacture.

I am sure you have read this: www.lesswrong.com/r/discussion/lw/mvf/

(relevant side note) Traditional Japanese beds are harder and thinner than western beds.

0 · ChristianKl · 8y
As far as I can see it doesn't discuss sleeping surfaces that aren't flat.

No, unfortunately it does not, but it has other details that might be informative.

Is post-rationalism dead? I'm following some trails and the most recent material is at least three years old.
If so, good riddance?

2 · ChristianKl · 8y
If I put the phrase into Google, one of the results is http://thefutureprimaeval.net/postrationalism/, which was written in 2015, so the phrase got used more recently than three years ago. In general the term isn't important to many of the people that Scott put under that label when he wrote his map. http://www.ribbonfarm.com/ is still alive and well. David Chapman also still writes.
0 · MrMind · 8y
That was my starting point too, but I noticed that most new content linked there specifically about PR seems to have been written pre-2015. If those authors still write, I get the impression that they are not writing about PR anymore. That makes me suspect that postrationalism was never a 'thing'.
0 · ChristianKl · 8y
Scott used the term when he drew his map, and a few people thought it described a cluster, but most of the people involved don't care for the term. It's similar to a term like Darwinism, which wasn't primarily about self-labeling.

Estimation of timing of AI risk

I want to once again try to assess the expected time until strong AI. I will estimate a prior probability of AI, and then try to update it based on recent evidence.

First, I will try to argue for the following prior probability of AI: "If AI is possible, it will most likely be built in the 21st century, or it will be proven that the task has some very tough hidden obstacles". Arguments for this prior probability:

  1. Science power argument. We know that humanity was able to solve many very complex tasks in the past, and it to

... (read more)
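For the mechanics of the update step: it is an ordinary Bayesian update, easiest in odds form. A minimal sketch; both numbers are placeholders I have assumed, since the comment's actual figures are behind the fold:

    # Placeholder numbers for an odds-form Bayesian update.
    prior = 0.50            # assumed prior P(strong AI by 2100)
    likelihood_ratio = 3.0  # assume the recent evidence is 3x likelier if AI is coming

    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    posterior = posterior_odds / (1 + posterior_odds)
    print(f"posterior: {posterior:.2f}")  # 0.75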
0 · ChristianKl · 8y
That seems to be false. Leonardo da Vinci had drafts of flying machines and it took a lot longer than 100 years to get actual flight.
0 · turchin · 8y
That is why I used the wording "typically around", to show that I meant the median time for large dedicated efforts. Leonardo's work was not continued by other scientists of the 16th century, so it was not part of a large dedicated effort. It seems that other creators of flying machines tried to invent them from scratch rather than building on Leonardo's results. Even in the 19th century, collaboration between aviation enthusiasts was very limited. They probably learned from the failed attempts of others - like, oops, flying wings do not work, let's try rotating wings. If they had collaborated more effectively, they could have arrived at a working design more quickly.
0 · ChristianKl · 8y
Then take the project of eternal life. A lot of cooperating alchemists worked on it for thousands of years.
0 · turchin · 8y
I think it is possible to make your argument even stronger: it took a hundred thousand years to go from the Stone Age to the Bronze Age. But it is clear that the "total intelligence" of humanity differs at each stage of its development, and when I spoke about a hundred years, I meant the "total intelligence" of humanity at the level of the 20th century. Anyway, in the case of AI this is an analogy, and it may not hold. The AI problem could be extremely complex or even unsolvable, but we cannot bet on that if we want to be well prepared.
0 · Volyova · 8y
Could this median time even be exponential with respect to the whole of human evolution?

A computer vision and machine learning conference is on in Vegas. Some recommended reading at the bottom:

https://sites.google.com/site/multiml2016cvpr/

and this is one guy blogging it; there must be a lot of tweeting too...

https://gab41.lab41.org/all-your-questions-answered-cvpr-day-1-40f488103076#.braqj1fdj

[-] [anonymous] · 8y · -3 · 0

Quantified hedonism - Personal Key Performance Indicators

The phrase "burn the boats" comes from the Viking practice of burning boats on the shore before invading, so they have to win and settle. No retreat - it's an inspiring analogy, but I heard it in the context of another Real Social Dynamics video, so the implication is to approach sets as if there is no retreat? Bizarre, those guys... Anyway, that RSDPapa video suggested that personal KPIs were useful. What's measured gets improved, or so the saying goes. So which KPIs should you choose? After some thou... (read more)

Thoughts on the King, Warrior, Magician, Lover archetypes?

Having been in the self-dev, PUA, systems, psychology, LessWrong, Kegan, philosophy (and other things) game for a very long time, my discerning eye suggests that some of the model is good and some is bad. My advice to anyone looking at that model is that there are equal parts shit and diamonds. If you haven't been reading in this area for 9 years you can't see what's what. Don't hold anything too closely, but be a sponge and absorb it all. Throw out the shit when you come across it and keep the diamonds.

At the end, the four KWML pages suggest various intelligent and reasonable ways to develop one's self:

  • Take up a martial art.
  • Do something that scares you.
  • Work on becoming more decisive.
  • Meditate. Especially on death.
  • Quit should-ing on yourself.
  • Find your core values.
  • Have a plan and purpose for your life.
  • Boost your adaptability by strengthening your resilience.
  • Study and practice the skills necessary for completing your goals, become a master of your trade.
  • Find the principles that you’re loyal to.
  • Establish some non-negotiable, unalterable terms (or N.U.Ts) and live by them.
  • Compete in a race like
... (read more)
0 · JohnGreer · 8y
What other resources do you support in this field, Elo?
9 · Elo · 8y
This is really hard to answer in this context, but I'd be willing to give it a shot. What problems are you working on at the moment?
0 · JohnGreer · 8y
I've done a fair amount of reading and am comfortable in the social/PUA realm but am always on the lookout for more recommended resources (especially higher-level stuff).
-2 · ChristianKl · 8y
Why do you focus on the suggestions that are also made elsewhere instead of what's unique in the King, Warrior, Magician, Lover framework?
6 · Elo · 8y
The model is meaningless beyond what it suggests you do. If I were to spend a long time understanding the whole damn model, I could possibly end up generating my own predictive set of ideas from it. Because I have not spent that time, it's easier for me to just look at the (already generated) outputs of the model and comment on the results. I am not 100% sure that all those suggestions fit within the model itself, but generally the site ends in those kinds of suggestions, as above.
0 · ChristianKl · 8y
No. If you ignore the model, you ignore the reason why people recommend King, Warrior, Magician, Lover. I don't think anybody who recommended that book to me did so because of a shallow list of recommendations that fits into a few bullet points. This is similar to how a list of bullet points about CFAR knowledge doesn't compare to evaluating the value that a CFAR workshop provides to its attendees. There's no value in forming a judgement of a model one doesn't understand by taking a shallow look at it like this. There is plenty of shallow personal-development literature out there for people who like to consume listicles, but I haven't heard any recommendations for this book from that audience - mostly from people who think and engage deeply with it.
7 · Elo · 8y
I will be delighted to hear your review when you get around to writing it up.
0 · ChristianKl · 8y
My current state is that I haven't read the full book or used the ideas in my life but I know multiple people who do, who value the ideas highly and who are generally good sources of personal development ideas.
0 · Daniel_Burfoot · 8y
I have travelled there twice, partially to scope it out for a possible move. Here are the downsides:
* It is very small, both in terms of geographical size and population. There's just not a lot of places to go or things to do.
* At the same time it is not dense, so you probably need a car.
* It is very touristy. Of the things to do, most are tourism-related.
* The tech sector is not well-developed, so a tech person like me would probably end up working as a random IT consultant for a bank or law firm or something.
As far as the upsides, you got them mostly right: strong economy, low taxes, good climate, a generally tranquil feeling of life. Overall I think there would be something enormously psychologically beneficial to live in a place where the main political debate is what to do with the budget surplus. My takeaway is: CI is a great place if you 1) are in the finance sector, 2) like "sun and fun" activities like swimming, sailing, and diving, and 3) don't have big life ambitions (e.g. starting a tech company).
0 · root · 8y
That website looks like pretty big clickbait. No footnotes either - which could be me overestimating people who use footnotes, but it might also be that whoever wrote it is trying to avoid being accused of wordplay.
0 · ChristianKl · 8y
What's wrong with simple hyperlinks to sources? The post explains ideas laid out in a book and links to the book.
-1 · root · 8y
You have a point. I'm mostly at fault here, to be honest: I'm slowly getting more and more skeptical of "stuff on the internet" (the site being called Art of Manliness already gives me certain ideological connotations), and seeing how many things that look intuitively appealing don't really yield much tasty fruit in real life, I'll often label things clickbait rather than actually put time into them.