All of Causal Chain's Comments + Replies

I interpret upvotes/downvotes as

  1. Do I want other people to read this post?
  2. Do I want to encourage the author and others to write more posts like this?

And I favour this post for both of those reasons.

I agree that this post doesn't make a philosophical argument for its position, but I don't require that of every post. I value it as an observation of how the EA movement has affected this particular person, and as criticism.

A couple of strongly Anti-EA friends of mine became so due to a similar moral burnout, so it's particularly apparent to me how little emphasis is put on mental health.

2Chris_Leong
Just to make my position really clear: I never said this post needed to make a philosophical argument for its position, rather that if a post wasn't a philosophical argument we shouldn't judge it by the standards we apply to a philosophical argument. Then I tried to figure out an alternative standard by which to judge this post.

This dynamic reminds me of arguments-as-soldiers from The Scout Mindset. If people are used to wielding arguments as soldiers on themselves, then it seems relatively easy to extend those patterns to reasoning with others.

Testing this hypothesis seems tricky. One avenue is "people with more internal conflicts are more predisposed to the soldier mindset". I can see a couple of in-model ways for this to be untrue, though.

Some typos:

rationality lessons we've accumulated and made part of our to our thinking

Seems like some duplicated words here.

weird idea like AIs being power and dangerous in the nearish future.

 Perhaps: "weird ideas like AIs being powerful and dangerous"

Similar to being able to reply freely to comments on our posts, it would be nice if we could reply freely to comments on our own comments.

8Raemon
I get why that feels reasonable, but I think that's a fairly different situation. One of the main points of all this is so that post authors can have the kind of discussions they want on their posts, and one of the primary ways LessWrong often feels annoying is to have people asking confused 101 questions that then spawn a distracting conversation on what was supposed to be a 201 or 401 level post.

This seems like a reasonable mechanism, but I thought we already had one: belief-in-belief makes it easier to lie without being caught.

The phrase "the map is not the territory" is not just a possibly conceivable map, it's part of my map.

Thinking in terms of programming, it's vaguely like I have a class instance s where one of the elements p is a pointer to the instance itself. So I can write *(s.p) == s. Or go further and write *(*(s.p).p) == s.
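The self-pointing structure described above can be sketched in a few lines of Python. The class name here is my own invention; the names `s` and `p` follow the analogy in the comment:

```python
class SelfMap:
    """A structure whose element p points back at the instance itself,
    like a map that contains a reference to itself."""
    def __init__(self):
        self.p = self  # p points at this very instance

s = SelfMap()
assert s.p is s        # analogous to *(s.p) == s
assert s.p.p is s      # and *(*(s.p).p) == s, indefinitely
```

The chain of dereferences never bottoms out in something outside the structure, which is what makes it a sketch of self-reference rather than of containment.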

As far as I want with only the tools offered to me by my current map.

1M. Y. Zuo
Did you intend to answer the above question? If so, I don’t quite follow your programming analogy. 

My immediate mental response was that I value this post, but it doesn't fit with the mood of LessWrong. Which is kind of sad, because this seems practical. But this is heavily biased by how upvotes are divvied out, since I typically read highly-upvoted posts.

It seems less likely to maximize my happiness or my contribution to society, but it doesn't make me not want it.

I thought this was clear to me, but then I thought some more and I no longer think it's straightforward. It pattern matched against

  • high value vs low probability
  • personalities are inbuilt
...

This reminds me of dual N-back training. Under this frame, dual N-back would improve your ability to track extra things. It's still unclear to me whether training it actually improves mental skills in other domains.

The improvement to my intuitive predictive ability is definitely a factor in why I find it comforting. I don't know what fraction of it is aesthetics; I'd say a poorly calibrated 30%. Like maybe it reminds me of games where I could easily calculate the answer, so my brain assumes I am in that situation as long as I don't test that belief.

I'm definitely only comparing the sizes of changes to the same stat. My intuition also assumes diminishing returns for everything except defense, which has accelerating returns - and knowing the size of each step helps inform this.

1Dweomite
That seems opposed to what Linda Lisefors said above: you like the idea that you could calculate an answer if you chose to, while Linda thinks the inability to calculate an answer is a feature. (Nothing wrong with the two of you wanting different things. I am just explicitly de-bucketing you in my head.)

My model says that the trend in modern games is towards defense having diminishing returns (or at least non-escalating returns), as more developers become aware of that as a thing to track. I think of armor in WarCraft 3 as being an early trendsetter in this regard (though I haven't gone looking for examples, so it could be that's just the game I happened to play rather than an actual trendsetter).

I am now explicitly noticing this explanation implies that my model contains some sort of baseline competence level of strategic mathematics in the general population that is very low by my standards but slowly rising, and that this competence is enough of a bottleneck on game design that this rise is having noticeable effects. This seems to be in tension with the "players just don't want to multiply" explanation.

 > Offering the player a choice between +5 armor and +10 accuracy implies that the numbers "5" and "10" are somehow expected to be relevant to the player.

When I imagine a game which offers "+armor" or "+accuracy" vs a game which offers "+5 armor" or "+10 accuracy", the latter feels far more comfortable even if I do not intend to do the maths. I suspect it gives something for my intuition to latch onto, to give me a sense of scale.

1Dweomite
Do you mean that it's more comfortable because you feel it provides some noticeable boost to your ability to predict game outcomes (even without consciously doing math), or is it more of an aesthetic preference where you like seeing numbers even if they don't provide any actual information? (Or something else?)

If you're applying a heuristic anything like "+10 accuracy is probably bigger than +5 armor, because 10 is bigger than 5", then I suspect your heuristic is little better than chance. It's quite common for marginal-utility-per-point to vary greatly between stats, or even within the same stat at different points along the curve.

If you're strictly using the numbers to compare differently-sized boosts to the same stat (e.g. +10 accuracy vs +5 accuracy) then that's reasonably safe.

Possible typo

Here is our last set of flashcards to provide you with the key takeaways from the section “Project-based Learning.”

Should be

Here is our last set of flashcards to provide you with the key takeaways from the section “put your learning into practice.”

1spencerg
Thank you! Fixed :)

Well, you check whether it's a multiple of every prime below sqrt(11009) ~= 105.

Though if you suspect he's intentionally chosen a tricky number, a product of two large primes, you can look at the square numbers larger than 11009. In this case 11025 = 105^2, and 11025-11009 = 16 = 4^2, so 11009 = 105^2 - 4^2 = (105+4)(105-4) = 109×101
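Both approaches above can be sketched in a few lines of Python; the square-difference trick is Fermat's factorization method. The function names are my own:

```python
import math

def trial_division(n):
    """Check divisibility by every candidate up to sqrt(n);
    return the first factor pair found, or None if n is prime."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return (d, n // d)
    return None

def fermat_factor(n):
    """Fermat's method: find a, b with n = a^2 - b^2 = (a-b)(a+b).
    Fast when n is a product of two primes close together."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1              # smallest a with a^2 >= n (105 for 11009)
    while True:
        b2 = a * a - n      # 105^2 - 11009 = 16
        b = math.isqrt(b2)
        if b * b == b2:     # 16 is a perfect square, b = 4
            return (a - b, a + b)
        a += 1

print(trial_division(11009))  # (101, 109)
print(fermat_factor(11009))   # (101, 109)
```

For 11009 the Fermat loop succeeds on its very first candidate, which is exactly what makes the "tricky number" easy once you suspect its shape.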