Begin here and read up to part 5 inclusive. On the margin, getting a basic day-in, day-out wardrobe of nice, well-fitting jeans/chinos (maybe chino or cargo shorts if you live in a hot place) and t-shirts is far more valuable than hats when you're starting to approach fashion. Hats are a flair that comes after everything else in the outfit you're wearing them with. Maybe you just want to spend a few hours one-off choosing a hat and don't want to think about all the precursors, but that can actually make you backslide. If you look at their advice about hats, you'll see that pork pies and fedoras are recommended, but it's well known how badly a fedora can backfire if you aren't very careful.
(For example, I'm still in the 'trying new t-shirts/shirts/jeans/chinos/shoes with an occasional jumper purchase' phase after about a year to 18 months. Still haven't even got to shorts. You might progress faster if you shop more often or have a higher shopping budget. But suffice it to say, hats are a long way in.)
There is a known phenomenon of guys walking around in a fedora or brimmed hat or whatever with a poorly coordinated outfit, dirty clothes, odour, bad fit, etc.: basically not having the basics down before going intermediate. In these cases you will lose points with a lot of people because they will cringe or think you're trying to compensate. You may or may not have been engaging in similar thinking when making this thread, but watch out for that failure mode.
Supplementary reading, and a good place to get a yay or nay before buying something or to get recommendations within a type of garment: /r/malefashionadvice/
Fashionability and going for safety helmets/caps might be divergent strategies though. If you were purely optimizing the former, what I say above might be relevant. If the latter, just getting some Crasches and calling it a day might be enough.
Yes!! I've also independently come to the conclusion that basic real analysis seems important for these sorts of general lessons. In fact I suspect that seeing the reals constructed synthetically, or the Peano --> Integers --> Rationals --> Dedekind cuts construction, or some similar rigorous construction of an intuitively 'obvious' concept, is probably a big boost in accessing the upper echelons of philosophical ability. Until you've really seen how axioms work and broken some intuitive thing down to the level that you can see how a computer could verify your proofs (at least in principle), I kind of feel like you haven't done the work to really understand the concepts of proof or definition, or seen a really full reduction of a thing to basic terms.
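To make the 'a computer could verify it' bar concrete, here is a minimal sketch in Lean 4 (the names `MyNat` and `add` are my own toy stand-ins, not anything from a textbook construction): a Peano-style definition of the naturals and of addition, with '2 + 2 = 4' checked mechanically from the definitions alone.

```lean
-- A toy Peano-style reconstruction: natural numbers as zero/successor,
-- addition by structural recursion, and a closed fact verified by the kernel.
inductive MyNat where
  | zero : MyNat
  | succ : MyNat → MyNat

def add : MyNat → MyNat → MyNat
  | n, MyNat.zero   => n
  | n, MyNat.succ m => MyNat.succ (add n m)

-- "2 + 2 = 4", reduced to the definitions above and checked by computation.
example :
    add (MyNat.succ (MyNat.succ MyNat.zero)) (MyNat.succ (MyNat.succ MyNat.zero))
      = MyNat.succ (MyNat.succ (MyNat.succ (MyNat.succ MyNat.zero))) := rfl
```

Even this tiny exercise forces you to notice which steps are definitions, which are axioms, and which are computations, which is the kind of 'full reduction' I mean.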
What specifically did you mean here?
What I mean is: if you have the resources (time, energy, etc.) to do so, consider trying to get the data where the script returned '0' values because the source you used didn't have that bit of data. But make it clear that you've done independent research where you find the figures yourself, so that the user realises it's not from the same dataset. And failing that, e.g. if there just isn't enough info out there to put a figure on it, state that you looked into it but there isn't enough data. (This lets the user distinguish between 'maybe the data just wasn't in the dataset' versus 'this info doesn't even exist, so I shouldn't bother looking for it'.)
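Something like the following sketch is what I have in mind for tagging provenance (all names, job labels and figures here are invented for illustration, not taken from your data):

```python
# Tag each figure with where it came from, so the UI can distinguish
# "from the source dataset", "independently researched", and
# "looked into it, but no reliable figure exists".
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Provenance(Enum):
    DATASET = "from the source dataset"
    RESEARCHED = "independently researched"
    NO_DATA = "looked into it; no reliable figure exists"

@dataclass
class Figure:
    value: Optional[float]
    provenance: Provenance

# Toy entries; the job names and numbers are made up.
median_salary_at_40 = {
    "Job A": Figure(72_000, Provenance.DATASET),
    "Job B": Figure(55_000, Provenance.RESEARCHED),  # a '0' from the script, filled in by hand
    "Job C": Figure(None, Provenance.NO_DATA),       # searched, nothing usable found
}
```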
I think the big problem with trying to determine "related jobs" is that, more often than not, in the actual job market, the relationship between similar jobs exists in name only.
Sure, it would again be more resource-intensive, but I was thinking you could figure out yourself which careers are actually related, or ask people in those fields what they think the core parts of their job actually are and which other jobs they'd relate it to.
I like the graph that shows salary progression at every age. Often career advice just gives you the average entry figure and the average and peak senior figures, which seems predicated on the 'career for life' mentality that locks people into professions they dislike. Suggestions, to act on or not as you see fit, no reply necessary:
- Add the ability to compare multiple jobs simultaneously.
- Make a note saying the graph will appear once you pick a job, or have it pop up by default for a default job.
- Center the numerical figures in their cells.
- Make the list of jobs and/or the list of categories searchable, and associate search keywords with jobs. For example, if I want to find 'Professor', it seems to come under postsecondary teachers, which isn't something I would have thought of without trawling the list of educators, but I would have found it if I could search for 'Professor' and get that result returned. (See the keyword-search sketch after this list.)
- 'Actuaries', 'Statisticians' and 'Mathematicians' seem to have duplicate entries. Check the database for other duplicates by querying for rows where job names coincide (see the duplicate-check sketch after this list).
- Have the graph update to say which job you're currently looking at, so the user can be sure it's updated.
- When hovering on the graph, have the box say e.g. 'Age 40' rather than just '40', to make it obvious what '40' refers to.
- When hovering on the graph, have the order of the figures in the box correspond to the order on the graph, i.e. give the upper, then median, then bottom figures rather than the opposite, as it currently is.
- Track down the figures where you don't have data, or establish that there is not enough data, and let the user know which is the case so they know the provenance of researched or omitted figures.
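On the keyword-search suggestion, a minimal sketch of what I mean (the keyword lists and job names are invented, not from your actual tool):

```python
# Attach search keywords to job entries so a query like "professor"
# surfaces "Postsecondary teachers".
job_keywords = {
    "Postsecondary teachers": ["professor", "lecturer", "academic"],
    "Software developers": ["programmer", "coder", "software engineer"],
}

def search_jobs(query, keywords=job_keywords):
    q = query.lower()
    return [
        job for job, kws in keywords.items()
        if q in job.lower() or any(q in kw for kw in kws)
    ]

print(search_jobs("professor"))  # ['Postsecondary teachers']
```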
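And on the duplicate-check suggestion, a sketch of the kind of query I have in mind, here done with pandas on an invented stand-in for your job table (the column name and figures are assumptions):

```python
import pandas as pd

# Toy stand-in for the tool's job table; the column name 'job_name'
# and the salary figures are made up for illustration.
jobs = pd.DataFrame({
    "job_name": ["Actuaries", "Statisticians", "Actuaries", "Mathematicians"],
    "median_salary_40": [130_000, 95_000, 128_000, 105_000],
})

# Any job name appearing more than once is a candidate duplicate entry.
duplicates = jobs[jobs.duplicated(subset="job_name", keep=False)]
print(duplicates.sort_values("job_name"))
```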
In general, I think a lot of the time the user will want to come in from an angle of having relatively specific jobs in mind and going from there, rather than working from broad categories to increasingly specific jobs. I'm not immediately sure if or how this should cash out into specific suggestions, though. But maybe something to bear in mind while you're developing the product. Perhaps you could have a mode like the current one and a 'wandering' mode where you start with a specific job then have it compared and linked to related or similar jobs (where the relational and similarity data would have to be put into the database somehow). Maybe a graph interface with nodes?
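For the 'wandering' mode, one very rough way the relatedness data could be stored is as a graph of jobs with adjacency sets; the job names and links below are invented purely for illustration:

```python
# Store job relatedness as a graph so a 'wandering' mode can hop from a
# specific job to its neighbours (which would be rendered as linked nodes).
related_jobs = {
    "Actuaries": {"Statisticians", "Financial analysts"},
    "Statisticians": {"Actuaries", "Mathematicians", "Data scientists"},
    "Mathematicians": {"Statisticians", "Postsecondary teachers"},
}

def neighbours(job, graph=related_jobs):
    """Jobs one hop away from `job`."""
    return sorted(graph.get(job, set()))

print(neighbours("Statisticians"))  # ['Actuaries', 'Data scientists', 'Mathematicians']
```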
Thanks to Luke for his exceptional stewardship during his tenure! You'll be awesome at GiveWell!
And Nate, you're amazing for taking a level and stepping up to the plate in such a short period of time. It always sounded to me like Luke's shoes would be hard for a successor to fill, but seeing him hand over to you, I mysteriously find that worry is distinctly absent! :)
I used to have an adage to the effect that if you walk away from an argument feeling like you've processed it before a month has passed, you're probably kidding yourself. I'm not sure I would take such a strong line nowadays, but it's a useful prompt to bear in mind. Might or might not be related to another thing I sometimes say, that it takes at least a month to even begin establishing a habit. While a perfect reasoner might consider all hypotheses in advance or be able to use past data to test new hypotheses, in practice it seems to me that being on the lookout for evidence for or against a new idea is often necessary to give the idea a fair shake, which feels like a very specific case of noticing (namely, noticing when incoming information bears on some new idea you heard and updating).
This premise sounds interesting, but I feel like concrete examples would really help me be sure I understand it.
I didn't follow everything in the post, but it seems like the motivating problem is that UDT fails in an anti-Newcomb problem defined in terms of the UDT agent. But this sounds a lot like a fully general counterargument against decision algorithms: for any algorithm, we can construct a decision problem that penalizes exactly that agent and only that agent. Take any algorithm running on a physical computer and place it in a world where we specify, as an axiom, that any physical instantiation of that algorithm is blasted by a proton beam as soon as it begins to run, before it can act. This makes any algorithm look bad, but it's just a no-free-lunch argument: every algorithm fails in some worlds, especially ones where the deck is stacked against it by defining it to lose.
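To make the construction concrete, here is a toy sketch of a problem that penalizes one particular agent by fiat; everything in it (the agent, the payoffs, the 'one-box' action) is invented for illustration and isn't meant to capture UDT itself:

```python
def make_diagonal_problem(target_agent):
    """A decision problem defined in terms of the very agent being graded."""
    def payoff(agent, action):
        if agent is target_agent:
            return -10**6  # the 'proton beam': this agent loses no matter what it does
        return 1 if action == "one-box" else 0
    return payoff

def udt_like_agent():
    return "one-box"

payoff = make_diagonal_problem(udt_like_agent)
print(payoff(udt_like_agent, udt_like_agent()))  # -1000000: the targeted agent always loses
print(payoff(lambda: "one-box", "one-box"))      # 1: any other agent taking the same action does fine
```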
A decision process 'failing' in some worlds is a problem only if there exists some other decision process that does not suffer analogously. (Possible open problem: What do we mean by analogous failure or an analogously stacked world?)
Of course, it's possible this objection is headed off later in the post where I start to lose the train of thought.
I wondered about this too before I tried it. I thought I had a higher-than-average risk of being very sensitive to my own perspiration/sheddings. But I haven't detected any significant problems on this front after trying it. It goes both ways: now that I know I'm not very sensitive to my own trouser sweat, I can wear trousers for longer after each wash (i.e. each exposure to potentially irritant laundry products), which possibly reduces the risk of skin problems from the laundry products (another problem that I think I have a higher-than-average chance of having; the two aren't mutually exclusive).
(Insert disclaimer about this maybe being very dependent on lots of factors, e.g. maybe I'll move to another city with an imperceptibly different climate and get screwed over by wearing jeans for more than a day.)
Why can't the deduction be the evidence? If I start with a 50-50 prior that 4 is prime, I can then use the subsequent observation that I've found a factor to update downwards. This feels like it relies on the reasoner's embedding, though, so maybe it's cheating, but it's not clear to me, in a non-confusing way, why it doesn't count.
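As a toy rendering of the update I have in mind (the error rates below are made up; the point is just that 'my factoring routine reported a factor of 4' can be treated as an observation and fed through Bayes):

```python
# Treat the deduction "I found a nontrivial factor of 4" as an observation E
# and update a 50-50 prior on "4 is prime".
prior_prime = 0.5
p_report_factor_given_prime = 0.01      # the routine misfires rarely (assumed)
p_report_factor_given_composite = 0.99  # the routine usually finds the factor (assumed)

posterior_prime = (p_report_factor_given_prime * prior_prime) / (
    p_report_factor_given_prime * prior_prime
    + p_report_factor_given_composite * (1 - prior_prime)
)
print(posterior_prime)  # ~0.01: the deduction, viewed as an observation, drives the update
```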