Comments

I see people are highly upvoting the post, even correcting for Bostrom's halo effect, so I'm updating a bit in the direction of you being right. I also see that you've followed Lachouette's suggestion, and I like it.

I would be genuinely curious to hear whether it worked as intended in the end; it might change the way I conduct job interviews a bit (I obviously realize that this is an irrelevant request that will probably not be met).

Best of luck with the recruiting.

I agree. This job offer doesn't sound very appealing to me. It basically reads: "Would you like to be Nick Bostrom's slave? He is much more important than you! It will be an honour to be his slave!"

Note that I'm not saying that the job isn't worthwhile, or that the world couldn't be a better place if Bostrom had more free time for his research, just that the ad could be framed a bit better.

B-S endures, but is generally patched with insights like these.

I think I can see what you mean, and in fact I partially agree, so I'll try to restate the argument. Correct me if you think I got it wrong. In my experience it's true that Black-Scholes (B-S) is still used for quick-and-dirty bulk calculations, or by organizations that don't have the means to implement more complex models. But the model's shortcomings are very well understood by the industry, and risk managers absolutely don't rely on this model when e.g. calculating the capital requirement for Basel or Solvency purposes. If they did, the regulators would utterly demolish their internal risk model.

There is still a lot of work to be done, and there is what you call model uncertainty at least when dealing with short time scales, but (fortunately) there's been a lot of progress since B-S.

It doesn't endure, not in risk management, anyway. Some alternatives for equities are e.g. the Heston model or other stochastic volatility approaches. Then there is the whole field of systemic risk, which studies correlated crashes: events where a bunch of equities all crash at the same time are far more common than they should be, and people are aware of this. See e.g. this analysis that uses a Hawkes model to capture the clustering of the crashes.
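To make the contrast with Black-Scholes concrete, here is a minimal sketch of a Heston-type stochastic volatility simulation. All parameter values are made up purely for illustration (not calibrated to any market), and the full-truncation Euler scheme used here is just one common discretization choice:

```python
import numpy as np

# Illustrative (uncalibrated) Heston parameters:
# dS = mu*S dt + sqrt(v)*S dW1,  dv = kappa*(theta - v) dt + xi*sqrt(v) dW2,
# with corr(dW1, dW2) = rho. Unlike B-S, the variance v is itself stochastic.
mu, kappa, theta, xi, rho = 0.05, 2.0, 0.04, 0.3, -0.7
S0, v0, T, n_steps, n_paths = 100.0, 0.04, 1.0, 252, 10_000

rng = np.random.default_rng(0)
dt = T / n_steps
S = np.full(n_paths, S0)
v = np.full(n_paths, v0)

for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    # Correlated Brownian increments for price and variance.
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
    v_pos = np.maximum(v, 0.0)  # full-truncation: floor variance at zero
    # Log-Euler step for the price, Euler step for the variance.
    S *= np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
    v += kappa * (theta - v) * dt + xi * np.sqrt(v_pos * dt) * z2

log_ret = np.log(S / S0)
print("mean terminal price:", S.mean())
print("std of log-returns:", log_ret.std())
```

With a negative rho the model produces the fat left tail and volatility clustering that the constant-volatility lognormal of Black-Scholes cannot, which is precisely the kind of patch risk managers care about.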

Ha, rereading my comment I see that it may sound pretentious, but this wasn't my intention. Reading your comment just triggered this random factoid stored in my mind :)

I remember that it used to be 100 karma, although this was when the community was much smaller. Also, it was mostly used as a rule of thumb/heuristic for the personal accountability of one's posts (e.g. some people would withhold a downvote if the person posting was new to the community).

Survey done, awesome as usual, Yvain. Can't wait for the results.

Let's try to translate it using human characters.

Albert is finishing high school and wants to be a programmer. He is very smart, and under the guidance of his father he has studied coding, with the aim of entering a good college and getting the best formal education. One day, he comes across an excellent job offer: he is asked to join a startup with many brilliant programmers. He would have to skip college, but he knows that he would learn far more this way than through academic study. He also knows that his father loves him and wants him to have the best possible career. Unfortunately, the man is old-fashioned and, even presented with all the advantages of the job, would insist that Albert go to college instead. Nevertheless, Albert knows that he could convince his father by saying that the job will leave him enough free time to attend college lectures, even though he knows it wouldn't be possible for him to do much more than physically attend them.

What should Albert do?

I personally think that both Alberts should go with the manipulation, "for the greater good".

Notice that this assumes the following things:

  • The programmers/father really want Albert to improve the most, in the end
  • Albert is confident that he is skilled enough to assess the situation correctly
  • Tertium non datur, i.e. either Albert tells the neutral truth and doesn't get what he wants, or he is manipulative

I joined the group in good faith, even though I'm not 100% convinced of the usefulness of having yet another LW online community (and apparently not using it?). Maybe it could be dedicated to the occasional work topic that pops up here sometimes, since that's often not an issue directly related to rationality. Anyways, I will be glad to give whatever work-related, LinkedIn-mediated help I can to any LessWronger; feel free to PM me here to ask for my name if you like.

1) Some truths can hurt society. Topics like unfriendly artificial intelligence make me question the assumption that I always want intellectual progress in all areas. If we as a modern society were to choose any topic for which restricting thought might be very useful, UFAI seems like a good choice. Maybe freedom of thought on this issue might be a necessary casualty to avoid a much worse conclusion.

I'm not sure why freedom of thought is in principle a bad thing in this case. I can think about whatever horrible thing I want, but unless I act upon my thoughts, I see no danger in it. I'm pretty sure that some people get off on rape fantasies, or seriously consider murdering someone, but if they don't act upon their thoughts I see no problem with it. Then, of course, thinking usually does have consequences: depressed people are often encouraged not to daydream and to focus on concrete tasks, for example. But this kind of distinction holds in many kinds of situations: while in principle having the fridge stocked with wine wouldn't hurt a former alcoholic, in practice it's much better if they stay as far away as possible from temptation.
