Comments

The blue line stops suddenly because that's when the last comment was posted. I was kind of lazy with this graph: I did add labels and a legend, but apparently I was too out of it to realise they didn't show in the PNG.

As gwillen said, the x-axis is in minutes.

Sorry about not having units; I added code to set them, but apparently it was the wrong code and I wasn't paying enough attention.

The green line is total comments; the blue line is top-level comments. The x-axis is minutes, the y-axis is the number of comments.

So I did what you suggested and plotted the number of top-level posts and total posts over time. The attached graph is averaged over the last 20 open threads. Code available here: https://gist.github.com/TRManderson/6849ab558d18906ede40
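For reference, here's a minimal matplotlib sketch of the fix for the missing labels and legend (the data here is made up for illustration; the real numbers come from the gist linked above):

```python
import matplotlib
matplotlib.use("Agg")  # render to file without needing a display
import matplotlib.pyplot as plt

# Hypothetical data: cumulative comment counts per minute,
# averaged over the last 20 open threads.
minutes = list(range(300))
total_comments = [0.10 * m for m in minutes]
top_level = [0.03 * m for m in minutes]

fig, ax = plt.subplots()
ax.plot(minutes, total_comments, color="green", label="Total comments")
ax.plot(minutes, top_level, color="blue", label="Top-level comments")
ax.set_xlabel("Minutes since thread opened")
ax.set_ylabel("Number of comments")
ax.legend()  # must come after the labelled plot() calls
fig.savefig("open_thread_comments.png")  # labels are baked into the PNG
```

The key point is that `set_xlabel`, `set_ylabel`, and `legend` have to be called before `savefig`, or the saved PNG will silently lack them.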

I don't trust myself to do any analysis, so I delegate that task to you lot.

EDIT: Changed GitHub repo to a gist

That's not quite the law of the excluded middle. In your first example, leaving isn't the negation of buying the car; it's just another possibility. Tertium non datur would be "He will either buy the car or he will not buy the car." It applies outside formal systems, but the possibilities outside a formal system are rarely negations of one another. If I'm wrong, can someone tell me?
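The distinction can be made concrete with a truth-table check: a disjunction of two arbitrary possibilities isn't a tautology, but a statement and its negation is (a sketch; the `tautology` helper is mine, not from any comment here):

```python
from itertools import product

def tautology(f, n):
    """Check whether a boolean function of n variables is true in every case."""
    return all(f(*vals) for vals in product([True, False], repeat=n))

# "He will buy the car or he will leave": not a tautology, since
# leaving is merely another possibility, not the negation of buying.
assert not tautology(lambda buys, leaves: buys or leaves, 2)

# Tertium non datur: "He will buy the car or he will not buy the car."
assert tautology(lambda buys: buys or not buys, 1)
```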

Still, planting the "seed of destruction" definitely seems like a good idea, although I'd be cautious about specifying only one event that would trigger it. This idea basically amounts to ensuring beliefs are falsifiable.

Does the average LW user actually maintain a list of probabilities for their beliefs? Or is Bayesian probabilistic reasoning just some gold standard that no one here actually practises? If the former, what kinds of things do you have on your list?

Thanks. Just going to clarify my thoughts below.

Because doing so will lead to worse outcomes on average.

In specific instances, avoiding the negative outcome might be beneficial, but only for that instance. If you constantly settle for less-than-optimal outcomes because they're less risky, your utility will average out to less than optimal.

The terminology "non-linear valuation" seemed to me to imply something exponential or logarithmic; I think "subjective valuation" or "subjective utility" might be better here.

Is there any reason we don't include a risk aversion factor in expected utility calculations?

If there is an established way of considering risk aversion, where can I find posts/papers/articles/books regarding this?
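One standard answer (my sketch, not from the original discussion): risk aversion usually isn't modelled as an extra factor at all, but by making the utility function concave, e.g. logarithmic, so a risky gamble is worth less than a sure thing with the same expected money:

```python
import math

def expected_utility(lottery, utility):
    """Expected utility of a lottery given as (probability, payoff) pairs."""
    return sum(p * utility(x) for p, x in lottery)

sure_thing = [(1.0, 100)]          # $100 for certain
gamble = [(0.5, 50), (0.5, 150)]   # same expected money ($100), more risk

# With linear utility the two are indifferent...
assert expected_utility(sure_thing, lambda x: x) == \
       expected_utility(gamble, lambda x: x)

# ...but with concave (log) utility the sure thing wins: risk aversion
# falls out of the curvature, with no separate "risk aversion factor".
assert expected_utility(sure_thing, math.log) > \
       expected_utility(gamble, math.log)
```

This is the von Neumann–Morgenstern picture; the curvature of the utility function is where the risk attitude lives.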

Just found this in a search for "Brisbane". I'd show up, and maybe bring a friend who is a non-LW rationalist.

It's likely that Eliezer isn't taking a side in the nature vs. nurture debate, and as such isn't claiming that either nature or nurture does the work of generating preferences.

Neither finite differences nor calculus is new to me, but I hadn't noticed the connection between the two until now, and it really is obvious.

This is why I love mathematics - there's always a trick hidden up the sleeve!
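The connection can be checked numerically (a sketch using f(x) = x² as the example): the forward difference quotient tends to the derivative as the step shrinks, and repeated differencing of a polynomial sequence bottoms out at a constant, just as repeated differentiation does:

```python
def forward_difference(f, x, h):
    """Finite-difference approximation to f'(x)."""
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 2  # f'(x) = 2x, so f'(3) = 6

# The difference quotient approaches the derivative as h -> 0.
assert abs(forward_difference(f, 3, 1e-6) - 6) < 1e-4

# Discrete analogue: differencing the sequence n^2 once gives the
# "discrete derivative" 2n + 1; differencing again gives a constant,
# just as the second derivative of x^2 is the constant 2.
squares = [n ** 2 for n in range(6)]                   # [0, 1, 4, 9, 16, 25]
first = [b - a for a, b in zip(squares, squares[1:])]  # [1, 3, 5, 7, 9]
second = [b - a for a, b in zip(first, first[1:])]     # [2, 2, 2, 2]
assert second == [2, 2, 2, 2]
```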
