
Comment author: SilentCal 08 June 2016 05:50:37PM *  3 points [-]

Hacking aromanticism is the wrong framing for this, IMO--fighting romantic insecurity has much wider applicability. You can be open to relationships and still want to be able to be single without feeling like a failure.

Comment author: SilentCal 27 May 2016 08:28:51PM 0 points [-]

The trouble with judging ideas by their proponents is that there could be confounders. For instance, if intelligent people are more often in white-collar jobs than blue-collar ones, intelligent people might tend to favor laws benefiting white-collar workers even when those laws aren't objectively better. Even selecting for benevolence might not be enough--maybe benevolent people tend to go into government, and people who are benevolent by human standards are still highly ingroup-biased. Then you'd see more benevolent people tending to support more funds and power going to the government, whether or not that's a good idea.

Comment author: Stuart_Armstrong 25 May 2016 12:16:44PM 1 point [-]

Mary certainly experiences something new, but does she learn something new? Maybe for humans. Since we use empathy to project our own experiences onto those of others, humans tend to learn something new when they feel something new. If we already had perfect knowledge of the other, it's not clear that we learn anything new, even when we feel something new.

Comment author: SilentCal 27 May 2016 08:17:56PM 0 points [-]

Agreed, with the addendum that human intuition has trouble fathoming the 'perfect knowledge of the other' scenario. If seeing red caused Mary to want to see more color, we'd be tempted to describe it as her 'learning' the pleasure of color, whether or not Mary's predictions about anything changed.

Comment author: SilentCal 22 April 2016 04:39:25PM 2 points [-]

Epistemic status: devil's advocate

The web browser is your client, because the display is the content.

Why did web forums, rather than closed NNTP networks, become the successor to Usenet? One possibility is that the new internet users weren't savvy enough to install a program without having CDs of it stacked on every available public surface. But another is that web sites, able to provide a look and feel appropriate to their community, plainly outcompeted networks of plaintext content. The advantages aren't necessarily just aesthetic; UI 'nudges' might guide users to pay attention to the same things at the same times, allowing a more coordinated and potentially more tailored set of discussion norms.

Notice that on mobile, users have rejected the dominance of the browser--in favor of less standardization and interoperability, via native apps that dispense with HTML.

Put another way, a web community does have a comparative advantage at designing a UI for itself, because UIs are not interchangeable.

Comment author: Lumifer 21 April 2016 09:22:17PM 1 point [-]

It is an observation that expected utility maximization does not include risk management for free just because it's "utility".

Comment author: SilentCal 21 April 2016 09:47:15PM 0 points [-]

I'm still not sure which line you're taking on this: A) Disputing the VNM formulation of rational behavior, under which a rational agent should maximize expected utility (https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem), or B) Disputing that we can write down an approximate utility function accurate enough to capture our risk preferences.

Comment author: Lumifer 21 April 2016 07:27:30PM 1 point [-]

> Why would maximizing expectation on a concave utility function lead to losing your shirt?

Because you're ignoring risk.

The expectation is a measure of a distribution's central tendency. If that's the only thing you look at, you have no idea about the width of your distribution. How long and thick is that left tail which is curling around, preparing to bite you in the ass? Um, you don't know.
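To make the tail point concrete, here's a minimal sketch (Python with numpy; both distributions are invented for illustration) of two return streams with nearly the same expectation but very different left tails:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two made-up return distributions with nearly identical means.
safe = rng.normal(loc=0.05, scale=0.02, size=n)
risky = rng.normal(loc=0.07, scale=0.02, size=n)
# Give the "risky" one a 2% chance of a -90% crash:
# its mean barely moves (0.98*0.07 + 0.02*(-0.90) ~ 0.05),
# but its left tail becomes monstrous.
risky = np.where(rng.random(n) < 0.02, -0.90, risky)

for name, r in [("safe", safe), ("risky", risky)]:
    p5 = np.percentile(r, 5)
    print(f"{name}: mean={r.mean():.3f}  5th pct={p5:.3f}  "
          f"mean of worst 5%={r[r <= p5].mean():.3f}")
```

Looking only at the means, the two are indistinguishable; the tail statistics are where they come apart.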

Comment author: SilentCal 21 April 2016 08:42:35PM 0 points [-]

Is that a critique of expected utility maximization in general, or are you saying that concave functions of wealth aren't risk-averse enough?

Comment author: Lumifer 20 April 2016 02:22:10PM 2 points [-]

> Maximizing expected log(wealth) is very different from maximizing expected wealth.

Yes, you are right. However, even a log utility function does not let you escape a Pascal's mugging (you just need bigger numbers).

> A log utility function is much more risk-averse.

Risk aversion (in reality) does not boil down to a concave utility function. So the OP's claim that a well-defined utility function will fully determine the optimal risk-reward tradeoff is still false.
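A back-of-the-envelope sketch of the "bigger numbers" point, with all quantities invented for illustration: under log utility the payoff a mugger must promise grows exponentially in 1/p, but it is still finite.

```python
import math

w0 = 10_000.0   # current wealth (made-up)
demand = 100.0  # what the mugger asks for
p = 1e-9        # probability you assign to the promised payoff

# Accepting beats refusing iff
#   p*log(w0 - demand + W) + (1 - p)*log(w0 - demand) > log(w0).
# Solving for log of the promised payoff W (approximately, since W >> w0):
needed_log_W = (math.log(w0) - (1 - p) * math.log(w0 - demand)) / p
print(f"linear utility: mugger must promise > {demand / p:.1e} dollars")
print(f"log utility:    mugger must promise ~ exp({needed_log_W:.2e}) dollars")
```

With linear utility the threshold is a mere 10^11 dollars; with log utility it balloons to roughly exp(10^7). Absurdly larger, but a sufficiently shameless mugger just names the bigger number.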

Comment author: SilentCal 21 April 2016 07:12:28PM 0 points [-]

Why would maximizing expectation on a concave utility function lead to losing your shirt? It seems like any course of action that predictably leads to losing your shirt is self-evidently not maximizing expected concave utility, unless it's a Pascal's-mugging-type scenario. I don't think there are credible Pascal's muggings in the world of personal finance, and if there are, I'd be willing to accept an ad hoc axiom that limits the theory to more conventional investments.

Now, I'll admit it's possible we should have a loss-averse utility function, but we can do that without abandoning the mathematical approach--just add a time derivative of wealth, or something.
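One hypothetical way to cash out the "time derivative" suggestion (a sketch of one possible formalization, not a standard model; the penalty weight eta is made up):

```python
import math

def loss_averse_utility(w_now: float, w_prev: float, eta: float = 2.0) -> float:
    """Log wealth, plus an extra penalty on recent declines in wealth.
    The min(0, .) term approximates a time derivative that only
    registers losses; eta controls how much losses sting."""
    change = math.log(w_now / w_prev)       # ~ d/dt of log wealth
    return math.log(w_now) + eta * min(0.0, change)

# Dropping to $90k hurts more than just sitting at $90k:
print(loss_averse_utility(90_000, 100_000))  # ~11.20 (recent loss penalized)
print(loss_averse_utility(90_000, 90_000))   # ~11.41 (no recent loss)
```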

Comment author: SilentCal 19 April 2016 05:29:01PM 1 point [-]

Has anyone developed a quantitative theory of personal finance in the following sense?

Most money advice falls back on rules of thumb; I'm looking for an approach that's made-up numbers all the way down.

The main idea would be to express utility as a function of financial quantities; an obvious candidate would be utility per unit time equals the log of money spent per unit time, making sure to count things like imputed rent on owned property as spending. Once you have that, there are exact answers for the optimal risk/reward tradeoff in investments, how much to save or borrow, and so on.
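As a minimal sketch of what one such "exact answer" might look like (the return distribution and single-period setup are invented for illustration; a real treatment would model spending and saving over time): pick the fraction of wealth held in a risky asset that maximizes expected log wealth.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up one-year returns: one risky asset plus a risk-free rate.
risky_returns = rng.normal(loc=0.07, scale=0.25, size=200_000)
risk_free = 0.02

def expected_log_wealth(fraction: float) -> float:
    # Terminal wealth per dollar invested; clipped away from zero so
    # log() stays defined on extreme draws of the (unrealistic) normal.
    wealth = 1 + fraction * risky_returns + (1 - fraction) * risk_free
    return np.log(np.clip(wealth, 1e-9, None)).mean()

fractions = np.linspace(0.0, 1.0, 101)
best = max(fractions, key=expected_log_wealth)
print(f"optimal risky fraction ~ {best:.2f}")  # near (0.07-0.02)/0.25**2 = 0.8
```

This is the Kelly-style answer that log utility buys you: once the utility function and return distribution are written down, the allocation question stops being a rule of thumb and becomes an optimization.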

Comment author: Fluttershy 09 April 2016 10:33:36AM 9 points [-]

Avoid this program.

Jonah and Robert have good intentions, and I was actually happy with the weekly interview sessions taught by Robert. However, I had a poor experience with this program overall. I'll list some observations from my experience as a member of the first cohort below.

First, this program is effectively self-directed; most of the time, neither the TA nor the instructor was available. When they were, asking them questions was incredibly difficult due to their lack of familiarity with the material they were supposed to be teaching. To be sure, both the instructor and the TA were intelligent people--the problem was just that they knew lots of math, but not very much data science.

Second, there were lots of communication issues between the instructors and the students. I really do not want to give specific examples, since I don't want to say something that would reflect so poorly on the LessWrong community. However, I assure you that this was an incredibly large issue.

Lastly, everything about this program was disorganized. Several of us paid for housing through the program, which ended up not being available as soon as we'd been told it would be. The furniture in the office space we used was set up by participants, because Signal was too disorganized to have it set up before we were supposed to start using it. The fact that only two out of twelve students pair-programmed together on an average day was also due to a lack of organization on the part of the instructors.

Jonah and Robert clearly worked very hard to make this program what it was, but attending was still a bad experience for me. If you already have a background in software engineering and want to pay $8,000 to teach yourself data science alongside other students doing the same, this program is a good fit for you. Otherwise, consider attending a longer, more established program, like Zipfian Academy, which actually uses pair programming and has instructors available to answer questions.

Comment author: SilentCal 11 April 2016 06:18:58PM 4 points [-]

I don't intend this as a demand, but you may wish to edit your top comment.

As it stands, the first line of the first comment on this post is "Avoid this program." Based on the comments in this thread, it sounds like you think the program might be a good fit for some people.

Comment author: SilentCal 11 April 2016 06:02:44PM 1 point [-]

Glad to have this term. I do think there's a non-fallacious, superficially similar argument that goes something like this:

"X leads to Y. This is obvious, and the only way you could doubt it would be some sort of motivated reasoning--motivated by something other than preventing Y. Therefore, if you don't think X leads to Y, you aren't very motivated to prevent Y."

It's philosophically valid, but it requires some very strong claims. I also suspect it's prone to causing circular reasoning: you 'prove' that no one who cares about Y thinks X doesn't lead to Y, and then use that belief to discredit new arguments that X doesn't lead to Y.
