
Comment author: IlyaShpitser 17 September 2017 03:46:13PM *  0 points [-]

LW1.0's problem with karma is that karma isn't measuring anything useful (certainly not quality). How can a distributed voting system decide on quality? Quality is not decided by majority vote.

The biggest problem with karma systems is in people's heads -- people think karma does something other than what it does in reality.

Comment author: tristanm 17 September 2017 06:04:56PM *  1 point [-]

Hopefully this question is not too much of a digression - but has anyone considered using something like Arxiv-Sanity, except for content produced by the wider rationality community (blog posts, articles, etc.) rather than papers? At least then you would be measuring similarity to things you have already read and liked, things other people have read and liked, or things people are linking to and commenting on, and you could search reasonably well by content and authorship. Ranking things by what people have stored in their library and are planning to spend time studying might carry more information than karma.
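As a rough sketch of what that kind of ranking might look like (this is not arxiv-sanity's actual code; the library and candidate texts are placeholders, and TF-IDF cosine similarity stands in for whatever representation one would really use):

    # Rank candidate posts by similarity to a personal "library" of saved items.
    # Hypothetical illustration; assumes scikit-learn is installed.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    library = [
        "post on bayesian epistemology saved to read later",
        "essay on decision theory and newcomb-like problems",
    ]
    candidates = [
        "new blog post about logical uncertainty and decision theory",
        "recipe blog post about sourdough starters",
    ]

    vectorizer = TfidfVectorizer()
    # Fit on everything so the library and the candidates share one vocabulary.
    matrix = vectorizer.fit_transform(library + candidates)
    lib_vecs, cand_vecs = matrix[:len(library)], matrix[len(library):]

    # Score each candidate by its best similarity to anything in the library.
    scores = cosine_similarity(cand_vecs, lib_vecs).max(axis=1)
    for text, score in sorted(zip(candidates, scores), key=lambda pair: -pair[1]):
        print(round(float(score), 2), text)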

Comment author: IlyaShpitser 15 September 2017 07:59:25PM 1 point [-]

People use name recognition in practice, works pretty well.

Comment author: tristanm 16 September 2017 09:55:10PM *  1 point [-]

Going to reply to this because I don't think it should be overlooked. It's a valid point - people tend to want to filter out information that's not from sources they trust. I think these kinds of incentive pressures are what led to the "LessWrong Diaspora" being concentrated around specific blogs belonging to people with very strong reputations, such as Scott Alexander. And when people want to look at different sources of information, they usually follow the advice of those people. This is how I operate when I'm doing my own reading / research - I start somewhere I consider the "safest" and move outward from there, following the references given at that spot and perhaps a few more steps beyond.

When we use a karma / voting system, we are basically trying to estimate P(this contains useful information | this post has a high number of votes), but no voting system ever offers as much evidence as a specific reference from someone we recognize as trustworthy. The only way to increase the evidence gained from a voting system is to add complexity by increasing the amount of information contained in a vote, either by weighting the votes or by identifying the person behind the vote. And from there you can attach even more to a vote, like a specific comment or a more nuanced judgement. I think the end of that track is basically what we have now: blogs by a specific person linking to other blogs, or social media like Facebook, where no user is anonymous and everyone's information is filtered in some way.
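As a toy illustration of that comparison (every probability below is invented for the example, not measured from any real voting data):

    # Compare two kinds of evidence for "this post contains useful information".
    # All numbers are made up for illustration.

    def posterior(prior, p_signal_given_useful, p_signal_given_not_useful):
        """P(useful | signal) via Bayes' rule."""
        joint_useful = prior * p_signal_given_useful
        joint_not = (1 - prior) * p_signal_given_not_useful
        return joint_useful / (joint_useful + joint_not)

    prior = 0.10  # assumed base rate of genuinely useful posts

    # High karma: popular-but-shallow posts get it too, so it is a weak signal.
    print(posterior(prior, p_signal_given_useful=0.6, p_signal_given_not_useful=0.2))   # ~0.25

    # A direct reference from someone trustworthy: a much sharper signal.
    print(posterior(prior, p_signal_given_useful=0.5, p_signal_given_not_useful=0.02))  # ~0.74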

Essentially I'm saying we should not ignore the role that optimization pressure has played in producing the systems we already have.

Comment author: tristanm 10 July 2017 06:21:21PM *  3 points [-]

My summary of the review:

  • HRAD is “work that aims to describe basic aspects of reasoning and decision-making in a complete, principled, and theoretically satisfying way”

  • The review further breaks HRAD down into MIRI’s research topics (philosophy, decision theory, logical uncertainty, and Vingean reflection).

  • MIRI’s position is that even minor mistakes in AI design could have catastrophic effects if these AI systems are very powerful.

  • HRAD, if fully complete, would give us a full description of AI systems such that we would be able to feel relatively certain that a given AI system would or would not cause catastrophic harm.

  • Daniel agrees that current formalisms to describe reasoning are incomplete or unsatisfying.

  • He also agrees that powerful AI systems have the potential to cause serious harm if mistakes are made in their design.

  • He agrees that we should have some kind of formalism that tells us whether or not an advanced AI system will be aligned.

  • However, Daniel assigns only a 10% chance that MIRI’s work in HRAD will be helpful in understanding current and future AI designs.

  • The reasons for this are:
    (1) MIRI’s HRAD work does not seem to be applicable to any current machine learning systems.
    (2) Mainstream AI researchers haven’t expressed much enthusiasm for MIRI’s HRAD work.
    (3) Daniel is more enthusiastic about Paul Christiano’s alternative approach and believes academic AI researchers are as well.

  • However, he believes MIRI researchers are “thoughtful, aligned with our values, and have a good track record.”

  • He believes HRAD is currently funding-constrained and somewhat neglected; therefore, if it turns out to be the correct approach, supporting it now could be very beneficial.

Comment author: IlyaShpitser 05 July 2017 05:18:23PM *  0 points [-]

I am saying there is a very easy explanation for why the stats community moved on and LW is still talking about this: LW's thinking on this is "freshman level."


I don't think "know what you are talking about" is controversial, but perhaps I am just old.


I think it's ok for non-experts to talk, I just think they need to signal stuff appropriately. Wikipedia has a similar problem with non-expert and expert talk being confused, which is why it's not seen as a reliable source on technical topics.


Being "credential-agnostic" is sort of being a bad Bayesian -- you should condition on all available evidence if you aren't certain of claims (and you shouldn't be if you aren't an expert). Argument only screens authority under complete certainty.

Comment author: tristanm 05 July 2017 05:49:54PM 0 points [-]

Non-experts may not know the boundaries of their own knowledge, and may also have trouble knowing where the boundaries of others' knowledge lie.

In fact, I think that quite frequently even experts have trouble knowing the extent of their own expertise. You can find countless examples of academics weighing in on matters they aren't really qualified for. I think this is a particularly acute problem in the philosophy of science. It's a problem I ran into a lot when reading pop-sci / pop-philosophy books. The authors sure seem like experts to the uninitiated. I attribute this mainly to them becoming disconnected from academia and living in a bubble containing mostly just themselves and their fans, who don't offer much in the way of substantive disagreement. But this is one of the reasons I value discussion so highly.

When I began writing this post, I honestly did not perceive my level of knowledge to be at the "freshman" level. As I've mentioned before, many of the points are re-hashes of material from people like Jaynes, and although I might have missed some of his subtle points, is there any good way for me to know that he represents a minority or obsolete position without being deeply familiar with that field, as someone with decades of experience would be?

The simplest solution is just to read until I have that level of experience with the topics, as measured by actual time spent on them, but I feel like that would come at the very high cost of not being able to participate in online discussions, which are valuable. But even then, I probably would still not know where my limits are until I bump into opposing views, which would need to happen through discussion.

Comment author: IlyaShpitser 03 July 2017 08:49:46PM *  2 points [-]

I don't really have time to "oppose" in the sense you mean, as that's a full time job. But for the record this aspect of LW culture is busted, I think.

"somewhat informal intellectual discussion"

All I am saying is, if you are going to talk about technical topics, either: (a) know what you are talking about, or (b) if you don't or aren't sure, maybe read more and talk less, or at least put disclaimers somewhere. That's at least a better standard than what [university freshmen everywhere] are doing.

If you think you know what you are talking about, but then someone corrects you on something basic, heavily update towards (b).

I try to adhere to this also, actually -- on technical stuff I don't know super well. Which is a lot of stuff.


The kind of meaningless trash talk MrMind is engaged in above, I find super obnoxious.

Comment author: tristanm 05 July 2017 05:15:40PM 0 points [-]

All I am saying is, if you are going to talk about technical topics, either: (a) know what you are talking about, or (b) if you don't or aren't sure, maybe read more and talk less, or at least put disclaimers somewhere. That's at least a better standard than what [university freshmen everywhere] are doing.

But this is a philosophical position you're taking. You're not just explaining to us what common decency and protocol should dictate - you're arguing for a particular conception of discourse norms you believe should be adopted. And probably, in this community, a minority position at that. But the way you have stated it comes across as though you think your position is obvious, to the point where it's not really worth arguing for. To me, it doesn't seem so obvious. Moreover, if it's not obvious, and if you were to follow your own guidelines fully, you might decide to leave that argument to the professional, fully credentialed philosophers.

Anyway, what you are bringing up is worth arguing about, in my opinion. LW may be credential-agnostic, but it would also be beneficial to have some way of knowing which arguments carry the most weight and what information is deemed the most reliable - while still allowing people of all levels of expertise to discuss freely. Such a problem is very difficult, but I think following your principle of "only experts talk, non-experts listen" is somewhat extreme and not really appropriate outside of classrooms and lecture halls.

Comment author: IlyaShpitser 05 July 2017 01:18:03PM *  0 points [-]

"The conversation then degenerates on dick-size measuring."

"I hope this list can keep the discussion productive."

Alright then, Bayes away!


Generic advice for others: the growth mindset for stats (which is a very hard mathematical subject) is to be more like a grad student, e.g. work very very hard and read a lot, and maybe even try to publish. Leave arguing about philosophy to undergrads.

Comment author: tristanm 05 July 2017 04:39:50PM 0 points [-]

This sounds a lot like the Neil Tyson / Bill Nye attitude of "science has made philosophy obsolete!"

Comment author: MrMind 04 July 2017 12:40:53PM *  0 points [-]

The latter seems to be the most intuitively correct rule

So if I extract a red ball from an urn, should I condition the probability of finding a black ball on the next draw on not having extracted a red ball?

Besides, P(H) is most definitely not equal to P(H|E). P(H) is, on the other hand, demonstrably equal to P(H|E)P(E) + P(H|~E)P(~E), the usual law of total probability. I think we are talking about two completely different things here.

Comment author: tristanm 04 July 2017 01:52:22PM 0 points [-]

I'm talking about the following issue, found at this link:

A. The problem of uncertain evidence. The Simple Principle of Conditionalization requires that the acquisition of evidence be representable as changing one's degree of belief in a statement E to one — that is, to certainty. But many philosophers would object to assigning probability of one to any contingent statement, even an evidential statement, because, for example, it is well-known that scientists sometimes give up previously accepted evidence. Jeffrey has proposed a generalization of the Principle of Conditionalization that yields that principle as a special case. Jeffrey's idea is that what is crucial about observation is not that it yields certainty, but that it generates a non-inferential change in the probability of an evidential statement E and its negation ~E (assumed to be the locus of all the non-inferential changes in probability) from initial probabilities between zero and one to Pf(E) and Pf(~E) = [1 − Pf(E)]. Then on Jeffrey's account, after the observation, the rational degree of belief to place in an hypothesis H would be given by the following principle:

Principle of Jeffrey Conditionalization: Pf(H) = Pi(H/E) × Pf(E) + Pi(H/~E) × Pf(~E) [where E and H are both assumed to have prior probabilities between zero and one]

Counting in favor of Jeffrey's Principle is its theoretical elegance. Counting against it is the practical problem that it requires that one be able to completely specify the direct non-inferential effects of an observation, something it is doubtful that anyone has ever done. Skyrms has given it a Dutch Book defense.
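A small numeric check of the two update rules discussed above (the conditional probabilities and the post-observation Pf(E) are invented purely for illustration):

    # Strict vs. Jeffrey conditionalization on a toy hypothesis H and evidence E.
    # All numbers are invented for illustration.

    p_H_given_E = 0.8     # Pi(H | E)
    p_H_given_notE = 0.3  # Pi(H | ~E)

    # Strict conditionalization: the observation makes E certain, so Pf(E) = 1.
    strict = p_H_given_E
    print("strict:", strict)  # 0.8

    # Jeffrey conditionalization: the observation only raises E to, say, Pf(E) = 0.9.
    pf_E = 0.9
    jeffrey = p_H_given_E * pf_E + p_H_given_notE * (1 - pf_E)
    print("jeffrey:", jeffrey)  # 0.8 * 0.9 + 0.3 * 0.1 = 0.75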

Comment author: IlyaShpitser 01 July 2017 07:42:13AM *  1 point [-]

Maybe there is a cultural/generational difference here.


Over the years I have seen very little on Bayes out of LW that I agree with -- take it as a data point if you wish. Most of it is somewhere between at least somewhat wrong and not even wrong.


Hanson had a post somewhere on how folks should practice holding strong opinions and arguing for them, but not taking the whole thing very seriously. Maybe that's what you are doing.

Comment author: tristanm 03 July 2017 07:48:03PM 0 points [-]

There may indeed be a cultural difference here.

LessWrong has tended towards skepticism (though not outright rejection) of academic credentials (consider Eliezer's "argument screens off authority" discussions in the Sequences). However, this site is more or less a place for somewhat informal intellectual discussion. It is not an authoritative information repository, and as far as I can tell, does not claim to be. Anyone who participates in discussions here is probably well aware of this, and is expected to weigh the arguments on their merits rather than take them at face value.

If you disagree with some of the core ideas around this community (like Bayesian epistemology), as well as what you perceive to be the "negative externalities" of the tendency towards informal / non-expert discussion, then to me it seems likely that you disagree with certain aspects of the culture here. But you seem to have chosen to oppose those aspects, rather than simply choosing not to participate.

Comment author: MrMind 03 July 2017 01:33:39PM *  0 points [-]

I side with you on this issue. It irks me all the time when the Bayesian foundations are vaguely criticized with an air of superiority, as if dismissing them is a sign of having transcended to some higher level of existence (neorationalists, I'm looking at you).
On the other hand, I could accept tool-boxing, in accordance with the principle of "one truth, many methods to find it", if and only if:

  • it actually showed better results than Bayesian methods
  • it didn't suddenly forget the decades of findings on the fallibility of human intuition.

On the other hand:

Should you treat “evidence” for a hypothesis, or “data”, as having probability 1?

This is provably true: P(X|X) = 1.

P(X) = P(X ∧ X) = P(X|X)P(X), hence P(X|X) = 1 (assuming P(X) > 0).

Comment author: tristanm 03 July 2017 06:06:36PM 0 points [-]

That point was mostly referring to the rule you use when you perform the "Bayesian update": it can be either strict conditionalization (Pf(H) = Pi(H|E), which assumes Pf(E) = 1) or Jeffrey conditionalization (Pf(H) = Pi(H|E)Pf(E) + Pi(H|~E)Pf(~E), where Pi is the prior and Pf the post-observation distribution). The latter seems to be the most intuitively correct rule, but I guess there are some subtle issues with using that rule that I need to dive deeper into to really understand.

Comment author: ChristianKl 02 July 2017 06:09:38PM *  0 points [-]

I am arguing against tool-boxism, on the grounds that if it were accepted as true (I don't think it can actually be true in a meaningful sense) you basically give up on the ability to converge on truth in an objective sense. Any kind of objective principles would not be tool-boxism.

This sounds like you argue against it on the grounds that you don't like a state of affairs where tool-boxism is true, so you assume it isn't. This seems to me like motivated reasoning.

It's structurally similar to the person who says they are believing in God because if God doesn't exist that would mean that life is meaningless.

Comment author: tristanm 02 July 2017 07:49:41PM 0 points [-]

I don't think it's possible to have unmotivated reasoning. Nearly all reasoning begins by assuming a set of propositions, such as axioms, to be true, before following all the implications. If I believe objectivity is true, then I want to know what follows from it. Note that Cox's theorem proceeds similarly, by forming a set of desiderata first, and then finding a set of rules that satisfies them. Do you not consider this chain of reasoning to be valid?

(If I strongly believed "life is meaningless" to be false, and I believed that "God does not exist implies life is meaningless" then concluding from those that God exists is logically valid. Whether or not the two first propositions are themselves valid is another question)
