
Comment author: username2 14 August 2017 06:25:00PM *  1 point [-]

I'm currently going through a painful divorce, so of course I'm starting to look into dating apps as a superficial coping mechanism.

It seems to me that even the modern dating apps like Tinder and Bumble could be made a lot better with a tiny bit of machine learning. After a couple thousand swipes (which doesn't take long), I would think that a machine learning system could get a pretty good sense of my tastes and perhaps some metric of my minimum standards of attractiveness. This is particularly true for a system that has access to all the swiping data across the whole platform.

Since I swipe completely based on superficial appearance without ever reading the bio (like most people), the system wouldn't need to take the biographical information into account, though I suppose it could use that information as well.

The ideal system would quickly learn my preferences in both appearance and personal information and then automatically match me up with the top likely candidates. I know these apps keep track of individuals' response rates, so candidates who rarely respond (probably because they are broadly desirable) would be penalized in your personal matchup ranking - again, something machine learning could handle easily.
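A minimal sketch of what that could look like, assuming the app can hand you a feature embedding per profile photo plus each candidate's historical response rate (every name and number below is invented for illustration, not any app's actual pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: one precomputed image-embedding vector per profile,
# plus this user's historical swipes (1 = right, 0 = left).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(2000, 128))   # stand-in for real image features
swipes = rng.integers(0, 2, size=2000)      # stand-in for a real swipe history

# Learn this user's taste from a couple thousand swipes.
taste_model = LogisticRegression(max_iter=1000).fit(embeddings, swipes)

def match_score(candidate_embedding, candidate_response_rate):
    """Predicted right-swipe probability, discounted by how rarely the candidate replies."""
    p_like = taste_model.predict_proba(candidate_embedding.reshape(1, -1))[0, 1]
    # Penalize very-in-demand candidates who rarely respond, as suggested above.
    return p_like * candidate_response_rate

# Rank a batch of new candidates by expected mutual interest.
candidates = rng.normal(size=(50, 128))
response_rates = rng.uniform(0.05, 0.9, size=50)
ranking = sorted(range(50),
                 key=lambda i: match_score(candidates[i], response_rates[i]),
                 reverse=True)
```

The last factor is the crude version of the response-rate penalty: multiplying by the candidate's reply rate trades raw attractiveness against the odds of actually getting an answer.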

I find myself wondering why this doesn't already exist.

Comment author: MrMind 17 August 2017 10:28:51AM 1 point [-]

"Once" does exactly what you have described.

In response to comment by MrMind on Inscrutable Ideas
Comment author: TheAncientGeek 12 August 2017 12:21:53PM *  0 points [-]

I also claim that meta-rationalists claim to be at level 3, while they are not.

Can you support that? I rather suspect you are confusing "new in the historical sense" with "new to rationalists". Bay Area rationalism claims to be new, but is in many respects a rehash of old ideas like logical positivism. Likewise, meta-rationalism is old, historically.

I haven't seen any proof that meta-rationalists have offered god as a simplifying hypothesis of some unexplained phenomenon that wasn't trivial.

There's a large literature on that sort of subject. Meta-rationality is not something Chapman invented a few years ago.

But the entire raison d'être of mathematics is that everything is reducible to the trivial; it just takes hundreds of pages more.

You still have relative inscrutability, because advanced maths isn't scrutable to everybody.

but claiming that something is inherently mysterious...

Nobody said that.

Comment author: MrMind 14 August 2017 10:30:42AM *  0 points [-]

Now I understand that we are talking from two completely different frames of reference.
When I write about meta-rationalists, I'm referring specifically to Chapman, Gworley, and the like. You obviously have a much wider tradition in mind, on which I don't necessarily have an opinion. Everything I said should be restricted to this much smaller context.

On other points of your answer:
- yes, there are important antecedents, but there are important novelties too;
- it would help if you identified what you consider the relevant corpus of 'old' meta-rationality, especially where it offers a deity as a simplifying, nontrivial hypothesis;
- about inherent mysteriousness: it is claimed in the first paragraph of the post linked on this page: "I had come to terms with the idea that my thoughts might never be fully explicable".

In response to comment by MrMind on Inscrutable Ideas
Comment author: TheAncientGeek 09 August 2017 08:02:14AM *  0 points [-]

Your example, in my opinion, disproves your point. Einstein did not simply notice them: he constructed a coherent explanation that accounted for both the old model and the discrepancies.

I wasn't making a point about meta-rationality versus rationality, I was making a point about noticing-and-putting-on-a-shelf versus noticing-and-taking-seriously. Every Christian has noticed the problem of evil...in the first sense.

the more people have independent access to the phenomenon, the more confidence I would give to its existence.

You need to distinguish between phenomena (observations, experiences) and explanations. Even something as scientifically respectable as Tegmark's multiverse, or MWI, isn't supposed to be supported by some unique observation; such theories are supposed to be better explanations, in terms of simplicity, generality, consilience, and so on, of the same data. MWI has to give the same predictions as CI.

If it's only one person, and said person can neither communicate it nor behave any differently... well, I would equate its existence to that of the invisible and intangible dragon.

You also need to distinguish between belief and understanding. Any kind of fundamentally different, new or advanced understanding has to be not completely communicable and comprehensible to the N-1 level, otherwise it would not be fundamentally new. It is somewhere between pointless and impossible to believe in advanced understanding on the basis of faith. Sweepingly rejecting the possibility of advanced understanding proves too much, because PhD maths is advanced understanding compared to high-school maths, and so on.

You are not being invited to have a faith-like belief in things that are undetectable and incomprehensible to anybody, you are being invited to widen your understanding so that you can see for yourself.

Comment author: MrMind 10 August 2017 09:51:40AM *  1 point [-]

I wasn't making a point about meta-rationality versus rationality, I was making a point about noticing-and-putting-on-a-shelf versus noticing-and-taking-seriously.

Right. Let's say that there are (at least) three levels of noticing a discrepancy in a model:
1 - noticing, shrugging, and moving on;
2 - noticing and claiming that it's important;
3 - noticing, claiming that it's important, and creating something new in response ('something' can be a new institution, a new model, etc.).

We both agree that LW rationalists are mostly at level 1. We both agree that meta-rationalists are at level 2. I also claim that meta-rationalists claim to be at level 3, while they are not.

You need to distinguish between phenomena (observations, experiences) and explanations.

This is also right. But at the same time, I haven't seen any proof that meta-rationalists have offered god as a simplifying hypothesis of some unexplained phenomenon that wasn't trivial.

Any kind of fundamentally different, new or advanced understanding has to be not completely communicable and comprehensible to the N-1 level, otherwise it would not be fundamentally new.

I think this is our true disagreement. I reject your thesis: there is nothing that is inherently mysterious, not even relatively. I think that any idea is either incoherent, comprehensible, or infinitely complex.
Math is an illustration of this classification: it exists exactly at the level of being comprehensible. We see levels because we break a lot of complexity down in stages, so that you manipulate the simpler levels first, and when you get used to them, you move on to more complex matters. But the entire raison d'être of mathematics is that everything is reducible to the trivial; it just takes hundreds of pages more.
Maybe meta-rationalists have yet to unpack their intuitions: it happens all the time that someone has a genius idea that only later gets unpacked into simpler components. So kudos to the idea of destroying inscrutability (I firmly believe that destroying inscrutability will destroy meta-rationalism), but claiming that something is inherently mysterious... that runs counter to epistemic hygiene.

In response to comment by MrMind on Inscrutable Ideas
Comment author: TheAncientGeek 08 August 2017 08:13:33AM *  2 points [-]

Points 1 and 2 are critiques of the rationalist community that have been around since the inception of LW (as witnessed by the straw-Vulcan and hot-iron-approaching metaphors), so I question that they usefully distinguish meta- from plain rationalists.

Maybe the distinction is in noticing it enough and doing something about it. It is very common to say "yeah, that's a problem, let's put it in a box to be dealt with later" and then forget about it.

Lots of people noticed the Newton/Maxwell disparities in the 1900s, but Einstein noticed them enough.

"The "controversy" was quite old in 1905. Maxwell's equations were around since 1862 and Lorentz transformations had been discussed at least since 1887. You are absolutely correct, that Einstein had all the pieces in his hand. What was missing, and what he supplied, was an authoritative verdict over the correct form of classical mechanics. Special relativity is therefor less of a discovery than it is a capping stone explanation put on the facts that were on the table for everyone to see. Einstein, however, saw them more clearly than others. –"

https://physics.stackexchange.com/questions/133366/what-problems-with-electromagnetism-led-einstein-to-the-special-theory-of-relati

Point 3 is more helpful in this regard, but if anyone made that claim, I would ask them to point out what behavioral differences it implies... I find it very hard to believe in something that is both inscrutable and unnoticeable.

Inscrutable and unnoticeable to whom?

Comment author: MrMind 08 August 2017 02:28:22PM *  1 point [-]

Lots of people noticed the Newton/Maxwell disparities in the 1900s, but Einstein noticed them enough.

Your example, in my opinion, disproves your point. Einstein did not simply notice them: he constructed a coherent explanation that accounted for both the old model and the discrepancies. It unified both models under one map. Do you feel that meta-rationalists have a model of intention-implementation and map generation that is coherent with the naive model of a Bayesian agent?
A meta-rationalist is like a 19th-century physicist who, having noticed the dual nature of light, called himself a meta-physicist because he used two maps for the phenomenon of light. The true revolution, quantum mechanics, instead happened when the two conflicting models were united under one explanation.

Inscrutable and unnoticeable to whom?

It's a matter of degree: the more people have independent access to the phenomenon, the more confidence I would give to its existence. If it's only one person, and said person can neither communicate it nor behave any differently... well, I would equate its existence to that of the invisible and intangible dragon.

Comment author: Viliam 07 August 2017 12:52:03PM 1 point [-]

I am imagining how to set up the experiment...

"Sir, I will leave you alone in this room now, with this naked supermodel. She is willing to do anything you want. However, if you can wait for 20 minutes without touching her -- or yourself! -- I will bring you one more."

Comment author: MrMind 07 August 2017 01:30:19PM 0 points [-]

I don't know whether sexual satisfaction scales linearly, but from 1 to 2 seems about right.

Comment author: MrMind 07 August 2017 01:26:00PM *  1 point [-]

"Inscrutable", related to the meta-rationality sphere, is a word that gets used a lot these days. On the fun side, set theory has a perfectly scrutable definition of indescribability.
Very roughly: the trick is to divide your language in stages, so that stage n+1 is strictly more powerful than stage n. You can then say that a concept (a cardinal) k is n-indescribable if every n-sentence true in a world where k is true, is also true in a world where a lower concept (a lower cardinal) is true. In such a way, no true n-sentence can distinguish a world where k is true from a world where something less than k is true.
Then you can say that k is totally indescribable if the above property is true for every finite n.
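For the curious, the standard formalization in the large-cardinal literature reads roughly as follows (a sketch, with the "worlds" above made precise as rank-initial segments V_alpha of the set-theoretic universe; cf. Kanamori, The Higher Infinite):

```latex
% A cardinal \kappa is \Pi^1_n-indescribable iff for every \Pi^1_n sentence
% \varphi and every A \subseteq V_\kappa:
\[
  (V_\kappa, \in, A) \models \varphi
  \;\Longrightarrow\;
  \exists \alpha < \kappa \;\; (V_\alpha, \in, A \cap V_\alpha) \models \varphi
\]
% i.e. no \Pi^1_n sentence can pin down \kappa: anything true at \kappa already
% holds at some smaller \alpha. "Totally indescribable" then means
% \Pi^1_n-indescribable for every finite n.
```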

Total indescribability is not even such a strong property, in the grand scheme of large cardinals.

In response to Inscrutable Ideas
Comment author: Viliam 05 August 2017 02:30:32PM *  5 points [-]

Here is my attempt to summarize what the meta-rationalists are trying to tell the rationalists, as I understood it from the previous discussion, this article, and some articles linked by this article, plus some personal attempts to steelman:

1) Rationalists have a preference for living in far mode, that is studying things instead of experiencing things. They may not endorse this preference explicitly, they may even verbally deny it, but this is what they typically do. It is not a coincidence that so many rationalists complain about akrasia; motivation resides in near mode, which is where rationalists spend very little time. (And the typical reaction of a rationalist facing akrasia is: "I am going to read yet another article or book about 'procrastination equation'; hopefully that will teach me how to become productive!" which is like trying to become fit by reading yet another book on fitness.) At some moment you need to stop learning and start actually doing things, but rationalists usually find yet another excuse for learning a bit more, and there is always something more to learn. They even consider this approach a virtue.

Rationalists are also more likely to listen to people who got their knowledge from studying, as opposed to people who got their knowledge from experience. Incoming information must at least pretend to be scientific, or it will be dismissed without a second thought. In theory, one should update on all available evidence (although not equally strongly), and not double-count any. In practice, one article containing numbers or an equation will always beat unlimited amounts of personal experience.

2) Despite admitting verbally that a map is not the territory, rationalists hope that if they take one map, and keep updating it long enough, this map will asymptotically approach the territory. In other words, that in every moment, using one map is the right strategy. Meta-rationalists don't believe in the ability to update one map sufficiently (or perhaps just sufficiently quickly), and intentionally use different maps for different contexts. (Which of course does not prevent them from updating the individual maps.) As a side effect of this strategy, the meta-rationalist is always aware that the currently used map is just a map; one of many possible maps. The rationalist, having invested too much time and energy into updating one map, may find it emotionally too difficult to admit that the map does not fit the territory, when they encounter a new part of territory where the existing map fits poorly. Which means that on the emotional level, rationalists treat their one map as the territory.

Furthermore, meta-rationalists don't really believe that if you take one map and keep updating it long enough, you will necessarily asymptotically approach the territory. First, the incoming information is already interpreted by the map in use; second, the instructions for updating are themselves contained in the map. So it is quite possible that different maps, even after updating on tons of data from the territory, would still converge towards different attractors. And even if, hypothetically, given infinite computing power, they would converge towards the same place, it is still possible that they will not come sufficiently close during one human life, or that a sufficiently advanced map would not fit into a human brain. Therefore, using multiple maps may be the optimal approach for a human. (Even if you choose "the current scientific knowledge" as one of your starting maps.)
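Here is a toy sketch of the "different attractors" point, under the assumption that each map is a simple model family fit by repeated updating to the same data (all numbers are invented):

```python
import numpy as np

# The "territory": a bimodal distribution that neither map below can represent.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 0.5, 70_000),   # 70% of observations
                       rng.normal(3.0, 0.5, 30_000)])   # 30% of observations

# Map A: "the world is a single Gaussian". Its maximum-likelihood update
# converges to the sample mean, no matter how much data arrives.
center_gaussian = data.mean()      # -> about -0.5

# Map B: "the world is a single Laplace distribution". Its maximum-likelihood
# update converges to the sample median instead.
center_laplace = np.median(data)   # -> about -1.7

# Both maps have stabilized (more data barely moves them), yet they settle on
# different attractors for where the world's "center" is.
print(f"Gaussian map center: {center_gaussian:.2f}")
print(f"Laplace map center:  {center_laplace:.2f}")
```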

3) There is an "everything of everythings", exceeding all systems, something like the highest level Tegmark multiverse only much more awesome, which is called "holon", or God, or Buddha. We cannot approach it in far mode, but we can... somehow... fruitfully interact with it in near mode. Rationalists deny it because their preferred far-mode approach is fruitless here. But you can still "get it" without necessarily being able to explain it by words. Maybe it is actually inexplicable by words in principle, because the only sufficiently good explanation for holon/God/Buddha is the holon/God/Buddha itself. If you "get it", you become the Kegan-level-5 meta-rationalist, and everything will start making sense. If you don't "get it", you will probably construct some Kegan-level-4 rationalist verbal argument for why it doesn't make sense at all.

How well did I do here?

In response to comment by Viliam on Inscrutable Ideas
Comment author: MrMind 07 August 2017 12:45:14PM *  0 points [-]

Points 1 and 2 are critiques of the rationalist community that have been around since the inception of LW (as witnessed by the straw-Vulcan and hot-iron-approaching metaphors), so I question that they usefully distinguish meta- from plain rationalists.
Point 3 is more helpful in this regard, but if anyone made that claim, I would ask them to point out what behavioral differences it implies... I find it very hard to believe in something that is both inscrutable and unnoticeable.

Comment author: Thomas 07 August 2017 08:09:16AM *  1 point [-]

This problem to think about.

Comment author: MrMind 07 August 2017 12:25:51PM 0 points [-]

The intuitive answer seems to me to be: the last one. It's the tallest, so it witnesses exactly one billion towers. Am I misinterpreting something?

Comment author: cousin_it 06 August 2017 10:24:33AM *  1 point [-]

If we want a measure of rationality that's orthogonal to intelligence, maybe we could try testing the ability to overcome motivated reasoning? Set up a conflict between emotion and reason, and see how the person reacts. The marshmallow test is an example of that. Are there other such tests, preferably ones that would work on adults? Which emotions would be easiest?

Comment author: MrMind 07 August 2017 10:31:39AM 1 point [-]

Which emotions would be easiest?

Sexual attraction...

Comment author: gworley 27 July 2017 05:25:10PM 0 points [-]

(cross-posting my comment on this from the original because I think it might be of more interest here)

I might write a more detailed response along these lines depending on where my thinking takes me, but I've previously thought about this issue and after thinking about it more since reading this yesterday it still seems to me that meta-rationality is specifically inscrutable because it needs meta-rationality to explain itself.

In fairness, this is a problem for rationality too, because it can't really explain itself in terms of pre-rationality, and from what I can tell we don't know very well how to teach rationality either. STEM education mostly teaches some of the methods of rationality, like how to use logic to manipulate symbols, but tends to do so in a way that ends up domain-restricted. Most STEM graduates are still pre-rational thinkers in most domains of their lives, though they may dress up their thoughts in the language of rationality, and this is specifically what projects like LessWrong are all about: getting people to at least be actually rational rather than pre-rational in rationalist garb.

But even with CFAR and other efforts, LW seems only marginally more successful than most: I know a lot of LW/CFAR folks who have read, written, and thought about rationality a great deal, and they still struggle with many of the basics, not only to adopt the rationalist worldview but even to stop using the pre-rationalist worldview and notice when they don't understand something. To be fair, marginal success is all LW needed to achieve to satisfy its goal of producing a supply of people capable of doing AI safety research, but I think it's telling that even a project so directed at making rationality learnable has been only marginally successful, and from what I can tell not by making rationality scrutable but by creating lots of opportunities for enlightenment.

Given that we don't even have a good model of how to make rationality truly scrutable, I'm not sure we can really hope to make meta-rationality scrutable. What seems more likely to me is that we can work to find ways not of explaining meta-rationality but of training people into it. Of course this is already what you're doing with Meaningness, but it's also for this reason that I'm not sure we can do more than what Meaningness has so far been working to accomplish.

Comment author: MrMind 31 July 2017 03:06:16PM *  1 point [-]

But if meta-rationality is inscrutable from within rationality, how do you know it even exists? At least Bayesian rationalists have some solace in Cox's theorem, or the coherence theorem, or the Church-Turing thesis. What stops me from declaring that there's a sigma-rationality, which is inscrutable to every n-rationality below it? What does meta-rationality even imply for the real world?
