I did in fact have something between those two in mind, and was even ready to defend it, but then I basically remembered that LW is status-crazy and gave up on fighting that uphill battle. Kudos to alkjash for the fighting spirit.
They explicitly said that he's not wrong-on-many-things in the T framework, the same way Eliezer is T.correct.
Frustrating, that's not what I said! Rule 10: be precise in your speech, Rule 10b: be precise in your reading and listening :P My wording was quite purposeful:
I don't think you can safely say Peterson is "technically wrong" about anything
I think Raemon read my comments the way I intended them. I hoped to push on a frame that people seem to be (according to my private, unjustified, wanton opinion) obviously too stuck in. See a...
Cool examples, thanks! Yeah, these are issues outside of his cognitive expertise and it's quite clear that he's getting them wrong.
Note that I never said that Peterson isn't making mistakes (I'm quite careful with my wording!). I said that his truth-seeking power is in the same weight class, but obviously he has a different kind of power than LW-style. E.g. he's less able to deal with cognitive bias.
But if you are doing "fact-checking" in LW style, you are mostly accusing him of getting things wrong about which he never c...
This story is trash and so am I.
If people don't want to see this on LW I can delete it.
You are showcasing a certain unproductive mental pattern, for which there's a simple cure. Repeat after me:
This is my mud pile
I show it with a smile
And this is my face
It also has its place
For increased effect, repeat 5 times in rap style.
[Please delete this thread if you think this is getting out of hand. Because it might :)]
I'm not really going to change my mind on the basis of just your own authority backing Peterson's authority.
See right here, you haven't listened. What I'm saying is that there is some fairly objective quality which I called "truth-seeking juice" about people like Peterson, Eliezer and Scott, which you can evaluate by yourself. But you have just dug yourself into the same trap a little bit more. From what you write, your heuristics for evalua...
I'm worried we may be falling into an argument about definitions, which seems to happen a lot around JBP. Let me try to sharpen some distinctions.
In your quote, Chapman disagrees with Eliezer about his general approach, or perhaps about what Eliezer finds meaningful, but not about matters of fact. I disagree with JBP about matters of fact.
My best guess at what "truth-seeking juice" means comes in two parts: a desire to find the truth, and a methodology for doing so. All three of Eliezer/Scott/JBP have the first part down, but their methodolo...
[Note: somewhat taking you up on the Crocker's rules]
Peterson's truth-seeking and data-processing juice is in the super-heavy weight class, comparable to Eliezer etc. Please don't make the mistake of lightly saying he's "wrong on many things".
At the level of analysis in your post and the linked Medium article, I don't think you can safely say Peterson is "technically wrong" about anything; it's overwhelmingly more likely you just didn't understand what he means. [it's possible to make more case-specific arguments here but I think the outside view meta-rationality should be enough...]
Perhaps you can explain what Peterson really means when he says that he really believes that the double helix structure of DNA is being depicted in ancient Egyptian and Chinese art.
What does he really mean when he says, "Proof itself, of any sort, is impossible, without an axiom (as Godel proved). Thus faith in God is a prerequisite for all proof."?
Why does he seem to believe in Jung's paranormal concept of "synchronicity"?
Why does he think quantum mechanics means consciousness creates reality, and confuse the Copenhagen inter...
If you want me to accept JBP as an authority on technical truth (like Eliezer or Scott are), then I would like to actually see some case-specific arguments. Since I found the case-specific arguments to go against Peterson on the issues where I disagree, I'm not really going to change my mind on the basis of just your own authority backing Peterson's authority.
For example: the main proof Peterson cites to show he was right about C-16 being the end of free speech is the Lindsay Shepherd fiasco. Except her case wasn't even in the relevant j...
4) The skill to produce great math and skill to produce great philosophy are secretly the same thing. Many people in either field do not have this skill and are not interested in the other field, but the people who shape the fields do.
FWIW I have reasonably strong but not-easily-transferable evidence for this, based on observation of how people manipulate abstract concepts in various disciplines. Using this lens, math, philosophy, theoretical computer science, theoretical physics, all meta disciplines, epistemic rationality, etc. form a cluster in which math is a central node, and philosophy is unusually close to math even considered in the context of the cluster.
Note that this is (by far) the least incentive-skewing from all (publicly advertised) funding channels that I know of.
Apply especially if all of 1), 2) and 3) hold:
1) you want to solve AI alignment
2) you think your cognition is pwned by Moloch
3) but you wish it wasn't
Maybe it'd be useful to make a list of all the publicly advertised funding channels? Other ones I know of:
tl;dr: your brain hallucinates sensory experiences that have no correspondence to reality. Noticing and articulating these “felt senses” gives you access to the deep wisdom of your soul.
I think this snark makes it clear that you lack gears in your model of how focusing works. There are actual muscles in your actual body that get tense as a result of stuff going on with your nervous system, and many people can feel that even if they don't know exactly what they are feeling.
[Note that I am in no way an expert on strategy, probably not up to date with the discourse, and haven't thought this through. I also don't disagree with your conclusions much.]
[Also note that I have a mild feeling that you engage with a somewhat strawman version of the fast-takeoff line of reasoning, but have trouble articulating why that is the case. I'm not satisfied with what I write below either.]
These possible arguments seem not included in your list. (I don't necessarily think they are good arguments. Just mentioning whatever int...
I think it's perfectly valid to informally say "gears" while meaning both "gears" (how clear a model is on what it predicts) and "meta-gears" (how clear the meta model is on which models it a priori expects to be correct). And the new clarity you bring to this would probably be the right time to re-draw the boundaries around gears-ness, to make it match the structure of reality better. But this is just a suggestion.
[excellent, odds ratio 3:2 for worth checking LW2.0 sometimes and 4:3 for LW2.0 will succeed]
I think "Determinism and Reconstructability" are great concepts but you picked terrible names for them, and I'll probably call them "gears" and "meta-gears" or something short like that.
This article made me realize that my cognition runs on something equivalent to logical inductors, and what I recently wrote on Be Well Tuned about cognitive strategies is a reasonable attempt at explaining how to implement logical inductors in a human brain.
Request: Has this idea already been explicitly stated elsewhere? Anything else regular old TAPs are missing?
It's certainly not very new, but nothing wrong with telling people about your TAP modifications. There are many nuances to using TAPs in practice, and ultimately everyone figures out their own style anyway. Whether you have noticed or not, you probably already have this meta-TAP:
"TAPs not working as I imagined -> think how to improve TAPs"
It is, ultimately, the only TAP you need to successfully install to start the process of recursive improvement.
I have the suspicion that everyone is secretly a master at Inner Sim
There's a crucial difference here between:
One example is that the top tiers of the community are in fact composed largely of people who directly care about doing good things for the world, and this (surprise!) comes together with being extremely good at telling who's faking it. So in fact you won't be socially respected above a certain level until you optimize hard for altruistic goals.
Another example is that whatever your goals are, in the long run you'll do better if you first become smart, rich, knowledgeable about AI, sign up for cryonics, prevent the world from ending etc.
if people really wanted to optimize for social status in the rationality community there is one easiest canonical way to do this: get good at rationality.
I think this is false: even if your final goal is to optimize for social status in the community, real rationality would still force you to locally give it up because of convergent instrumental goals. There is in fact a significant first order difference.
I realized today that UDT doesn't really need the assumption that other players use UDT.
Was there ever such an assumption? I recall a formulation in which the possible "worlds" include everything that feeds into the decision algorithm, and it doesn't matter if there are any games and/or other players inside of those worlds (their treatment is the same, as are corresponding reasons for using UDT).
You’d reap the benefits of being pubicly wrong
Bad typo.
By the way - did I mention that inventing the word "hammertime" was epic, and that now you might just as well retire because there's no way to compete against your former glory.
I'm confused about the typo, is it publicly we're talking about?
Thanks for that - if I thought like that I'd have retired a long time ago.
Edit: Oh god I'm blind, took another 5 reads to notice. And here I'm supposed to be teaching noticing or something.
I think this comment is 100% right despite being perhaps maybe somewhat way too modest. It's more useful to think of sapience as introducing a delta on behavior, rather than a way to execute desired behavior. The second is a classic Straw Vulcan failure mode.
I wonder if all of the CFAR techniques will have different names after you are done with them :) Looking forward to your second and third iteration.
All sounds sensible.
Also, reminds me of the 2nd Law of Owen:
In a funny sort of way, though, I guess I really did just end up writing a book for myself.
[Note: I am writing from my personal epistemic point of view from which pretty much all the content of the OP reads as obvious obviousness 101.]
The reason why people don't know this is not because it's hard to know. This is some kind of common fallacy: "if I say true things that people apparently don't know, they will be shocked and turn their lives around". But in fact most people around here have more than enough theoretical capacity to figure this out, and much more, without any help. The real bottleneck is human psychology, ...
Note to everyone else: the least you can do is share this post until everyone you know is sick of it.
I would feel averse to this post being shared outside LW circles much, given its claims about AGI in the near future being plausible. I agree with the claim but not really for the reasons provided in the post; I think it's reasonable to put some (say 10-20%) probability on AGI in the next couple of decades due to the possibility of unexpectedly fast progress and the fact that we don't actually know what would be needed for AGI. But that isn'...
It is a little bit unfair to say that buying 10 bitcoins was everything you needed to do. I owned 10 bitcoins, and then sold them at a meager price. Nothing changed as a result of me merely understanding that buying bitcoins was a good idea.
What you really needed was to sit down and think up a strict selling schedule, and also commit to following it. E.g. spend $100 on bitcoin now, and later sell exactly 10% of your bitcoins every time that 10% becomes worth at least $10,000 (I didn't run the numbers to check if these exact values make sense, but you g...
A good general rule here is to think in terms of what percentage of your portfolio (or net worth) you want in a specific asset class, rather than making buying/selling a binary decision. Then rebalance every 3 months.
For example, you might decide you want 2.5%-5% in crypto. If the price quadrupled, you would sell about 75% of your stake at the end of the quarter. If it halved, you would buy more.
The major benefit is that this moves you from making many small decisions to one big decision, which is usually easier to get right.
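For concreteness, here's a minimal sketch of that rebalancing rule. The interface, the 5% target, and the numbers are illustrative assumptions on my part, not part of the original suggestion and not investment advice:

```typescript
// Minimal sketch of target-allocation rebalancing; names and numbers are illustrative.

interface Portfolio {
  cryptoValue: number; // current market value of the crypto position
  otherValue: number;  // current market value of everything else
}

// Dollar amount of crypto to sell (positive) or buy (negative) so that crypto
// returns to the target fraction of the total portfolio.
function rebalanceCrypto(p: Portfolio, targetFraction = 0.05): number {
  const total = p.cryptoValue + p.otherValue;
  return p.cryptoValue - total * targetFraction;
}

// Example: a 5-unit crypto stake quadruples to 20 while everything else stays at 95.
// The rule says to sell 20 - 0.05 * 115 = 14.25, i.e. roughly 70-75% of the stake,
// matching the "sell about 75%" figure above.
console.log(rebalanceCrypto({ cryptoValue: 20, otherValue: 95 }).toFixed(2)); // "14.25"
```

The one big decision is the target fraction; after that, the quarterly trades follow mechanically.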
At grave peril of strawmanning, a first-order approximation to SquirrelInHell’s meta-process (what I think of as the Self) is the only process in the brain with write access, the power of self-modification. All other brain processes are to treat the brain as a static algorithm and solve the world from there.
Let me clarify: I consider it the meta level when I think something like "what trajectory do I expect to have as a result of my whole brain continuing to function as it already tends to do, assuming I do nothing special with the output of the thoug...
Humans are not thermostats, and they can do better than a simple mathematical model. The idea of oscillation with decreasing amplitude you mention is well known from control theory, and it's looking at the phenomenon from a different (and, I dare say, less interesting) perspective.
To put it in another way, there is no additional deep understanding of reality that you could use to tell apart the fourth and the sixth oscillation of a converging mathematical model. If you know the model, you are already there.
[Note: I'm not sure if this was your concern - let me know if what I write below seems off the mark.]
The most accurate belief is rarely the best advice to give; there is a reason why these corrections tend to happen in a certain order. People holding the naive view need to hear the first correction, those who overcompensated need to hear the second correction. The technically most accurate view is the one that the fewest people need to hear.
I invoke this pattern to forestall a useless conversation about whose advice is objectively best.
In fact, I thin...
Here we go: the pattern of this conversation is "first correction, second correction, accurate belief" (see growth triplets).
Naive view: "learn from masters"
The OP is the first correction: "learn from people just above you"
Your comment is the second correction: "there are cases where teacher's advice is better quality"
The accurate belief takes all of this into account: "it's best to learn from multiple people in a way that balances wisdom against accessibility"
Yes! Not just improved, but leading by stellar example :)
People have recently discussed short words from various perspectives. While I was initially not super-impressed by this idea, this post made me shift towards "yeah, this is useful if done just right".
Casually reading this post on your blog yesterday was enough for the phrase to automatically latch on to the relevant mental motion (which it turns out I was already using a lot), solidify it, make it quicker and more effective, and make me want to use it more.
It has since then been popping up in my consciousness repeatedly, on at least 5 separate oc...
Your point can partially be translated to "make reasonably close to 1" - this makes the decisions less about what the moderators want, and allows longer chains of passing the "trust buck".
However, to some degree "a clique moved in that wrote posts that the moderators (and the people they like) dislike" is pretty much the definition of a spammer. If you say "are otherwise extremely good", what is the standard by which you wish to judge this?
Yes, and also it's even more general than that - it's sort of how progress works on every scale of everything. See e.g. tribalism/rationality/post-rationality; thesis/antithesis/synthesis; life/performance/improv; biology/computers/neural nets. The OP also hints at this.
This seems to rest on a model of people as shallow, scripted puppets.
"Do you want my advice, or my sympathy?" is really asking: "which word-strings are your password today?" or "which kind of standard social script do you want to play out today?" or "can you help me navigate your NPC conversation tree today?".
Personally, when someone tries to use this approach on me I am inclined to instantly write them off and never look back. I'm not saying everyone is like me but you might want to be wary of what kind of people you are optimizing yourself for.
I'd add that the desire to hear apologies is itself a disguised status-grabbing move, and it's prudent to stay wary of it.
While I 100% agree with your views here, and this is by far the most sane opinion on akrasia that I've seen in a long time, I'm not convinced that so many people on LW really "get it". Although to be sure, the distribution of behavior that signals this has significantly shifted since the move to LW2.0.
So overall I am very uncertain, but I still find it more plausible that the reason why the community as a whole stopped talking about akrasia is more like: people ran out of impressive-seeming or fresh-seeming things to say about it, while the minority that could have contributed actual new insights turned away for better reasons.
Big props for posting a book review - that's always great and in demand. However, some points on (what I think is) good form while doing these:
[Note: your post is intentionally poetic, so I'll let myself be intentionally poetic while answering this:]
Would you trust someone without a shadow?
The correct answer is, I think, "don't care". On Friday night you dance with a Gervais-sociopath. On Saturday you build a moon rocket together and use it to pick up groceries. Do you "trust" the rocket to be "good"? No, but you don't need to.
Not to put too fine a point on it: through the tone and content of the post, I can still see the old attachments and subconscious messed-up strategies shining through.
I am, of course, not free of blame here because the same could be said about my comment.
However, I reach out over both of these and touch you, Val.
Sure, and that's probably what almost all users do. But the situation is still perverse: the broken incentives of the system are fighting against your private incentive to not waste effort.
This kind of conflict is especially bad if people have different levels of the internal incentive, but also bad even if they don't, because on the margin it pushes everyone to act slightly against their preferences. (I don't think this particular case is really so bad, but the more general phenomenon is, and that's what you get if you design systems with poor incentives.)
Ultimately the primary constraint on almost any feature on LessWrong is UI complexity, and so there is a very strong prior against any specific feature passing the very high bar to make it into the final UI
On the low end, you can fit the idea entirely inside of the existing UI, as a new fancy way of calculating voting weights under the hood (and allowing multiple clicks on the voting buttons).
Then, in a rough order of less to more shocking to users:
I'm still not really sure what the root issues you're trying to resolve are. What are examples of cases where you're worried about the current system specifically failing, or areas where we just don't have anything even trying to handle a particular use case?
Sure, I can list some examples, but first note that while I agree that examples are useful, focusing on them too much is not a good way in general to think about designing systems.
A good design can preempt issues that you would never have predicted could happen; a bad design...
This is very well done :) Thanks for the Terence Tao link - it's amusing that he describes exactly the same meta-level observation which I expressed in this post.
Classes of interpersonal problems often translate into classes of intrapersonal problems, and the tools to solve them are broadly similar.
This is true, but it seems you don't have any ideas about why it's true. I offer the following theory: if you are designing brains to deal with social situations, it is very adaptive to design them in a way that internally mirrors some of the structure that arises in social environments. This makes the computations performed by the brain more directly applicable to social life, in several interesting ways (e.g....
We should expect that anyone should be able to get over 1000 karma if they hang around the site long enough.
I second this worry. Historically, karma on LW has been a very good indicator of hours of life burned on the site, and a somewhat worse indicator of other things.
Excellent content, would be even better in a shorter post.
As a 5-minute exercise, I'm coming up with some more examples:
Obvious note: this sequence of posts is by itself a good example of what circumambulation looks like in practice.
Well, if ageing were slowed proportionally, and the world were roughly unchanged from the present condition, I'd expect large utility gains (in total subjective QoL) from prioritizing longer lives, with diminishing returns to this only in the late 100s or possibly 1000s. But I think both assumptions are extremely unlikely.
I think at this point it's fair to say that you have started repeating yourself, and your recent posts strongly evoke the "man with a hammer" syndrome. Yes, your basic insight describes a real aspect of some part of reality. It's cool, we got it. But it's not the only aspect, and (I think) also not the most interesting one. After three or four posts on the same topic, it might be worth looking for new material to process, and other insights to find.
I get links like this:
https://www.lesserwrong.com/feed.xml?view=%5Bobject%20Object%5D&karmaThreshold=0
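For what it's worth, `%5Bobject%20Object%5D` decodes to `[object Object]`, which is the classic symptom of a plain JavaScript object being stringified into a query parameter. A minimal hypothetical sketch of how that can happen (the variable names are my own guesses, not taken from the LW2.0 codebase):

```typescript
// Hypothetical reproduction of the symptom; names are not from the actual codebase.
const view = { name: "frontpage", karmaThreshold: 0 };

// Interpolating the object calls its default toString(), which yields "[object Object]".
const feedUrl = `https://www.lesserwrong.com/feed.xml?view=${view}&karmaThreshold=0`;

console.log(feedUrl);
// https://www.lesserwrong.com/feed.xml?view=[object Object]&karmaThreshold=0
// Once percent-encoded, "[object Object]" becomes "%5Bobject%20Object%5D",
// matching the link above. The fix would be to pass view.name (or serialize explicitly).
```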
One concrete scenario is: front page contains posts A(1 karma), B(2 karma) and C(3 karma). You subscribe to a RSS feed (frontpage, karmaThreshold=0) and it contains C, B (assuming max feed size is 2). RSS reader shows new articles C, B. Then someone downvotes C to -1 karma. The RSS feed contains B, A. RSS reader shows A as a new article. Note that this is not the full extent of the problem but I do not fully understand the other issues around this, li...
This is what the whole discussion is about. You are setting boundaries that are convenient for you, and refuse to think further. But some people in that reference class you are now denigrating as a whole are different from others. Some actually know their stuff and are not charlatans. Throwing a tantrum about it doesn't change it.