Stephen James

Answer by Stephen James

If your head is full of concepts but you haven't applied them, there are a few things you can start practicing - easily, right now - to begin living rationally.

  1. Open the CFAR handbook, turn to page 135 (in the 2019 edition) and do the Resolve Cycle Technique, top to bottom. Review the background if you need a refresher.
  2. (Same book) Read about OODA loops and consciously do them for the rest of the day; if a problem comes up, apply Frame-by-Frame Debugging.
  3. Read "Thinking Better On Purpose" and take every call to action literally.

Let me know how it goes. All of these can be done on the order of minutes.

I think if one frames the problem w.r.t. individuals who were never allowed remote work (e.g. restaurant staff), individuals allowed remote work on a recurring basis (e.g. an office worker with regular, life-essential medical treatment), and individuals given remote work freely (e.g. board members, executives, people employed by Basecamp), it's easier to see a factor of 2 as well-calibrated, or even conservative. Doing the napkin arithmetic:

  • Restaurant workers: 0 x 2 = 0 (no change)
  • Regular office work: (once or twice a month) x 2 = once every week or two
  • Fully remote employee: infinity x 2 = infinity (or, equivalently, half the pre-pandemic office cadence) = just as remote, or even less time face to face.
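For concreteness, here is a minimal sketch of that napkin arithmetic in Python. The specific numbers (e.g. 1.5 remote days a month for the regular office worker) are my own illustrative assumptions, not data:

```python
import math

FACTOR = 2  # the hypothesized multiplier on remote-work frequency

# Pre-pandemic remote days per month; rough illustrative guesses only.
pre_pandemic = {
    "restaurant worker": 0,            # never allowed remote work
    "regular office worker": 1.5,      # once or twice a month
    "fully remote employee": math.inf, # already entirely remote
}

for role, days in pre_pandemic.items():
    print(f"{role}: {days} x {FACTOR} = {days * FACTOR} remote days/month")
# restaurant worker: 0 x 2 = 0 remote days/month
# regular office worker: 1.5 x 2 = 3.0 remote days/month
# fully remote employee: inf x 2 = inf remote days/month
```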

You do raise a good point about certain people being well-suited for remote vs in-person work. I'm not a huge fan of it myself, but mostly because I live in an expensive city and my at-home work situation strains my ability to spatially compartmentalize. But I've been productive and I like the kinds of breaks that I can have at home that were never afforded me in an office setting. Anecdotal aside: I do research work, mostly, so my manager made the argument that being co-located was irrelevant for our team's collaboration. He seems right so far...

Imprecision in speech clouds the mind and blurs one's perceptions of reality. While I was listening to episode 134 of the Bayesian Conspiracy podcast, one of the hosts shared the truism of economics undergrads:

Everything is signaling.

No. It is not that "everything is [the act of] signaling" but rather that "everything signals [some value]".

It's humorous how quick we are to pass judgment on things we think we fully understand, when we really only have a representation adequate for our purposes, and remain completely oblivious to the full potential left untapped.

Not to be highfalutin, but imagine you were a young adult acquiring your first vehicle. It's a truck - a fixer-upper, even. You repair the leather seating, replace the radio, undergo an apprenticeship as a mechanic, and get it "humming" again, even adding an ECU and improved suspension. You've done a lot, and you have quite the understanding. And yet your representation is merely adequate: you stretch and strain to describe the engine to a novice such that they might build one from scratch (or the nearest thing to it, these days), and the implementing engineer still doesn't get enough information from you.

Then, one evening, you pull into the driveway, turn off the engine, and exit the vehicle. You're walking toward the garage door when suddenly your truck turns on and throws itself into full reverse, backing into the street! You're surprised by acceleration you've never experienced in all these years, and then WHAM! You're pulled forward and knocked back by some gravimetric anomaly, sending you into a scorpion flip. As you get up and dust yourself off, before your very eyes your truck transforms into Optimus Prime.

That was quite the potential you had been sitting on all these years. This was inspired by someone I read arguing the impracticality of our community's "brand" of rationality, namely the scarcity of ideation methods in a sea of methods for culling bad ideas. And then I read Babble and Prune. This self-same individual wrote several blog posts about this shortcoming and failed to propose a solution, even one as simple as Babble and Prune. That is a willingness to write off the potential based on one's limited experience.

I think this is a limiting factor in the world at this time. As emergent skills, technologies, and demands confuse the legacy left by our forebears, the jaded grasp at conspiracy and self-disenfranchisement, the elite remain oblivious to the troubles of the day, and enough people are awake to enough problems that there's infighting over which problems should be solved first.

We have an issue of coherence in coordination on our hands, and something has to be done about it quickly.

Just started two books as a research endeavor into information communication:

  • Weapons of Math Destruction, Cathy O'Neil
  • Skin in the Game, Nassim Nicholas Taleb (seems to be a popularization of his technical paper by the same name)
Answer by Stephen James

I have wondered this exact same thing myself, having discovered LessWrong in 2018 and Nate's story very soon thereafter. We have a similar-enough background, though I'm missing the basic analysis course in university. Your analysis lines up with almost everything I have gleaned over the last year, when things have seemed much quieter.

My effective conclusion - in the absence of more information - is that MIRI is "full" the way one is full after a meal. It would be nice to have more people on the mathematical side of things, but it's not going to help for a little while; you noticed this yourself in the lack of workshops these days.

My resolution has been to get a degree in mathematics, so as to preclude missing future research opportunities. We haven't automated mathematicians just yet.

Why would this be an ethical thing to do? It sounds like you're trying to manipulate others into people you'd like them to be and not what they themselves like to be.

Perhaps I didn't give enough detail. I definitely don't want to drive others exclusively into what I would like them to be. Nor do I want people to believe as I do in most regards. There's a greater principle that I think would make the world a better place:

When I engage with someone who presents themselves as opposed to an entire Other group, they tend to (in one way or another) divulge the assumptions behind opposing/hating/rebuking/etc. that group. Very rarely do they have a complex enemy. The ethical ground I stand on is one of seeking to build bridges of understanding, readily crossed, to those whom one claims to oppose. My hope is that, with time, the "I'm anti-XYZ" or "I'm pro-ABC" won't be necessary, because we'll be willing to consider people as fellow humans. We won't seek to reduce them to a low-resolution representation of one sliver of their identity. We will, hopefully, face our opposition with eyes wide open, Bayesian "self-updaters" at the ready.

You're basically trying to hack into someone else's mind through very limited input channels (speech/text).

Again, I may have put the emphasis in the wrong place, or perhaps you are perceptive about the ways ideas can turn dangerous. Either way, I thank you for helping me relate these ideas.

I want to teach what I uncover because whatever sweet truths I glean from the universe will have limited impact if they stay strictly inside my head. Part of this goal is acquiring new teaching abilities, such as the ability to custom-fit my conveyance of material to the audience and dynamically ("real-time") adjust delivery based on reception.

In my experience it's never a lack of knowledge that's hindering people from overcoming akrasia (also the reason I'm skeptical towards the efficacy of self-help books).

This is exactly the point of that idea: just having the information doesn't seem to be enough. But for me, the knowledge seems more than enough for many applications. I want to

  1. extract whatever that is,
  2. figure out how to apply it in the domains where - for myself - "cold-turkey" doesn't seem to do it,
  3. distill it, and
  4. share what's distilled.

Enabling the sincere dropping of bad habits strikes me as "for the good".

For example, it would be great if I could switch off the processes that allow me to easily generate resentment for my spouse. It would be even better if I could flip that switch the way I dropped hot showers, or dropped the belief that the runtime complexity of the "power" function was constant-time (rather than the correct logarithmic time).
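As an aside, the logarithmic bound comes from exponentiation by squaring. A minimal sketch of my own for integer exponents (illustrative, not from any particular source):

```python
def power(base: int, exp: int) -> int:
    """Exponentiation by squaring: O(log exp) multiplications, not O(1)."""
    result = 1
    while exp > 0:
        if exp & 1:           # if the lowest bit of the exponent is set,
            result *= base    # fold the current power of the base into the result
        base *= base          # square the base for the next bit
        exp >>= 1             # shift to the next bit of the exponent
    return result

assert power(3, 13) == 3 ** 13  # 1594323, via ~4 squarings rather than 13 multiplications
```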

There are possible ways of using this ability for ill. There would need to be controlled experiments, if the tool is even extricable. There end up being a lot of conjunctions here, so it's of lesser concern for the near term.

Answer by Stephen James

I tend to keep three in mind and in rotation, as they move from "under inspection" to "done for now" and all the gradations between. In the past, this has included the likes of:

  • the validity of reverse chronological time travel ("done for now" back in 2010)
  • predictability of interpersonal interactions ("done for now" as of Spring 2017)
  • how to reject advice, while not alienating the caring individuals that provide advice (on hold)

Currently I'm working on:

  • How and why are people presenting themselves as so divided in current conversations?
    • Yes, Politics is the Mind-Killer. Still, there are people I think I want in my life who are falling prey to this beast, and I want to save them.
    • Maybe there's a Sequence to talk me out of it?
  • The Mathematical Legitimacy of Machine Learning (convex optimization of randomly initialized matrices whose products fit curves in n-dimensional space) - a rough toy sketch is at the end of this comment
    • Essentially, I think we're under-utilizing several higher mathematical objects - Tensors, to name one.
    • While not a mathematician myself, I have spoken with a few mathematicians who've validated my opinions (after examining the literature), and I am currently seeking training to become one.
  • How to utilize my "cut X, cold turkey" ability to teach and maintain anti-akrasia techniques (or, more generally, techniques against other non-self-bettering habits)

The last of those has been in the works for the longest, and current evidence (anecdotal and journal studies) suggests to me that those of us researching "apathy for self-betterment" are looking too high up the abstraction ladder. So it's time to dig a little deeper.
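Addendum on the machine-learning bullet above: here is a deliberately tiny, purely illustrative sketch of what I mean by randomly initialized matrices whose product fits a curve - plain gradient descent on two random matrices whose product maps fixed polynomial features onto y = sin(x). The feature choice, learning rate, and iteration count are assumptions of this toy, not anyone's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target curve and fixed polynomial features.
x = np.linspace(-np.pi, np.pi, 200)
y = np.sin(x)[:, None]                                   # (200, 1)
X = np.stack([np.ones_like(x), x, x**2, x**3], axis=1)   # (200, 4)
X /= np.abs(X).max(axis=0)                               # keep feature scales comparable

# Two randomly initialized matrices; their product is the model's linear map.
W1 = rng.normal(scale=0.5, size=(4, 3))
W2 = rng.normal(scale=0.5, size=(3, 1))

lr = 0.03  # assumed learning rate for this toy
for _ in range(30_000):
    err = X @ W1 @ W2 - y                       # (200, 1) residuals
    grad_W2 = (X @ W1).T @ err * 2 / len(x)     # dMSE/dW2
    grad_W1 = X.T @ (err @ W2.T) * 2 / len(x)   # dMSE/dW1
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print("final MSE:", float(np.mean((X @ W1 @ W2 - y) ** 2)))
```

Note that W1 @ W2 collapses to a single linear map over the features, which is exactly why this is only a sketch of the framing, not a claim about how deep models actually behave.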