All of BeanSprugget's Comments + Replies

This makes me think "tulpamancy-lite". Not that that's a bad thing - perhaps it's like a safer tulpamancy. Some thoughts:

(It's just such little mannerisms that allow a shoulder advisor to be "really real"—to bring it to life, give it a personality separate from, and not dependent on, your brain's main central personality. Again, I don't have a sound explanation of the mechanics, but it works.) Would it be useful to have a shoulder-advisor not constrained by having to relate to a real example? Or perhaps, without that link it will just tend to become mor

... (read more)
6 · Duncan Sabien (Deactivated)
I have created shoulder advisors from scratch, most notably a pair of characters from a novel that I'm writing who like to show up in opposition to each other.  Not sure about any of the other points.

How many solutions do we overlook because they seem childish or "cringe"? Maybe that's just something I'm primed to notice, since I catch myself avoiding "cringe" things too much. I think being averse to cringe isn't entirely a bad thing, since it helps rule out solutions that probably wouldn't work.

I think what's happening is basically that the pink shows where the visible mass is, while the purple shows where the mass should be according to gravitational lensing. Dark matter should pass straight through, and that is what we see in the lensing map, whereas the pink lags behind because it can collide (it's mostly hot plasma).

At least, I think that's what's happening... I myself am really confused and am pretty unconfident in that explanation.

I'm also confused as to what modified gravity predicts, and how bullet clusters disprove it. I gu... (read more)

1 · BeanSprugget
Here's a link that seems to confirm what I wrote: https://chandra.harvard.edu/graphics/resources/handouts/lithos/bullet_lithos.pdf

This reminds me of lukeprog's post on motivation (I can't seem to find it though...; this should suffice). Your model and TMT describe roughly the same thing: you have more willpower for things that are high in value and expectancy. And the impulsiveness factor is similar to how there are different "kinds" of evidence: e.g. you feel more "impulse" to play video games or not move, even though, logically, they aren't high in value or expectancy.
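For reference, the TMT equation in that post is Motivation = (Expectancy × Value) / (Impulsiveness × Delay). A toy sketch (the numbers are made up, just to show the shape of the trade-off):

```python
def motivation(expectancy, value, impulsiveness, delay):
    """Temporal Motivation Theory: M = (E * V) / (I * D)."""
    return (expectancy * value) / (impulsiveness * delay)

# Made-up numbers: a valuable but distant goal vs. a small, immediate one.
studying = motivation(expectancy=0.9, value=10, impulsiveness=1.0, delay=30)
video_games = motivation(expectancy=0.95, value=2, impulsiveness=1.0, delay=0.1)

print(studying, video_games)  # the immediate option dominates despite lower value
```

Even with a much higher value on the delayed goal, the tiny delay term makes the immediate option win, which matches the "impulsive toward video games" observation.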

2 · Yoav Ravid
How to Beat Procrastination and My Algorithm for Beating Procrastination

What does he do specifically? It's very unclear just from reading the Amazon description. Or is it an entire program? I'm skeptical: I have never heard of this anywhere else, so it seems like one of those $100-bill-on-the-subway-floor type things.

5 · Sabiola
I'm actually not quite sure. It's not a whole program; you just read the book, and (if you're anything like me) you're sober, just like that. I'm not a 'recovering alcoholic' the way some people describe themselves even after being sober longer than I have; I'm done with alcohol, for good, and I knew it right away. He basically talks about the negatives of drinking and the positives of sobriety, with stories and examples, in a way that just 'clicked' for me.

it needed me to also have an active project I was working on that I actually enjoyed. I think otherwise I would have found other ways to distract myself and eventually undermined it to the point that I gave up.

Same with me. Still, it's better than nothing: the usual distractions are more habit than actual fun, and I've found that I read more interesting things instead of mindlessly browsing social media.

I like to use the add-on LeechBlock NG (I don't know whether it works on mobile). You can use it to block sites outright, but you can also make it delay access to a site before you actually enter, and set time limits. The delay is a feature I haven't seen in other apps/add-ons, and for me it's the deciding factor, since it addresses the "instant gratification" that makes social media (etc.) addictive.

I really agree with this. I have been thinking that we should "default to privacy": if we think we have to share a thought, social anxieties/pressures will change the thought itself. (It's similar to that experiment showing people reach better solutions when they discuss the problem before anyone proposes an answer (I just remember this from reading HPMoR).) Only after we reach an answer, (socially) unbiased, should we decide whether to share it.

I don't think privacy means dishonesty. I personally really dislike lying, and I think it's because acting wi... (read more)

I feel like I feel a similar thing, but with regards to effective altruism and learning intellectual things. I sometimes ask myself, "are my beliefs around EA and utilitarianism just 'signaling'", especially since I'm only in high school and don't really have any immediate plans. But I'm also not a very social person, and when I do talk to others I don't usually talk about EA. I guess I'm not a very conscientious person: I like the idea of "maximizing utility" and learning cool things, but my day-to-day fun things (outside of "addictions": social media, ga... (read more)

Why don't you just use MathJax? Maybe this wasn't the case when you wrote this comment, but there should be a button that just applies the formatting, and AnkiDroid can render it.

1 · [anonymous]
I now use MathJax!

Really interesting post. To me, approaching information with mathematics seems like a black box - and in this post, it feels like magic.

I'm a little confused by the concept of cost: I understand that it takes more bits to specify a more complex model, and that the space of models grows exponentially with the number of bits. But doesn't the more complex model still strictly fit the data better? Is it just optimizing for a goal other than accuracy? I feel like I'm missing the entire point of the ending.

1 · Mart_Korz
I am not sure whether my take on this is correct, so I'd be thankful if someone corrects me if I am wrong: if the goal were only 'predicting' a bit-sequence we already know, one could just assign probability 1 to the known sequence. In the OP, instead, we regard the bit-sequence as the output of some sequence-generator, of which only this part is known. Since we only have limited data, singling out a highly complex model from model-space has to be weighed against the model's fit to the bit-sequence.
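A toy illustration of this trade-off (my own sketch, not from the OP, in the spirit of minimum description length): score each model by the bits needed to describe the model plus the bits needed to encode the data given the model. A more complex model can fit the data strictly better and still lose on the total:

```python
import math

def data_bits(seq, p_one):
    """Bits to encode seq under an i.i.d. Bernoulli(p_one) model."""
    eps = 1e-12  # guard against log(0) when p_one is exactly 0 or 1
    return sum(-math.log2(p_one + eps) if b else -math.log2(1 - p_one + eps)
               for b in seq)

seq = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1]  # 12 ones, 4 zeros

# Model 1: fair coin. Essentially no parameter bits, but a mediocre fit.
fair_total = 0 + data_bits(seq, 0.5)

# Model 2: biased coin, with p fitted to the data. Charge 8 bits (one byte
# of precision for p) as a crude stand-in for the extra model complexity.
p_hat = sum(seq) / len(seq)
biased_total = 8 + data_bits(seq, p_hat)

# Model 3: memorize the sequence. Perfect fit (0 data bits), but the model
# itself must contain all 16 bits, plus some overhead for the lookup rule.
memorize_total = len(seq) + 8 + 0

print(fair_total, biased_total, memorize_total)
```

Here the biased coin fits the data strictly better than the fair coin (fewer data bits), yet its total cost is higher, and full memorization fits perfectly yet costs the most. The complexity charge is exactly what stops "just pick the model that fits best".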

Even with quantum uncertainty, you could predict the result of a coin flip or die roll with high accuracy if you had precise enough measurements of the initial conditions.

I'm curious about how how quantum uncertainty works exactly. You can make a prediction with models and measurements, but when you observe the final result, only one thing happens. Then, even if an agent is cut off from information (i.e. observation is physically impossible), it's still a matter of predicting/mapping out reality.

I don't know much about the specifics of quantum uncertainty, though.

Since national pride is decreasing, pride in scientific accomplishments seems to be mainly relegated to, well, the scientists themselves - geeks and nerds.

That reminds me of this Scott Aaronson post (https://www.scottaaronson.com/blog/?p=87). Unless the science "culture" changes, or everyone else's does, it seems like there will be a limit on the number of people willing to celebrate technical achievements.

It doesn't optimize for "you", it optimizes for the gene that increases the chance of cheating. The "future" has very little "you".

This seems mainly to be about the importance of compromise: that something is better than nothing. Refusing only makes sense when there are "multiple games", as in the iterated Prisoner's Dilemma; if you can't find an institution that is similar enough, then don't do it.

But I think there is some risk to joining a cause that "seems" worth it. (I can't find it, but) I remember an article on LessWrong about the dangers of signing petitions, which can influence your beliefs significantly despite the smallness of the action.

Reminded me of this blog post by Nicky Case, where they said "Trust, but verify". Emotions are often a good heuristic for truth: if we didn't feel pain, that would be bad.

I don't know anything about Go. But the fact that following it helps you reminds me of In praise of fake frameworks: while "good shape" isn't fully accurate at calculating the best move, it's more "computationally useful" in most situations (similar to calculating physics with Newton's laws vs. general relativity and quantum mechanics). (The author also mentions using "ki", a concept that makes no sense from a physics perspective, to get better at aikido.)

I think it's just important to remember that the "model" is only a map for the "reality" (the rules of the game).

I don't really doubt that increasing intelligence while preserving values is nontrivial, but I wonder just how nontrivial it is: are the regions of the brain for intelligence and values separate? Actually, writing that out, I realize that (at least for me) values are a "subset" of intelligence: the "facts" we believe about science/math/logic/religion are generated in basically the same way as our moral values; the difference seems obvious to us humans, but it really is, well, nontrivial. The paperclip-maximizing AI is a good example: even if it weren't about "moral values"--even if you wanted to maximize something like paper clips--you'd still run into trouble.

You could make a habit of checking LW and EA at a certain time each day/week/etc. I don't know how easy that would be to maintain, or whether it's really practical depending on your situation.