Some thoughts on Buddhist epistemology.
This may read as threatening, upsetting, and heretical from a certain point of view I commonly see expressed on LW, for reasons that will become clear if you keep reading. I don't know that this means you shouldn't read it if that sounds like the kind of thing you don't want to read, but I put the warning out there so you can make the choice without having to engage with the specifics. I don't think you will be missing out on anything if this warning gives you a tinge of "maybe I won't like reading this" and you skip it.
My mind produces a type error when people try to perform deep and precise epistemic analysis of the dharma. That is, when they try to evaluate the truth of claims made by the dharma, this seems generally fine, but when they go deep enough that they end up trying to evaluate whether the dharma itself is based on something true, I get the type error.
I'm not sure what people trying to do this turn up. My expectation is that their results look like noise if you aggregate over all such attempts, the reason being that the dharma is not founded on episteme.
As a quick reminder, there are at leas...
So when we talk about the dharma or justify our actions on it, it's worth noting that it is not really trying to provide consistent episteme. [...] Thus it's a strange inversion to ask the dharma for episteme-based proofs. It can't give them, nor does it try, because its episteme is not consistent and cannot be because it chooses completeness instead.
In my view, this seems like a clear failing. The fact that the dharma comes from a tradition where this has usually been the case is not an excuse for not trying to fix it.
Yes, the method requires temporarily suspending episteme-based reasoning and engaging with less conceptual forms of seeing. But it can still be justified and explained using episteme-based models; if it could not, there would be little reason to expect that it would be worth engaging with.
This is not just a question of "the dharma has to be able to justify itself"; it's also that leaving out the episteme component leaves the system impoverished, as noted e.g. here:
> Recurrent training to attend to the sensate experience moment-by-moment can undermine the capacity to make meaning of experience. (The psychoanalyst Wilfred Bion d...
>unmediated-by-ontology knowledge of reality.
I think this is a confused concept, related to wrong-way-reduction.
I'm sad that postrationality/metarationality has, as a movement, started to collapse in on itself, abandoning the thing it started out doing.
What I have in mind is that initially, say 5+ years ago, postrationality was something of a banner for folks who were already in the rationalist or rationalist-adjacent community, saw some ways in which rationalists were failing at their own project, and tried to work on figuring out how to do those things.
Now, much like postmodernism before it, I see postrationality collapsing from a banner for people who were already rationalists and wanted to go beyond the limitations of rationality as it then stood, into a kind of prerationality that rejects rather than builds on the rationalist project.
This kind of dynamic is pretty common (cf. premodern, modern, and postmodern) but it still sucks. On the other hand, I guess the good side of it is that I see lots of signs that the rationality community is better integrating some of the early postrationalist insights such that it feels like there's less to push back against in the median rationalist viewpoint.
Yeah, it seems like postrationalists should somehow establish their rationalist pedigree before claiming the post- title. IIRC, Chapman endorsed this somewhere on twitter? But I can't find it now. Maybe it was a different postrat. Also it was years ago.
This is a short post to register my kudos to LWers for being consistently pretty good at helping each other find answers to questions, or at least make progress towards answers. I feel like I've used LW numerous times to make progress on work by saying "here's what I got, here's where I'm confused, what do you think?", whether that be through formal question posts or regular posts that are open ended. Some personal examples that come to mind: recent, older, another.
Praise to the LW community!
I'm fairly pessimistic about our ability to build aligned AI. My take is roughly that it's theoretically impossible and at best we might build AI that is aligned well enough that we don't lose. I've not written one thing to really summarize this or prove it, though.
My take comes from two facts: (1) Goodharting is unavoidable when optimizing any proxy measure of what we value, and (2) there's no free lunch in value learning, i.e. human values can't be inferred from behavior without making normative assumptions.
Stuart Armstrong has made a case for (2) with his no free lunch theorem. I've not seen anyone formally make the case for (1), though.
Is this something worth trying to prove? That Goodharting is unavoidable and at most we can try to contain its effects?
I'm many years out from doing math full time, so I'm not sure I could make a rigorous proof of it, but people sometimes disagree about this (arguing that Goodharting can be overcome), and I think most of those discussions don't get very precise about what that would mean.
This paper gives a mathematical model of when Goodharting will occur. To summarize: if
(1) a human has some collection of things $v_1, \ldots, v_n$ which she values,
(2) a robot has access to a proxy utility function which takes into account some strict subset of those things, and
(3) the robot can freely vary how much of each $v_i$ there is in the world, subject only to resource constraints that make the $v_i$'s trade off against each other,
then when the robot optimizes for its proxy utility, it will minimize all $v_i$'s which its proxy utility function doesn't take into account. If you impose a further condition which ensures that you can't get too much utility by only maximizing some strict subset of the $v_i$'s (e.g. assuming diminishing marginal returns), then the optimum found by the robot will be suboptimal for the human's true utility function.
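To make the dynamic concrete, here's a toy simulation (my own sketch with invented numbers, not the paper's formalism): the human values two attributes with diminishing returns, the robot's proxy sees only the first, and a fixed budget makes them trade off. Optimizing the proxy drives the omitted attribute to its minimum.

```python
# Toy Goodhart model: the proxy omits one valued attribute (illustrative only).
import numpy as np

B = 10.0  # resource budget: v1 + v2 <= B, so the attributes trade off

def true_utility(v1, v2):
    # diminishing marginal returns in each valued attribute
    return np.sqrt(v1) + np.sqrt(v2)

def proxy_utility(v1, v2):
    # takes into account only a strict subset of what the human values (v2 is ignored)
    return np.sqrt(v1)

# search allocations along the budget frontier
v1 = np.linspace(0.0, B, 1001)
v2 = B - v1

proxy_opt = v1[np.argmax(proxy_utility(v1, v2))]
true_opt = v1[np.argmax(true_utility(v1, v2))]

print(f"proxy optimum: v1={proxy_opt:.1f}, v2={B - proxy_opt:.1f}")  # v1=10.0, v2=0.0
print(f"true optimum:  v1={true_opt:.1f}, v2={B - true_opt:.1f}")    # v1=5.0,  v2=5.0
```

The diminishing-returns assumption is what makes the proxy's corner solution strictly worse under the true utility: the balanced allocation scores 2√5 ≈ 4.47 while the proxy optimum scores only √10 ≈ 3.16.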
That said, I wasn't super-impressed by this paper -- the above is pretty obvious and the mathematical model doesn't elucidate anything, IMO.
Moreover, I think this model doesn't interact much with the skeptical take about whether Goodhart's Law implies doom in practice. Namely, here are some things I believe about the worl...
People often talk of unconditional love, but they implicitly mean unconditional love for or towards someone or something, like a child, parent, or spouse. But this kind of love is by definition conditional because it is love conditioned on the target being identified as a particular thing within the lover's ontology.
True unconditional love is without condition, and it cannot be directed because to direct is to condition and choose. Unconditional love is love of all, of everything and all of reality even when not understood as a thing.
Such love is rare, so it seems worth pursuing the arduous cultivation of it.
I think it's safe to say that many LW readers don't feel like spirituality is a big part of their life, yet many (probably most) people do experience a thing that goes by many names---the inner light, Buddha-nature, shunyata, God---and falls under the heading of "spirituality". If you're not sure what I'm talking about, I'm pointing to a common human experience you aren't having.
Only, I don't think you're not having it, you just don't realize you are having those experiences.
One way some people get in touch with this thing, which I like to think of as "the source" and "naturalness" and might describe as the silently illuminated wellspring, is with drugs, especially psychedelics but really any drug that gets you to either reduce activity of the default-mode network or at least notice its operation and stop identifying with it (dissociatives may function like this). In this light, I think of drug users as very spiritual people, only they are unfortunately doing it in a way that is often destructive to their bodies and causes heedlessness (causes them to fail to perceive reality accurately and so may act ...
> Only, I don't think you're not having it, you just don't realize you are having those experiences.
The mentality that lies behind a statement like that seems to me to be pretty dangerous. This is isomorphic to "I know better than other people what's going on in those other people's heads; I am smarter/wiser/more observant/more honest."
Sometimes that's *true.* Let's not forget that. Sometimes you *are* the most perceptive one in the room.
But I think it's a good and common standard to be skeptical of (and even hostile toward) such claims (because such claims routinely lead to unjustified and not-backed-by-reality dismissal and belittlement and marginalization of the "blind" by the "seer"), unless they come along with concrete justification:
[Mod note] I thought for a while about how shortform interacts with moderation here. When Ray initially wrote the shortform announcement post, he described the features, goals, and advice for using it, but didn’t mention moderation. Let me follow-up by saying: You’re welcome and encouraged to enforce whatever moderation guidelines you choose to set on shortform, using tools like comment removal, user bans, and such. As a reminder, see the FAQ section on moderation for instructions on how to use the mod tools. Do whatever you want to help you think your thoughts here in shortform and feel comfortable doing so.
Some background thoughts on this: In other places on the internet, being blocked locks you out of the communal conversation, but there are two factors that make it pretty different here. Firstly, banning someone from a post on LW means they can’t reply to the content they’re banned from, but it doesn’t hide your content from them or their content from you. And secondly, everyone here on LessWrong has a common frontpage where the main conversation happens - the shortform is a low-key place and a relatively unimportant part of the conversation. (You can be banned from posts on fr...
Hey Gordon, let me see if I understand your model of this thread. I'll write out mine; can you tell me if it matches your understanding?
*nods* Then I suppose I feel confused by your final response.
If I imagine writing a shortform post and someone saying it was:
I would often be like “No, you’re wrong” or maybe “I actually stand by it and intended to be rude” or “Thanks, that’s fair, I’ll edit”. I can also imagine times where the commenter is needlessly aggressive and uncooperative where I’d just strong downvote and ignore.
But I’m confused by saying “you’re not allowed to tell me off for norm-violations on my shortform”. To apply that principle more concretely, it could say “you’re not allowed to tell me off for lying on my shortform”.
My actual model of you feels a bit confused by Duncan’s claim or something, and wants to fight back against being attacked for something you don’t see as problematic. Like, it feels presumptuous of Duncan to walk into your post and hold you to what feels mostly like high standards of explanation, and you want to (rightly) say that he’s not allowed to do that.
Does that all seem right?
> [He] does and will regularly decide that he knows better than other people what's going on in those other people's heads. [...] Personally, I find it unjustifiable and morally abhorrent.
How can it be morally abhorrent? It's an epistemic issue. Factual errors often lead to bad consequences, but that doesn't make those errors moral errors. A moral error is an error about a moral fact, an assignment of value to situations, as opposed to a prediction of what's going on. And what someone thinks is a factual question, not a question of assigning value to an event.
So it's a moral principle under the belief vs. declaration distinction (as in this comment). In that case I mostly object to not making that distinction (a norm to avoid beliefs of that form is on an entirely different level than a norm to avoid their declarations).
Personally I don't think the norm about declarations is on net a good thing, especially on LW, as it inhibits talking about models of thought. The examples you mentioned are important but should be covered by a more specialized norm that doesn't cause as much collateral damage.
> leaving the conversation at "he, I, and LessWrong as a community are all on the same page about the fact that Gordon endorses making this mental move."
Nesov scooped me on the obvious objection, but as long as we're creating common knowledge, can I get in on this? I would like you and Less Wrong as a community to be on the same page about the fact that I, Zack M. Davis, endorse making the mental move of deciding that I know better than other people what's going on in those other people's heads when and only when it is in fact the case that I know better than those other people what's going on in their heads (in accordance with the Litany of Tarski).
> the existence of bisexuals
As it happens, bisexual arousal patterns in men are surprisingly hard to reproduce in the lab![1] This is a (small, highly inconclusive) example of the kind of observation that one might use to decide whether or not we live in a world in which the cognitive algorithm of "Don't decide that you know other people's minds better than they do" performs better or worse than other inference procedures.
[1] J. Michael Bailey, "What Is Sexual Orientation and Do Women Have One?", section titled "Sexual Arousal Patter
> as clearly noted in my original objection
Acknowledged. (It felt important to react to the great-grandparent as a show of moral resistance to appeal-to-inner-privacy conversation halters, and it was only after posting the comment that I remembered that you had acknowledged the point earlier in the thread, which, in retrospect, I should have at least acknowledged even if the great-grandparent still seemed worth criticizing.)
> there is absolutely a time and a place for this
Exactly—and lesswrong.com is the place for people to report on their models of reality, which includes their models of other people's minds as a special case.
Other places in Society are right to worry about erasure, marginalization, and socially manipulative dismissiveness! But in my rationalist culture, while standing in the Citadel of Truth, we're not allowed to care whether a map is marginalizing or dismissive; we're only allowed to care about whether the map reflects the territory. (And if there are other cultures competing for control of the "rationalist" brand name, then my culture is at war with them.)
> My whole objection is that Gordon wasn't bothering to
Great! Thank you for criticizing people who don'
> ...criticizing people who don't justify their beliefs with adequate evidence and arguments
I think justification is in the nature of arguments, but not necessary for beliefs or declarations of beliefs. A belief offered without justification is a hypothesis called to attention. It's concise, and if handled carefully, it can be sufficient for communication. As evidence, it's a claim about your own state of mind, which holds a lot of inscrutable territory that nonetheless can channel understanding that doesn't yet lend itself to arguments. Seeking arguments is certainly a good thing, to refactor and convey beliefs, but that's only a small part of how human intelligence builds its map.
There's a dynamic here that I think is somewhat important: socially recognized gnosis.
That is, contemporary American society views doctors as knowing things that laypeople don't know, and views physicists as knowing things that laypeople don't know, and so on. Suppose a doctor examines a person and says "ah, they have condition X," and Amy responds with "why do you say that?", and the doctor responds with "sorry, I don't think I can generate a short enough explanation that is understandable to you." It seems like the doctor's response to Amy is 'socially justified', in that the doctor won't really lose points for referring to a pre-existing distinction between those-in-the-know and laypeople (except maybe for doing it rudely or gracelessly). There's an important sense in which society understands that it in fact takes many years of focused study to become a physicist, and physicists should not be constrained by 'immediate public justification' or something similar.
But then there's a social question, of how to grant that status. One might imagine that we want astronomers to be able to do their ...
What's the difference?
Suppose I'm talking with a group of loose acquaintances, and one of them says (in full seriousness), "I'm not homophobic. It's not that I'm afraid of gays, I just think that they shouldn't exist."
It seems to me that it is appropriate for me to say, "Hey man, that's not ok to say." It might be that a number of other people in the conversation would back me up (or it might be that they defend the first guy), but there wasn't common knowledge of that fact beforehand.
In some sense, this is a bid to establish a new norm, by pushing the private opinions of a number of people into common knowledge. It also seems to me to be a virtuous thing to do in many situations.
(Noting that my response to the guy is not: "Hey, you can't do that, because I get to decide what people do around here." It's "You can't do that, because it's bad" and depending on the group to respond to that claim in one way or another.)
Outside observer takeaway: There's a bunch of sniping and fighting here, but if I ignore all the fighting and look at only the ideas, what we have is that Gordon presented an idea, Duncan presented counterarguments, and Gordon declined to address the counterarguments. Posting on shortform doesn't come with an obligation to follow up and defend things; it's meant to be a place where tentative and early stage ideas can be thrown around, so that part is fine. But I did come away believing the originally presented idea is probably wrong.
(Some of the meta-level fighting seemed not-fine, but that's for another comment.)
I have plans to write this up more fully as a longer post explaining the broader ideas with visuals, but I thought I would highlight one that is pretty interesting and try out the new shortform feature at the same time! As such, this is not optimized for readability, has no links, and I don't try to backup my claims. You've been warned!
Suppose you frequently found yourself identifying with and feeling like you were a homunculus controlling your body and mind: there's a real you buried inside, and it's in the driver's seat. Sometimes your mind and body do what "you" want, sometimes they don't, and this is frustrating. Plenty of folks reify this in slightly different ways: rider and elephant, monkey and machine, prisoner in cave (or audience member in theater), and, to a certain extent, variations on the S1/S2 model. In fact, I would propose this is a kind of dual process theory of mind that has you identifying with one of the processes.
A few claims.
First, this is a kind of constant, low-level dissociation. It's not the kind of high-intensity dissociation we often think of when we use that term, but it's still a separation of sense of ...
More surprised than perhaps I should be that people take up tags right away after creating them. I created the IFS tag just a few days ago after noticing it didn't exist when I wanted to link it, and I added the first ~5 posts that came up when I searched for "internal family systems". It now has quite a few more posts tagged with it that I didn't add. Super cool to see the system working in real time!
One of the fun things about the current Good Heart Token week is that it's giving me cover to try less hard to write posts. I'm writing a bunch, and I have plausible deniability if any of them end up not being that good—I was Goodharting. Don't hate the player, hate the game.
I'm not sure how many of these posts will stand the test of time, but I think there's something valuable about throwing a bunch of stuff at the wall and seeing what sticks. I'm not normally going to invest in that sort of strategy; I just don't have time for it. But for one week it's f...
tl;dr: read multiple things concurrently so you read them "slowly" over multiple days, weeks, months
When I was a kid, it took a long time to read a book. How could it not: I didn't know all the words, my attention span was shorter, I was more restless, I got lost and had to reread more often, I got bored more easily, and I simply read fewer words per minute. One of the effects of this is that when I read a book I got to live with it for weeks or months as I worked through it.
I think reading like that has advantages. By living with a book for...
I get worried about things like this article that showed up on the Partnership on AI blog. Reading it, there's nothing I can really object to in the body of the post: it's mostly about narrow AI alignment and promotes a positive message of targeting things that benefit society rather than narrowly maximizing a simple metric. However, it's titled "Aligning AI to Human Values means Picking the Right Metrics", and that implies to me a normative claim that reads in my head something like "to build aligned AI it is necessary and sufficient to p...
Sometimes people at work say to me "wow, you write so clearly; how do you do it?" and I think "given the nonsense I'm normally trying to explain on LW, it's hardly a surprise I've developed the skill well enough that when it's something as 'simple' as explaining how to respond to a page or planning a technical project that I can write clearly; you should come see what it looks like when I'm struggling at the edge of what I understand!".
Small, boring, personal update:
I've decided to update my name here and various places online.
I started going by "G Gordon Worley III" when I wrote my first academic paper and discovered there would be significant name collision if I just went by "Gordon Worley". Since "G Gordon Worley III" is, in fact, one version of my full legal name that is, as best as I can tell, globally unique, it seemed a reasonable choice.
A couple years ago I took Zen precepts and received a Dharma name: "Sincere Way." In the Sino-Japanese used for Dharma names, "誠道", or "Seidoh" ...
It seems like humans need an outgroup.
My evidence is not super strong, but I notice a few things:
I recently watched all 6 seasons of HBO's "Silicon Valley", and the final episode (or really the final 4 episodes leading up to the final one) did a really great job of hitting on some important ideas we talk about in AI safety.
Now, the show in earlier seasons has played with the idea of AI with things like an obvious parody of Ben Goertzel and Sophia, discussion of Roko's Basilisk, and of course AI that Goodharts. In fact, Goodharting is a pivotal plot point in how the show ends, along with a Petrov-esque ending where hard choices have to be made under u...
NB: There's something I feel sad about when I imagine what it's like to be others, so I'm going to ramble about it a bit in shortform because I'd like to say this and possibly say it confusingly rather than not say it at all. Maybe with some pruning this babble can be made to make sense.
There's a certain strain of thought and thinkers in the rationality community that make me feel sad when I think about what it must be like to be them: the "closed" individualists. This is as opposed to people who view personal identity as...
Strong and Weak Ontology
Ontology is how we make sense of the world. We make judgements about our observations and slice up the world into buckets we can drop our observations into.
However I've been thinking lately that the way we normally model ontology is insufficient. We tend to talk as if ontology is all one thing, one map of the territory. Maybe these can be very complex, multi-manifold maps that permit shifting perspectives, but one map all the same.
We see some hints at the breaking of this picture of ontology as a single map by noticing the way...
So long as shortform is salient for me, might as well do another one on a novel (in that I've not heard/seen anyone express it before) idea I have about perceptual control theory, minimization of prediction error/confusion, free energy, and Buddhism that I was recently reminded of.
There is a notion within Mahayana Buddhism of the three poisons: ignorance, attachment (or, I think we could better term this here, attraction, for reasons that will become clear), and aversion. This is part of one model of where suffering arises from. Others express these n...
In a world that is truly and completely post-scarcity there would be no need for making tradeoffs.
Normally when we think about a post-scarcity future we think in terms of physical resources like minerals and food and real estate because for many people these are the limiting resources.
But the world is wealthy enough that some people already have access to this kind of post-scarcity. That is, they have enough money that they are not effectively limited in access to physical resources. If they need food, shelter, clothing, materiel, etc. they can get it in s...
If I want to continue to rack up Good Heart Tokens I now have to make legit contributions, not just make a bid for people to feed me lots of karma because I'm going to donate it.
So, what would be an interesting post you'd enjoy reading from me? It'll have to be something I can easily put together without doing a lot of research.
I unfortunately don't have a backlog of things to polish up and put out because I've been working on a book, and although I have draft chapters none of them is quite ready to go out. I might be able to get one of them out the door before GHT g...
One of the nice things in my work is I can just point to when I think something human is getting in the way. Like, sometimes someone says an idea is a bad idea. If I dig in, sometimes there's a human reason they say that: they don't actually think it's a bad idea, they just don't think they will like doing the work to make the idea real, or something similar. Those are different things, and it's important to have a conversation to sort that out so we can move forward on two separate topics: is the idea good, and why don't you want to be involved with it?
Bu...
Hogwarts Houses as Religions
Okay, this is just a fun nonsense idea I thought up. Please don't read anything too much into it, I'm just riffing. Sorry if I've mischaracterized a religion or Hogwarts house!
What religion typifies each Hogwarts house?
I'll start with Hufflepuff, which I think is aligned with Buddhism: treat everyone the same, and if you want salvation the only option is to do multiple lifetimes' worth of work.
Next is Ravenclaw, which looks a lot like Judaism: there's a system to the world, you gotta follow the rules, and also let's debate and res...
Robert Moses and AI Alignment
It's useful to have some examples in mind of what it looks like when an intelligent agent isn't aligned with the shared values of humanity. We have some extreme examples of this, like paperclip maximizers, and some less extreme (though still extreme in human terms) examples, like dictators such as Stalin, Mao, and Pol Pot, who killed millions in pursuit of their goals. But these feel like outliers that people can too easily dismiss with arguments that they are extreme cases and that no "reasonable" system would have these problems.
Okay, so let's t...
Won't I get bored living forever?
I feel like this question comes up often as a kind of push back against the idea of living an unbounded number of years, or even just a really, really long time, beyond the scale of human comprehension of what it would mean to live that many years.
I think most responses rely on intuition about our lives. If your life today seems full of similar days and you think you'd get bored, not living forever or at least taking long naps between periods of living seems appealing. Alternatively, if your life today seems full of new expe...
Personality quizzes are fake frameworks that help us understand ourselves.
What-character-from-show-X-are-you quizzes, astrology, and personality categorization instruments (think Big-5, Myers-Briggs, Magic the Gathering colors, etc.) are perennially popular. I think a good question to ask is: why do humans seem to like this stuff so much that even fairly skeptical folks tend to object not to categorization itself but only that any particular system's categorization is bad?
My stab at an answer: humans are really confused about themselves, and are interested in thi...
As I work towards becoming less confused about what we mean when we talk about values, I find that it feels a lot like I'm working on a jigsaw puzzle where I don't know what the picture is. Worse, all the pieces have been scattered around the room, and I have to find them first, digging between couch cushions and looking under the rug and behind the bookcase, before I can even figure out how they fit together or what they fit together to describe.
Yes, we have some pieces already and others think they know (infer, guess) what the picture is from those ...
Most of my most useful insights come not from realizing something new and knowing more, but from realizing something ignored and being certain of less.
After seeing another LW user (sorry, forgot who) mention this post in their commenting guidelines, I've decided to change my own commenting guidelines to the following, matching pretty closely the SSC commenting guidelines that I forgot existed until just a couple days ago:
> Comments should be at least two of true, useful, and kind, i.e. you believe what you say, you think the world would be worse without this comment, and you think the comment will be positively received.
I like this because it's simple and it says what rather than how. My old gui...
http://www.overcomingbias.com/2019/12/automation-so-far-business-as-usual.html
I similarly suspect automation is not really happening in a dramatically different way thus far. Maybe that will change in the future (I think it will), but it's not here yet.
So why so much concern about automation?
I suspect because of something they don't look at in this study much (based on the summary): displacement. People are likely being displaced from jobs into other jobs by automation or the perception of automation and some few of those exit the labor market ra...
I started showing symptoms and testing positive for COVID on Saturday. I'm now over nearly all the symptoms other than some pain in parts of my body and fatigue.
The curious question in my mind is, what's causing this pain and fatigue and what can be done about it?
My high-level, I'm-not-a-doctor theory is that there's something like generalized inflammation happening in my body, doing things makes it worse, and then my body sends out the signal to rest in order to get the inflammation back down. Once it's down I can do things for a while until it builds up ...
Maybe spreading cryptocurrency is secretly the best thing we can do short term to increase AI safety because it increases the cost of purchasing compute needed to build AI. Possibly offset, though, by the incentives to produce better processors for cryptocurrency mining that are also useful for building better AI.
This post suggests a feature idea for LessWrong to me:
https://www.lesswrong.com/posts/6Nuw7mLc6DjRY4mwa/the-national-defense-authorization-act-contains-ai
It would be pretty cool if, instead of a lot of comments whose order is determined by votes or time of posting, it were possible to write a post that had parts that could be commented on directly. So, for example, the comments for a particular section could live straight in the section rather than down at the bottom. Could be an interesting way to deal with lots of comments on large, structured posts.
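To gesture at what I mean, here's a hypothetical sketch of the data model (invented names, not LessWrong's actual schema), where comments attach to sections instead of only to the post as a whole:

```python
# Hypothetical data model for section-anchored comments (invented names).
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    body: str

@dataclass
class Section:
    heading: str
    body: str
    comments: list[Comment] = field(default_factory=list)  # live in the section

@dataclass
class Post:
    title: str
    sections: list[Section] = field(default_factory=list)
    comments: list[Comment] = field(default_factory=list)  # post-level fallback
```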
A few months ago I found a copy of Staying OK, the sequel to I'm OK—You're OK (the book that probably did the most to popularize transactional analysis), on the street near my home in Berkeley. Since I had previously read Games People Play and had not thought about transactional analysis much since, I scooped it up. I've just gotten around to reading it.
My recollection of Games People Play is that it's the better book (based on what I've read of Staying OK so far). Also, transactional analysis is kind of in the water in ways...
Off-topic riff on "Humans are Embedded Agents Too"
One class of insights that come with Buddhist practice might be summarized as "determinism", as in, the universe does what it is going to do no matter what the illusory self predicts. Related to this is the larger Buddhist notion of "dependent origination", that everything (in the Hubble volume you find yourself in) is causally linked. This deep deterministic interdependence of the world is hard to appreciate from our subjective experience, because the creation of ontology crea...
I just noticed something odd. It's not that odd: the cognitive bias that powers it is well known. It's more odd that a company is leaving money on the table by not exploiting it.
I primarily fly United and book rental cars with Avis. United offers to let you buy refundable fares for a little more than the price of a normal ticket. Avis lets you pre-pay for your rental car to receive a discount. These are symmetrical situations presented with a different framing because the default action is different in the two cases: on United the default is to have a non-...
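With some invented illustrative prices, the symmetry looks like this:

```python
# Invented prices: both companies offer the same choice set,
# but each frames a different member of it as the default.
united = {"default: non-refundable fare": 200, "pay extra: refundable fare": 230}
avis = {"default: pay at counter (refundable)": 230, "prepay discount (non-refundable)": 200}
# Either way the choice is {refundable: $230, non-refundable: $200};
# only the default action differs.
```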
Isolate the Long Term Future
Maybe this is worthy of a post, but I'll do a short version here to get it out.
Psychological Development and Age
One of the annoying things about developmental psychology is disentangling age-related from development-related effects.
For example, as people age they tend to get more settled or to have their lives more sorted out. I'm pointing at the thing where kids and teenagers and adults in their 20s tend to have a lot of uncertainty about what they are going to do with their lives, and that slowly decreases over time.
A simple explanation is that it's age related, or maybe more properly experience related. As a person lives more years,...
ADHD Expansionism
I'm not sure I fully endorse this idea, hence short form, but it's rattling around inside my head and maybe we can talk about it?
I feel like there's a kind of ADHD (or ADD) expansionism happening, where people are identifying all kinds of things as symptoms of ADHD, especially subclinical ADHD.
On the one hand this seems good in the sense that performing this kind of expansionism seems to actually be helping people by giving them permission to be the way they are via a diagnosis and giving them strategies they can try to live their life bett...
You're always doing your best
I like to say "you're always doing your best", especially as kind words to folks when they are feeling regret.
What do I mean by that, though? Certainly you can look back at what you did in any given situation and imagine having done something that would have had a better outcome.
What I mean is that, given all the conditions under which you take any action, you always did the best you could. After all, if you could have done something better given all the conditions, you would have.
The key is that all the conditions include the e...
I feel like something is screwy with the kerning on LW over the past few weeks. Like I keep seeing sentences that look like they are missing a space between the period and the start of the next sentence, but when I check closely they are not. For whatever reason this doesn't seem to show in the editor, only in the displayed text.
I think I've only noticed this with comments and short form, but maybe it's happening other places? Anyway, wanted to see if others are experiencing this and raise a flag for the LW team that a change they made may be behaving in unexpected ways.
Story stats are my favorite feature of Medium. Let me tell you why.
I write primarily to impact others. Although I sometimes choose to do very little work to make myself understandable to anyone who is more than a few inferential steps behind me and then write out on a far frontier of thought, nonetheless my purpose remains sharing my ideas with others. If it weren't for that, I wouldn't bother to write much at all, and certainly not in the same way as I do when writing for others. Thus I care instrumentally a lot about being able to assess if I a...