More specifically, the issue is that the img `srcset` attribute contains unescaped commas, which break the URLs. Deleting the `srcset` attributes fixes the image, as does replacing all the `f_auto,q_auto` bits in the srcset with `f_auto%2cq_auto`.
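To illustrate what I mean (the URL here is made up; only the `f_auto,q_auto` transformation segment is taken from the actual markup):

```html
<!-- Broken: in srcset, commas separate image candidates, so the literal
     comma in "f_auto,q_auto" is easily misparsed as a candidate boundary,
     leaving a truncated URL ending in "f_auto". (Hypothetical URL.) -->
<img srcset="https://images.example.com/upload/f_auto,q_auto/photo.png 1x">

<!-- Fixed: percent-encoding the comma as %2c keeps the URL intact. -->
<img srcset="https://images.example.com/upload/f_auto%2cq_auto/photo.png 1x">
```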
It looks like maybe this is a bug in LW's support for uploaded images?
I expect business and sales people would mostly not feel similarly, though to be fair it's uncommon for business friendships/acquaintances to reach "best friend" or better status. The vibe of somebody putting you in a CRM to stay in touch without any direct/immediate monetary benefit is like, "oh, how thoughtful of you / props for being organized / I should really be doing that".
Anyway, the important question isn't how most people would feel, it's how one's desired friends in particular would feel. And many people might feel things like "honored this busy person with lots of friends wants to upgrade our friendship and is taking action to make sure it happens -- how awesome".
One of the reasons your question is challenging is that "fear of failure" is a phrase our brains use to stop thinking about the horrible thing they don't want to think about. "Failure" is an abstract label, but the specific thing you fear isn't the literal failure to accomplish your goal. It's some concrete circumstance the situation will resemble, along with some meaning assigned to that failure.
This is easier to see if you consider how many things you do every day that involve failure to accomplish a goal and yet do not provoke the same kind of emotion. Lots of things are "no big deal" and thus no big deal to fail at.
Things that are a "big deal" are a big deal because of some meaning we assign to them, either positively or negatively.
Mostly negatively.
More specifically: negatively, masquerading as positively. The "tell" for this is when your goals are suspiciously abstract or unclear. It's a strong sign that the real motivation for the goals is signaling, specifically signaling that you aren't something.
These days I call it GUPI Syndrome, for "Guilty Until Proven Innocent"; it shows up in many common patterns I see in my practice.
So quite often, the phrase "fear of failure" actually unpacks to "fear I will fail at my lifelong mission to prove I'm not {lazy, a loser, incompetent, stupid, not a man, irresponsible, etc...}".
And this can't be addressed by advice that's aimed at motivation or discipline or what-have-you, because the underlying emotional goal will never be satisfied. Ever.
You can't prove a negative, and that is fundamentally what this syndrome is about: proving you're not something that you're afraid other people may see you as.
(To be clear here, this is the generic "you" of anyone who is experiencing this, which I'm not saying is "you", the author of this question!)
Anyway, the solution to this problem is to stop trying to prove you're not whatever bad thing you fear you already are (or that people do/might believe you are). This may involve several sub-steps.
Is this a lot? Yes it is. But the payoff is that once you're no longer trying to prove a negative to your emotional brain, you have a lot more mental energy available to spend on goals that no longer seem like such a "big deal", and whose path to achievement feels much clearer.
(Also, it's hard to overstate how big a deal it is to not be feeling every day like someone is going to uncover your horrible secrets or everyone will see you fall on your face, or whatever the thing is that's going on.)
Benevolence towards others flows out of shared values; unconditional regard for others-in-general is unnatural.
Now there's a nice quotable quote. I don't think it's entirely accurate, unless people with certain kinds of lobe damage or meditation history count as "unnatural". (Which I suppose they could.) On the other hand, those people arguably have brains that define others as themselves, and thus have shared values with said others as a matter of course. (Or alternately, I suppose they have a very expansive definition of "shared values".)
But as a truism or proverb, this makes a lot of sense, and should be helpful to people who suffer from feeling like they should care more about others-in-general. Knowing that caring spreads by way of shared values makes it possible to find the caring one already has, before trying to extend it further. (Rather than constantly going to a well you're told should be full, and always finding it dry.)
we don’t think that shutdown-seeking avoids every possible problem involved with reward misspecification
Seems like this is basically the alignment problem all over again, with the complexity just moved to "what does it mean to 'shut down' in the AI's inner model".
For example, if the inner-aligned goal is to prevent its own future operation, it might choose to, say, start a nuclear war so nobody is around to start it back up, repair it, provide power, etc.
it doesn't have the kind of insight into its motives that we do
Wait, human beings have insight into their own motives that's better than GPTs have into theirs? When was the update released, and will it run on my brain? ;-)
Joking aside, though, I'd say the average person's insight into their own motives is most of the time not much better than that of a GPT, because it's usually generated in the same way: i.e. making up plausible stories.
What I was pointing out is that the barrier is asymmetrical: it's biased towards AIs with more-easily-aligned utility functions. A paperclipper is more likely to be able to create an improved paperclipper that it's certain enough will massively increase its utility, while a more human-aligned AI would have to be more conservative.
In other words, this paper seems to say, "if we can create human-aligned AI, it will be cautious about self-improvement, but dangerously unaligned AIs will probably have no issues."
The first and most obvious issue here is that an AI that "solves alignment" sufficiently well to not fear self-improvement is not the same as an AI that's actually aligned with humans. So there's actually no protection there at all.
In fact, the phenomenon described here seems to make it more likely that an unaligned AI will be fine with self-improving, because the simpler the utility function the easier time it has guaranteeing the alignment of the improved version!
Last, but far from least, self-improvement of the form "get faster and run on more processors" is hardly challenging from an alignment perspective. And it's far from unlikely an AI could find straightforward algorithmic improvements that it could mathematically prove safe relative to its own utility function.
In short, the overall approach seems like wishful thinking of the form, "maybe if it's smart enough it won't want to kill us."
Nope - expression of feelings of friendship isn't part of the explicit structure of friendship either. Lots of people are friends without saying anything about it.
All I've really said here is that the difference between a VCFWB and a "romantic" relationship is difficult to discern, especially from the outside, given that the nature of "romance" is both internal and optional to the relationship. If a pair of VCFWBs stop having sex or hanging out or cuddling, it's hard to say they're still in a VCFWB relationship. But if people in a "romantic" relationship stop acting romantic with one another, they can still be said to be in a "romantic" relationship.
The overall point here is that describing "romantic" as if it is a property of a relationship rather than a property of people's feelings is not a good carving of reality at the joints. People can have romantic feelings (or expression thereof) without having any relationship at all, let alone one with reciprocal romantic feelings.
(Indeed, romantic feelings are quite orthogonal to the type and nature of the relationship itself: the term "friend zone" highlights this point.)
So, from an epistemic view, my take is that it's not only useless but confusing to describe a relationship as being romantic, since it's not meaningfully a property of the relationship, but rather a set of feelings that come and go for (and about) parties in the relationship. How many feelings must happen? How often? Must they be reciprocal? Is it still romantic if neither party feels that way any more? What if they didn't start out that way but are now?
I think that the bundle of things called "romantic relationship" is much better described structurally, in terms of behavior, in order to avoid cultural projections and mismatched expectations between partners. One person might use the term to mean "marriage for life", while another might mean "passionate weekend affair", after all! Both of these can be labeled a "romantic relationship", but that label does a poor job of defining a shared vision and expectations for the parties in said relationship.
IOW, I believe that everyone is better off taboo-ing the phrase "romantic relationship" in any serious discussion about relationships -- especially a relationship they'll personally be involved in!
More diplomatically: people are terrified of disapproval and will do anything to avoid feeling they deserve it, so if you must point out that something isn't working, try to do so in such a way that the easiest way for them to resolve their cognitive dissonance isn't "blow you off" or "get mad at you"; i.e., find a way for them to "save face".
(As a lot of people associate being incorrect with being deserving of disapproval.)