Or, as a possibly more concrete prompt if preferred: "Create a cost-benefit analysis for EU Directive 2019/904, which requires that the caps of all plastic bottles remain attached to the bottles, with the intention of reducing littering and protecting sea life.
Output:
- key costs and benefits table
- economic cost for the beverage industry to make the transition
- expected change in littering, total over first 5 years
- QALYs lost or gained for consumers throughout the first 5 years"
In the EU there's some recent regulation requiring bottle caps to remain attached to their bottles, to prevent littering. (this-is-fine.jpg)
Can you let the app come up with a good way to estimate the cost-benefit ratio of this piece of regulation? E.g. (environmental?) benefits vs. (economic? QALY?) costs/drawbacks, or something like that. I think coming up with good metrics to quantify here is almost as interesting as the estimate itself.
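To make the shape of such an estimate a bit more concrete, here's a minimal sketch in Python. Everything in it is made up for illustration: the `Assumptions` fields, the `cost_benefit` helper, and all the numbers in the example call are hypothetical placeholders, not actual estimates of the directive's effects.

```python
from dataclasses import dataclass

@dataclass
class Assumptions:
    """Placeholder inputs for a Fermi-style cost-benefit model of the tethered-caps rule."""
    transition_cost_eur: float          # one-off retooling cost for the beverage industry
    annual_ongoing_cost_eur: float      # recurring yearly cost (materials, production changes, ...)
    litter_reduction_items_per_year: float  # fewer littered caps per year, EU-wide
    value_per_avoided_item_eur: float   # assumed environmental value of one avoided littered cap
    qaly_change_per_year: float         # net QALYs gained (+) or lost (-) by consumers per year
    value_per_qaly_eur: float           # monetary value assigned to one QALY
    horizon_years: int = 5              # the prompt asks for totals over the first 5 years

def cost_benefit(a: Assumptions) -> dict:
    """Aggregate costs and benefits over the horizon and return summary figures."""
    total_cost = a.transition_cost_eur + a.annual_ongoing_cost_eur * a.horizon_years
    litter_benefit = a.litter_reduction_items_per_year * a.value_per_avoided_item_eur * a.horizon_years
    qaly_benefit = a.qaly_change_per_year * a.value_per_qaly_eur * a.horizon_years
    total_benefit = litter_benefit + qaly_benefit
    return {
        "total_cost_eur": total_cost,
        "total_benefit_eur": total_benefit,
        "net_benefit_eur": total_benefit - total_cost,
        "benefit_cost_ratio": total_benefit / total_cost if total_cost else float("inf"),
    }

# Example call with purely illustrative placeholder numbers (not real data):
print(cost_benefit(Assumptions(
    transition_cost_eur=1e9,
    annual_ongoing_cost_eur=1e8,
    litter_reduction_items_per_year=1e9,
    value_per_avoided_item_eur=0.05,
    qaly_change_per_year=-1e3,   # e.g. a small annoyance cost to consumers
    value_per_qaly_eur=5e4,
    horizon_years=5,
)))
```

A natural next step would be to replace the point values with ranges and sample over them, since the uncertainty in each input probably dwarfs any single point estimate.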
I have the vague impression that this is true for me as well, and I remember making that same claim (that spontaneous conversations at conferences seem perhaps the most valuable) to a friend while traveling home from an EAGx. My personal best guess: planned conversations are usually 30 minutes long, and while there is some interest-based filtering going on, there's usually no guarantee you vibe well with the person. Spontaneous encounters, however, have pretty variable length, so the ones where you're not vibing naturally end quickly, whereas the better connections last longer. So my typical "spontaneous encounter minute" tends to be more enjoyable than my typical "planned 1-1 minute". But it's hard to say how this transfers to instrumental value.
I made a somewhat similar point in a post earlier this year, though much more superficially and less technically. So it was nice to read your deeper exploration of the topic.
Did this already happen? :)
Almost two years after writing this post, this is still a concept I encounter relatively often. Maybe less so in myself, as I like to think I've internalized the idea well enough to rarely fall into the "fake alternatives trap" anymore. But it occasionally comes up in conversations with others, when they're making plans or we're organizing something together.
With some distance, and also based on some of the comments, I think there is room for improvement:
But besides that, I think it holds up: it's a relevant concept, "fake alternatives" seems like a good handle for it, and the post is short and focused.
For people who like guided meditations: there's a small YouTube channel providing a bunch of secular, AI-generated guided meditations of various lengths and on various topics. More are to come, and the creator (whom I know) is happy to take suggestions. Three examples:
They are also available in podcast form here.
I wouldn't say these meditations are necessarily better or worse than any others, but they're free and provide some variety. Personally, I avoid apps like Waking Up and Headspace due to both their (imho) outrageous pricing models and their surprising degree of monotony. Insight Timer is a good alternative, but the quality varies a lot and I keep running into overly spiritual content there. Plus, there are obviously thousands and thousands of guided meditations on YouTube, but there too it's hit and miss. So personally I'm happy about this extra source of a good-enough-for-me standard.
Also, in case you ever want to hear a guided meditation on some particular subject or in some particular style, I guess you can contact the YouTube channel directly, or tell me and I'll forward your request.
I'm a bit torn regarding the "predicting how others react to what you say or do, and adjust accordingly" part. On the one hand, this is very normal and human and makes sense; it's a kind of predictive empathy. On the other hand, thinking about it so explicitly and steering your behavior so as to get the desired reaction out of another person also feels a bit manipulative and inauthentic. If I knew another person thought that way and planned exactly how they interacted with me, I would find that quite off-putting. But maybe the solution is just "don't overdo it", and/or "only use it in ways the other person would likely consent to" (such as avoiding accidentally saying something hurtful).
My take on this is that patching the more "obvious" types of jailbreaking and obfuscation already makes a difference and is probably worth it (as long as it comes at no notable cost to the general usefulness of the system). Sure, some people will put in the effort to find other ways, but the harder it is, and the fewer little moments of success you have when first trying, the fewer people will get into it. One could of course argue that the worst outcomes come from the most highly motivated bad actors, who surely won't be deterred by such measures. But I think even for them there may be some path dependencies involved: they may only have ended up in that position because, over years of interacting with LLMs, they kept running into just-easy-enough jailbreaking scenarios that kept their interest up. That's an empirical question, of course.
Downvoted for 3 reasons: