silentbob

Comments

Downvoted for 3 reasons: 

  • The style strikes me as very AI-written. Maybe it isn't, but the very repetitive structure looks exactly like the kind of text I tend to get out of ChatGPT much of the time, which makes it very hard to read.
  • There are many highly superficial claims here without much reasoning to back them up, including many claims about what AGI "would" do without any elaboration. Take "AGI approaches challenges as problems to be solved, not battles to be won": first, why? Second, how does this help us when the best way to solve the problem involves getting rid of humans?
  • Lastly, I don't get the feeling this post engages with the most common AI safety arguments at all, nor with evidence from recent AI developments. How do you expect "international agreements" with any teeth in the current arms race, when we don't even get national or state-level agreements? While Bing/Sydney was not an AGI, it clearly showed that much of what this post dismisses as anthropocentric projections is realistic and, currently, maybe even the default of what we can expect of AGI as long as it's LLM-based. And even if you dismiss LLMs and think of more "Bostromian" AGIs, that still leaves you with instrumental convergence, which blows too many holes in this piece to leave anything of much substance.

Or, if preferred, as a possible more concrete prompt: "Create a cost-benefit analysis for EU directive 2019/904, which demands that the caps of all plastic bottles remain attached to the bottles, with the intention of reducing littering and protecting sea life.

Output:

  • key costs and benefits table

  • economic cost for the beverage industry to make the transition

  • expected change in littering, total over first 5 years

  • QALYs lost or gained for consumers throughout the first 5 years"

In the EU there's some recent regulation requiring bottle caps to remain attached to the bottles, to prevent littering. (this-is-fine.jpg)

Can you let the app come up with a good way to estimate the cost-benefit ratio of this piece of regulation? E.g. (environmental?) benefits vs. (economic? QALY?) costs/drawbacks, or something like that. I think coming up with good metrics to quantify here is almost as interesting as the estimate itself.
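Purely to illustrate the structure I have in mind (not as actual data), here's a minimal Python sketch of the kind of back-of-the-envelope calculation the app could produce. Every number is a made-up placeholder; the interesting part would be letting the app fill these in and argue for them:

```python
# Fermi-style sketch of a cost-benefit estimate for EU directive 2019/904
# (tethered bottle caps). All numbers are hypothetical placeholders, purely
# to illustrate the structure of the estimate, not actual data.

EU_PLASTIC_BOTTLES_PER_YEAR = 100e9     # placeholder: bottles sold per year in the EU
TRANSITION_COST_PER_BOTTLE_EUR = 0.001  # placeholder: amortized industry cost per bottle
ANNOYANCE_SECONDS_PER_BOTTLE = 2        # placeholder: extra consumer hassle per bottle
CAP_LITTER_RATE_BEFORE = 0.01           # placeholder: share of caps littered before
CAP_LITTER_REDUCTION = 0.5              # placeholder: fraction of cap littering prevented
YEARS = 5

# Economic cost to the beverage industry over the first 5 years
industry_cost_eur = EU_PLASTIC_BOTTLES_PER_YEAR * TRANSITION_COST_PER_BOTTLE_EUR * YEARS

# Consumer time cost, crudely converted to QALYs (1 QALY ~ 1 year of healthy life)
SECONDS_PER_YEAR = 365.25 * 24 * 3600
qalys_lost = (EU_PLASTIC_BOTTLES_PER_YEAR * ANNOYANCE_SECONDS_PER_BOTTLE * YEARS) / SECONDS_PER_YEAR

# Expected reduction in littered caps over the first 5 years
caps_not_littered = EU_PLASTIC_BOTTLES_PER_YEAR * CAP_LITTER_RATE_BEFORE * CAP_LITTER_REDUCTION * YEARS

print(f"Industry transition cost over {YEARS} years: ~{industry_cost_eur / 1e9:.1f} billion EUR")
print(f"Consumer QALYs lost over {YEARS} years: ~{qalys_lost:,.0f}")
print(f"Caps kept out of the environment: ~{caps_not_littered / 1e9:.1f} billion")
```

The app's job would then be twofold: pick sensible values (ideally with uncertainty ranges) for each placeholder, and justify why these are the right metrics to compare in the first place.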

I have the vague impression that this is true for me as well, and I remember making that same claim (that spontaneous conversations at conferences seem maybe the most valuable) to a friend when traveling home from an EAGx. My personal best guess: planned conversations are usually 30 minutes long, and while there is some interest-based filtering going on, there's no guarantee you vibe well with the person. Spontaneous encounters, however, have pretty variable length, so the ones where you're not vibing will naturally be over quickly, whereas the better connections will last longer. So my typical "spontaneous encounter minute" tends to be more enjoyable than my typical "planned 1-1 minute". But it's hard to say how this transfers to instrumental value.

I made a somewhat similar point in a post earlier this year, though much more superficially and less technically. So it was nice to read your deeper exploration of the topic.

Almost two years after writing this post, this is still a concept I encounter relatively often. Maybe less so in myself, as I like to think I have sufficiently internalized the idea to no longer fall into the "fake alternative trap" very often. But it occasionally comes up in conversations with others, when they're making plans or we're organizing something together.

With some distance, and also based on some of the comments, I think there is room for improvement:

  • The gym membership example is a tricky one, as "getting a gym membership to go to the gym" is, for many people, also kind of a fake option: they get the membership and pay for it, but still end up not going to the gym anyway. That example works for people who are more likely to go to the gym than to work out at home. But if you would, in expectation, exercise no more at the gym than you would at home, then paying for a gym membership is not helpful.
    • Maybe an example that applies to more people would be studying at the (university) library vs. studying at home? The former works better for many, so studying at home would potentially be a fake alternative. Just because you could in principle study for 10 hours a day at home doesn't mean you actually end up doing that.
  • I was and still am a bit unhappy about the "Option A - Option B - Do nothing" diagram. Somehow it's harder to read than its simplicity would suggest.
  • The AI-generated title image doesn't really convey the idea of the post. But back then, AI image generation was still more limited than it is today, and it was difficult enough to even get that image to look acceptable.

But besides that, I think it holds up. It's a relevant concept, "fake alternatives" seems like a good handle to represent it, and the post is short and focused.

For people who like guided meditations: there's a small YouTube channel providing a bunch of secular AI-generated guided meditations of various lengths and topics. More are to come, and the creator (whom I know) is happy about suggestions. Three examples:

They are also available in podcast form here.

I wouldn't say these meditations are necessarily better or worse than any others, but they're free and provide some variety. Personally, I avoid apps like Waking Up and Headspace due to both their (imho) outrageous pricing models and their surprising degree of monotony. Insight Timer is a good alternative, but the quality varies a lot and I keep running into overly spiritual content there. Plus, there are obviously thousands and thousands of guided meditations on YouTube, but there too it's hit and miss. So personally I'm happy about this extra source of a good-enough-for-me standard.

Also, in case you ever wanted to hear a guided meditation on any particular subject or in any particular style, I guess you can contact the YouTube channel directly, or tell me and I'll forward your request.

I'm a bit torn regarding the "predicting how others react to what you say or do, and adjusting accordingly" part. On the one hand, this is very normal and human and makes sense; it's a kind of predictive empathy. On the other hand, thinking about it so very explicitly and trying to steer your behavior so as to get the desired reaction out of another person also feels a bit manipulative and inauthentic. If I knew another person thought that way and planned exactly how they interacted with me, I would find that quite off-putting. But maybe the solution is just "don't overdo it", and/or "only use it in ways the other person would likely consent to" (such as avoiding accidentally saying something hurtful).

My take on this is that patching the more "obvious" types of jailbreaking and obfuscation already makes a difference and is probably worth it (as long as it comes at no notable cost to the general usefulness of the system). Sure, some people will put in the effort to find other ways, but the harder it is, and the fewer little moments of success you have when first trying it, the fewer people will get into it. Of course one could argue that the worst outcomes come from the most highly motivated bad actors, and they surely won't be deterred by such measures. But I think even for them there may be some path dependence involved, where they only ended up in their position because, over years of interacting with LLMs, they kept running into jailbreaking scenarios that were just easy enough to keep their interest up. Of course, that's an empirical question.
