Some further examples:

  • Past me might have said: Apple products are "worse" because they are overpriced status symbols
  • Many claims in politics, e.g. "we should raise the minimum wage because it helps workers"
  • We shouldn't use nuclear power because it's not really "renewable"
  • When AI lab CEOs warn of AI x-risk, we can dismiss that because they might just want to build hype
  • AI cannot be intelligent, or dangerous, because it's just matrix multiplications
  • One shouldn't own a cat because it's an unnatural way for a cat to live
  • Pretty much any any-benefit mindset that makes it into an argument rather than purely existing in a person's behavior

It certainly depends on who's arguing. I agree that some sources online see this trade-off and end up on the side of not using flags after some deliberation, and I think that's perfectly fine. But this describes only a subset of cases, and my impression is that very often (and certainly in the cases I experienced personally) it is not even acknowledged that usability, or anything else, may also be a concern that should inform the decision. 

(I admit though that "perpetuates colonialism" is a spin that goes beyond "it's not a 1:1 mapping" and is more convincing to me)

This makes me wonder: how could an AI figure out whether it has conscious experience? I always used to assume that from a first-person perspective it's clear when you're conscious. But this is kind of circular reasoning, as it assumes you have a "perspective" and are able to ponder the question. Now what does a, say, reasoning model do? If there is consciousness, how will it ever know? Does it have to solve the "easy" problem of consciousness first and apply the answer to itself?

In no particular order, because interestingness is multi-dimensional and they are probably all to some degree on my personal interesting Pareto frontier:

  • We're not as 3-dimensional as we think
  • Replacing binary questions with "under which circumstances"
  • Almost everything is causally linked; saying "A has no effect on B" is almost always wrong (unless you very deliberately search for an A and B that fundamentally cannot be causally linked). If you ran a study with a bazillion subjects for long enough, practically anything you can measure would reach statistical significance (see the sketch after this list)
  • Many disagreements are just disagreements about labels ("LLMs are not truly intelligent", "Free will does not exist") and can be easily resolved / worked around once you realize this (see also)
  • Selection biases of all kinds
  • Intentionality bias: it's easy to explain human behavior with supposed intentions, but there is much more randomness and ignorance everywhere than we think
  • Extrapolations tend to work locally, but extrapolating further into the future very often gets things wrong; kind of obvious, and it applies to e.g. resource shortages ("we'll run out of X and then there won't be any X anymore!"), but also Covid (I kind of assumed Covid cases would just exponentially climb until everything went to shit, and forgot to take into account that people would get afraid and change their behavior on a societal scale, at least somewhat, and politicians would eventually do things, even if later than I would), and somewhat to AI (we likely won't just "suddenly" end up with a flawless superintelligence)
  • "If only I had more time/money/whatever" style thinking is often misguided, as often when people say/think this, the sentence continues with "then I could spend that time/money/whatever in other/more ways than currently", meaning as soon as you get more of X, you would immediately want to spend it, so you'll never sustainably end up in a state of "more X". So better get used to X being limited and having to make trade-offs and decisions on how to use that limited resource rather than daydreaming about a hypothetical world of "more X". (This does not mean you shouldn't think about ways to increase X, but you should probably distance yourself from thinking about a world in which X is not limited)
  • Taleb's Extremistan vs Mediocristan model
  • +1 to Minimalism, which lsusr already mentioned
  • The mindblowing weirdness of very high-dimensional spaces
  • Life is basically an ongoing coordination problem between your past/present/future selves
  • The realization that we're not smart enough to be true consequentialists, i.e. consequentialism is somewhat self-defeating
  • The teleportation paradox, and thinking about a future world in which a) teleportation is simply a necessity for being successful in society (and/or there is just social pressure, e.g. all your friends do it and you get excluded from doing cool things if you don't join in) and b) anyone who has teleported before has convincing memories of having gone through teleportation and come out on the other side. In such a world, anyone with worries about teleportation would basically be screwed. I'm not sure if I should believe in any kind of continuity of consciousness, but it certainly feels like a thing. So I'd probably prefer not to be forced to give it up just because the societal trajectory happens to lead through ubiquitous teleportation.
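
A quick sketch of the statistical-significance point above, assuming a tiny but nonzero true effect; the effect size, sample sizes, and the use of numpy/scipy are my own illustrative choices:

```python
# Illustration only: a tiny but nonzero effect becomes "statistically significant"
# once the sample gets large enough. Effect size and sample sizes are arbitrary choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.01  # a practically negligible difference between the two groups

for n in [100, 10_000, 1_000_000]:
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(true_effect, 1.0, n)
    _, p = stats.ttest_ind(control, treatment)
    print(f"n={n:>9,}  p={p:.3g}")

# With n in the hundreds, p is usually nowhere near 0.05; with a million subjects
# per group, the 0.01-standard-deviation effect comes out "significant" almost surely.
```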

Random thought: maybe (at least pre-reasoning-models) LLMs are RLHF'd to be "competent" in a way that makes them less curious & excitable, which greatly reduces their chance of coming up with (and recognizing) any real breakthroughs. I would expect though that for reasoning models such limitations will necessarily disappear and they'll be much more likely to produce novel insights. Still, scaffolding and lack of context and agency can be a serious bottleneck.

Interestingly, the text-to-speech conversion of the "Text does not equal text" section is another very concrete example of this:

  • The TTS AI summarizes the "Hi!" ASCII art picture as "Vertical lines arranged in a grid with minor variations". I deliberately added an alt text to that image, describing what can be seen, and I expected that this alt text would be used for TTS - but seemingly that is not the case, and instead some AI describes the image in isolation. If I were to describe that image without any further context, I would probably mention that it says "Hi!", but I grant that describing it as "Vertical lines arranged in a grid with minor variations" would also be a fair description.
  • the "| | | |↵|-| | |↵| | | o" string is read out as "dash O". I would have expected the AI to just read that out in full, character by character. Which probably is an example of me falsely taking my intention as a given. There are probably many conceivable cases where it's actually better for the AI to not read out cryptic strings character by character (e.g. when your text contains some hash or very long URL). So maybe it can't really know that this particular case is an exception.

But what you're probably not aware of is that 0.8% of the US population ends up dying due to intentional homicide

That is an insane statistic. According to a bit of googling this indeed seems plausible, but I would still be interested in your source if you can provide it.
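
For what it's worth, a rough back-of-envelope lands in the same ballpark; the inputs below are my own rough assumptions (on the order of 20,000 US homicides and 3 million total US deaths per year), not figures from your comment:

```python
# Back-of-envelope only; both inputs are ballpark assumptions, not sourced figures.
homicides_per_year = 20_000        # rough US order of magnitude (assumption)
total_deaths_per_year = 3_000_000  # rough US order of magnitude (assumption)

share_of_deaths = homicides_per_year / total_deaths_per_year
print(f"{share_of_deaths:.1%}")  # ~0.7%, same ballpark as the quoted 0.8%
```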

Downvoted for 3 reasons: 

  • The style strikes me as very AI-written. Maybe it isn't - but the very repetitive structure looks exactly like the type of text I tend to get out of ChatGPT much of the time. Which makes it very hard to read.
  • There are many highly superficial claims here without much reasoning to back them up. Many claims of what AGI "would" do without elaboration. "AGI approaches challenges as problems to be solved, not battles to be won." - first, why? Second, how does this help us when the best way to solve the problem involves getting rid of humans?
  • Lastly, I don't get the feeling this post engages with the most common AI safety arguments at all, nor with evidence from recent AI developments. How do you expect "international agreements" with any teeth in the current arms race, when we don't even get national or state-level agreements? While Bing/Sydney was not an AGI, it clearly showed that much of what this post dismisses as anthropocentric projections is realistic, and currently maybe even the default of what we can expect of AGI as long as it's LLM-based. And even if you dismiss LLMs and think of more "Bostromian" AGIs, that still leaves you with instrumental convergence, which blows too many holes into this piece to leave much of substance.

Or, as a possibly more concrete prompt if preferred: "Create a cost-benefit analysis for EU directive 2019/904, which demands that the bottle caps of all plastic bottles remain attached to the bottles, with the intention of reducing littering and protecting sea life.

Output:

  • key costs and benefits table
  • economic cost for the beverage industry to make the transition
  • expected change in littering, total over first 5 years
  • QALYs lost or gained for consumers throughout the first 5 years"
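
To make the intended shape of the output slightly more concrete, here is a minimal Fermi-style skeleton of such a calculation; every quantity below is a made-up placeholder purely to show the structure, not an actual estimate of the directive's effects:

```python
# Fermi-style skeleton only; every number is a placeholder, not an estimate.
YEARS = 5

bottles_sold_per_year = 100e9        # placeholder: plastic bottles sold EU-wide per year
extra_cost_per_bottle = 0.001        # placeholder: industry transition cost in EUR per bottle
caps_littered_per_year = 1e9         # placeholder: caps littered EU-wide per year
share_of_littering_avoided = 0.1     # placeholder: fraction of cap littering prevented
qaly_change_per_person_year = -1e-6  # placeholder: minor consumer annoyance
eu_population = 450e6                # placeholder: rough EU population

industry_cost_eur = bottles_sold_per_year * extra_cost_per_bottle * YEARS
caps_avoided = caps_littered_per_year * share_of_littering_avoided * YEARS
qaly_change = qaly_change_per_person_year * eu_population * YEARS

print(f"Industry cost over {YEARS} years: EUR {industry_cost_eur:,.0f}")
print(f"Littered caps avoided over {YEARS} years: {caps_avoided:,.0f}")
print(f"Consumer QALY change over {YEARS} years: {qaly_change:,.0f}")
```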

In the EU there's some recent regulation about bottle caps being attached to bottles, to prevent littering. (this-is-fine.jpg)

Can you let the app come up with a good way to estimate the cost-benefit ratio of this piece of regulation? E.g. (environmental?) benefit vs. (economic? QALY?) costs/drawbacks, or something like that. I think coming up with good metrics to quantify here is almost as interesting as the estimate itself.
