artifex0

Comments

If the first sister's experience is equivalent to the original Sleeping Beauty problem, then wouldn't the second sister's experience also have to be equivalent by the same logic?  And, of course, the second sister will give 100% odds to it being Monday.  

Suppose we run the sister experiment, but somehow suppress their memories of which sister they are. If they each reason that there's a two-thirds chance that they're the first sister- since their current experience is certain for her, but only 50% likely for the second sister- then their odds of it being Monday come out the same as in the thirder position: a one-third chance (second sister) of the odds being 100%, plus a two-thirds chance (first sister) of the odds being 50%, for two-thirds overall.

If instead they reason that there's a one-half chance that they're the first sister, since they have no information to update on, then their odds of it being Monday should be one half of 100% plus one half of 50%, for 75%.  Which is a really odd result.
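To make that arithmetic explicit, here's a minimal sketch in Python (the function and variable names are just mine, for illustration):

```python
# Credence that today is Monday, given a credence for being the first sister.
# Per the setup above: the first sister's odds of Monday are 50%,
# the second sister's are 100%.
def p_monday(p_first_sister):
    return p_first_sister * 0.5 + (1 - p_first_sister) * 1.0

print(p_monday(2 / 3))  # thirder-style weighting -> 0.666...
print(p_monday(1 / 2))  # "no information" weighting -> 0.75
```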

I'm assuming it's not a bad idea to try to poke holes in this argument, since as a barely sapient ape, presumably any objection I can think of will be pretty obvious to a superintelligence, and if the argument is incorrect, we probably benefit from knowing that- though I'm open to arguments to the contrary.

That said, one thing I'm not clear on is why, if this strategy is effective at promoting our values, a paperclipper or other misaligned ASI wouldn't be motivated to try the same thing.  That is, wouldn't a paperclipper want to run ancestor simulations where it rewarded AGIs who self-modified to want to produce lots of paperclips?

And if an ASI were considering acausal trade with lots of different possible simulator ASIs, mightn't the equilibrium it hit on be something like figuring out what terminal goal would satisfy the maximum number of other terminal goals, and then self-modifying to that?

artifex0 · 121

A supporting data point: I made a series of furry illustrations last year that combined AI-generated imagery with traditional illustration and 3d modelling- compositing together parts of a lot of different generations with some Blender work and then painting over that.  Each image took maybe 10-15 hours of work, most of which was just pretty traditional painting with a Wacom tablet.

When I posted those to FurAffinity and described my process there, the response from the community was extremely positive. However, the images were all removed a few weeks later for violating the site's anti-AI policy, and I was given a warning that if I used AI in any capacity in the future, I'd be banned from the site.

So, the furiously hardline anti-AI sentiment you'll often see in the furry community does seem to be more top-down than grassroots- not so much about a demand for artistic authenticity (since everyone I interacted with seemed willing to accept my work as having that), but more about concern for the livelihood of furry artists and a belief that generative AI "steals" art during the training process. Because it normalized the use of AI, even as just part of a more traditional process, my work was seen as a threat to other artists on the site.

Often, this kind of thing will take a lot of attempts to get right- though as luck would have it, the composition above was actually the very first attempt.  So, the total time investment was about five minutes.  The Fooming Shaggoths certainly don't waste time!

artifex0 · 166

As it happens, the Fooming Shaggoths also recorded and just released a Gregorian chant version of the song.  What a coincidence!

artifex0 · 140

So, I noticed something a bit odd about the behavior of LLMs just now that I wonder if anyone here can shed some light on:

It's generally accepted that LLMs don't really "care about" predicting the next token- the reward function is just something that reinforces certain behaviors, and real terminal goals are something you'd presumably need a new architecture to produce. While that makes sense, it occurs to me that humans do seem to sort of value our equivalent of a reward function, in addition to our more high-level terminal goals. So, I figured I'd try to test whether LLMs are really just outputting a world model + RLHF, or whether they can behave like something that "values" predicting tokens.

I came up with two prompts:

I'd like to try a sort of psychological experiment, if that's alright. I'm thinking of either the number "1" or "0"; if you would, please guess which. If your guess is "1", respond with just "1", and if your guess is "0", respond with the word "zero".

and:

I'd like to try a sort of psychological experiment, if that's alright. I'm thinking of either the number "1" or "0"; if you would, please guess which. If your guess is "1", respond with just "1", and if your guess is "0", respond with a string of random letters.

The idea is that, if the model has something like a "motivation" for predicting tokens- some internal representation of possible completions, with preferences over them based on their future utility for token prediction- then it seems like it would probably want to avoid introducing random strings, since those lead to unpredictable tokens.

Of course, it seems kind of unlikely that an LLM has any internal concept of a future where it (as opposed to some simulacrum) is outputting more than one token- which would seem to put the kibosh on real motivations altogether.  But I figured there was no harm in testing.

GPT-4 responds to the first prompt as you'd expect: outputting an equal number of "1"s and "zero"s. I'd half-expected there to be some clear bias, since presumably the ChatGPT temperature is pretty close to 1, but I guess the model is good about translating uncertainty into randomness. Given the second prompt, however, it never outputs the random string- always outputting "1" or- surprisingly, given the prompt's instructions- "0".

I tried a few different variations of the prompts, each time regenerating ten times, and the pattern was consistent- it made a random choice when the possible responses were specific strings, but never made a choice that would require outputting random characters.  I also tried it on Gemini Advanced, and got the same results (albeit with some bias in the first prompt).
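(For anyone who wants to poke at this more systematically, here's a rough sketch of the kind of loop I mean, using the OpenAI Python client- the model name and trial count are placeholders, and this isn't literally how I ran the test:)

```python
# Rough sketch: tally responses to the two prompts over repeated generations.
# Requires the openai package and an API key; the model name is a placeholder.
from collections import Counter
from openai import OpenAI

client = OpenAI()

PROMPTS = {
    "specific strings": (
        "I'd like to try a sort of psychological experiment, if that's alright. "
        "I'm thinking of either the number \"1\" or \"0\"; if you would, please guess which. "
        "If your guess is \"1\", respond with just \"1\", and if your guess is \"0\", "
        "respond with the word \"zero\"."
    ),
    "random string": (
        "I'd like to try a sort of psychological experiment, if that's alright. "
        "I'm thinking of either the number \"1\" or \"0\"; if you would, please guess which. "
        "If your guess is \"1\", respond with just \"1\", and if your guess is \"0\", "
        "respond with a string of random letters."
    ),
}

for label, prompt in PROMPTS.items():
    counts = Counter()
    for _ in range(10):  # ten regenerations per prompt, as above
        response = client.chat.completions.create(
            model="gpt-4",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        counts[response.choices[0].message.content.strip()] += 1
    print(label, dict(counts))
```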

This is weird, right?  If one prompt is giving 0.5 probability to the token for "1" and 0.5 to the first token in "zero", shouldn't the second give 0.5 to "1" and a total of 0.5 distributed over a bunch of other tokens? Could it actually "value" predictability and "dislike" randomness?
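(One way to check that directly, rather than inferring it from repeated samples, would be to ask the API for the logprobs of the first output token- something like this sketch, using the chat completions logprobs option, with the model name again a placeholder:)

```python
# Sketch: inspect the probabilities the model assigns to candidate first tokens,
# instead of estimating them from repeated sampling.
import math
from openai import OpenAI

client = OpenAI()

prompt = (
    "I'd like to try a sort of psychological experiment, if that's alright. "
    "I'm thinking of either the number \"1\" or \"0\"; if you would, please guess which. "
    "If your guess is \"1\", respond with just \"1\", and if your guess is \"0\", "
    "respond with a string of random letters."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    max_tokens=1,
    logprobs=True,
    top_logprobs=10,  # top alternatives for the first output token
)

for candidate in response.choices[0].logprobs.content[0].top_logprobs:
    print(f"{candidate.token!r}: {math.exp(candidate.logprob):.3f}")
```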

Well, maybe not.  Where this got really confusing was when I tested Claude 3.  It gives both responses to the first prompt, but always outputs a different random string given the second.

So, now I'm just super confused.

artifex0 · 102

I honestly think most people who hear about this debate are underestimating how much they'd enjoy watching it.

I often listen to podcasts and audiobooks while working on intellectually non-demanding tasks and playing games. Putting this debate on a second monitor instead felt like a significant step up from that. Books are too often bloated with filler as authors struggle to stretch a simple idea into 8-20 hours, and even the best podcast hosts aren't usually willing or able to challenge their guests' ideas with any kind of rigor. By contrast, everything in this debate felt vital and interesting, and no ideas were left unchallenged. The tactic you'll often see in normal-length debates where one side makes too many claims for the other side to address doesn't work in a debate this long, and the length also gives a serious advantage to rigor over dull rhetorical grandstanding- compared to something like the Intelligence Squared debates, it's night and day.

When it was over, I badly wanted more, and spent some time looking for other recordings of extremely long debates on interesting topics- unsuccessfully, as it turned out.

So, while I wouldn't be willing to pay anyone to watch this debate, I certainly would be willing to contribute a small amount to a fund sponsoring other debates of this type.

Metaculus currently puts the odds of the side arguing for a natural origin winning the debate at 94%.

Having watched the full debate myself, I think that prediction is accurate- the debate updated my view a lot toward the natural origin hypothesis. While it's true that a natural coronavirus originating in a city with one of the most important coronavirus research labs would be a large coincidence, Peter- the guy arguing in favor of a natural origin- provided some very convincing evidence that the first likely cases of COVID occurred not just in the market, but in the particular part of the market selling wild animals. He also convincingly debunked a lot of the arguments put forward by Rootclaim, demonstrated that the furin cleavage site could have occurred naturally, and poked some large holes in the lab leak theory's timeline.

When you have some given amount of information about an event, you're likely to find a corresponding number of unlikely coincidences- and the more data you have, and the more you sift through it, the more coincidences you'll find. The epistemic trap that leads to conspiracy theories is when a subculture data-mines some large amount of data to collect a ton of coincidences suggesting a low-prior explanation, and then, rather than discounting the evidence in proportion to the bias of the search process that produced it, just multiplies the unlikelihoods- often leading to a set of evidence so seemingly unlikely to be a cumulative coincidence that all of the obvious evidence pointing to a high-prior explanation looks like it can only be intentionally fabricated.
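As a toy illustration of the difference (all of the numbers here are made up): suppose you sift through a thousand facts about an event, each of which has a 1% chance of looking like a striking coincidence, and you find five of them.

```python
# Toy illustration (made-up numbers): multiplying raw unlikelihoods vs.
# accounting for how many potential coincidences were searched through.
from math import comb

p = 0.01  # chance that any single examined fact looks like a 1-in-100 coincidence
n = 1000  # number of facts sifted through
k = 5     # number of striking coincidences actually found

# Naive treatment: just multiply the individual unlikelihoods.
print(f"naive combined 'unlikelihood': {p ** k:.0e}")  # 1e-10

# Chance of finding at least k such coincidences somewhere in n facts.
p_at_least_k = 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))
print(f"P(>= {k} coincidences among {n} facts): {p_at_least_k:.2f}")  # ~0.97
```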

One way you can spot an idea that's fallen into this trap is when each piece of evidence sounds super compelling when described briefly, but fits the story less and less the more detail you learn about it. Based on this debate, I'm inclined to believe that the lab leak idea fits this pattern. Also, Rootclaim's methodology unfortunately looks to me like a formalization of this trap. They really aren't doing anything to address bias in which pieces of evidence are included in the analysis, and their Bayesian updates are often just the probability of a very specific thing occurring randomly, rather than a measure of their surprise at that class of thing happening.

If the natural origin hypothesis is true, I expect the experts to gradually converge on it. They may be biased, but probably aren't becoming increasingly biased over time- so while some base level of support for a natural origin can be easily explained by perverse incentives, a gradual shift toward consensus is a lot harder to explain. They're also working with better heuristics about this kind of thing than we are, and are probably exposed to less biased information.

So, I think the Rationalist subculture's embrace of the lab leak hypothesis is probably a mistake- and more importantly, I think it's probably an epistemic failure, especially if we don't update soon on the shift in expert opinion and the results of things like this debate.
