hairyfigment


Comments


I don't see how any of it can be right. Getting one algorithm to output Spongebob wouldn't cause the SI to watch Spongebob - even a less silly claim in that vein would still be false. The Platonic agent would know the plan wouldn't work, and thus wouldn't do it.

Since no individual Platonic agent could do anything meaningful alone, and they plainly can't communicate with each other, they can only coordinate by means of reflective decision theory. That's fine; we'll just assume that's the obvious way for intelligent minds to behave. But then the SI works the same way, knows the Platonic agents will think that way, and per RDT refuses to change its behavior based on attempts to game the system. So none of this ever happens in the first place.

(This is without even considering the serious problems with assuming Platonic agents would share a goal to coordinate on. I don't think I buy it. You can't evolve a desire to come into existence, nor does an arbitrary goal seem to require it. Let me assure you, there can exist intelligent minds which don't want worlds like ours to exist.)

https://arxiv.org/abs/1712.05812

It's directly about inverse reinforcement learning, but IRL should be strictly stronger than RLHF. It seems incumbent on those who disagree to explain why throwing away information here would be enough of a normative assumption (contrary to every story about wishes).
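For concreteness, here is a minimal sketch of the paper's no-free-lunch point (my own toy illustration, not code from the paper): several different (planner, reward) decompositions produce exactly the same behavior, so behavioral data alone - whether the full demonstrations IRL uses or the coarser comparison data RLHF works from - can't distinguish them without some extra normative assumption. The function and variable names below are mine, chosen for illustration.

```python
import numpy as np

# Toy illustration: three different (planner, reward) decompositions yield
# the *same* policy over three actions, so no amount of observed behavior
# can tell them apart without an added normative assumption.

def boltzmann_policy(reward, beta):
    """Planner: choose actions with probability proportional to exp(beta * reward)."""
    z = beta * np.asarray(reward, dtype=float)
    z -= z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

true_reward = np.array([1.0, 0.0, -1.0])

# 1. A noisily rational planner maximizing the true reward.
rational = boltzmann_policy(true_reward, beta=5.0)

# 2. An anti-rational planner paired with the negated reward - identical policy.
anti_rational = boltzmann_policy(-true_reward, beta=-5.0)

# 3. A planner that ignores reward entirely and just emits this policy,
#    paired with a zero reward function.
hardcoded = rational.copy()
zero_reward = np.zeros(3)

assert np.allclose(rational, anti_rational)
assert np.allclose(rational, hardcoded)
print(rational)  # all three decompositions predict the same observed behavior
```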

this always helps in the short term,

You seem to have 'proven' that evolution would use that exact method if it could, since evolution never looks forward and must always build on prior adaptations that provided immediate gain. By the same token, of course, evolution doesn't have any knowledge - but if "knowledge" corresponds to any simple change it could make, then that change will obviously happen.

Well, that's disturbing in a different way. How often do they lose a significant fraction of their savings, though? How many are unvaccinated, which isn't the same thing as loudly complaining about the shot's supposed risks? The apparent lack of Flat Earthers could point to them actually expecting reality to conform to their words, and to a limit on the silliness of the claims they'll believe. But if they aren't losing real money, that could point to it being a game (or a cost of belonging).

The answer might be unhelpful due to selection bias, but I'm curious to learn your view of QAnon. Would you say it works like a fandom for people who think they aren't allowed to read or watch fiction? I get the strong sense that half the appeal - aside from the fun of bearing false witness - is getting to invent your own version of how the conspiracy works. (In particular, the pseudoscientific FNAF-esque idea at the heart of it isn't meant to be believed, but to inspire exegesis like that on the Kessel Run.) This would be called fanfic or "fanwank" if they admitted it was based on a fictional setting. Is there something vital you think I'm missing?

There have, in fact, been numerous objections to genetically engineered plants and, by implication, to everything in the second category. You might not realize how wary the public was (and is) of engineered biology, on the grounds that nobody understood how it worked in terms of exact internal details. The reply that sort of convinced people - though it clearly didn't calm every fear about new biotech - wasn't that we understood it in some sense. It was that humanity had been genetically engineering plants via cultivation for literal millennia, so empirical facts allowed us to rule out many potential dangers.

Note that it requires the assumption that consciousness is material

Plainly not, assuming this is the same David J. Chalmers.

This would make more sense if LLMs were directly selected for predicting preferences, which they aren't. (RLHF tries to bridge the gap, but it apparently breaks GPT's ability to play chess - though I'll grant the surprise here is that it works at all.) LLMs are primarily selected to predict human text or speech. Now, I'm happy to assume that if we gave humans a D&D-style boost to all mental abilities, each of us would create a coherent set of preferences out of our inconsistent desires, which vary and may conflict even within an individual at a given time. Such augmented humans could choose to express their true preferences, though they still might not. If we gave LLMs that same idealized boost, it would just improve their ability to predict what humans or augmented humans would say. The augmented LLM wouldn't automatically care about the augmented human's true values.

While we can loosely imagine asking LLMs to give the commands that an augmented version of us would give, that seems to require actually knowing how to specify what a D&D-style ability boost would mean for humans - and that will only resemble the same boost for an AI at an abstract mathematical level, if at all. It seems to take us back to the CEV problem of explaining how extrapolation works. Without being able to do that, we'd just be hoping a better LLM would look at our inconsistent use of words like "smarter" and pick the out-of-distribution meaning we want, for cases which have mostly never existed. This is a lot like what "Complexity of Wishes" was trying to get at, as well as the longstanding arguments against CEV. Vaniver's comment seems to point in this same direction.

Now, I do think recent results are some evidence that alignment would be easier for a Manhattan Project to solve. It doesn't follow that we're on track to solve it.

The classification heading "philosophy," never mind the idea of meta-philosophy, wouldn't exist if Aristotle hadn't tutored Alexander the Great. It's an arbitrary concept which implicitly assumes we should follow the aristocratic-Greek method of sitting around talking (or perhaps giving speeches to the Assembly in Athens). Moreover, people smarter than either of us have tried this dead-end method for a long time with little progress. Decision theory makes for a better framework than Kant's ideas; you've made progress not because you're smarter than Kant, but because he was banging his head against a brick wall. So to answer your question, if you've given us any reason to think the approach of looking for "meta-philosophy" is promising, or that it's anything but a proven dead-end, I don't recall it.

Oddly enough, not all historians are total bigots, and my impression is that the anti-Archipelago version of the argument existed in academic scholarship - perhaps not in the public discourse - long before JD. McNeill, for example, published a book about fragmentation in 1982, whereas GG&S came out in 1997.
