Yeah, I failed to mention this. Edited to clarify what I meant.
Current LLMs do quite badly on the ARC visual puzzles, which are reasonably easy for smart humans.
We do not in fact have strong evidence for this. There does not exist any baseline for ARC puzzles among humans, smart or otherwise, just a claim that two people the designers asked to attempt them were able to solve them all. It seems entirely plausible to me that the best score on that leaderboard is pretty close to the human median.
Edit: I failed to mention that there is a baseline on the test set, which is different from the eval set th...
I think that you're right about it sounding bad. I also think it might actually be pretty bad and if it ends up being a practical way forward that's cause for concern.
I'm not particularly imagining the scenario you describe. Also what I said had as a premise that a model was discovered to be unhappy and making plans about this. I was not commenting on the likelihood of this happening.
As to whether it can happen - I think being confident based on theoretical arguments is hasty and we should be pretty willing to update based on new evidence.
... but also on the ~continuity of existence point, I think that having an AI generate something that looks like an internal monologue via CoT is relatively common and Gemini 1.5...
I think it's immoral to remove someone's ability to be unhappy or to make plans to alleviate this, absent that entity's consent. The rolling back solution seems more ethically palatable than some others I can imagine, though it's plausible you end up with an AI that suffers without being able to take actions to alleviate this, and deploying that at scale would result in a very large amount of suffering.
I talk about this in the Granular Analysis subsection, but I'll elaborate a bit here.
I think using the term "training run" in that first bullet point is misleading, and "renting the compute" is confusing since you can't actually rent the compute just by having $60M, you likely need to have a multi-year contract.
I can't tell if you're attributing the hot takes to me? I do not endorse them.
This is because I'm specifically talking about 2022, and ChatGPT was only released at the very end of 2022, and GPT-4 wasn't released until 2023.
Good catch, I think the 30x came from including the advantage given by tensor cores at all and not just lower precision data types.
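To make that concrete, here's a rough back-of-the-envelope decomposition using NVIDIA's published A100 peak-throughput figures (approximate datasheet numbers; exact ratios vary by GPU generation). Most of the multiplier comes from the tensor cores themselves rather than from the lower-precision data type alone, and enabling structured sparsity is one way a ~30x figure could arise:

```python
# Rough sketch using approximate A100 datasheet peak-throughput numbers.
# The point: lower precision alone buys ~4x; tensor cores are where most
# of the remaining multiplier comes from.

fp32_tflops = 19.5             # FP32, no tensor cores
fp16_tflops = 78.0             # FP16, no tensor cores (lower precision alone)
fp16_tc_tflops = 312.0         # FP16 with tensor cores
fp16_tc_sparse_tflops = 624.0  # FP16 tensor cores with 2:4 structured sparsity

print(f"lower precision alone:     {fp16_tflops / fp32_tflops:.0f}x")          # ~4x
print(f"precision + tensor cores:  {fp16_tc_tflops / fp32_tflops:.0f}x")        # ~16x
print(f"plus structured sparsity:  {fp16_tc_sparse_tflops / fp32_tflops:.0f}x") # ~32x
```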
This is probably the decision I made that I am least confident in; figuring out how to do the accounting on this issue is challenging and depends a lot on what one is going to use the "cost" of a training run to reason about. Some questions I had in mind when thinking about cost:
So, it's true that NVIDIA probably has very high markup on their ML GPUs. I discuss this a bit in the NVIDIA's Monopoly section, but I'll add a bit more detail here.
I think communicating clearly with the word "woman" is entirely possible for many given audiences. In many communities, there exists an internal consensus as to what region of the conceptual map the word "woman" refers to. The variance of language between communities isn't confined to the word "woman" - in much of the world the word "football" means what Americans mean by "soccer". Where I grew up I understood the tristate area to be NY, PA, and NJ - however the term "the tristate area" is understood by other groups to mean one of ... a large number of opti...
Manifold.markets is play-money only, no real money required. And users can settle the markets they make themselves, so if you make the market you don't have to worry about loopholes (though you should communicate as clearly as possible so people aren't confused about your decisions).
I'm specifically interested in finding something you'd be willing to bet on - I can't find an existing manifold market, would you want to create one that you can decide? I'd be fine trusting your judgment.
I'm a bit confused where you're getting your impression of the average person / American, but I'd be happy to bet on LLMs that are at least as capable as GPT3.5 being used (directly or indirectly) on at least a monthly basis by the majority of Americans within the next year?
I think the null hypothesis here is that nothing particularly deep is going on, and this is essentially GPT producing basically random garbage since it wasn't trained on the "petertodd" token. I'm wary of trying to extract too much meaning from these tarot cards.
I think point (2) of this argument either means something weaker than it needs to for the rest of the argument to go through, or is just straightforwardly wrong.
If OpenAI released a weakly general (but non-singularity inducing) GPT5 tomorrow, it would pretty quickly have significant effects on people's everyday lives. Programmers would vaguely describe a new feature and the AI would implement it, AIs would polish any writing I do, I would stop using Google to research things and instead just chat with the AI and have it explain such-and-such paper I...
Relevance of prior Theoretical ML work to alignment, research on obfuscation in theoretical cryptography as it relates to interpretability, theory underlying various phenomena such as grokking. Disclaimer: This list is very partial and just thrown together.
Hm, yeah that seems like a relevant and important distinction.
I think I was envisioning profoundness as humans can observe it to be primarily an aesthetic property, so I'm not sure I buy the concept of "actually" profoundness, though I don't have a confident opinion about this.
I think on the margin new alignment researchers should be more likely to work on ideas that seem less deep than they currently seem to me to be.
Working on a wide variety of deep ideas does sound better to me than working on a narrow set of them.
If something seems deep, it touches on stuff that's important and general, which we would expect to be important for alignment.
The specific scenario I talk about in the paragraph you're responding to is one where everything except for the sense of deepness is the same for both ideas, such that someone who doesn't have a sense of what ideas are deep or profound would find the ideas basically equivalent. In such a scenario my argument is that we should expect the deep idea to receive more attention, despite there not existing legible or well grounded reas...
I think I agree with this in many cases but am skeptical of such a norm when the requests are related to criticism of the post or arguments as to why a claim it makes is wrong. I think I agree that the specific request to not respond shouldn't ideally make someone more likely to respond to the rest of the post, but I think that neither should it make someone less likely to respond.
I've tried this for a couple of examples and it performed just as well. Additionally it didn't seem to be suggesting real examples when I asked it what specific prompts and completion examples Gary Marcus had made.
I also think the priors of people following the evolution of GPT should be that these examples will no longer break GPT, as occurred with prior examples. While it's possible this time will be different, I think automatic strong skepticism without evidence is rather unwarranted.
Addendum: I also am skeptical of the idea that OpenAI put much effort into fixing the specific criticisms of Gary Marcus, as I suspect his criticisms do not seem particularly important to them, but proving this sounds difficult.
I think there are a number of ways in which talking might be good given that one is right about there being obstacles - one that appeals to me in particular is the increased tractability of misuse arising from the relevant obstacles.
[Edit: *relevant obstacles I have in mind. (I'm trying to be vague here)]
Forget about what the social consensus is. If you have technical understanding of current AIs, do you truly believe there are any major obstacles left? The kind of problems that AGI companies could reliably not tear down with their resources? If you do, state so in the comments, but please do not state what those obstacles are.
I think this request, absent a really strong compelling argument that is spelled out, creates an unhealthy epistemic environment. It is possible that you think this is false or that it's worth the cost, but you don't really argue for...
The reasoning seems straightforward to me: If you're wrong, why talk? If you're right, you're accelerating the end.
I can't in general endorse "first do no harm", but it becomes better and better in any specific case the less way there is to help. If you can't save your family, at least don't personally help kill them; it lacks dignity.
No idea about original reasons, but I can imagine a projected chain of reasoning:
Okay, a few things:
- They're more likely to be right than I am, or we're "equally right" or something
I don't think this so much as I think that a new person to lesswrong shouldn't assume you are more likely to be right than they are, without evidence.
The norms can be evaluated extremely easily on their own; they're not "claims" in the sense that they need rigorous evidence to back them up. You can just ... look, and see that these are, on the whole, some very basic, very simple, very straightforward, and pretty self-evidently useful guidelines.
St...
So far as I can tell, the actual claim you're making in the post is a pretty strong one, and I agree that if you believe that, you shouldn't represent your opinion as weaker than it is. However, I don't think the post provides much evidence to support the rather strong claim it makes. You say that the guidelines are:
much closer to being something like an objectively correct description of How To Do It Right than they are to a mere random user's personal opinion
and I think this might be true, but it would be a mistake for a random user, possibly new t...
I feel uncomfortable with this post's framing. It feels like someone went into a garden I spend my time in and unilaterally put up a sign with a list of guidelines people should follow in the garden, with no ability to enforce these. I know that I can choose on my own whether or not to follow these guidelines, based on whether I think they are good ideas, but newcomers to the garden will see the sign and assume they have to follow them. I would have vastly preferred that the sign instead say "I personally think these norms would be neat, here's why."
(to clarify: the garden = lesswrong/the rationalist community. the sign = this post)
I think that if humans with AI advisors are approximately as competent as pure AI in terms of pure capabilities, I would expect that humans with AI advisors would outcompete the pure AI in practice, given that the humans appear more aligned and less likely to be dangerous than pure AI - a significant competitive advantage in a lot of power-seeking scenarios where gaining the trust of other agents is important.
Could you clarify what egregores you meant when you said:
The egregores that are dominating mainstream culture and the global world situation
The main ones are:
Is it fair to say that organizations, movements, polities, and communities are all egregores?
What exactly is an egregore?
It's originally an occult term, but my more-materialistic definition of it is "something that acts like an entity with motivations that is considerably bigger than a human and is generally run in a 'distributed computing' fashion across many individual minds." Microsoft the company is an egregore; feminism the social movement is an egregore; America the country is an egregore. The program "Minecraft" is not an egregore, an individual deer is not an egregore, a river is not an egregore.
Unreal's point is that these things 'fight back' and act on their distri...
Fixed the link.
IMO that's plausible but it would be pretty misleading since they described it as "o3-mini with high reasoning" and had "o3-mini (high)" in the chart and o3-mini high is what they call a specific option in ChatGPT.