I deeply value evidence, reason, and letting people draw their own conclusions. I dislike telling anyone what to think or do.
I believe you, yes YOU, are capable of reading and understanding everything you want to read and understand.
This is a good point! Typically I start from a clean commit in a fresh chat, to keep this problem from happening too easily, and proceed through the project in the smallest steps I can get Claude to make. That's what makes the situation feel so strange: it feels just like this problem, but it happens instantly, in Claude's first responses.
I happened to be discussing this in the Discord today. I have a little hobby project that was suddenly making fast progress with 3.7 for the first few days, which was very exciting, but then a few days ago it felt like something changed again, and suddenly even the old models are stuck in this weird pattern of, like... failing to address the bug, and instead hyper-fixating on adding a bunch of surrounding extra code to handle special cases, or sometimes even simply rewriting the old code and claiming it fixes the bug, and the project is suddenly at a complete standstill. Even if I eventually yell at it strongly enough that it stops adding MORE buggy code instead of fixing the bug, it introduces a new bug, and the whole back-and-forth argument with Claude over whether the bug even exists starts all over. I can't say this is rigorously tested or anything; it's just one project, and surely the project itself is influencing the model's behavior and quirks as it gets bigger, but I dunno man, something just feels weird and I can't put my finger on exactly what.
I do think it's helpful that managers now have a reliable way to summarize large numbers of comments, instead of making some poor intern with Excel try to cobble together "sentiment analysis" to "read" thousands of comments so they don't have to pay for a proper data scientist, and I wonder if that's already had some effects in the world.
Ah, what a fun idea! I wonder if coloring or marking the ropes and/or edges somehow would make it easier to assemble ad hoc. I think Veritasium's video about non-periodic tilings included some sort of little markers on the edges that helped him orient new tiles, but that was with Penrose tiles, and I'm not sure this shape has the same option.
This is absolutely a selfish request, so bear that in mind, but could you include screenshots and/or quotes of all X.com posts, and link to whatever each post links to when applicable? I have the site blocked.
I thought these were pretty... let's say "exciting"... reads, but I'd be interested to hear more people's opinions on whether this is a trustworthy source.
Thank you.
It seems like if there is any non-determinism at all, there's always going to be an unavoidable potential for naughty thoughts, so whatever you call the "AI" must address them as part of its function anyway. Either that, or there is a deterministic solution?
I've actually noticed this in a hobby project, where I have some agents running around a little MOO-like text world and talking to each other. With DeepSeek-R1, which I use just because it's fun to watch them "think" like little characters, I see this sort of thing a lot (maybe 1-in-5-ish, though there's a LOT of other scaffolding and stuff going on around it which could also be causing weird problems):