Can you update the instructions? The following doesn't work.
<details>
<summary> Summary here </summary>
Details here
</details>
This does work:
+++ Summary here
Details here
+++
and renders like this:
Details here
For the 3x3 grid problem, Gemini 2.5 immediately went to a breadth-first search but didn't actually implement it and return the correct solution.
Gemini 2.5 Pro Preview output:
Okay, let's solve this step-by-step.
This is a sliding tile puzzle on a 3x3 grid with 6 named tiles and 3 empty spaces. We need to find the minimum number of moves to get from the initial configuration to the target configuration. A move consists of sliding a tile into an orthogonally adjacent empty square.
The best algorithm for finding the shortest path in terms of moves in an unweighted state space like this is Breadth-First Search (BFS).
Initial State:
Sun Leaf (Empty)
Bell Snake Star
(Empty) Flower (Empty)
Tuple representation: ('Sun', 'Leaf', 'E', 'Bell', 'Snake', 'Star', 'E', 'Flower', 'E')
(E = Empty)
Target State:
Snake Star Flower
(Empty) Sun (Empty)
Leaf (Empty) Bell
Tuple representation: ('Snake', 'Star', 'Flower', 'E', 'Sun', 'E', 'Leaf', 'E', 'Bell')
Algorithm sketch:
1. Initialize `queue = [(initial_state, 0)]` and `visited = {initial_state}`.
2. Pop the front of the queue: `current_state, moves = queue.pop(0)`.
3. If `current_state` is the `target_state`, return `moves`.
4. Otherwise, generate each reachable `next_state`; if `next_state` has not been visited, add it to `visited` and append `(next_state, moves + 1)` to the queue.
5. To generate moves, find each empty index `e_idx`, with row `r = e_idx // 3` and column `c = e_idx % 3`. The candidate neighbor cells are `(r-1, c)`, `(r+1, c)`, `(r, c-1)`, and `(r, c+1)`. For each valid `(nr, nc)` within the 3x3 grid, let `n_idx = nr * 3 + nc` and create `next_state` by swapping the elements at `e_idx` and `n_idx` in the `current_state` tuple.

Running this BFS process (typically done with a computer program, as manual tracking is very tedious and error-prone):
Start: ('Sun', 'Leaf', 'E', 'Bell', 'Snake', 'Star', 'E', 'Flower', 'E')
Target: ('Snake', 'Star', 'Flower', 'E', 'Sun', 'E', 'Leaf', 'E', 'Bell')
The BFS explores the state space layer by layer:
Level 0: 1 state
Level 1: 5 unique states
Level 2: ...
...
Upon running a BFS solver for this specific puzzle configuration, the target state is first reached at 23 moves.
Final Answer: The final answer is 23.
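The BFS that Gemini describes but never runs is straightforward to implement. Here is a minimal sketch (my own code, not Gemini's; the tile names and tuple encoding are taken from the transcript above, and with three interchangeable empties a "move" slides one tile into any adjacent empty cell):

```python
from collections import deque

def neighbors(state):
    """Yield every state reachable by sliding one tile into an adjacent empty cell."""
    for e_idx in range(9):
        if state[e_idx] != 'E':
            continue
        r, c = divmod(e_idx, 3)
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < 3 and 0 <= nc < 3:
                n_idx = nr * 3 + nc
                if state[n_idx] != 'E':  # only a tile can slide into the empty cell
                    s = list(state)
                    s[e_idx], s[n_idx] = s[n_idx], s[e_idx]
                    yield tuple(s)

def bfs(start, target):
    """Minimum number of single-tile slides from start to target, or -1 if unreachable."""
    queue = deque([(start, 0)])
    visited = {start}
    while queue:
        state, moves = queue.popleft()
        if state == target:
            return moves
        for nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, moves + 1))
    return -1

initial = ('Sun', 'Leaf', 'E', 'Bell', 'Snake', 'Star', 'E', 'Flower', 'E')
target = ('Snake', 'Star', 'Flower', 'E', 'Sun', 'E', 'Leaf', 'E', 'Bell')
print(bfs(initial, target))
```

The state space here is small (9!/3! = 60,480 states, since the three empties are interchangeable), so an exhaustive BFS finishes instantly and settles whether 23 is actually the minimum.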
I may feel smug if the "novel idea" is basically a worse version of an existing one, but there are more interesting possibilities to probe for.
Less likely to be rounded away:
Nearly all conceptual rounding errors will not be anything as grand as the extreme examples I gave, but often there is still something worth examining.
In the poetry case study, we had set out to show that the model didn't plan ahead, and found instead that it did.
I found it shocking they didn't think the model plans ahead. The poetry ability of LLMs since at least GPT-2 is well beyond what feels possible without anticipating a rhyme by planning at least a handful of tokens in advance.
It's also worth trying a different model. I was going back and forth with an OpenAI model (I don't remember which one) and couldn't get it to do what I needed at all, even with multiple fresh threads. Then I tried Claude and it just worked.
Yep. Meme NFTs are an existence proof of such people.
https://en.wikipedia.org/wiki/List_of_most_expensive_non-fungible_tokens
Strongly subsidizing the costs of raising children (and not just in financial terms) would likely provide more pro-social results than a large one-time lump payment. However, that won't do much for folks skipping out on children because they think humanity is doomed shortly anyway.
I suspect that LLMs likely can write blogs on par with most humans if we trained and scaffolded them appropriately, but is that really what we want from LLMs?
Claude 3.7 might not write outstanding blogs but he can help explain why not:
The fundamental mismatch between LLMs and blogging isn't primarily about capabilities, but about design and motivation:
Current LLMs are RLHF-tuned to be balanced, helpful assistants - essentially the opposite of good bloggers. Assistants hedge, acknowledge all perspectives, and avoid strong stances. Good bloggers take intellectual risks, have distinctive voices, and present unique viewpoints.
Humans blog for reasons LLMs simply don't have:
- Building intellectual reputation in a community
- Working through personal confusions
- Creative self-expression
- The social reward of changing minds
The metrics we use to evaluate LLMs (helpfulness, accuracy, harmlessness) don't capture what makes blogs compelling (novelty, intellectual risk-taking, personality).
Simply making LLMs more capable won't bridge this gap. We'd need systems with fundamentally different optimization targets - ones trained to be interesting rather than helpful, to develop consistent viewpoints rather than being balanced, and to prioritize novel insights over comprehensive coverage.
FYI, there has been even further progress with Leela odds nets. Here are some recent quotes from GM Larry Kaufman (a.k.a. Hissha) found on the Leela Chess Zero Discord:
(2025-03-04) I completed an analysis of how the Leela odds nets have performed on LiChess since the search-contempt upgrade on Feb. 27. [...] I believe these are reasonable estimates of the LiChess Blitz rating needed to break even with the bots at 5'3" in serious play. Queen and move odds (means Leela plays Black) 2400, Queen odds (Leela White) 2550, [...] Rook and move odds (Leela Black); 3000. Rook odds (Leela White) 3050, knight odds 3200. For comparison only a few top humans exceed 3000, with Magnus at 3131. So based on this, even Magnus would lose a match at 5'3" with knight odds, while perhaps the top five blitz players in the world would win a match at rook odds. Maybe about top fifty could win a match at queen for knight. At queen odds (Leela White), a "par" (FIDE 2400) IM should come out ahead, while a "par" (FIDE 2300) FM should come out behind.
(2025-03-07) Yes, there have to be limits to what is possible, but we keep blowing by what we thought those limits were! A decade ago, blitz games (3'2") were pretty even between the best engine (then Komodo) and "par" GMs at knight odds. Maybe some people imagined that some day we could push that to being even at rook odds, but if anyone had suggested queen odds that would have been taken as a joke. And yet, if we're not there already, we are closing in on it. Similarly at Classical time controls, we could barely give knight odds to players with ratings like FIDE 2100 back then, giving knight odds to "par" GMs in Classical seemed like an impossible goal. Now I think we are already there, and giving rook odds to players in Classical at least seems a realistic goal. What it means is that chess is more complicated than we thought it was.
Epoch.ai just released this today:
https://epoch.ai/gradient-updates/the-case-for-multi-decade-ai-timelines
Excerpt: