robo's comments, sorted by newest

</rant> </uncharitable> </psychologizing>
robo · 9h · 10

Light disagree.  Prefix modifiers are cognitively burdensome compared to postfix modifiers.  Imagine reading:

"What I'm about to say is a bit of a rant.  I'm about 30% confident it's true.  Disclosure, I have a personal stake in the second organization involved.  I'm looking for good counter arguments.  Based on a conversation with Paul.  I have a formal writeup at this blog post.  Part of the argument is unfair, I apologize.  I..."

Gaaa, just give me something concrete already!  It's going to be hard enough understanding your argument as it is; it's even harder to understand your argument while having to keep these unresolved modifiers loaded in my mental stack.

Tomás B.'s Shortform
robo · 4d · 70

Ha, and I have been writing up a long-form piece on when AI-coded GOFAI might become effective, one might even say unreasonably effective.
LLMs aren't very good at learning in environments with very few data samples, such as "learning on the job" or interacting with the slow real world.  But there often exist heuristics, ones that are difficult to run on a neural net, with excellent specificity, capable of proving their predictive power from a small number of examples.  You can try to learn the positions of the planets by feeding 10,000 examples into a neural network, but you're much better off with Newton's laws coded into your ensemble.  Data-constrained environments (like, again, robots and learning on the job) are domains where the bitter lesson might not have bite.
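
A toy numpy sketch of that last point (my own illustration; the circular-orbit model and the polynomial stand-in for a neural net are assumptions, not anything from this thread): when the structure is hard-coded, a handful of samples only has to pin down two free parameters, while an unstructured fit to the same samples extrapolates badly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a circular orbit, so the angle grows linearly with time.
true_omega, true_phase = 0.017, 1.3              # radians/day, radians
days = np.array([0.0, 30.0, 60.0, 90.0, 120.0])  # only five observations
angles = true_omega * days + true_phase + rng.normal(0.0, 0.01, size=days.shape)

# "GOFAI" route: the structure (angle = omega * t + phase) is hard-coded, so the
# five samples only have to pin down two parameters, via ordinary least squares.
A = np.column_stack([days, np.ones_like(days)])
omega_hat, phase_hat = np.linalg.lstsq(A, angles, rcond=None)[0]

# "Pure learning" route: an unstructured, over-flexible model fit to the same
# five samples (a degree-4 polynomial standing in for a small neural net).
poly = np.polynomial.Polynomial.fit(days, angles, deg=4)

# Extrapolate a year out, far beyond the training window.
t = 365.0
truth = true_omega * t + true_phase
print("structured-model error:", abs(omega_hat * t + phase_hat - truth))
print("flexible-model error:  ", abs(poly(t) - truth))
```

The structured model's error stays near the noise floor, while the flexible fit wanders off as soon as it leaves the five training points.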

robo's Shortform
robo · 1mo* · 73

Back in the GOFAI days, when AI meant A* search, I remember thinking: 

  1. Computers are wildly superhuman at explicit (System 2) reasoning, like doing arithmetic or searching through chess moves
  2. Computers are garbage at implicit (System 1) reasoning, like recognizing a picture of a cat
  3. When computers get good at System 1, they will be wildly superhuman at everything

Now transformers appear to be good at System 1 reasoning, but computers aren't better than humans at everything.  Why?
I think it comes down to:

Computers' System 1 is still wildly sub-human at sample efficiency; they're just billions of times faster than humans

LLMs work because they can train on an inhuman amount of reading material.  When trained on only human amounts of material, they suck.

LLM agents aren't very good because they can't learn on the job.  Even dumb humans learn better instincts after a little on-the-job practice.  We can just barely improve an LLM's System 1 from its System 2, but only by brute-forcing an inhuman number of roll-outs.

Robots suck, because the real world is slow and we don't have good tricks to train their System 1 by brute force.

We're in a weird paradigm where computers are billions of times faster than humans, but thousands of times worse at learning from a datum.
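
To put very rough numbers on "an inhuman amount of reading material" (the figures below are order-of-magnitude guesses of mine, not anything measured in this thread):

```python
# Very rough orders of magnitude: how much more text a frontier LLM trains on
# than a human ever encounters.  Both numbers are loose assumptions.
human_words = 2e8      # ~10k words/day of reading and listening for ~50 years
llm_tokens = 1.5e13    # roughly the scale reported for recent frontier pretraining runs

print(f"data ratio: {llm_tokens / human_words:,.0f}x")   # ~75,000x
```

Tens of thousands of times more text for roughly comparable verbal competence is the sense in which the per-datum learning is so much worse.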

ParrotRobot's Shortform
robo · 2mo* · 50

I think I disagree.  It's more informative to answer in terms of value as it would be measured today, not value after the economy adjusts.

Suppose someone from 1800 wants to figure out how big a deal mechanized farm equipment will be for humanity.  They call up 2025 and ask "How big a portion of your economy is devoted to mechanized farm equipment, or farming enabled by mechanized equipment?"  We give them a tiny number.  They also ask about top-hats, and we also give them a tiny number.  From these tiny numbers they conclude both mechanized farm equipment and top-hats won't be important for humanity.

EDIT: The sort of situation I'm worried about your definition missing is one where remote-worker AGI becomes too cheap to meter, but human hands are still valuable.

Tomás B.'s Shortform
robo · 2mo · 20

Would you agree your take is rather contrarian?

 * This is not a parliamentary system.  The President doesn't get booted from office when they lose majority support -- they have to be impeached[1].
 * Successful impeachment takes 67 Senate votes.
 * 25 states (half of Senate seats) voted for Trump 3 elections in a row (2016, 2020, 2024).
 * So to impeach Trump, you'd need 17 more votes (67 − 50), i.e. Senators from at least 9 of the 25 states where Trump won 3 elections in a row (arithmetic spelled out below).
 * Betting markets expect (70% chance) Republicans to keep their 50-seat majority in the November election, not a crash in support.

  1. ^

    Or removed by the 25th amendment, which is strictly harder if the president protests (requires 2/3 vote to remove in both House and Senate).
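
Spelling out the vote arithmetic from the bullets above (a sketch using only the figures already stated):

```python
# The bullet arithmetic above, using the figures exactly as stated in the comment.
votes_to_remove = 67                               # Senate votes needed to convict
senators_outside_trump3_states = (50 - 25) * 2     # 50 senators from the other 25 states
shortfall = votes_to_remove - senators_outside_trump3_states   # 67 - 50 = 17
states_needed = -(-shortfall // 2)                 # ceil(17 / 2) = 9 states, 2 senators each
print(shortfall, states_needed)                    # 17 9
```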

Tomás B.'s Shortform
robo · 2mo · 12

...your modal estimate for the timing of Vance ascending to the presidency is more than two years before Trump's term ends?

Tomás B.'s Shortform
robo · 2mo · 50

And the market's top pick for President has read AI 2027.

Anthropic Faces Potentially “Business-Ending” Copyright Lawsuit
robo · 2mo · 10

$750 per book seems surprisingly reasonable to me as a royalty rate for a compulsory AI-ingest license.  Compulsory licenses are common in e.g. the music industry: you must license your musical work for covers (and get a 12¢ royalty per distribution).

Alexander Gietelink Oldenziel's Shortform
robo · 3mo · 5117

I second the video recommendation.

A friend in China, in a rare conversation we had about international politics, was annoyed at US politicians for saying China was "supporting" Russia.  "China has the production capacity to easily make 500,000 drones per day,"[1] he said.  "If China were supporting Russia, the war would be over."  And I had to admit I had not credited the Chinese government for keeping its insanely competitive companies from smuggling more drones into Russia.

  1. ^

    This seemed like a drastic underestimate to me.

Habryka's Shortform Feed
robo · 3mo* · 2918

Huh, I didn't expect to take Gary Marcus's side against yours but I do for almost all of these.  If we take your two strongest cases:

  • No massive advance (no GPT-5, or disappointing GPT-5)
    • There was no GPT-5 in 2024?  And there is still no GPT-5?  People were talking in late 2023 like GPT-5 might come out in a few months, and they were wrong.  The magic of "everything just gets better with scale" really seemed to slow after GPT-4?
    • On reasoning models: I thought reasoning models were happening internally at Anthropic in 2023 and being distilled into public models, which was why Claude was so good at programming.  But I could be wrong or have my timelines messed up.
  • Modest lasting corporate adoption
    • I'd say this is true?  Read e.g. Dwarkesh talking about how he's pretty AI-forward, but even he has a lot of trouble getting AIs to do something useful.  Many corporations are trying to get AIs to be useful in California, fewer elsewhere, and I'm not convinced these efforts will last.

I don't think I really want to argue about these; it's more that I find it weird people can in good faith have such different takes.  I remember 2024 as a year I got continuously more bearish on LLM progress[1].

  1. ^

    Until DeepSeek in late December.
