Jonathan Paulson

Comments

Sorted by
Newest
No wikitag contributions to display.
"AI achieves silver-medal standard solving International Mathematical Olympiad problems"
Jonathan Paulson · 1y

Answer: it was not given the solution. https://x.com/wtgowers/status/1816839783034843630?s=46&t=UlLg1ou4o7odVYEppVUWoQ

"AI achieves silver-medal standard solving International Mathematical Olympiad problems"
Jonathan Paulson · 1y

Anyone have a good intuition for why Combinatorics is harder than Algebra, and/or why Algebra is harder than Geometry, for AIs? Why is it different than for humans?

"AI achieves silver-medal standard solving International Mathematical Olympiad problems"
Jonathan Paulson · 1y

It’s funny to me that the one part of the problem the AI cannot solve is translating the problem statements to Lean. I guess it’s the only part that the computer has no way to check.

Does anyone know if “translating the problem statements” includes providing the solution (e.g. “an even integer” for P1), so that the AI just needs to prove the solution correct? It’s not clear to me what’s human-written and what’s AI-written, and the solution is part of the “theorem” part, which I’d guess is human-written.
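For concreteness, here is a toy Lean contrast (nothing to do with the actual formalization used for these problems) between a statement where the answer is already written in and only needs to be verified, and an existential statement where the prover must also produce the answer:

```lean
-- Toy illustration only; not the real IMO formalization.
-- The answer ("x = 2") is baked into the statement, so the prover only verifies it:
example : ∀ x : ℕ, x + x = 4 ↔ x = 2 := by
  intro x
  omega

-- Existential phrasing: the prover must also exhibit the answer (the witness 2):
example : ∃ x : ℕ, x + x = 4 := ⟨2, rfl⟩
```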

D&D.Sci: Whom Shall You Call? [Evaluation and Ruleset]
Jonathan Paulson · 1y

For row V, why is SS highlighted but DD is lower?

D&D.Sci: Whom Shall You Call?
Jonathan Paulson · 1y

I think there's a typo; the text refers to "Poltergeist Pummelers" but the input data says "Phantom Pummelers".

My first pass was just to build a linear model for each exorcist based on the cases where they were hired, and assign each ghost the minimum-cost exorcist according to the model. This happens to obey all the constraints, so no further adjustment is needed.

My main concern with this is that the linear model is terrible (r² of 0.12) for the “Mundanifying Mystics”. It's somewhat surprising (but convenient!) that we never choose the Entity Eliminators.
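A minimal sketch of that approach, assuming the history is loaded into a pandas DataFrame; the column names ("exorcist", "cost") and file names below are made up for illustration, not the actual dataset's schema:

```python
# Rough illustration of the approach described above, not the actual code.
import pandas as pd
from sklearn.linear_model import LinearRegression

history = pd.read_csv("exorcism_history.csv")  # hypothetical filename
features = [c for c in history.columns if c not in ("exorcist", "cost")]

# One linear cost model per exorcist, fit only on the cases where they were hired.
models = {
    name: LinearRegression().fit(group[features], group["cost"])
    for name, group in history.groupby("exorcist")
}

def cheapest(ghost):
    """Predict every exorcist's cost for one ghost and return the cheapest."""
    preds = {
        name: float(model.predict(ghost[features].to_frame().T)[0])
        for name, model in models.items()
    }
    best = min(preds, key=preds.get)
    return best, preds[best]

new_ghosts = pd.read_csv("new_ghosts.csv")  # hypothetical: the ghosts A..W
assignments = {idx: cheapest(row) for idx, row in new_ghosts.iterrows()}
```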

A: Spectre Slayers (1926)
B: Wraith Wranglers (1930)
C: Mundanifying Mystics (2862)
D: Demon Destroyers (1807)
E: Wraith Wranglers (2154)
F: Mundanifying Mystics (2843)
G: Demon Destroyers (1353)
H: Phantom Pummelers (1923)
I: Wraith Wranglers (2126)
J: Demon Destroyers (1915)
K: Mundanifying Mystics (2842)
L: Mundanifying Mystics (2784)
M: Spectre Slayers (1850)
N: Phantom Pummelers (1785)
O: Wraith Wranglers (2269)
P: Mundanifying Mystics (2776)
Q: Wraith Wranglers (1749)
R: Mundanifying Mystics (2941)
S: Spectre Slayers (1667)
T: Mundanifying Mystics (2822)
U: Phantom Pummelers (1792)
V: Demon Destroyers (1472)
W: Demon Destroyers (1834)

Estimated total cost: 49822

Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense
Jonathan Paulson · 2y

I think you are failing to distinguish between "being able to pursue goals" and "having a goal".

Optimization is a useful subroutine, but that doesn't mean it should be the top-level loop. I can decide to pursue arbitrary goals for arbitrary amounts of time, but that doesn't mean my entire life is in service of some single objective.

Similarly, it seems useful for an AI assistant to try and do the things I ask it to, but that doesn't imply it has some kind of larger master plan.

Am I going insane or is the quality of education at top universities shockingly low?
Jonathan Paulson · 2y

Professors are selected for being good at research, not at teaching. They are also evaluated on their research, not their teaching. You are assuming universities primarily care about undergraduate teaching, but that is very wrong.

(I’m not sure why this is the case, but I’m confident that it is)

Do humans still provide value in correspondence chess?
Jonathan Paulson · 2y

I think you are underrating the number of high-stakes decisions in the world. A few examples: whether or not to hire someone, the design of some mass-produced item, which job to take, who to marry. There are many more.

These are all cases where making the decision 100x faster is of little value, because it takes a long time after the decision is made to see whether it was good, and where making a better decision is of high value. (Many of these will also be the hardest tasks for AI to do well on, because there is very little training data about them.)

Do humans still provide value in correspondence chess?
Jonathan Paulson · 2y

Why do you think so?

Presumably the people playing correspondence chess think that they are adding something, or they would just let the computer play alone. And it’s not a hard thing to check; they can just play against a computer and see. So it would surprise me if they were all wrong about this.

Do humans still provide value in correspondence chess?
Jonathan Paulson · 2y

https://www.iccf.com/ allows computer assistance

Posts

Do humans still provide value in correspondence chess? [Question] · 2y · 24 karma · 16 comments
Why should we expect AIs to coordinate well? [Question] · 2y · 25 karma · 9 comments
Why Instrumental Goals are not a big AI Safety Problem · 3y · 0 karma · 7 comments
How can I spend money to improve my life? · 11y · 24 karma · 233 comments
The first AI probably won't be very smart · 11y · -2 karma · 63 comments