All of Gabriel Alfour's Comments + Replies

By this logic, any instrumental action taken towards an altruistic goal would be "for personal gain".

I think you are making a genuine mistake, and that I could have been clearer.

There are instrumental actions that favour everyone (raising epistemic standards), and instrumental actions that favour you (making money).

The latter are for personal gain, regardless of your end goals.

 

Sorry for not getting deeper into it in this comment.  This is quite a vast topic.
I might instead write a longer post about the interactions of deontology & consequentialism, and egoism & altruism.

4Joe Collman
(With "this logic" I meant to refer to ["for personal gain" = "getting what you want"]. But this isn't important)

If we're sticking to instrumental actions that do favour you (among other things), then the post is still incorrect: [y is one consequence of x] does not imply [x is for y]. The "for" says something about motivation. Is an action that happens to be to my benefit necessarily motivated by that? No. (though more often than I'd wish to admit, of course)

If you want to claim that it's bad to [Lie in such a way that you get something that benefits you], then make that claim (even though it'd be rather silly - just "lying is bad" is simpler and achieves the same thing).

If you're claiming that people doing this are necessarily lying in order to benefit themselves, then you are wrong. (or at least the only way you'd be right is by saying that essentially all actions are motivated by personal gain)

If you're claiming that people doing this are in fact lying in order to benefit themselves, then you should either provide some evidence, or lower your confidence in the claim.

(I strongly upvoted the comment to signal boost it, and possibly let people who agree easily express their agreement to it directly if they don't have any specific meta-level observation to share)

Suppose I take out a coin and flip it 100 times in front of your eyes, and it lands heads every time. Will you have no ability to predict how it lands the next 30 times? Will you need some special domain knowledge of coin aerodynamics to predict this?

  • Coin = problem
  • Flipping head = not being solved
  • Flipping tail = being solved
  • More flips = more time passing

Then, yes. Because you have seen many other coins that started flipping tail at some point, and there is no easily discernible pattern.

By your interpretation, the Solomonoff induced prior for that coin is basi... (read more)

Isn't it about empirical evidence that these problems are hard, not "predictions"? They're considered hard because many people have tried to solve them for a long time and failed.

No, this is Preemption 1 in the Original Post.

"hard" doesn't mean "people have tried and failed", and you can only witness the latter after the fact. If you prefer: even if you have empirical evidence for the problem being "level n hard" (people have tried up to level n), you'd still not have empirical evidence for the problem being "level n+1 hard" (you'd need people to try more t... (read more)

7Richard_Ngo
This is implicitly assuming that our expectation of how long a problem should take to solve is memoryless. But a breakthrough is much more likely on the 1st day of working on a problem than on the 1000th day. More generally, if problems vary greatly in difficulty, then our failure to solve a given problem provides evidence that it's one of the harder problems.

So a more reasonable prior in this case is something like logarithmic - e.g. it's equally likely that a problem takes 1-10 days, or 10-100 days, or 100-1000 days, etc, to solve. A similar model can give rise to the Lindy effect, where the expected lifetime is proportional to the lifetime so far. (In this case it'd be the expected time to solving the problem which would be proportional to the time which the problem has been open.)
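The log-uniform prior described above can be sketched with a quick Monte Carlo (the 10^6-day cutoff and the sample count are illustrative assumptions, not anything stated in the thread):

```python
import random

random.seed(0)

def expected_remaining(t_elapsed, samples=100_000):
    # Difficulty T is log-uniform over [10^0, 10^6] days:
    # log10(T) ~ Uniform(0, 6), so each decade is equally likely.
    remaining = []
    for _ in range(samples):
        T = 10 ** random.uniform(0, 6)
        if T > t_elapsed:  # condition on "still unsolved at t_elapsed"
            remaining.append(T - t_elapsed)
    return sum(remaining) / len(remaining)

# Under a memoryless (exponential) model these two numbers would be
# equal; under the log-uniform prior, expected remaining time grows
# with time already spent, a Lindy-style effect.
r10 = expected_remaining(10)
r1000 = expected_remaining(1000)
print(r10, r1000)
```

Because of the finite upper cutoff the growth is slower than exactly proportional, but the qualitative effect (the longer a problem has resisted solution, the longer you expect it to keep resisting) shows up clearly.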
4Thane Ruthenis
Suppose I take out a coin and flip it 100 times in front of your eyes, and it lands heads every time. Will you have no ability to predict how it lands the next 30 times? Will you need some special domain knowledge of coin aerodynamics to predict this?

I mean... That heuristic is that heuristic? "Experts have a precise model of the known subset of the concept-space of their domain, and they can make vague high-level extrapolations on how that domain looks outside the known subset, and where in the wider domain various unsolved problems are located, and how distant they are from the known domain."

The way I see it, that's it. This statement isn't reducible to something more neat and simple. For any given difficult problem, you can walk up to an expert and ask them why it's considered hard, but the answers they give you won't have any unifying theme aside from that. It's all ad hoc.

Why would you think there's something else? What shape do you want the answer to have?
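As an aside on the literal coin: Laplace's rule of succession (a textbook Bayesian calculation, added here for illustration rather than anything anyone in the thread computes) quantifies how strongly 100 observed heads predicts the next 30:

```python
from fractions import Fraction

def prob_next_all_heads(heads, tails, future):
    # Laplace's rule of succession with a uniform Beta(1,1) prior
    # on the coin's bias:
    # P(next flip heads | h heads, t tails) = (h + 1) / (h + t + 2).
    p = Fraction(1)
    for k in range(future):
        p *= Fraction(heads + 1 + k, heads + tails + 2 + k)
    return p

p = prob_next_all_heads(100, 0, 30)
print(float(p))  # ~0.771: the product telescopes to 101/131
```

So even under a prior that initially considers every bias equally likely, 100 straight heads makes "30 more heads" the strong favourite, with no coin aerodynamics required.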

Are these perhaps boring, because the difficulty is well understood?

They are not boring; I am simply asking about a specific cluster of problems, and none of them belong to that cluster.

1Dennis Towne
Ack, ok.

I think part of the explanation is that we don't have a model for distance from success.  We have no clue if the researchers who've made serious attempts on these problems got us closer to an answer/proof, or if they just spun their wheels. 

This post is about experts in the fields of number theory and complexity theory claiming to have a clue about this. 
If you think "We have no clue", you likely think they are wrong, and I would be interested in knowing why.

I added more details in this comment, given that someone else already shared a similar thought.

The post is about predictions made by experts in number theory and complexity theory.

If you think that this cannot be predicted, and that they are thus wrong about their predictions, I would be interested in knowing why.

Namely:

  • Do you have mechanistic / gear-level / inside view reasons why the difficulty of problems cannot be predicted ahead of time, where you disagree with those experts?
  • Do you have empirical / outside view reasons for why those experts are badly calibrated?
2Thane Ruthenis
Isn't it about empirical evidence that these problems are hard, not "predictions"? They're considered hard because many people have tried to solve them for a long time and failed, not because experts glanced at them once and knew on priors they'd be legendarily difficult.

Aside from that, an expert can estimate how hard a problem is by eyeballing how distant the abstractions needed to solve it feel from the known ones — whether we have almost the right tools for solving it, or have no idea how the right tools would look at all. They're able to do this because they've developed strong intuitions for their professional domain: they roughly know what's possible, what's on the edge of the possible, and what's very much not. And even then, such intuitions are often very wrong, see Fermat's Last Theorem.

But there's no objective property that makes these problems intrinsically hard, only subjectively hard from the point of view of our conceptual toolbox.

It is plausible that the actual Collatz system is one of these for our standard proof systems.

Why? Consider the following:

The Collatz Conjecture has a lot of structure to it:

  • It is false for 3n-1
  • It is undecidable for generalizations of it
  • It is true empirically, up to 10^20
  • It is true statistically: a result of Terence Tao establishes that "almost all initial values of n eventually iterate to less than log(log(log(log(n))))" (or inverse Ackermann)
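Two of these bullets, the 3n-1 counterexample and the empirical verification, can be checked directly in a few lines (the bounds here are small illustrative cutoffs, nothing like the 10^20 limit):

```python
def step(n, b):
    # One step of the 3n+b variant of the Collatz map (b = +1 or -1).
    return n // 2 if n % 2 == 0 else 3 * n + b

def orbit_reaches_one(n, b, max_steps=10_000):
    seen = set()
    for _ in range(max_steps):
        if n == 1:
            return True
        if n in seen:  # entered a cycle that never hits 1
            return False
        seen.add(n)
        n = step(n, b)
    return False

# 3n+1: every starting value up to 10_000 reaches 1 (empirical support).
assert all(orbit_reaches_one(n, +1) for n in range(1, 10_001))
# 3n-1: the analogous conjecture fails, e.g. 5 -> 14 -> 7 -> 20 -> 10 -> 5.
assert not orbit_reaches_one(5, -1)
```

The 3n-1 orbit of 5 cycles through 5, 14, 7, 20, 10 forever without reaching 1, which is the kind of structural asymmetry between the two variants referred to above.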

Additionally, if you look at the undecidable generalizations of the Collatz Conjecture, I expect that you will fi... (read more)

Interesting.

A nice way to do such a post-mortem would be to ask the people who were there whether they thought the problem was Super Hard, why so, and how they updated after the solution was found.

Thanks!

And Collatz is just a random-ass problem which doesn't seem to have any special structure to it.

The Collatz Conjecture has a lot of structure to it:

  • It is false for 3n-1
  • It is undecidable for generalizations of it
  • It is true empirically, up to 10^20
  • It is true statistically: a result of Terence Tao establishes that "almost all initial values of n eventually iterate to less than log(log(log(log(n))))" (or inverse Ackermann)

In the case of Collatz, there might exist some special trick for it, but it's not any of the tricks we know.

I am not sure what you count o... (read more)

I agree this establishes that the Collatz and P vs NP conjectures have a longer chain length than everything that has been tried yet. But this looks to me like a restatement of "They are unsolved yet".

Namely, this does not establish any cause for them being much harder than other problems that are merely unsolved so far. I do not see how your model predicts that the Collatz and P vs NP conjectures are much harder than other theorems in their fields that have been proved in the last 15 years, which is what I believe experts have expected.

Put differently, the way I unders... (read more)

0Thane Ruthenis
I don't think this can be predicted.