Why was connectionism unlikely to succeed?
Can you clarify? I'm not sure what you mean by this with respect to machine learning.
Interesting, and why is that an improvement?
Hmm, that's interesting. Thanks Peter!
I learned how to do math on paper or at the blackboard, except for an interlude at a Montessori school, where we used physical media. After a while, any math problem that took the form of a listing of mathematical expressions was one to solve with successive string manipulations. The initial form implied a set of transformations and a write-up that I had to perform. By looking at what I had just written, and planning a transformation that moved closer to my final answer but required only a few manipulations of the earlier expression, I would record successive approaches to the final answer.
"Move the expression there, put that number there, that symbol there, combine those numbers into a new number using that operator, write that new thing underneath that old thing, line up the equals signs, line up those numbers,". Repeat down the page as I write.
Starting from the verbal description, get an idea of the mathematical expressions I need to write to express what's given in the problem. Write those down. From there, go on to solve those expressions with the successive-transformation approach.
Similar to starting with a verbal description: get an idea of the mathematical expressions I need to write to express what's given in the problem, then go on to solve those expressions with the successive-transformation approach.
NOTE: sometimes a verbal description could benefit from a picture, for example, in a physics problem. In that case, I would draw a picture of the physical system to help me identify the corresponding mathematical expressions to start with, but also to help me feel like I "understand" what the verbal description depicts.
Cognitive aids reduce the cognitive load of representing information. If you can choose between an external picture of a graph that you can look at anytime and an internal picture of the same graph, then for most purposes, and to allow you the most freedom of operation in approaching a mathematics problem, I would suggest that you use the cognitive aid.
That's how I've always preferred to do math:
For higher-level math, I think it makes sense to use a computer as much as you can to handle details and visualization, relying on its precision and memory while you concentrate on the identification and use of an algorithm that produces a useful solution to the problem.
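As one concrete (and entirely optional) way of doing that, a computer algebra system can carry the symbolic details; the library choice and the example expression below are my own illustration, not anything specific from the discussion above:

```python
# Minimal sketch: let the computer keep track of the symbolic details.
# sympy is just one possible tool; the expression is an arbitrary example.
import sympy as sp

x = sp.symbols('x')
expr = sp.sin(x) * sp.exp(-x)

derivative = sp.diff(expr, x)           # the machine applies the product rule
antiderivative = sp.integrate(expr, x)  # and works out the integral

print(sp.simplify(derivative))
print(antiderivative)
```

The point is only that precision and memory live in the tool, while the choice of what to differentiate or integrate, and why, stays with me.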
Basically, I do it the same way I have since I was little. I remember learning my times tables by memorization and by writing them out, then doing paper-and-pencil work in class, then a bit of calculator work much later on with a TI-83, mainly for large-number multiplication or division, or to check my work.
I took a graph theory class for my math minor, and I wish I still had my notes. I suspect that some of the answers were actually pictures, but I don't remember much of what I did in the class; it was 30 years ago.
Mathematics involving a computer is more or less the same. You write out equations, but you might be working with cell references, variable names, or varying data sets, so you basically stop at the point where you turn a word or graphical problem description into a mathematical expression. Then you let the calculator or computer do the work.
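A made-up example of where that stopping point is: once the word problem has become named quantities and an expression, the computer handles the rest.

```python
# Hypothetical word problem: "a car travels 150 km in 2.5 hours; what is its
# average speed?"  My work ends once the description becomes named variables
# and an expression; the arithmetic is the computer's job.
distance_km = 150.0
time_hours = 2.5

average_speed_kmh = distance_km / time_hours
print(average_speed_kmh)  # 60.0
```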
I am curious about how considerations of overlaps could lead to a list of milestones for positive results in AI safety research. If there are enough exit points from pathways to AI doom available through AI Safety improvements, a catalog of those improvements might be visible in models of correlations among AI Safety researchers' predictions about AGI doom of various sorts.
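To make that concrete, here is an entirely hypothetical sketch of what "correlations among researchers' predictions" could look like as a calculation; the numbers and pathway categories are invented purely for illustration, not drawn from any survey.

```python
# Entirely hypothetical data: each row is one researcher's forecast of doom
# probability for a few distinct pathways (e.g., misalignment, misuse, accident).
import numpy as np

forecasts = np.array([
    [0.30, 0.10, 0.05],   # researcher A
    [0.60, 0.20, 0.10],   # researcher B
    [0.05, 0.02, 0.01],   # researcher C
    [0.45, 0.15, 0.08],   # researcher D
])

# Correlation between pathways across researchers; pathways that move together
# might share an assumption that a single safety milestone could address.
print(np.corrcoef(forecasts, rowvar=False))
```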
But as a sidenote, here's my response to your mention of climate researchers thinking in terms of P(Doom).
Doubts held by pessimistic climate scientists about future climate plausibly include:
Common traits among more vocal climate scientists forecasting climate destruction could include:
NOTE: I see that commonality through my own browsing of literature and observations of trends among vocal climate scientists, but my list is not the result of any representative survey.
There are 13,000 scientist signatories on a statement of a climate emergency; I think that shows concern (if not pessimism) from a vocal group of scientists about our current situation.
However, if climate scientists are asked to forecast P(Doom), the forecasts will vary depending on what scientists think:
A different question is how climate scientists might backcast not_Doom or P(Doom) < low_value. A comparison of the scenarios they describe could show interesting differences in worldview.
You wrote:
I'm not sure I understand your question. Do you mean, why wouldn't someone who's running the engine I describe end up resenting things like OpenAI that seem to be accelerating AI risk?
For one, I think they often do.
Oh. Good! I'm a bit relieved to read that. Yes, that was the fundamental question that I had. I think that shows common sense.
I'm curious what you think a sober response to AGI research is for someone whose daily job is working on AI Safety, if you want to discuss that in more detail. Otherwise, thank you for your answer.
I'm a little surprised that doomerism could take off like this, dominate one's thoughts, and yet fail to create resentment and anger toward its apparent source. Is that something that was absent for you, or was it not relevant to discuss here?
I wonder:
Factors that might be protecting me from this include:
Thank you for sharing this piece, I found it thought-provoking.
Huh, ok. I will have to check out the new version. Thanks!