Comment author: JamesCole 16 June 2009 08:27:39AM 1 point

This seems to be a common response - Tyrrell_McAllister said something similar:

I think that your distinction is really just the distinction between physics and mathematics.

I take that distinction to mean that a precise mathematical statement doesn't necessarily reflect reality the way physics does. That is not really my point.

For one thing, my point is about any applied maths, regardless of domain. That maths could be used in physics, biology, economics, engineering, computer science, or even the humanities.

But more importantly, my point concerns what you think the equations are about, and how you can be mistaken about that, even in physics.

The following might help clarify.

A successful test of a mathematical theory against reality means that it accurately describes some aspect of reality. But a successful test doesn't necessarily mean it accurately describes what you think it does.

People successfully tested the epicycle theory's predictions about the movements of the planets and the stars. They tended to think this showed that the planets and stars were carried around on the specified configuration of rotating circles, but all it actually showed was that the points of light in the sky followed the paths the theory predicted.

They were committing a mind projection 'fallacy' - their eyes were looking at points of light but they were 'seeing' planets and stars embedded in spheres.

The way people interpreted those successful predictions made it very hard to criticise the epicycle theory.

Comment author: derekz 16 June 2009 01:30:09PM 2 points

The issue people are having is that you start out with "sort of" as your response to the statement that math is the study of precisely-defined terms. In doing so, you decide to throw away that insightful and useful perspective by confusing math with attempts to use math to describe phenomena.

The pitfalls of "mathematical modelling" are interesting and worth discussing, but jumbling it all together yourself and then trying to unjumble what was clear before you started doesn't help clarify the issue.

Comment author: asciilifeform 15 June 2009 08:23:45PM 3 points

Software programs for individuals.... prime association formation at a later time.... some short-term memory aid that works better than scratch paper

I have been obsessively researching this idea for several years. One of my conclusions is that an intelligence-amplification tool must be "incestuously" user-modifiable ("turtles all the way down", possessing what programming language designers call reflection) in order to be of any profound use, at least to me personally.

Or just biting the bullet and learning Mathematica to an expert level instead of complaining about its UI

About six months ago, I resolved to do exactly that. While I would not yet claim "black belt" competence in it, Mathematica has already enabled me to perform feats which I would not have previously dared to contemplate, despite having worked in Common Lisp. Mathematica is famously proprietary and the runtime is bog-slow, but the language and development environment are currently in a class of their own (at least from the standpoint of exploratory programming in search of solutions to ultra-hard problems).

Comment author: derekz 15 June 2009 08:43:24PM 1 point

Cool stuff. Good luck with your research; if you come up with anything that works I'll be in line to be a customer!

Comment author: Roko 15 June 2009 08:27:58PM 1 point

While I would not yet claim "black belt" competence in it, Mathematica has already enabled me to perform feats which I would not have previously dared to contemplate, despite having worked in Common Lisp. Mathematica is famously proprietary and the runtime is bog-slow, but the development environment is currently in a class of its own (at least from the standpoint of exploratory programming in search of solutions to ultra-hard problems.)

Sounds cool, but this is not quite what I was aiming at.

Comment author: derekz 15 June 2009 08:38:00PM -1 points

Well, if you are really only interested in raising the average person's "IQ" by 10 points, it's pretty hard to change human nature (so maybe Bostrom was on the right track).

Perhaps if somehow video games could embed some lesson about rationality in amongst the dumb slaughter, that could help a little -- but people would probably just buy the games without the boring stuff instead.

Comment author: derekz 15 June 2009 08:11:09PM 4 points

I suppose the question is not whether it would be good, but rather how. Some quick brainstorming:

  • I think people are "smarter" now than they were, say, pre-scientific-method. So there may be more trainable ways-of-thinking that we can learn (for example, "best practices" for qualitative Bayesianism)

  • Software programs for individuals. Oh, maybe when you come across something you think is important while browsing the web you could highlight it and these things would be presented to you occasionally sort of like a "drill" to make sure you don't forget it, or prime association formation at a later time. Or some kind of software aid to "stack unwinding" so you don't go to sleep with 46 tabs open in your web browser. Or some short-term memory aid that works better than scratch paper. Or just biting the bullet and learning Mathematica to an expert level instead of complaining about its UI. Or taking a cutting-edge knowledge representation framework like Novamente's PLN and trying to enter stuff into it as an "active" note-taking system.

  • Collaboration tools -- shared versions of the above ideas, or n-way telephone conversations, or freeform "chatroom"-style whiteboards or iteratively-refined debate thesis statements, or lesswrong.com

  • Man-machine hybrids. Like having people act as the utility function or search-order-control of an automated search process.

Of course, neural prostheses may become possible at some point fairly soon. Specially-tailored virtual environments to aid in visualization (like of nanofactories), or other detailed and accurate scientific simulations allowing for quick exploration of ideas... "Do What I Mean" interfaces to CAD programs might be possible if we can get a handle on the functional properties of human cognitive machinery...
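The "man-machine hybrid" bullet above — a person acting as the utility function or search-order control of an automated search — can be sketched as a best-first search whose priority function is injected from outside. This is purely an illustration of the idea, not anything from the original comment: the function name `guided_search`, the toy graph, and the stand-in scorer are all hypothetical, and in a real tool `score` would prompt a human rather than compute a formula.

```python
import heapq

def guided_search(start, neighbors, is_goal, score):
    """Best-first search whose node ordering is delegated to `score` --
    the role a human evaluator could play (lower score = explored first)."""
    seen = {start}
    frontier = [(score(start), start, [start])]  # (priority, node, path)
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return path
        for nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (score(nxt), nxt, path + [nxt]))
    return None  # search space exhausted without reaching a goal

# Toy graph; the simulated "human" prefers higher-numbered nodes,
# so node 2 is expanded before node 1.
graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
path = guided_search(0, graph.__getitem__, lambda n: n == 3, lambda n: -n)
# path == [0, 2, 3]: the scorer steered the search through node 2
```

The point of the sketch is that the machinery (frontier bookkeeping, cycle avoidance) is fully automated, while the one judgment-laden component, the scoring, is a pluggable callback that could just as easily be a human in the loop.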

Comment author: derekz 15 June 2009 03:19:47PM 3 points

Or: "Physics is not Math"

Comment author: [deleted] 12 June 2009 01:33:14AM 2 points

Doesn't seem to deserve more mention than the creation of computing? Sure. But computing has already been created.

In response to comment by [deleted] on Let's reimplement EURISKO!
Comment author: derekz 12 June 2009 02:12:52AM 1 point

Um, so has Eurisko.

Comment author: Eliezer_Yudkowsky 11 June 2009 08:27:01PM 7 points

Not exactly, Thom. Roughly, for FAI you need precise self-modification. For precise self-modification, you need a precise theory of the intelligence doing the self-modification. To get to FAI you have to walk the road that leads to precise theories of intelligence - something like our present-day probability theory and decision theory, but more powerful and general and addressing issues these present theories don't.

Eurisko is the road of self-modification done in an imprecise, ad-hoc way, throwing together whatever works until it gets smart enough to FOOM. This is a path that leads to shattered planets if followed far enough. No, I'm not saying that Eurisko in particular is far enough; I'm saying that it's a first step along that path, not the FAI path.

Comment author: derekz 11 June 2009 09:52:41PM 4 points

Perhaps a writeup of what you have discovered, or at least surmise, about walking that road would encourage bright young minds to work on those puzzles instead of reimplementing Eurisko.

It's not immediately clear that studying and playing with specific toy self-referential systems won't lead to ideas that might apply to precise members of that class.

Comment author: JamesCole 09 June 2009 12:35:36PM 1 point

A lot of this probably comes down to:

Don’t assume – that you have a rich enough picture of yourself, a rich enough picture of the rest of reality, or that your ability to mentally trace through the consequences of actions comes anywhere near the richness of reality’s ability to do so.

Comment author: derekz 09 June 2009 01:02:16PM 0 points

You could use feedback from the results of prior actions. Like: http://www.aleph.se/Trans/Individual/Self/zahn.txt

Comment author: derekz 04 June 2009 12:12:19PM 2 points

Interesting exercise. After trying for a while I completely failed; I ended up with terms that are completely vague (e.g. "comfort"), and actually didn't even begin to scratch the surface of a real (hypothesized) utility function. If it exists it is either extremely complicated (too complicated to write down perhaps) or needs "scientific" breakthroughs to uncover its simple form.

The result was also laughably self-serving, more like "here's roughly what I'd like the result to be" than an accurate depiction of what I do.

The real heresy is that this result does not particularly frighten or upset me. I probably can't be a "rationalist" when my utility function doesn't place much weight on understanding my utility function.

Can you write your own utility function, or adopt the one you think you should have? Is that sort of wholesale tampering wise?

Comment author: derekz 03 June 2009 01:09:30PM 0 points

People on this site love to use fiction to illustrate their points, and a "biomoderate singularity managed by a superintelligent singleton" is very novel-friendly, so that's something!
