All of derekz's Comments + Replies

derekz20

Well, if we really wanted to other-optimize we'd try to change your outlook on life, but I'm sure you get a lot of such advice already.

One thing you could try is making websites to sell advertising and maybe Amazon clickthroughs. You would have to learn some new skills and have a little bit of discipline (and have some ideas about what might be popular). You could always start with the games you are interested in.

There's plenty of information out there about doing this. It will take a while to build up the income, and you may not be motivated enough to learn what you need to do to succeed.

derekz00

"Useful" is negatively correlated with "Correct theory"... on a grand scale.

Sure, having a correct theory has some positive correlation with "useful",

Which is it?

I think the furthest this line of thought can take you is to point out that lots of things are useful even when we don't have a correct theory of how they work. We have other ways to guess that something might be useful and worth trying.

Having a correct theory is always nice, but I don't see that our choice here is between having a correct theory or not having one.

4pjeby
Both. Over the course of history: Useful things -> mostly not true theories. True theory -> usually useful, but mostly first preceded by useful w/untrue theory.
derekz00

Thank you for the detailed reply, I think I'll read the book and revisit your take on it afterward.

derekz50

I suppose for me it's the sort of breathless enthusiastic presentation of the latest brainstorm as The Answer. Also I believe I am biased against ideas that proceed from an assumption that our minds are simple.

Still, in a rationalist forum, if one is to dismiss the content of material based on the form of its presentation and not be bothered by doing so, one must be pretty confident of the correlation. Since a few people who seem pretty smart overall think there might be something useful here, I'll spend some time exploring it.

I am wondering about the proposed eas... (read more)

4pjeby
This is one place where PCT is not as enlightening without adding a smidge of HTM, or more precisely, the memory-prediction framework. The MPF says that we match patterns as sequences of subpatterns: if one subpattern "A" is often followed by "B", our brain compresses this by creating (at a higher layer) a symbol that means "AB". However, in order for this to happen, the A->B correlation has to happen at a timescale where we can "notice" it. If "A" happens today, and "B" tomorrow (for example), we are much less likely to notice! Coming back to your question: most of our problematic controller structures are problematic at too long a timescale for the connection to be easily detected (and extinguished). So PCT-based approaches to problem solving work by forcing the pieces together in short-term memory so that an A->B sequence fires off ... at which point you then experience an "aha", and change the intercontroller connections or reference levels. (Part of PCT theory is that the function of conscious awareness may well be to provide this sort of "debugging support" function, which would otherwise not exist.) PCT also has some interesting things to say about reinforcement, by the way, that completely turn the standard ideas upside down, and I would really love to see some experiments done to confirm or refute them. In particular, it has a novel and compact explanation of why variable-schedule reinforcement works better for certain things, and why certain schedules produce variable or "superstitious" action patterns.
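The "A followed by B gets its own higher-layer symbol" compression described above has a familiar computational analogue in pair-based sequence compression (the idea behind byte-pair encoding). Here is a toy sketch, purely illustrative of the chunking step — it says nothing about the timescale issue, and the function name and symbols are invented for this example:

```python
from collections import Counter

def chunk_most_frequent_pair(seq):
    """One compression step: find the most frequent adjacent pair of
    symbols and replace every occurrence with a single composite symbol.
    Loosely analogous to the MPF idea that a reliably repeated A->B
    sequence gets its own higher-layer symbol 'AB'."""
    pairs = Counter(zip(seq, seq[1:]))
    if not pairs:
        return seq
    (a, b), count = pairs.most_common(1)[0]
    if count < 2:
        return seq  # no pair repeats: nothing worth chunking
    merged, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
            merged.append(a + b)  # the new composite symbol
            i += 2
        else:
            merged.append(seq[i])
            i += 1
    return merged

print(chunk_most_frequent_pair(list("ABXABYAB")))
# the frequent pair ('A','B') becomes one symbol: ['AB', 'X', 'AB', 'Y', 'AB']
```

Repeatedly applying such a step builds a hierarchy of composite symbols out of raw sequences, which is the rough flavor of the compression the MPF posits — though of course the brain's version is not claimed to work by literal pair counting.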
derekz00

Not to be discouraging, but is that really the "logical" reasoning used at the time? They use the word "rationalization" for a reason. "I can always work toward my goals tomorrow instead" will always be true.

Hopefully you had fun dancing, nothing wrong with it at all, but it does seem odd to be so self-congratulatory about deciding to go out and party.

derekz50

Yes, I'm afraid this post is kind of impenetrable, although cousin_it's contribution helped. What is "RDS"?

Also, continually saying "People should..." do this and that and the other thing might be received better if you (meaning Michael, not Vladimir) start us off by doing a little of the desired analysis yourself.

1Richard_Kennaway
From context, Reflective Decision Theory, and from googling that, decision theory for self-modifying systems, a central problem for any theory of intelligence, human or artificial. However, Google only turns up calls for such a thing to exist, not any actual theory. Is Michael Vassar calling for us to use these examples as a concrete case study from which to work towards an RDS? Or simply to bring scientific method to bear on these examples? If cousin_it has accurately located the material that Michael was referring to, I'll add my recent citing of PCT/MOL as a fourth contender.
0[anonymous]
I'd guess "RDS" is Michael's typonym (ouch!) for "Reflective decision theory".
derekz10

If you're wondering whether I'm aware that I can figure out how to steal software licenses, I am.

ETA: I don't condemn those who believe that intellectual property rights are bad for society or immoral. I don't feel that way myself, though, so I act accordingly.

0SilasBarta
It's theoretically possible to believe in IP (on some level), but lack the will not to pluck the forbidden fruit.
derekz10

No specific use cases or examples, just throwing out ideas. On the one hand it would be cool if the notes one jots down could self-organize somehow, even a little bit. Now, OpenCog is supposed by its creators to be a fully general knowledge representation system, so maybe it's possible to use it as a sort of notation (like a probabilistic-logic version of Mathematica? Or maybe with a natural-language front end of some kind? I think Ben Goertzel likes Lojban, so maybe an intermediate language like that.)

Anyway, it's not really a product spec just one possible... (read more)

0Henrik_Jonsson
While I agree that it would be cool, anything that doesn't keep your notes exactly as you left them is likely to be more annoying than productive unless it is very cleverly done. (Remember Microsoft Clippy?) You'd probably need to tag at least some things, like persons and places.
derekz10

Thanks for the motivation, by the way -- I have toyed with the idea of getting Mathematica many times in the past but the $2500 price tag dissuaded me. Now I see that they have a $295 "Home Edition", which is basically the full product for personal use. I bought it last night and started playing with it. Very nifty program.

0SilasBarta
I don't know whether to applaud your ethical restraint, or pity your ignorance. I'll go with the first ;-)
derekz10

If the point of this essay was to advocate pharmaceutical research, it might have been more effective to say so; it would have made the essay smoother to digest. Given the other responses, I think I am not alone in failing to guess that this was pretty much your sole target.

I don't object to such research; a Bostrom article saying "it might not be impossible to have some effect" is weak support for a 10-IQ-point average-gain pill, but that's not a reason to avoid looking for one. Never know what you'll find. I'm still not clear what th... (read more)

1Roko
Well, there may be tactics other than pharmacology: we might have nutritional interventions or perhaps something like transcranial magnetic stimulation, or even something we haven't thought of yet. But I should emphasize that the sole criterion for such interventions would be that it be feasible to get lots of people to use them. This article is not a "here's something you can do to enhance your own life today!" type article; it is a discussion of existential risk reduction via mass IQ increase. I may well write some "how to" articles too, though.
derekz-10

I'm still baffled about what you are getting at here. Apparently training people to think better is too hard for you, so I guess you want a pill or something. But there is no evidence that any pill can raise the average person's IQ by 10 points (which kind of makes sense: if some simple chemical-balance adjustment could have such a dramatic effect on fitness, it would be surprising that evolution missed it). Are you researching a sci-fi novel or something? What good does wishing for magical pills do?

3Roko
Well we haven't looked very hard, and I am trying to advocate that more research is urgently needed in this area, along with people like Nick Bostrom. See The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement "a greater level of mental activity might also enable us to apply our brains more effectively to process information and solve problems. The brain, however, requires extra energy when we exert mental effort, reducing the normally tightly regulated blood glucose level by about 5 per cent (0.2 mmol/l) for short (<15 min) efforts and more for longer exertions.¹⁵ Conversely, increasing blood glucose levels has been shown to improve cognitive performance in demanding tasks."
1asciilifeform
Please read this short review of the state of the art of chemical intelligence enhancement. We probably cannot reliably guarantee 10 added points for every subject yet. Quite far from it, in fact. But there are some promising leads. Others have made these points before, but I will summarize: fitness in a prehistoric environment is a very different thing from fitness in the world of today; prehistoric resource constraints (let's pick, for instance, the scarcity of refined sugars) bear no resemblance to those of today; certain refinements may be trivial from the standpoint of modern engineering but inaccessible to biological evolution, or at the very least ended up unreachable from a particular local maximum. Consider, for example, the rarity of evolved wheels.
derekz20

The issue people are having is that you start out with "sort of" as your response to the statement that math is the study of precisely defined terms. In doing so, you throw away that insightful and useful perspective by confusing math with attempts to use math to describe phenomena.

The pitfalls of "mathematical modelling" are interesting and worth discussing, but jumbling everything together yourself and then trying to unjumble what was clear before you started doesn't help clarify the issue.

derekz10

Cool stuff. Good luck with your research; if you come up with anything that works I'll be in line to be a customer!

derekz-10

Well if you are really only interested in raising the average person's "IQ" by 10 points, it's pretty hard to change human nature (so maybe Bostrom was on the right track).

Perhaps if somehow video games could embed some lesson about rationality in amongst the dumb slaughter, that could help a little -- but people would probably just buy the games without the boring stuff instead.

derekz40

I suppose the question is not whether it would be good, but rather how. Some quick brainstorming:

  • I think people are "smarter" now than they were, say, pre-scientific-method. So there may be more trainable ways-of-thinking that we can learn (for example, "best practices" for qualitative Bayesianism)

  • Software programs for individuals. Maybe when you come across something you think is important while browsing the web, you could highlight it, and these things would be presented to you occasionally, sort of like a "drill" to

... (read more)
4gwern
Congratulations, you've nearly reinvented spaced repetition! There is a great deal of writing on spaced repetition flashcard systems, so I won't inflict upon you my own writings; but the Wikipedia article will link you to the main programs (Anki, Mnemosyne, and SuperMemo) and some writeups of the topic. SR is a great technique; I love it dearly. Well, you could just improve your working memory. Unusually, working memory is plastic enough to be trainable by WM tasks. The WM exercise I'm most familiar with is Dual n-back. I practice it, but while I have noticed improvements, I'm unsure whether they repay the time I've put into it; SR systems have proven themselves as far as I'm concerned, but the jury is still out on dual n-back. Now that sounds interesting. But looking at this OpenCog link doesn't give me a good idea as to what PLN might do for note-taking (or really, in general); did you have any use-cases or examples?
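The spaced-repetition idea mentioned above can be made concrete with a minimal Leitner-box scheduler: correctly answered cards migrate to higher boxes and are reviewed at longer intervals, while misses drop back to the most frequent box. This is a toy sketch only — real systems such as Anki and SuperMemo use the more elaborate SM-2 family of algorithms, and every name and interval here is illustrative, not any program's actual API:

```python
from dataclasses import dataclass

# Review intervals (in days) for each Leitner box: items you keep
# getting right migrate upward and are shown less and less often.
INTERVALS = [1, 2, 4, 8, 16]

@dataclass
class Card:
    prompt: str
    answer: str
    box: int = 0          # current box (0 = reviewed most frequently)
    due_in: int = 0       # days until the next scheduled review

def review(card: Card, correct: bool) -> Card:
    """Reschedule a card after one review."""
    if correct:
        card.box = min(card.box + 1, len(INTERVALS) - 1)  # promote
    else:
        card.box = 0                                      # demote on a miss
    card.due_in = INTERVALS[card.box]
    return card

c = Card("P(A|B) = ?", "P(B|A) P(A) / P(B)")
review(c, correct=True)   # promoted to box 1, due in 2 days
review(c, correct=True)   # promoted to box 2, due in 4 days
review(c, correct=False)  # missed: back to box 0, due in 1 day
```

The browser-highlighting drill sketched in the parent comment is essentially this loop plus a capture step that turns each highlight into a `Card`.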
0Roko
The problem with all of these is that they are all likely to be adopted mostly by the minority of people who are already very smart, whereas this post is aiming at something for the average intelligence people who comprise the majority of the population.
3asciilifeform
I have been obsessively researching this idea for several years. One of my conclusions is that an intelligence-amplification tool must be "incestuously" user-modifiable ("turtles all the way down", possessing what programming language designers call reflectivity) in order to be of any profound use, at least to me personally. About six months ago, I resolved to do exactly that. While I would not yet claim "black belt" competence in it, Mathematica has already enabled me to perform feats which I would not have previously dared to contemplate, despite having worked in Common Lisp. Mathematica is famously proprietary and the runtime is bog-slow, but the language and development environment are currently in a class of their own (at least from the standpoint of exploratory programming in search of solutions to ultra-hard problems).
1JamesCole
This seems to be a common response - Tyrrell_McAllister said something similar: I take that distinction as meaning that a precise maths statement isn't necessarily reflecting reality like physics does. That is not really my point. For one thing, my point is about any applied maths, regardless of domain. That maths could be used in physics, biology, economics, engineering, computer science, or even the humanities. But more importantly, my point concerns what you think the equations are about, and how you can be mistaken about that, even in physics. The following might help clarify. A successful test of a mathematical theory against reality means that it accurately describes some aspect of reality. But a successful test doesn't necessarily mean it accurately describes what you think it does. People successfully tested the epicycles theory's predictions about the movement of the planets and the stars. They tended to think that this showed that the planets and stars were carried around on the specified configuration of rotating circles, but all it actually showed was that the points of light in the sky followed the paths the theory predicted. They were committing a mind projection 'fallacy' - their eyes were looking at points of light but they were 'seeing' planets and stars embedded in spheres. The way people interpreted those successful predictions made it very hard to criticise the epicycles theory.
derekz20

Um, so has Eurisko.

7[anonymous]
...indeed. It seems that I failed to figure out just what I was arguing against. Let me re-make that point. As far as first steps along that path go, they have already been taken: we have gone from a world without computers to a world with one, and we can't reverse that. The logical place to focus our efforts would seem to be the next step which has not been taken, which could very well be reimplementing EURISKO. (Though it could also very well be running a neural net on a supercomputer or some guy making the video game "Operant Conditioning Hero".)
derekz40

Perhaps a writeup of what you have discovered, or at least surmise, about walking that road would encourage bright young minds to work on those puzzles instead of reimplementing Eurisko.

It's not immediately clear that studying and playing with specific toy self-referential systems won't lead to ideas that might apply to precise members of that class.

7Eliezer Yudkowsky
I've written up some of the concepts of precise self-modification, but need to collect the posts on a Wiki page on "lawfulness of intelligence" or something.
derekz00

You could use that feedback from the results of prior actions. Like: http://www.aleph.se/Trans/Individual/Self/zahn.txt

derekz20

Interesting exercise. After trying for a while I completely failed; I ended up with terms that are completely vague (e.g. "comfort"), and actually didn't even begin to scratch the surface of a real (hypothesized) utility function. If it exists it is either extremely complicated (too complicated to write down perhaps) or needs "scientific" breakthroughs to uncover its simple form.

The result was also laughably self-serving, more like "here's roughly what I'd like the result to be" than an accurate depiction of what I do.

The re... (read more)

derekz00

People on this site love to use fiction to illustrate their points, and a "biomoderate singularity managed by a superintelligent singleton" is very novel-friendly, so that's something!

derekz130

Eliezer, in the ones I've seen so far I don't think you come across very well. In particular, you tend to ignore the point (or substance) of your partner's arguments, which makes you look evasive or inattentive. There is also a fine line for viewers between confidence and arrogant pomposity, and you often come across on the wrong side of that line. Hopefully this desire of yours to keep doing it reflects a commitment to improving, in which case keep at it. Perhaps asking a number of neutral parties about specifics would help you train for it... if you're... (read more)

8loqi
I think this is partly the by-product of a fundamental tension when conversing with someone in the habit of making meaningless or incoherent statements. To directly address such "points", you basically have to ask the person to explain what they mean or rephrase their statement. If the explanation is junk, you're right back where you started, minus the time they spent explaining themselves. In the limit, indulging these non-terminating arguments equates to just letting them talk the entire time.
derekz10

If dark arts are allowed, it certainly seems like the hundreds of millions of dollars spent on AI-horror movies like Terminator are a pretty good start. Barring an actual demonstration of progress toward AI, I wonder what could actually be more effective...

Sometime reasonably soon, getting real physical robots into the uncanny valley could start to help. Letting imagination run free, I imagine a stage show with some kind of spookily competent robot... something as simple as competent control of real (not CGI) articulated robots would be rather scary... (read more)

0Roko
Interesting. I wouldn't want to rule out the "dark arts", i.e. highly non-rational methods of persuasion. Robotics is not advanced enough for a robot to look scary, though military robotics is getting there fast. A demonstration involving the very latest military robots could have the intended effect in perhaps 10 years.
derekz30

Apparently you and others have some sort of estimate of a probability distribution over time leading you to being alarmed enough to demand action. Maybe it's, say, "1% chance of hard takeoff in the next 20 years" or something like that. Say what it is and how you got to it from "conceivability" or "non-impossibility". If there is a reasoned link that can be analyzed producing such a result, it is no longer a leap of faith; it can be reasoned about rationally and discussed in more detail. Don't get hung up on the number exactl... (read more)

derekz60

Steven, I'm a little surprised that the paper you reference convinces you of a high probability of imminent danger. I have read this paper several times, and would summarize its relevant points as follows:

  1. We tend to anthropomorphise, so our intuitive ideas about how an AI would behave might be biased. In particular, assuming that an AI will be "friendly" because people are more or less friendly might be wrong.

  2. Through self-improvement, AI might become intelligent enough to accomplish tasks much more quickly and effectively than we expect.

  3. This

... (read more)
0Roko
Hanson's position was that something like a singularity will occur due to smarter-than-human cognition, but he differs from Eliezer by claiming that it will be a distributed intelligence analogous to the economy: trillions of smart human uploads and narrow AIs exchanging skills and subroutines. He still ultimately supports the idea of a fast transition, based on historical transitions. I think Robin would say that something midway between 2 weeks and 20 years is reasonable. Ultimately, if you think Hanson has a stronger case, you're still talking about a fast transition to superintelligence that we need to think about very carefully.
3Vladimir_Nesov
Given the stakes, if you already accept the expected utility maximization decision principle, it's enough to become convinced that there is even a nontrivial probability of this happening. The paper seems to be adequate for snapping the reader's mind out of conviction in the absurdity and impossibility of dangerous AI.
2steven0461
Hmm, I was thinking more of being convinced there's a "significant probability", for a definition of "significant probability" that may be much lower than the one you intended. I'm not sure if I'd also claim the paper convinces me of a "high probability". Agreed that it would be more convincing to the general public if there were an argument for that. I may comment more after rereading.
derekz60

One thing that might help change the opinion of people about friendly AI is to make some progress on it. For example, if Eliezer has had any interesting ideas about how to do it in the last five years of thinking about it, it could be helpful to communicate them.

A case that is credible to a large number of people needs to be made that this is a high-probability near-term problem. Without that it's just a scary sci-fi movie, and frankly there are scarier sci-fi movie concepts out there (e.g. bioterror). Making an analogy with a nuclear bomb is simply not... (read more)

1Roko
I disagree strongly. World atmospheric carbon dioxide concentration is still increasing; indeed, the rate at which it is increasing is increasing (i.e. CO2 output per annum is increasing), so antiprogress is being made on the global warming problem - yet people still think it's worth putting more effort into it, rather than simply giving up. Anthropogenic global warming is a low-probability, long-term problem. At least the most SERIOUS consequences of anthropogenic global warming are long-term (e.g. 2050 plus) and low-probability (though no scientist would put a number on the probability of human extinction through global warming).
4Douglas_Knight
This bears repeating: (I think your comment contained a couple of unrelated pieces that would have been better in separate comments.)
0steven0461
I think this is a convincing case but clearly others disagree. Do you have specific suggestions for arguments that could be expanded upon?
derekz10

For a continuation of the ideas in Beyond AI, relevant to this LW topic, see:

http://agi-09.org/papers/paper_22.pdf

0MrHen
Thanks; added to reading list.
derekz20

Hello all. I don't think I identify myself as a "rationalist" exactly -- I think of rationality more as a mode of thought (for example, when singing or playing a musical instrument, that is a different mode of thought, and there are many different modes of thought that are natural and appropriate for us human animals). It is a very useful mode of thought, though, and worth cultivating. It does strike me that the goals targeted by "Instrumental Rationality" are only weakly related to what I would consider "rationality" and ... (read more)