All of Karl_Smith's Comments + Replies

You are at the state flagship. 82% at College Park is roughly equal to Urbana-Champaign's 80%. The point is that top schools pick students who can get through and/or do a better job of getting students through.

Tim,

Thanks, input like this helps me try to think about the economic issues involved.

Can you talk a little about the depth of recursion already possible? How much assistance are these refactoring programs providing? Can the results be used to speed up other programs, or can they only improve their own development, etc.?

0timtyler
To quote from my essay relating to this:

"Refactoring: Refactoring involves performing rearrangements of code which preserve its function, and improve its readability and maintainability - or facilitate future improvements. Much refactoring is done by daemons - and their existence massively speeds up the production of working code. Refactoring daemons enable tasks which would previously have been intractable."

* http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/

Refactoring programs are indispensable for most application programmers in Java and other machine-readable languages. They are of limited use for C/C++ because of preprocessor mangling. When refactoring hit the mainstream in Eclipse, years ago, many programmers found their productivity increased dramatically, and they also found they could easily perform refactorings that would have been practically impossible to perform manually. Refactoring is a fairly general tool.

I am not sure about your "recursion" question. Modeling this as some kind of recursive function that bottoms out somewhere does not seem particularly appropriate to me. Rather, it represents the partial automation of programming. Similarly, unit tests are the automation of testing, and compilers are the automation of assembly. Computer programming and software development have many places where automation is possible, and the opportunities are gradually being taken up.
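To make the "function-preserving rearrangement" concrete, here is a small illustration of my own (not from the essay, and in Python rather than Java for brevity): the kind of "extract method" transformation that refactoring tools automate. The function names and data are hypothetical.

```python
# Before: the total computation is written inline.
def report_before(orders):
    total = 0.0
    for order in orders:
        total += order["price"] * order["quantity"]
    return f"Total: {total:.2f}"

# After an "extract method" refactoring: same observable behavior,
# with the computation moved into a named, reusable function.
def order_total(orders):
    return sum(o["price"] * o["quantity"] for o in orders)

def report_after(orders):
    return f"Total: {order_total(orders):.2f}"

orders = [{"price": 2.5, "quantity": 4}, {"price": 1.0, "quantity": 3}]
assert report_before(orders) == report_after(orders)  # behavior preserved
```

A tool performs this kind of rewrite mechanically, which is what makes large rearrangements of a codebase tractable.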

I'd appreciate some feedback on a brain dump I did on economics and technology. Nothing revolutionary here. Just want people with more experience on the tech side to check my thinking.

Thanks in advance

http://modeledbehavior.com/2010/03/11/the-economics-of-really-big-ideas/

0timtyler
Re: "Already we have computer programs which can re-write existing to programs to run faster. These programs can also re-write themselves to run faster. However, they cannot rewrite themselves to become better at re-writing themselves faster." You mean that they can't do that alone? Refactoring programs help speed up their own development, and make it easier and faster to make improvements in a set of programs that often includes their own source code. It's not total automation - but partial automation is still very significant progress.
0RobinZ
It looks correct to me, but I'm not an experienced judge of such things.

I have a 2000+ word brain dump on economics and technology that I'd appreciate feedback on. What would be the protocol? Should I link to it? Copy it into a comment? Start a top-level article about it?

I am not promising any deep insights here, just my own synthesis of some big ideas that are out there.

0RobinZ
I would post a link on the latest Open Thread - I don't believe an explicit protocol exists.

Perhaps I am missing something, but it seems to me that a world in which Godzilla was common knowledge would have a completely different history of biology. For one thing, it's hard to imagine that explaining Godzilla would not be a major goal of philosophers and scientists since the earliest days.

I imagine one of the basic questions would be whether Godzilla was a beast or a god, and answering this would be a high priority. What does Godzilla want? Where did he come from? Has he always existed? Are there more? Do they mate?

These seem like really big-deal questions when confronted by a sea monster which occasionally destroys towns.

0Strange7
Elementary education might include things like this.

So the easy answers might be:

Ben Bernanke

Mark Gertler

Michael Woodford

Greg Mankiw

It's not clear to me why macro-economists are rightly subject to such criticism. To me it's like asking a mathematician, "If you're so good at logical reasoning, why didn't you create the next killer app?"

Understanding how the economy works and applying that knowledge to a particular task are completely different.

1SecondWind
'Designing the next killer app' seems to rely heavily on predicting what people will want, which is many steps and a lot of knowledge away from logical reasoning.

So clearly adapting an old idea to a new situation is useful.

However, it may also be the case that there is an old idea which, if re-examined, will be seen to be useful in and of itself.

The problem with the Austrians is that their ideas are being considered and they are being rejected. See Bryan Caplan's Why I Am Not an Austrian Economist. (link seems not to be working)

I think this post overstates the case a bit. My general impression is that the scientific method "wins" even in economics and that later works are better than earlier works.

Now it might be true that the average macro-economist of today understands less than Keynes did, but I'd be hard pressed to say that the best don't understand more. Moreover, there are really great distillers. In macro, for example, Hicks distilled Keynes into something that I would consider more useful than the original.

Nonetheless, I think it is correct that someone should be ... (read more)

6MichaelVassar
I'm sure some people understand more than Keynes, both today and in his time, but can you name them? The understanding of the best unrecognized synthesizing geniuses of both today and Keynes' day isn't available. If you think that the most famous contemporary macro people know more than Keynes I won't laugh, just observe that they are probably using that knowledge to make hedge fund managers rich, not sharing it with you. Macro-economists are rightly subject to the criticism "if you're so smart, why aren't you rich".
4TrevinPeterson
There is a difference between rediscovering an old idea, and adapting an old idea to a new situation. Simply rediscovering an old idea does not grant much prestige. Austrians are constantly coming across Hayek quotes and parading them around as definitive solutions to current problems. The problem is that these ideas are every bit as untestable as they were on the day Hayek wrote them. A confirmation bias leads Austrians to see them as Truth, while Keynesians remain skeptical. When old ideas are adapted into a testable form they confer a great deal of prestige. There are all sorts of anecdotes about this happening, such as Henry Ford taking the idea of an assembly line from Oldsmobile and mixing it with his observations from a meat factory, to create the moving assembly line. The difference is that this is a testable idea that creates immediate results.

I remember reading that one of the most g-loaded tests was recognition time. I think the experiment involved flashing letters and timing how long it took to press the letter on a keyboard. The key correlate was "time until finger left the home keys", which the authors interpreted as the moment you realized what the letter was.

I also heard a case that sensory memory lasts for a short and relatively constant time among humans, and that differences in cognitive ability were strongly related to the speed of pushing information into sensory memory. The greater the speed, the larger the concept that could be pushed in before key elements started to leak out.

I had conceived of something like the Turing test but for intelligence period, not just general intelligence.

I wonder if general intelligence is about the domains in which a control system can perform.

I also wonder whether "minds" is too limiting a criterion for the goals of FAI.

Perhaps the goal could be stated as an IUCS. However, we don't know how to build an IUCS. So perhaps we can build a control system whose reference point is an IUCS. But we don't know how to build that either, so we build a control system whose reference point is a control system whose reference point . . . until we get to something that we can build. Then we press start.

Maybe this is a more general formulation?

This was my original thought until I realized that of course it cancels, or else the Earth would crack into pieces.

0RolfAndreassen
The non-cracking of the Earth demonstrates only that the tidal force is small relative to that required to crack the Earth apart, which may not be a particularly strong upper bound on human scales. :) However, RobinZ's numbers show that it's also small relative to human weights, so there we go.

Richard, do you believe that the quest for FAI could be framed as a special case of the quest for the Ideal Ultimate Control System (IUCS)? That is, intelligence in and of itself is not what we are after, but control. Perhaps FAI is the only route to IUCS, but perhaps not?

Note: Originally I wrote Friendly Ultimate Control System but the acronym was unfortunate.

-1Richard_Kennaway
I don't want to tout control systems as The Insight that will create AGI in twenty years, but if I were working on AGI, hierarchical control systems organised as described by Bill Powers (see earlier references) are where I'd start from, not Bayesian reasoning[1], compression[2], or trying to speed up a theoretically optimal but totally impractical algorithm[3]. And given the record of toy demos followed by the never-fulfilled words "now we just have to scale it up", if I were working on AGI I wouldn't bother mentioning it until I had a demo of a level that would scare Eliezer. Friendliness is a separate concern, orthogonal to the question of the best technological-mathematical basis for building artificial minds.

1. LessWrong, passim.
2. Marcus Hutter's Compression Prize.
3. AIXItl and the Gödel machine.
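A minimal sketch of the hierarchical idea (my own illustration in Python, not code from Powers; the names, gains, and dynamics are assumptions chosen to make a runnable example): an upper control loop acts only by setting the reference of a lower loop, which in turn acts on the environment.

```python
# Two-level control hierarchy: the upper loop controls position by
# setting the reference of a lower loop that controls velocity.
# All names and gains are illustrative assumptions.

def simulate(steps=200, dt=0.1):
    position, velocity = 0.0, 0.0
    position_ref = 10.0            # what the upper loop wants to perceive
    k_outer, k_inner = 0.5, 2.0    # proportional gains

    for _ in range(steps):
        # Upper loop: its "output" is the reference of the lower loop.
        velocity_ref = k_outer * (position_ref - position)
        # Lower loop: its output is an acceleration on the environment.
        accel = k_inner * (velocity_ref - velocity)
        # Environment: integrate the loop outputs.
        velocity += accel * dt
        position += velocity * dt

    return position

print(simulate())  # converges toward position_ref (~10.0)
```

Stacking more such layers, each setting references for the one below, is the basic shape of the architecture the Powers references describe.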
2markrkrebs
The neurology of human brains and the architecture of modern control systems are remarkably similar, with layers of feedback, and adaptive modelling of the problem space, in addition to the usual dogged iron-filing approach to goal seeking. I have worked on control systems which, as they add (even minor) complexity at higher layers of abstraction, take on eerie behaviors that seem intelligent, within their own small fields of expertise. I don't personally think we'll find anything different or ineffable or more, when we finally understand intelligence, than just layers of control systems. Consciousness, I hope, is something more and different in kind, and maybe that's what you were really after in the original post, but it's a subjective beast. OTOH, if it is "mere" complex behavior we're after, something measurable and Turing-testable, then intelligence is about to be within our programming grasp any time now. I LOVE the Romeo reference, but a modern piece of software would find its way around the obstacle so quickly as to make my dog look dumb, and maybe Romeo, too.

Well, I would consider the Pencil-MrHen system intelligent. I think further investigation would be required to determine that the pencil is not intelligent when it is not connected to MrHen, but that MrHen is intelligent when not connected to the pencil. It then makes sense to say that the intelligence originates from MrHen.

The problem with the self-referential criterion, from my perspective, is that it presumes a self.

It seems to me that ideas like "I" and "want" graft humanness onto other objects.

So, I want to see what happens if I try to di... (read more)

0MrHen
Sure, that makes perfect sense. I haven't really given this a whole lot of thought; you are getting the fresh start. :)

The self in self-referential isn't implied to be me or you or any form of "I". Whatever source of identity you feel comfortable with can use the term self-referential. In the case of your intelligent pencil, it very well may be the case that the pencil is self-updating in order to achieve what you are calling a goal.

A "want" can describe nonhuman behavior, so I am not convinced the term is a problem. It does seem that I am beginning to place atypical restrictions on its definition, however, so perhaps "goal" would work better in the end.

The main points I am working with:

* An entity can have a goal without being intelligent (perhaps I am confusing goal with purpose or behavior?)
* A non-intelligent entity can become intelligent
* Some entities have the ability to change, add, or remove goals
* These changes, additions, deletions are likely governed by other goals. (Perhaps I am confusing goals with wants or desires? Or merely causation itself?)
* The "original" goal could be deleted without making an entity unintelligent. The pencil could pick a different spot on the ground but this would not cause you to doubt its intelligence.

Please note that I am not trying to disagree (or agree) with you. I am just talking because I think the subject is interesting and I haven't really given it much thought. I am certainly no authority on the subject. If I am obviously wrong somewhere, please let me know.

It doesn't. My thought process was too silly to even bother explaining.

Thoughts about intelligence.

My hope is that some altruistic person will read this comment, see where I am wrong and point me to the literature I need to read. Thanks in advance.

I've been thinking about the problem of general intelligence. Before going in too deeply, I wanted to see if I had a handle on what intelligence is, period.

It seems to me that the people sitting in the library with me now are intelligent and that my pencil is not. So what is the minimum my pencil would have to do before I suddenly thought that it was intelligent?

Moving alone doesn't cou... (read more)

3Richard_Kennaway
You are talking about control systems. A control system has two inputs (called its "perception" and "reference") and one output. The perception is a signal coming from the environment, and the output is a signal that has an effect on the environment. For artificial control systems, the reference is typically set by a human operator; for living systems it is typically set within the organism. What makes it a control system is that firstly, the output has an effect, via the environment, on the perception, and secondly, the feedback loop thus established is such as to cause the perception to remain close to the reference, in spite of all other influences from the environment on that perception.

The answers to your questions are:

1. A "goal" is the reference input of a control system.
2. An "obstacle" is something which, in the absence of the output of the control system, would cause its perception to deviate from its reference.
3. "Complicated" means "I don't (yet) understand this."

Suggestions for readings. And a thought:

"Romeo wants Juliet as the filings want the magnet; and if no obstacles intervene he moves towards her by as straight a line as they. But Romeo and Juliet, if a wall be built between them, do not remain idiotically pressing their faces against its opposite sides like the magnet and the filings with the card. Romeo soon finds a circuitous way, by scaling the wall or otherwise, of touching Juliet's lips directly. With the filings the path is fixed; whether it reaches the end depends on accidents. With the lover it is the end which is fixed, the path may be modified indefinitely." -- William James, "The Principles of Psychology"
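A minimal sketch of such a loop in Python (my illustration; the proportional controller, gain, and disturbance values are assumptions added for the sake of a runnable example):

```python
# Feedback loop: the output acts on the environment so that the
# perception stays close to the reference despite a disturbance.

def control_loop(reference=20.0, steps=100, dt=0.1):
    perception = 0.0        # signal coming from the environment
    gain = 1.5              # proportional gain (assumed)
    disturbance = -3.0      # constant outside influence (assumed)

    for _ in range(steps):
        error = reference - perception
        output = gain * error                      # controller's action
        perception += (output + disturbance) * dt  # environment's response

    return perception

print(control_loop())  # settles near 18.0: close to the reference,
                       # offset by disturbance/gain
```

With no controller (gain = 0) the disturbance would drive the perception away without bound; the feedback loop is what holds it near the reference.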
0[anonymous]
So if something is capable, contrary to expectations, of achieving a constant state despite varying conditions, it's probably intelligent? I guess that in space, everything is intelligent.
1whpearson
Some food for philosophical thought, an oil drop that "solves" a maze. TL;DR it follows a chemical gradient due to it changing surface tension. I'd read something on the intentional stance.
2MrHen
If I were standing there catching the pencil and directing it to the spot on the floor, you wouldn't consider the pencil intelligent. The behavior observed is not pointing to the pencil in particular being intelligent. Just my two cents. I don't know anything about the concept of intelligence being defined as being able to pursue goals through complicated obstacles. If I had to guess at the missing piece it would probably be some form of self-referential goal making. Namely, this takes the form of the word, "want." I want to go to this spot on the floor. I can ignore a goal but it is significantly harder to ignore a want. At some point, my wants begin to dictate and create other wants. If I had to start pursuing a definition of intelligence, I would probably start here. But I don't know anything about the field so this could have already been tried and failed.
1Kaj_Sotala
If you don't mind a slightly mathy article, I thought Legg & Hutter's Universal Intelligence was nice. It talks about machine intelligence, but I believe it applies to all forms of intelligence. It also addresses some of the points you made here.

I just read their website.

It's embarrassing, but I have to say that honestly the centripetal force argument never occurred to me before. Rough calculations seem to indicate that a large (100 kg) man should be almost half a pound heavier in the daytime than at night. Kinda cool.

Now I am dying to get something big and stable enough to see if my home scale can pick it up.

4Eliezer Yudkowsky
Quick look didn't find it, but I don't see why this follows (and at a wild guess, I'm guessing it doesn't). Can you link?
3RolfAndreassen
Don't forget to adjust your calculations for not being on the equator, and to take into account that 'nighttime' is not equivalent to 'the Sun pulls you directly towards the center of the Earth'. Both tend to make the effect smaller.
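For scale, here is a back-of-envelope check in Python (my own calculation from standard constants, not figures from the thread), using the leading-order tidal approximation 2*GM*R/d^3 along the Sun-Earth axis:

```python
# Rough size of the Sun's differential ("tidal") pull at Earth's surface.
GM_SUN = 1.327e20   # Sun's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6   # Earth's radius, m
D_SUN = 1.496e11    # Earth-Sun distance, m
G_SURFACE = 9.81    # Earth's surface gravity, m/s^2

a_tidal = 2 * GM_SUN * R_EARTH / D_SUN**3
mass = 100.0        # kg

print(f"tidal acceleration: {a_tidal:.2e} m/s^2")        # ~5.1e-07
print(f"fraction of weight: {a_tidal / G_SURFACE:.2e}")  # ~5.1e-08
print(f"equivalent mass change for 100 kg: "
      f"{mass * a_tidal / G_SURFACE * 1e6:.1f} mg")      # ~5.1 mg
```

On these numbers the effect on a 100 kg person is a few milligrams, not half a pound - consistent with the replies above, and far below what a home scale can detect.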

Yes,

I could try to say that my work focuses only on understanding how growth and development take place, for example, but in practice it doesn't work that way.

A conversation with students, policy makers, even fellow economists will not go more than 5-10 minutes without taking a normative tack. Virtually everyone is in favor of more growth, and so the question is invariably, "what should we DO to achieve it?"

I don't have any connection to BIAC.

My specialty is human capital (education) and economic growth and development

0realitygrill
Ah. I know something of the former and little of the latter. I'd presume your interests are much more normative than mine.

Name: Karl Smith

Location: Raleigh, North Carolina

Born: 1978

Education: PhD, Economics

Occupation: Professor - UNC Chapel Hill

I've always been interested in rationality and logic but was sidetracked for many (12+) years after becoming convinced that economics was the best way to improve the lives of ordinary humans.

I made it to Less Wrong completely by accident. I was into libertarianism, which led me to Bryan Caplan, which led me to Robin Hanson (just recently). Some of Robin's stuff convinced me that Cryonics was a good idea. I searched for Cryonics and found ... (read more)

0realitygrill
Awesome. I'd love to hang with you if I'm there next year; you don't have any connections to BIAC, do you? I just applied for a postbac fellowship there. What's your specialty in econ?

I am nowhere near caught up on FAI readings, but here is a humble thought.

What I have read so far seems to be assuming a single-jump FAI. That is, once the FAI is set, it must take us to where we ultimately want to go without further human input. Please correct me if I am wrong.

What about a multistage approach?

The problem that people might immediately bring up is that a multistage approach might lead to elevating subgoals to goals. We say, "take us to mastery of nanotech" and the AI decides to rip us apart and organize all existing ribosomes under... (read more)

Yes, I am working my way through the sequences now. Hearing these ideas makes one want to comment, but so frequently it's only a day or two before I read something that renders my previous thoughts utterly stupid.

It would be nice to have a "read this and you won't be a total moron on subject X" guide.

Also, it would be good to encourage the readings about Eliezer's intellectual journey. Though it's at the bottom of the sequences page, I used it as "rest reading" between the harder sequences.

It did a lot to convince me that I wasn't inherently stupid. Knowing that Eliezer has held foolish beliefs in the past is helpful.

Well, that's of course not right. The primary loss in dropping an H-bomb on NYC is the loss of human life - both in a moral and an economic sense.

Here is a point to consider. Over the last 100 years the population of the earth has increased by 5 billion. We have created new places for all of those people to live and work. And that was done with a population much smaller than we have today. Over the next 100 years we may add 3 billion more, and we will need places for those people to live and work.

It's not immediately clear that the cost of building all of this in a new location is that huge, relatively speaking.

"Probably good enough" doesn't engender a lot of confidence. It would seem a tragedy to go through all of this and then not be reanimated because you carelessly chose the wrong org.

On the other hand spending too much time trying to pick the right org does seem like raw material for cryocrastination.

Does anyone have thoughts / links on whole-body vitrification? ALCOR claims that this is less effective than going neuro, but CI doesn't seem to offer a neuro option anymore.

0Paul Crowley
Disclaimer: I have no relevant expertise. That said, FWIW I suspect that whole-body people will be brought back first:

* if through bodily reanimation, because repair of the whole body will be easier than replacement of the body given only the severed head
* if through scanning/WBE, because it will be possible to scan their spinal columns as well as their brains and it will be easier to build them virtual bodies using their real bodies as a basis.

Though CI don't offer a neuro option, their focus (obviously) is preserving the information in the brain.

Could someone discuss the pluses and minuses of ALCOR vs the Cryonics Institute?

I think Eliezer mentioned that he is with CI because he is young. My reading of the websites seems to indicate that CI leaves a lot of work to be potentially done by loved ones or local medical professionals who might not be in the best state of mind or see fit to co-operate with a cryonics contract.

Thoughts?

1Kevin
Alcor is better. CI is cheaper and probably good enough.
8Alicorn
It's not at all obvious to me how to comparison-shop for cryonics. The websites are good as far as they go, but CI's in particular is tricky to navigate, funding with life insurance messes with my estimation of costs, and there doesn't seem to be a convenient chart saying "if you're this old and this healthy and this solvent and your family members are this opposed to cryopreservation, go with this plan from this org".

Eliezer:

Don't you realize that I have work to do and a personal life to engage in, without you posting things that I must obviously drop everything to read and think about, like the Bostrom paper? Have a heart, man. Have a heart.

I see some problems here but it doesn't seem quite as intractable as Alicorn suggests.

If your beliefs are highly correlated with those of your teachers then you need to immerse yourself in the best arguments of the opposing side. If you notice that you are not changing your mind very often then you have a deeper problem.

To give a few related examples: One of the things that gives me confidence in my major belief structure is that I am an Atheist Capitalist. But, as a child I was raised and immersed in Atheist Communism. I rejected the communism but not th... (read more)

I've also noticed "liberals" making more sense, but I attribute this to smart people abandoning conservative groups and jumping ship to liberal ones. This may mean that "conservative" policies are being under-argued.