Comment author: orthonormal 02 October 2010 11:24:30PM 1 point [-]

I understand how you feel about this, but I think most of the veterans would think more of a person who showed them respect by getting feedback before putting their first post on the main page. Certainly I would.

...I wonder if it would be possible to implement the following feature: the first post for a new account automatically goes to the Discussion page for a few days before it posts to the main site. If that were a known feature, would you be bothered by it?

Comment author: JohnDavidBustard 03 October 2010 01:43:11PM 0 points [-]

I think this comment highlights the distinction between popular and good.

Highly ranked posts are popular; whether they are good may or may not have anything to do with it.

Personally, I find all this kowtowing to the old guard a bit distasteful. One of my favorite virtues of academia is the double-blind submission process. Perhaps similar approaches could be taken here.

Comment author: NancyLebovitz 22 September 2010 10:27:23AM 1 point [-]

In regards to prediction: I just heard (starts at 9:20) some claims that no method of prediction for the economy is doing better than extremely crude models. Unfortunately, I haven't been able to find a cite for the "two young economists" who did the research.

However, I'm not sure that prediction is a matter of wisdom-- I think of wisdom as very general principles, and prediction seems to require highly specific knowledge.

It was obvious that real estate prices couldn't go up forever, especially as more and more people were speculating in real estate, but as far as I can tell, it was not at all obvious that such a large amount of the economy was entangled in real estate speculation that a real estate bust would have such large side effects.

Solutions to difficult technical problems became much more feasible after science was around for a while. I'm not dead certain we even have the beginnings for understanding complex social systems.

Part of the difficulty of prediction is that it's dependent on both science and tech which hasn't yet been discovered (our current world is shaped by computation having become easy while battery tech is still fairly recalcitrant) and on what people are doing-- and people are making guesses about what to do in a highly chaotic situation.

Taleb is interesting for working on how to live well when only modest amounts of prediction are feasible.

Comment author: JohnDavidBustard 22 September 2010 12:35:19PM 0 points [-]

Interesting points.

I suspect that predicting the economy with economics is like predicting a person's behaviour by studying their biology. My desire for wisdom is a desire for perspective: I want to know the rough landscape of the economy (like the internal workings of a body).

For example, I have little grasp of the industries contributing most to GDP or the taxes within my (or any other) country. In terms of government spending this site provides a nice overview for the UK, but it is only a start. I would love to know the chain of businesses and systems that provide the products I use each day. In particular, I'm very interested in the potential for technologically supported self-sufficiency as a means of providing a robust underpinning to society. To do this effectively, it's necessary to understand the systems that we depend upon.

While such understanding might not enable prediction, I think it does provide perspective on potential opportunities and threats (just as biology does). It also helps to focus on relative importance, similar to how concentrating on cash flow helps prioritise business decisions. Think of the social equivalent of worrying about too much paper usage in office printers while entire business units aren't profitable, or, similarly, being blind to opportunities that could render many other problems irrelevant (such as easy self-sufficiency reducing the necessity for potentially problematic government infrastructure).

Comment author: AnnaSalamon 21 September 2010 02:43:11AM 8 points [-]

I'm inclined to agree with your proposal, but I wonder if there are supplementary community norms that, if made explicit, might make it easier to venture into confusing and polarizing topics without losing LW's usual level of accuracy and of signal to noise. (I assume fear of filling the blog with nonsense, and thereby losing some good readers/commenters, is much of what currently keeps e.g. political discussion off of LW.)

Maybe it would help to have heuristics such as "if you don't have anything clear and obviously correct to say, don't say anything at all", that could be reiterated and enforced when tricky topics come up.

Comment author: JohnDavidBustard 22 September 2010 08:25:04AM 0 points [-]

I fear this would reduce Less Wrong to referencing research papers. Perhaps there is more value in applying rigour as disagreements emerge: a process for going from two people flatly disagreeing to establishing criteria to choose between them, in other words, a norm for reaching reasonable conclusions on a controversial topic. In this way there would be greater emphasis on turning ambiguous issues into reasonable ones, which I view as one of the main benefits of rationality.

Comment author: multifoliaterose 19 September 2010 02:39:25PM 2 points [-]

I'm very sympathetic to your comment. I feel that there's an emerging community of people interested in answering these questions at places like Less Wrong and GiveWell but that the discussion is very much in its infancy. The questions that you raise are fundamentally very difficult but one can still hope to make some progress on them.

I'll say that I find the line of thinking in Nick Bostrom's Astronomical Waste article to be a compelling justification for existential risk reduction in principle. But I'm still left with the extremely difficult question of determining what the most relevant existential risks are and what we can hope to do about them.

My own experience up until now has been that it's better to take some tangible action in real time rather than equivocating. See my Missed opportunities for doing well by doing good posting.

Comment author: JohnDavidBustard 19 September 2010 05:42:48PM *  3 points [-]

Thank you, I also agree with your comments on your posting. I generally prefer a balance of pragmatic action and theory. In fact, I view the 'have a go' approach to theoretical understanding as very useful as well. I think just roughly listing one's thoughts on a topic and then categorising them can be very revealing and really help provide perspective. I recently had a go at my priorities (utility function) and came up with the following:

  • To be loved
  • To be wise
  • To create things that I am proud of
  • To be entertained
  • To be respected
  • To be independent (ideally including being safe, relatively healthy and financially secure)

This is probably not perfect but it is something to build on (and a list I wouldn't mind a friendly AI optimising for either).

Also, as with the positive effects mentioned in your article, I've found giving to charity makes it easier for me to feel love (or at least friendship) towards others and to feel more cared for in return (perhaps simply because giving to charity makes me slightly nicer towards everyone I meet).

My current focus is wisdom: I feel uncomfortable that I don't have perspective on problems in society or on the structure of the economy (i.e. how my quality of life is maintained). When I mention these ideas to others, their reaction is generally to describe the problems as too hard or impossible. I think this is a very interesting form of rationality failure, because the same people would go to enormous lengths to construct a solution to a technical problem if they were told it was not possible. Why don't creative, intellectual and rational people apply their problem-solving skills to these kinds of issues? Why don't they 'have a go'?

Comment author: jimrandomh 19 September 2010 04:14:35PM 1 point [-]

Clumsy humans have caused plenty of disasters, too. Matching human dexterity with human-quality hardware is not such a high bar.

Comment author: JohnDavidBustard 19 September 2010 04:27:33PM 1 point [-]

True; in fact, despite my comments I am optimistic about the potential for progress in some of these areas. I think one significant problem is the inability to collaborate on improving them. For example, research projects in robotics are hard to build on because replicating them requires building an equivalent robot, which is often impractical. RoboCup is a start, as it at least provides a common criterion for measuring progress. I think a standardised simulator would help (with challenges that can be solved and shared within it), but even more useful would be robot designs that could be printed with a 3D printer (plus some assembly, like Lego) so that progress could be shared rapidly. I realise this is much less capable than human machinery, but I feel there is a lot further to go on the software and AI side.

Comment author: jimrandomh 19 September 2010 02:27:37PM *  2 points [-]

For example, we cannot get a robot to walk and run in a robust way (BigDog is a start, but it will be a while before it's doing martial arts), and we can't create a face recognition algorithm that matches human performance. We can't even make a robotic arm that can dynamically stabilise an arbitrary weight (i.e. pick up a general object reliably).

Two of these (walking/running, and stabilizing weights with a robotic arm) are at least partially hardware limitations, though. Human limbs can move in a much broader variety of ways, and provide a lot more data back through the sense of touch than robot limbs do. With comparable hardware, I think a narrow AI could probably do about as well as humans do.

Comment author: JohnDavidBustard 19 September 2010 03:45:04PM 1 point [-]

The real difficulty with both of these control problems is that we lack a theory for how to ensure the stability of learning-based control systems. Systems that appear stable can self-destruct after a number of iterations. A number of engineering projects have attempted to incorporate learning; however, because of a few high-profile disasters, such systems are generally avoided.

Comment author: Will_Newsome 19 September 2010 09:22:41AM 3 points [-]

Huh, I got the opposite impression - that the timeline for brain emulation was less uncertain than the timeline for AI.

It is less uncertain, but be careful to distinguish between uploads and emulation. Emulation just takes being able to scan at a sufficient level to get something brain-like; uploading requires sufficient resolution to capture actual personalities and the like. It's intuitively probable that you can get dangerous neuromorphic AI via emulation before you can get a full emulation of a specific, previously in-the-flesh human that would count as an 'upload'. But I don't have a strong technical argument for that proposition. Perhaps the Whole Brain Emulation Roadmap (PDF) would have more to say.

Comment author: JohnDavidBustard 19 September 2010 12:05:59PM *  2 points [-]

In terms of emulation, the resolution is currently good enough to identify molecules communicating across synapses. This enables an estimate of synapse strengths as well as a full wiring diagram of physical nerve shape. There are emulators for the electrical interactions of these systems. Also, our brains are robust enough that significant brain damage and major chemical alteration (ecstasy etc.) are recoverable from, so if anything brains are much more robust than electronics.

AI, in contrast, has real difficulty with anything but very specific problem areas, which rarely generalise. For example, we cannot get a robot to walk and run in a robust way (BigDog is a start, but it will be a while before it's doing martial arts), and we can't create a face recognition algorithm that matches human performance. We can't even make a robotic arm that can dynamically stabilise an arbitrary weight (i.e. pick up a general object reliably). All our learning algorithms have human-tweaked parameters to achieve good results, and hardly any of them can perform online learning beyond the constrained, manually fed training data used to construct them. As a result there are very few commercial applications of AI that operate unaided (i.e. not as a specific tool, equivalent to a word processor). I would love to imagine otherwise, but I don't understand where the confidence in AI performance is coming from. Does anyone even have a set of partial Turing-test-like steps that might lead to an AI (dangerous or otherwise)?

In response to The Meaning of Life
Comment author: JohnDavidBustard 19 September 2010 10:48:01AM *  4 points [-]

I really like this post. It touches on two topics that I am very interested in:

How society shapes our values (domesticates us)

and

What should we value (what is the meaning of life?)

I find the majority of discussions extremely narrow, focusing on details while rarely attempting to provide perspective. Like doing science without a theory, just performing lots of specific experiments without context or purpose.

1 Why are things the way they are and why do we value the things we value? A social and psychological focus, Less Wrong touches on these issues but appears focused on specific psychological studies rather than any overall perspective (I suspect this would start to touch on politics and so would not be discussed). I think our understanding of the system we are a part of significantly shapes our sense of meaning and purpose and, as a result, strongly influences our society.

I would go so far as to suggest we are psychologically incapable of pursuing goals that are inconsistent with our understanding of how the universe functions (sorry, Clippy): if we are selfish-gene Darwinists we will value winning and reproductive success; if we have a Confucian belief that the universe is a conflict between order and chaos we will pursue social stability and tradition. I have my own take on this for those who are interested (How we obtain our values, the meaning of life)

2 What problems do we want to solve? It seems much easier to find problems to solve than goals to attain. A recent post about charity mentioned GiveWell. This organisation at least evaluates whether progress is made, but as far as I am aware there is no economics of suffering, no utilitarian (or other) analysis of the relative significance of different problems. Is a destructive AI worse than global warming, or cancer, or child abuse, or obesity, or terrorism? Is there a rational means to evaluate this for a given utility function? Has anyone tried? (This is an area I'm looking into, so any links would be greatly appreciated.)

3 What can we do? Within instrumental rationality and related fields there are a lot of discussions of actions to achieve improvements in capability. Likewise for charity: lots of good causes. However, there seems to be relatively little discussion of what is likely to be achieved as a result of the action, as if any progress is justification enough to focus on it. For example, what will be the difference in quality of life if I pursue a maximally healthy lifestyle vs. a typical no-exercise slacker life? In particular, do I want to die of a heart attack, or of cancer and Alzheimer's (which, given my family history, are the two ways I'm likely to go)? If we had a realistic assessment of return on investment, as well as of how psychologically likely we are to achieve things, we could focus our actions rationally.

I suggest that if we know how things work, what the problems are and what we can do about them, then we have a pretty good start on the meaning of life. I am frequently frustrated by the lack of perspective on these issues; we seem culturally conditioned to focus on action and specific theoretical points rather than trying to get a handle on it all. Of course that might be more fun, and that might be a sensible utility function. But for my own peace of mind I'd like to check there isn't an alternative.

Comment author: Relsqui 16 September 2010 08:50:56PM *  2 points [-]

I like your post because it makes me feel bad.

Thanks, I think? You're not explicit about why it makes you feel bad, and I'm curious. (Rather, while you address it in the next sentence, I'm not sure I understand what kind of "feeling bad" you mean.)

I think for most people deep down, community is more important than ideology (or indeed achieving anything)

I think you've hit the nail on the head here.

but a community where you cannot be yourself is one in which you always feel uncomfortable

This is why it bothers me to see it happen. I'm an empathetic sort, and seeing my friend try to fit in like a square peg in a round pegboard makes me cringe. (Well, that, and I'm one of the people who finds the behavior obnoxious when applied to the wrong context.)

an intellectually direct way of communicating

I think this is an interesting way to phrase it, although I can't put my finger on why. What would you call the opposite? I'm on the lookout for terms to use for these which don't imply value on either side, since the only criteria for value I see are utility and effectiveness, which are context-dependent.

Comment author: JohnDavidBustard 16 September 2010 10:59:53PM 2 points [-]

I think this section of your post is part of what makes me feel bad about your comment. The reason I said I like it, is because I think it's important that people can talk about these things and the fact that your comments affect me in that way highlights that they are important to me.

I would have worded this more strongly, myself. In my experience, people who are themselves inclined towards reasoned debate, even civilly, drastically overestimate how much other people are also inclined towards debate and argument.

I can't speak for anyone else, but personally I don't think I drastically overestimate others' interest in debate; I'm painfully aware of how much hostility there is to making direct statements about even slightly controversial issues. When I talk that way with others, I'm not doing it to fit in; I'm doing it because I want to and because I feel driven to. I feel frustrated at having a different personality from the majority, and I don't view others' lifestyles as inherently more legitimate than my own. In particular, I have a desire to understand why society and my community work as they do. I feel there is a great deal of unspoken social dynamics and tradition acting as a mask for unjustified status hierarchies and passive-aggressive conflict. I love the directness of reasoned argument because I feel that it is basically fair. It can quickly sear away self-delusions and unjustified assumptions, getting to a lasting truth. A truth that, while unpalatable, is, at its best, independent of who has said it and how it has been said, avoiding the undesirable (for me at least) political maneuvering that seems to dominate so much of society.

For me, I'm looking for a community which is honest and fearless with itself and others. I'm less interested in productivity or instrumental rationality than simply being able to discuss issues in a direct way so that I can get a better understanding of them for my own satisfaction. Without this opportunity, I feel I am engaging in a social dance that never satisfies my desire to find what is true and what is important.

In terms of a neutral opposite something like:

Psychologically accommodating

might be good. It emphasises the fact that the communication is designed to be easy to absorb without implying manipulation. Both sound like they would be useful and both subtly imply their weaknesses (i.e. insult and compromise).

Oh and I should add, I like your forest : )

Comment author: satt 16 September 2010 08:41:53PM *  3 points [-]

With the disclaimer that I'm no expert and quite possibly wrong about some of this, here goes.

Is it correct, to say that the entropy prior is a consequence of creating an internally consistent formalisation of the aesthetic heuristic of preferring simpler structures to complex ones?

No. Or, at least, that's not the conscious motivation for the maximum entropy principle (MAXENT). As I see it, the justification for MAXENT is that entropy measures the "uncertainty" the prior represents, and we should choose the prior that represents greatest uncertainty, because that means assuming the least possible additional information about the problem.

Now, it does sometimes happen that MAXENT tells you to pick a prior with what I'd guess you think of as "simpler structure". Suppose you're hiding in your fist a 6-sided die I know nothing about, and you ask me to give you my probability distribution for which side'll come up when you roll it. As I know nothing about the die, I have no basis for imposing additional constraints on the problem, so the only operative constraint is that P(1) + P(2) + P(3) + P(4) + P(5) + P(6) = 1; given just that constraint, MAXENT says I should assign probability 1/6 to each side.

In that particular case, MAXENT gives a nice, smooth, intuitively pleasing result. But if we impose a new constraint, e.g. that the expected value of the die roll is 4.5 (instead of the 3.5 implied by the uniform distribution), MAXENT says the appropriate probability distribution is {0.054, 0.079, 0.114, 0.165, 0.240, 0.348} for sides 1 to 6 respectively (from here), which doesn't look especially simple to me. So for all but the most basic problems, I expect MAXENT doesn't conform to the "simpler structures" heuristic.
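The constrained distribution quoted above can be reproduced numerically. As a quick sketch (not part of the original comment; the function name and bisection bounds are my own choices), the maximum-entropy distribution on faces 1..6 subject to a mean constraint takes the exponential form p_i ∝ exp(λ·i), and λ can be found by bisection, since the resulting mean increases monotonically with λ:

```python
import math

def maxent_die(target_mean, n=6, tol=1e-12):
    """Maximum-entropy distribution on faces 1..n with a fixed mean.
    The solution is exponential-family: p_i proportional to exp(lam * i);
    lam is found by bisection because the mean is increasing in lam."""
    faces = range(1, n + 1)

    def mean_for(lam):
        weights = [math.exp(lam * i) for i in faces]
        z = sum(weights)
        return sum(i * w for i, w in zip(faces, weights)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    weights = [math.exp(lam * i) for i in faces]
    z = sum(weights)
    return [w / z for w in weights]

# With target mean 3.5 (no extra information), lam is ~0 and each face
# gets probability ~1/6, the uniform result; with target mean 4.5 the
# distribution tilts towards the high faces, matching the values above.
print([round(p, 3) for p in maxent_die(4.5)])
```

Note that the "uncertainty" interpretation falls straight out of this: the uniform case is just the unconstrained maximum, and each added constraint tilts the distribution only as far as it must.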

There is probably some definition of "simple" or "complex" that would make your heuristic equivalent to MAXENT, but I doubt it'd correspond to how we normally think of simplicity/complexity.

Comment author: JohnDavidBustard 16 September 2010 09:08:56PM 1 point [-]

Thank you, that's very interesting, and comforting.
