All of JohnDavidBustard's Comments + Replies

Thanks for the comment, I think it is very interesting to think about the minimum complexity algorithm that could plausibly be able to have each conscious experience. The fact that we remember events and talk about them and can describe how they are similar e.g. blue is cold and sad, implies that our internal mental representations and the connections we can make between them must be structured in a certain way. It is fascinating to think about what the simplest 'feeling' algorithm might be, and exciting to think that we may someday be able to create new conscious sensations by integrating our minds with new algorithms.

Thanks, it is very handy to get something that is compatible with SUMO.

Thank you for the thoughtful comments. I am not certain that the approach I am suggesting will be successful, but I am hoping that more complex experiences may be explainable from simpler essences, similar to the behaviour of fluids emerging from simpler atomic rules. I am currently focused on the assumption that the brain is similar to a modern reinforcement learning algorithm, where there are one or more large learnt structures and a relatively simple learning algorithm. The first thing I am hoping to look at is whether all the conscious experiences could be explai... (read more)

Great links, thank you. I hadn't considered the drug effects before; that is an interesting perspective on positive sensations. Also, I wanted to say I am a big fan of your work, particularly your media synthesis stuff. I use it when teaching deep learning to show examples of how to use academic source code to explore cutting-edge techniques.

A high level post on its use would be very interesting.

I think my main criticism of the Bayes approach is that it leads to the kind of work you are suggesting i.e. have a person construct a model and then have a machine calculate its parameters.

I think that much of what we value in intelligent people is their ability to form the model themselves. By focusing on parameter updating we aren't developing the AI techniques necessary for intelligent behavior. In addition, because correct updating does not guarantee good performance (because the model properties... (read more)

3jsteinhardt
But even "learning to learn" is done in the context of a model, it's just a higher-level model. There are in fact models that allow experience gained in one area to generalize to other areas (by saying that the same sorts of structures that are helpful for explaining things in one area should be considered in that other area). Talking about what an AI researcher would do is asking much more out of an AI than one would ask out of a human. If we could get an AI to even be as intelligent as a 3-year-old child then we would be more or less done. People don't develop sophisticated problem solving skills until at least high school age, so it seems difficult to believe that such a problem is fundamental to AGI. Another reference, this time on learning to learn, although unfortunately it is behind a pay barrier (Tenenbaum, Goodman, Kemp, "Learning to learn causal models"). It appears that there is also a book on more general (mostly non-Bayesian) techniques for learning to learn: Sebastian Thrun's book. I got the latter just by googling, so I have no idea what's actually in it, other than by skimming through the chapter descriptions. It's also not available online.

Eh not impossible... just very improbable (in a given world) and certain across all worlds.

I would have thought the more conventional explanation is that the other versions are not actually you (just very like you). This sounds like the issue of only economists acting in the way that economists model people. I would suspect that only people who fixate on such matters would confuse a copy with themselves.

I suspect that people who are vulnerable to these ideas leading to suicide are in fact generally vulnerable to suicide. There are lots of better reasons to... (read more)

-1David_Allen
"Very improbable" is the typical assumption with MWI, but I think that it is mistaken in most cases dealing with complex systems. Each wave-function sets limits on what can occur. Wave-functions don't have infinite extents, there are areas with zero amplitude. Each additional wave-function that must meet specific requirements further restricts the possible outcomes. In general, the likelihood of failing to meet the simultaneous condition grows exponentially as the system size grows linearly. Since quantum survival (avoiding death in some worlds, in some meaningful context) will usually require a very large number of quantum level alternatives to be simultaneously selected for, quantum survival will almost always be impossible. A person who experiences quantum survival once is very lucky, but almost certainly won't survive the next time. A person who fails to experience quantum survival never gets another chance. So my conclusion is that quantum immortality is impossible, not just very improbable.

Thanks for your reference; it is good to get down to some more specific examples.

Most AI techniques are model based by necessity: it is not possible to generalise from samples unless the sample is used to inform the shape of a model which then determines the properties of other samples. In effect, AI is model fitting. Bayesian techniques are one scheme for updating a model from data. I call them incomplete because they leave a lot of the intelligence in the hands of the user.
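The "AI is model fitting" point can be made concrete with a minimal sketch (the beta-binomial model here is my own hypothetical illustration, not from any referenced work): the human designs the model, and the Bayesian machinery merely updates its parameters from data.

```python
# Minimal sketch: Bayesian updating as one scheme for fitting a
# hand-designed model. The model (a biased coin with a Beta prior)
# is chosen by the user; the machine only updates its parameters.

def update_beta(alpha, beta, heads, tails):
    """Conjugate update of a Beta(alpha, beta) prior from coin-flip counts."""
    return alpha + heads, beta + tails

def posterior_mean(alpha, beta):
    """Expected bias of the coin under the current Beta posterior."""
    return alpha / (alpha + beta)

# Start from a uniform Beta(1, 1) prior, then observe 7 heads and 3 tails.
a, b = update_beta(1, 1, heads=7, tails=3)
print(posterior_mean(a, b))  # 8/12 ≈ 0.667
```

All the "intelligence" here went into choosing the Beta-coin model in the first place; the update itself is mechanical, which is exactly the incompleteness being described.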

For example, in the thesis reference the author designs a model of transformations... (read more)

2jsteinhardt
Model selection is definitely one of the biggest conceptual problems in GAI right now (I would say that planning once you have a model is of comparable importance / difficulty). I think the way to solve this sort of problem is by having humans carefully pick a really good model (flexible enough to capture even unexpected situations while still structured enough to make useful predictions). Even with SVMs you are implicitly assuming some sort of structure on the data, because you usually transform your inputs into some higher-dimensional space consisting of what you see as useful features in the data. Even though picking the model is the hard part, using Bayes by default seems like a good idea because it is the only general method I know of for combining all of my assumptions without having to make additional arbitrary choices about how everything should fit together. If there are other methods, I would be interested in learning about them. What would the "really good model" for a GAI look like? Ideally it should capture our intuitive notions of what sorts of things go on in the world without imposing constraints that we don't want. Examples of these intuitions: superficially similar objects tend to come from the same generative process (so if A and B are similar in ways X and Y, and C is similar to both A and B in way X, then we would expect C to be similar to A and B in way Y, as well); temporal locality and spatial locality underly many types of causality (so if we are trying to infer an input-output relationship, it should be highly correlated over inputs that are close in space/time); and as a more concrete example, linear momentum tends to persist over short time scales. A lot of work has been done in the past decade on formalizing such intuitions, leading to nonparametric models such as Dirichlet processes and Gaussian processes. See for instance David Blei's class on Bayesian nonparametrics (http://www.cs.princeton.edu/courses/archive/fall07/cos597C/index.h

From what I understand, in order to apply Bayesian approaches in practical situations it is necessary to make assumptions which have no formal justification, such as the distribution of priors or the local similarity of analogue measures (so that similar but not exact predictions can be informative). This changes the problem without necessarily solving it. In addition, it doesn't address the issue of AI problems not based on repeated experience, e.g. automated theorem proving. The advantage of statistical approaches such as SVMs is that they produce practi... (read more)

1jsteinhardt
Bayesian approaches tend to be more powerful than other statistical techniques in situations where there is a relatively limited supply of data. This is because Bayesian approaches, due to being model-based, tend to have a richer structure that allows them to take advantage of more of the structure of the data; a second reason is because Bayes allows for the explicit integration of prior assumptions and is therefore usually a more aggressive form of inference than most frequentist methods. I tried to find a good paper demonstrating this (called "learning from one example"); unfortunately I only came across this PhD thesis --- http://www.cs.umass.edu/~elm/papers/thesis.pdf --- although there is certainly a lot of work being done on generalizing from one, or a small number of, examples.
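The small-data point can be seen in a toy example (my own, not from the cited thesis): with a single observed success, the frequentist maximum-likelihood estimate commits fully to the data, while even a flat prior tempers the Bayesian estimate.

```python
# Toy illustration of Bayesian vs. frequentist inference from one sample:
# estimating a success rate after observing a single success.

successes, trials = 1, 1

# Maximum likelihood: fully confident after one observation.
mle = successes / trials  # 1.0

# Bayesian: uniform Beta(1, 1) prior; posterior is Beta(2, 1).
prior_a, prior_b = 1, 1
post_a = prior_a + successes
post_b = prior_b + (trials - successes)
bayes_mean = post_a / (post_a + post_b)  # 2/3, hedged toward the prior

print(mle, bayes_mean)
```

With one data point the prior dominates; as data accumulates the two estimates converge, which is why the Bayesian advantage is largest in the limited-data regime.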

Thank you very much for your great reply. I'll look into all of the links. Your comments have really inspired me in my exploration of mathematics. They remind me of the aspect of academia I find most surprising: how it can so often be ideological, defensive and secretive whilst also supporting those who sincerely, openly and fearlessly pursue the truth.

Thank you, my main goal at the moment is to get a handle on statistical learning approaches and probability. I hope to read Jaynes's book and The Nature of Statistical Learning Theory once I have some time to devote to them. However, I would love to find an overview of mathematics, particularly one which focuses on practical applications or problems. One of the other posts mentioned the Princeton Companion to Mathematics, and that sounds like a good start. I think what I would like is to read something that could explain why different fields of mathematics w... (read more)

3multifoliaterose
Upvoted for a thoughtful comment. 1. I don't know anything about statistical learning theory. 2. I don't know what kinds of probability you're interested in learning, but would recommend Concrete Mathematics: A Foundation for Computer Science by Graham, Knuth and Patashnik and William Feller's two volume set An Introduction to Probability Theory and Its Applications. 3. I would second the recommendation of the Princeton Companion to Mathematics but would also warn it does not go into enough depth for one to get an accurate understanding of what many of the subjects discussed therein are about. This is understandable given space constraints. 4. The edifice of pure mathematics is vast and the number of people alive who could give a good overview of existing mathematics as a whole is tiny and possibly zero. 5. As a matter of practice, much of the information about how mathematicians learn and think about a given subject is never recorded. See this comment by SarahC and Bill Thurston's MathOverflow question Thinking and Explaining. 6. On average I've found reading math books that adopt a historical approach to the material therein to be considerably more useful than reading math books that adopt an axiomatic approach to the material therein. 7. Based on my (limited) impression of applied math, it's not uncommon for people to use advanced mathematical techniques to solve a practical problem because doing so makes for a good marketable story rather than because the advanced mathematical techniques are genuinely useful to analyzing the practical problem at hand. 8. There is an issue of a high noise-to-signal ratio in mathematics textbooks corresponding to the fact that many authors of textbooks don't have the depth of understanding of the creators of the theories that they're writing about and correspondingly do not emphasize the key points. 9. Concerning your suspicion that "mathematics is as it is because it appeals to those who like puzzles, r

So, assuming survival is important, a solution that maximises survival plus wireheading would seem to solve that problem. Of course it may well just delay the inevitable heat death ending but if we choose to make that important, then sure, we can optimise for survival as well. I'm not sure that gets around the issue that any solution we produce (with or without optimisation for survival) is merely an elaborate way of satisfying our desires (in this case including the desire to continue to exist) and thus all FAI solutions are a form of wireheading.

One frustration I find with mathematics is that it is rarely presented like other ideas. For example, few books seem to explain why something is being explained prior to the explanation. They don't start with a problem, outline its solution, provide the solution and then summarise this process at the end. They present one 'interesting' proof after another, requiring a lot of faith and patience from the reader. Likewise, they rarely include grounded examples within the proofs so that the underlying meaning of the terms can be maintained. It is as if the field ... (read more)

1multifoliaterose
I agree with your remarks here and share your frustration. While books of the type that you're looking for are relatively uncommon, over the years I've amassed a list of ones that I've found very good. What subject(s) are you interested in learning? (N.B. There are large parts of math that I'm ignorant of - in particular I know almost nothing about applied math and so may not be able to say anything useful - I just thought I'd ask in case I can help.)

I'm not sure I understand the distinction between an answer that we would want and a wireheading solution. Are not all solutions wireheading, with an elaborate process to satisfy our status concerns? I.e. is there a real difference between a world that satisfies what we want and directly altering what we want? If the wire in question happens to be an elaborate social order rather than a direct connection, why is that different? What possible goal could we want pursued other than the one which we want?

0whpearson
From an evolutionary point of view, those things that manage to procreate will outcompete those things that change themselves to not care about that and just wirehead. So in non-singleton situations, alien encounters and any form of resource competition, it matters whether you wirehead or not. Pleasure, in an evolved creature, can be seen as giving (very poor) map-to-territory information about the future influence of the patterns that make up you.

Ok, so how about this work around.

The current approach is to have a number of human intelligences continue to explore this problem until they enter a mental state C (for convinced they have the answer to FAI). The next stage is to implement it.

We have no other route to knowledge other than to use our internal sense of being convinced. I.e. no oracle to tell us if we are right or not.

So what if we formally define what this mental state C consists of and then construct a GAI which provably pursues only the objective of creating this state. The advantage bein... (read more)

Interesting; if I understand correctly, the idea is to find a theoretically correct basis for deciding on a course of action given existing knowledge, then to make this calculation efficient, and then to direct it towards a formally defined objective.

As distinct from a system which, potentially suboptimally, attempts solutions and tries to learn improved strategies, i.e. one in which the theoretical basis for decision making is ultimately discovered by the agent over time (e.g. as we have done with the development of probability theory). I think the perspective... (read more)

2Vladimir_Nesov
Yes, but there is only one top-level objective, to do the right thing, so one doesn't need to define an objective separately from the goal system itself (and improving state of knowledge is just another thing one can do to accomplish the goal, so again not a separate issue). FAI really stands for a method of efficient production of goodness, as we would want it produced, and there are many landmines on this path, in particular humanity in its current form doesn't seem to be able to retain its optimization goal in the long run, and the same applies to most obvious hacks that don't have explicit notions of preference, such as upload societies. It's not just a question of speed, but also of ability to retain the original goal after quadrillions of incompletely understood self-modifications.

If there is an answer to the problem of creating an FAI, it will result from a number of discussions and ideas that lead a set of people to agreeing that a particular course of action is a good one. By modelling psychology it will be possible to determine all the ways this can be done. The question then is why choose one over any of the others? As soon as one is chosen it will work and everyone will go along with it. How could we rate each one? (they would all be convincing by definition). Is it meaningful to compare them? Is the idea that there is some transcendent answer that is correct or important that doesn't boil down to what is convincing to people?

2Vladimir_Nesov
Understanding the actual abstract reasons for agents' decisions (such as decisions about agreeing with a given argument) seems to me a promising idea, I'm trying to make progress on that (agents' decisions don't need to be correct or well-defined on most inputs for the reasons behind their more well-defined behaviors to lead the way to figuring out what to do in other situations or what should be done where the agents err). Note that if you postulate an algorithm that makes use of humans as its elements, you'd still have the problems of failure modes, regret for bad design decisions and of the capability to answer humanly incomprehensible questions, and these problems need to be already solved before you start the thing up.

When I say feel, I include:

I feel that is correct. I feel that is proved etc.

Regardless of the answer, it will ultimately involve our minds expressing a preference. We cannot escape our psychology. If our minds are deterministic computational machines within a universe without any objective value, all our goals are merely elaborate ways to make us feel content with our choices and a possibly inconsistent set of mental motivations. Attempting to model our psychology seems like the most efficient way to solve this problem. Is the idea that there is some othe... (read more)

3Vladimir_Nesov
Which problem? You need to define which action should AI choose, in whatever problem it's solving, including the problems that are not humanly comprehensible. This is naturally done in terms of actual humans with all their psychology (as the only available source of sufficiently detailed data about what we want), but it's not at all clear in what way you'd want to use (interpret) that human data. "Attempting to model psychology" doesn't answer any questions. Assume you have a proof-theoretic oracle and a million functioning uploads living in a virtual world however structured, so that you can run any number of experiments involving them, restart these experiments, infer the properties of whole infinite collections of such experiments and so on. You still won't know how to even approach creating a FAI.

Ok, I certainly agree that defining the goal is important. Although I think there is a definite need for a balance between investigation of the problem and attempts at its solution (as each feed into one another). Much as how academia currently functions. For example, any AI will need a model of human and social behaviour in order to make predictions. Solving how an AI might learn this would represent a huge step towards solving FAI and a huge step in understanding the problem of being friendly. I.e. whatever the solution is will involve some configuration... (read more)

1Vladimir_Nesov
Unfortunately, if you think about it, "predicting how a person feels" isn't really helpful to anything, and doesn't contribute to the project of FAI at all (see Are wireheads happy? and The Hidden Complexity of Wishes, for example). The same happens with other obvious ideas that you think up in the first 5 minutes of considering the problem, and which appear to argue that "research into nuts and bolts of AGI" is relevant for FAI. But on further reflection, it always turns out that these arguments don't hold any water. The problem comes down to the question of understanding what it is exactly you want FAI to do, not of how you'd manage to write an actual program that does that with reasonable efficiency. The horrible truth is that we don't have the slightest technical understanding of what it is we want.

I suggest just getting some casual exercise or watching some good films and tv shows. They're full of emotionally motivating experiences.

I think there is a worrying tendency to promote puritan values on LW. I personally see no moral problem with procrastination, or even feeling bad every so often. I feel worried that I might not hit deadlines or experience some practical consequence from not working on a task but I wouldn't want to add moral guilt. I think if people lose sight of the pleasures in life they become nihilistic which in turn leads them to be ... (read more)

8erratio
I don't think you have an accurate model of the community here. For example, people don't talk about being productive all the time, they talk about reducing akrasia, which is being unproductive when you would genuinely prefer to be more productive. I see this site as promoting perfectionism - trying to get the most value out of what you currently want to be doing. If what I really want to do is travel the world then I should make concrete plans to do that rather than sit at home watching travel shows, which is what most people do. And if I have an assignment due but I want to relax, the best way to enjoy that relaxation would be to get that assignment done so that I don't feel guilty about it during my fun time. If you really think that the people here would prefer to be puritanical, I refer you to the recentish posts about video games, board games, and Go.

I am sure these are interesting references for studying pure mathematics but do they contribute significantly to solving AI?

In particular, it is interesting that none of your references mention any existing research on AI. Are there any practical artificial intelligence problems that these mathematical ideas have directly contributed towards solving?

E.g. Vision, control, natural language processing, automated theorem proving?

While there is a lot of focus on specific, mathematically defined problems on LessWrong (usually based on some form of gambling), the... (read more)

2Vladimir_Nesov
The main mystery in FAI, as I currently see it, is how to define its goal. The question of efficient implementation comes after that and depending on that. There is no point in learning how to efficiently solve the problem you don't want to be solved. Hence the study of decision theory, which in turn benefits from understanding math. See the "rationality and FAI" section, Eliezer's paper for a quick introduction, also stuff from sequences, for example complexity of value.

I'm not sure of the merit of studying philosophy as opposed to just personally thinking about philosophical ideas. For me, the most profound pragmatic benefit has been to deeply alter my own psychology as a result of examining ideas like free will and morality. I had a lot of unexamined assumptions and strongly felt conventions and taboos that I managed to overcome through examining my own feelings in a philosophical way. This is very different from the kind of learnt understanding that can be obtained by reading other people's ideas. I think it is very com... (read more)

Each time a question like this comes up it seems to get downvoted as a bad question. I think it's a great question, just one for which there are no obviously satisfactory answers. Dennett's approach seems to be to say: if you just word things differently, it's all fine, nothing to see here. But to me this is a weird avoidance of the question.

We feel there is a difference between living things and inanimate ones. We believe that other people and some animals are feeling things that are similar to the feelings we have. Many people would find it absurd to think... (read more)

8humpolec
This sounds like the point Pinker makes in How the Mind Works - that apart from the problem of consciousness, concepts like "thinking" and "knowing" and "talking" are actually very simple:

There have been (and continue to be) many approaches to this, in fact the term Good old fashioned AI basically refers to this. It is very interesting that significant progress has not been made with this approach. This has led to greater use of statistical techniques, such as support vector machines or Bayesian networks. A basic difficulty with any approach to AI is that many techniques merely change the problem of learning, generalisation and problem solving into another form rather than solving it. E.g. Formal methods for software development move the pr... (read more)

Has anyone encountered a formal version of this? I.e. a site for the creation of formal logical arguments. Users can create axioms, assign their confidence to them and structure arguments using them. Users can then see the logical consequences of their beliefs. I think it would make a very interesting format for turning debate into a competitive game, whose results are rigorous, machine readable, arguments.
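A minimal sketch of what the engine behind such a site might look like (every name and the min-confidence rule here are my own hypothetical choices, not an existing system): users assert axioms with confidences plus implication rules, and forward chaining derives the consequences, carrying the weakest confidence along each chain.

```python
# Hypothetical sketch of the proposed debate tool: forward chaining over
# user-supplied axioms (with confidences) and implication rules.
# A derived statement inherits the minimum confidence of its premises.

def derive(axioms, rules):
    """axioms: {statement: confidence}; rules: list of (premises, conclusion)."""
    beliefs = dict(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in beliefs for p in premises):
                conf = min(beliefs[p] for p in premises)
                # Keep the strongest derivation found so far.
                if beliefs.get(conclusion, 0.0) < conf:
                    beliefs[conclusion] = conf
                    changed = True
    return beliefs

beliefs = derive(
    {"minds are physical": 0.9, "physical systems are simulable": 0.8},
    [(["minds are physical", "physical systems are simulable"],
      "minds are simulable")],
)
print(beliefs["minds are simulable"])  # 0.8
```

Even this crude min-rule makes disagreements machine-readable: two users with different axiom confidences can see exactly which derived conclusions diverge and why.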

0DilGreen
While I am certainly not against the idea of a tool that can be used to create formal arguments, the proposal has a subtle but radical difference. DISCLAIMER: I am not a mathematician, and do not fully understand the concepts I attempt to explain in the following. In his work published as 'Notes on the Synthesis of Form', Chris. Alexander developed an algorithm for converting a matrix of relationship strengths between analysed sub-elements of a design problem into a 'tree-like' structure. In other words, a hierarchical diagram in which each node can have one connection only, to a higher status node. The number of nodes in each level decreases as one moves upwards, culminating in a single 'master' or 'root' node. Following the success the publication of 'Notes...' brought, Alexander was employed to work on the development of the metro rail system in San Francisco (the BART), and put his method to work. As a rationalist, he was concerned to find that the results of his work appeared to be failing to fully address the realities of the design problems involved. His conclusion was that the necessary function of his transformative algorithm which selected the least significant relationship linkages to be broken in order to derive the tree-like diagram was the cause of the problem; some identified real-world relationships were being ignored. And even though these might be ranked lowly, omitting them altogether was destructive. The essay which captures this understanding is published as 'A City is not a Tree' - read it here: http://www.rudi.net/pages/8755. In it, Alexander contrasts the tree-like diagram with another; the semi-lattice diagram, which, although still hierarchical, allows for connections across branches, as it were, so that overlapping sets of relationships are legal. Semi-lattices, I believe, are not susceptible to formal logical analysis, but nevertheless can be better mapping tools for complex, real-world systems. My proposal would deliberately allow

I think this comment highlights the distinction between popular and good.

High ranked posts are popular, good may or may not have anything to do with it.

Personally I find all this kowtowing to the old guard a bit distasteful. One of my favorite virtues of academia is the double blind submissions process. Perhaps there are similar approaches that could be taken here.

Interesting points.

I suspect that predicting the economy with economics is like predicting a person's behaviour from studying their biology. My desire for wisdom is in the form of perspective: I want to know the rough landscape of the economy (like the internal workings of a body).

For example I have little grasp of the industries contributing most to GDP or the taxes within my (or any other) country. In terms of government spending this site provides a nice overview for the UK, but it is only the start. I would love to know the chain of businesses and sys... (read more)

I fear this would reduce LessWrong to referencing research papers. Perhaps there is more value in applying rigor as disagreements emerge. I.e. a process of going from two people flatly disagreeing to establishing criteria to choose between them. I.e. a norm concerning a process for reaching reasonable conclusions on a controversial topic. In this way there would be greater emphasis on turning ambiguous issues into reasonable ones. Which I view as one of the main benefits of rationality.

Thank you, I also agree with your comments on your posting. I generally prefer a balance of pragmatic action with theory. In fact, I view the 'have a go' approach to theoretical understanding to be very useful as well. I think just roughly listing ones thoughts on a topic and then categorising them can be very revealing and really help provide perspective. I recently had a go at my priorities (utility function) and came up with the following:

  • To be loved
  • To be wise
  • To create things that I am proud of
  • To be entertained
  • To be respected
  • To be independent (id
... (read more)
1NancyLebovitz
In regards to prediction: I just heard (starts at 9:20) some claims that no method of prediction for the economy is doing better than extremely crude models. Unfortunately, I haven't been able to find a cite for the "two young economists" who did the research. However, I'm not sure that prediction is a matter of wisdom-- I think of wisdom as very general principles, and prediction seems to require highly specific knowledge. It was obvious that real estate prices couldn't go up forever, especially as more and more people were speculating in real estate, but as far as I can tell, it was not at all obvious that such a large amount of the economy was entangled in real estate speculation that a real estate bust would have such large side effects. Solutions to difficult technical problems became much more feasible after science was around for a while. I'm not dead certain we even have the beginnings for understanding complex social systems. Part of the difficulty of prediction is that it's dependent on both science and tech which hasn't yet been discovered (our current world is shaped by computation having become easy while battery tech is still fairly recalcitrant) and on what people are doing-- and people are making guesses about what to do in a highly chaotic situation. Taleb is interesting for working on how to live well when only modest amounts of prediction are feasible.
8multifoliaterose
My guess would be that the situation is that the "self-help" genre has a really bad name among creative/intellectual/rational people because the quality of people who have written in it is so low, and that consequently creative/intellectual/rational people feel squeamish about even entertaining the thought of doing an analysis of the type you describe. Basically, when problems are really obviously important, lots of low quality people get attracted to them, so that when high quality people work on them they're at risk of signaling that they're of low quality. When high quality people work on more arcane things that are of subtle importance there's not the issue of being confused with hordes of low quality people. The dynamic described above has the very unfortunate consequence that many of the most important problems are simply not addressed.

True, in fact despite my comments I am optimistic about the potential for progress in some of these areas. I think one significant problem is the inability to collaborate on improving them. For example, research projects in robotics are hard to build on because replicating them requires building an equivalent robot, which is often impractical. RoboCup is a start, as at least it provides a common criterion for measuring progress. I think a standardised simulator would help (with challenges that can be solved and shared within it) but even more useful would be t... (read more)

0[anonymous]
I would use MakerBot instead, since the development trajectory is enhanced by thousands of interested MakerBot operators who can improve and build upgrades for the printer. The UP! 3D printer, on the other hand, is not open source and a lot more expensive.

The real difficulty with both these control problems is that we lack a theory for how to ensure the stability of learning-based control systems. Systems that appear stable can self-destruct after a number of iterations. A number of engineering projects have attempted to incorporate learning; however, because of a few high-profile disasters, such systems are generally avoided.

1jimrandomh
Clumsy humans have caused plenty of disasters, too. Matching human dexterity with human-quality hardware is not such a high bar.

In terms of emulation, the resolution is currently good enough to identify molecules communicating across synapses. This enables an estimate of synapse strengths as well as a full wiring diagram of physical nerve shape, and there are emulators for the electrical interactions of these systems. Also, our brains are robust enough that significant brain damage and major chemical alteration (ecstasy etc.) are recoverable from, so if anything brains are much more robust than electronics. AI, in contrast, has real difficulty achieving anything but very specific proble... (read more)

0Will_Newsome
I'm confused. You're saying de novo AGI is harder than brain emulation. That's debatable (I'd rather not debate it on Less Wrong), but I don't see how it's a response to anything I said.
2jimrandomh
Two of these (walking/running, and stabilizing weights with a robotic arm) are at least partially hardware limitations, though. Human limbs can move in a much broader variety of ways, and provide a lot more data back through the sense of touch than robot limbs do. With comparable hardware, I think a narrow AI could probably do about as well as humans do.

I really like this post. It touches on two topics that I am very interested in:

How society shapes our values (domesticates us)

and

What should we value (what is the meaning of life?)

I find the majority of discussions extremely narrow, focusing on details while rarely attempting to provide perspective. Like doing science without a theory, just performing lots of specific experiments without context or purpose.

1. Why are things the way they are and why do we value the things we value? A social and psychological focus. Less Wrong touches on these issues but ap... (read more)

2multifoliaterose
I'm very sympathetic to your comment. I feel that there's an emerging community of people interested in answering these questions at places like Less Wrong and GiveWell but that the discussion is very much in its infancy. The questions that you raise are fundamentally very difficult but one can still hope to make some progress on them. I'll say that I find the line of thinking in Nick Bostrom's Astronomical Waste article to be a compelling justification for existential risk reduction in principle. But I'm still left with the extremely difficult question of determining what the most relevant existential risks are and what we can hope to do about them. My own experience up until now has been that it's better to take some tangible action in real time rather than equivocating. See my Missed opportunities for doing well by doing good posting.

I think this section of your post is part of what makes me feel bad about your comment. The reason I said I like it is that I think it's important that people can talk about these things, and the fact that your comments affect me in that way highlights that they are important to me.

I would have worded this more strongly, myself. In my experience, people who are themselves inclined towards reasoned debate, even civilly, drastically overestimate how much other people are also inclined towards debate and argument.

I can't speak for anyone else, but pers... (read more)

1Relsqui
Ah, I think I understand now. Thank you.

Hmm--I don't think that either honesty or fearlessness requires directness. You can learn a lot from the social dance if you know how to read it, including some things it's very hard to communicate any other way. My point here is not to refute your perspective, just to observe that your goals (honesty, truth, and so forth) do not necessarily require directness.

Human language is an imperfect tool for conveying the contents of human minds. Only ever using it directly limits us to expressing the symbols it has words for. Taking advantage of implication and social convention lets us derive more information from our limited symbol set. The difference is like counting in unary vs. counting in decimal. Instead of only having the presence or absence of symbols to communicate value, you get the benefit of place values. With a frustratingly subtle change in expression (moving a digit to the left), you get the power to say much more, and more succinctly. Obviously it's not as useful when discussing topics that we do have words for, but for difficult-to-nail-down things like emotion and desire, I find it invaluable.

I like that. I might not call it catchy, but it's definitely a clear descriptor, and I think it's accurate.

Thanks! I don't put as much active work into it as perhaps it deserves.

Thank you, that's very interesting, and comforting.

Thank you, this is very interesting. I'm not sure of the etiquette, but I'm reposting a question from an old article, that I would really appreciate your thoughts on.

Is it correct to say that the entropy prior is a consequence of creating an internally consistent formalisation of the aesthetic heuristic of preferring simpler structures to complex ones?

If so I was wondering if it could be extended to reflect other aesthetics. For example, if an experiment produces a single result that is inconsistent with an existing simple physics theory, it may be that t... (read more)

3satt
With the disclaimer that I'm no expert and quite possibly wrong about some of this, here goes.

No. Or, at least, that's not the conscious motivation for the maximum entropy principle (MAXENT). As I see it, the justification for MAXENT is that entropy measures the "uncertainty" the prior represents, and we should choose the prior that represents greatest uncertainty, because that means assuming the least possible additional information about the problem.

Now, it does sometimes happen that MAXENT tells you to pick a prior with what I'd guess you think of as "simpler structure". Suppose you're hiding in your fist a 6-sided die I know nothing about, and you ask me to give you my probability distribution for which side'll come up when you roll it. As I know nothing about the die, I have no basis for imposing additional constraints on the problem, so the only operative constraint is that P(1) + P(2) + P(3) + P(4) + P(5) + P(6) = 1; given just that constraint, MAXENT says I should assign probability 1/6 to each side.

In that particular case, MAXENT gives a nice, smooth, intuitively pleasing result. But if we impose a new constraint, e.g. that the expected value of the die roll is 4.5 (instead of the 3.5 implied by the uniform distribution), MAXENT says the appropriate probability distribution is {0.054, 0.079, 0.114, 0.165, 0.240, 0.348} for sides 1 to 6 respectively (from here), which doesn't look especially simple to me. So for all but the most basic problems, I expect MAXENT doesn't conform to the "simpler structures" heuristic.

There is probably some definition of "simple" or "complex" that would make your heuristic equivalent to MAXENT, but I doubt it'd correspond to how we normally think of simplicity/complexity.
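As a sketch of where numbers like satt's come from: under a mean constraint, the maximum-entropy distribution over die faces takes the exponential form p_i ∝ exp(λ·i), and λ can be found numerically. The function name `maxent_die` and the bisection approach are my own illustration, not anything from the thread.

```python
import math

def maxent_die(target_mean, sides=6, tol=1e-12):
    """Maximum-entropy distribution over faces 1..sides with a fixed
    expected value. The solution has the form p_i proportional to
    exp(lam * i); we find lam by bisection on the implied mean."""
    faces = range(1, sides + 1)

    def mean_for(lam):
        weights = [math.exp(lam * i) for i in faces]
        z = sum(weights)
        return sum(i * w for i, w in zip(faces, weights)) / z

    lo, hi = -50.0, 50.0  # mean_for is increasing in lam
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    weights = [math.exp(lam * i) for i in faces]
    z = sum(weights)
    return [w / z for w in weights]

probs = maxent_die(4.5)
print([round(p, 3) for p in probs])  # close to {0.054, 0.079, 0.114, 0.165, 0.240, 0.348}
```

Note that with no mean constraint (λ = 0) this reduces to the uniform 1/6 distribution, matching the first case satt describes.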

Is there a bound on the amount of data necessary to correct a prior with a given magnitude of error? Likewise, if the probability is the result of a changing system, I presume the pdf estimates could well be consistently inaccurate, as they are constantly adjusting to events whose local probability is changing. Does the Bayesian approach help over, say, fitting a model to arbitrary samples? Is it, in effect, just one model-fitting strategy, no more reasonable than any other?

I suppose the question is how to calculate priors so that they do make sense -- in particular, how an AI can estimate priors. I'm sure there is a lot of existing work on this. The problem with making statements about priors that lack a formal process for their calculation is that there is no basis for comparing two predictions. In the worst case, by adjusting the prior, the resulting probabilities can be shifted to any value, making the approach a formal technique that potentially just hides the unknowns in the priors -- and since the priors are a guess, it is in effect no more reasonable than one.

2jsalvatier
In statistics, I think 'weakly informative priors' are becoming more popular. Weakly informative priors are distributions like a t distribution (or normal) with a really wide standard deviation and low degrees of freedom. This allows us to avoid spending all our data merely narrowing down the correct order of magnitude, which can be a problem with non-informative priors. It's almost never the case that we literally know nothing prior to the data.
3satt
There is. For example, one can use the Jeffreys prior, which has the desirable property of being invariant under different parametrization choices, or one can pick a prior according to the maximum entropy principle, which says to pick the prior with the greatest entropy that satisfies the model constraints. I don't know if anyone's come up with a meta-rationale that justifies one of these approaches over all others (or explains when to use different approaches), though.

I like your post because it makes me feel bad.

What I mean by that is that it gets at something really important that I don't like. The problem is that I get more pleasure from debates than almost anything else. I search for people who don't react in the intensely negative way you describe, and I find it hard to empathise with those that do. I don't do this because I think one method is 'right' and the other 'wrong'; I just don't enjoy trying to conform to others' expectations and prefer to find others who can behave in the same way. I think for most people ... (read more)

2Relsqui
Thanks, I think? You're not explicit about why it makes you feel bad, and I'm curious. (Rather, while you address it in the next sentence, I'm not sure I understand what kind of "feeling bad" you mean.)

I think you've hit the nail on the head here. This is why it bothers me to see it happen. I'm an empathetic sort, and seeing my friend try to fit in like a square peg in a round pegboard makes me cringe. (Well, that, and I'm one of the people who finds the behavior obnoxious when applied to the wrong context.)

I think this is an interesting way to phrase it, although I can't put my finger on why. What would you call the opposite? I'm on the lookout for terms to use for these which don't imply value on either side, since the only criteria for value I see are utility and effectiveness, which are context-dependent.

Wow, this really brings home the arbitrary nature of the Bayesian approach. If we're trying to get an AI to determine what to do, it can't guess meaningful priors (and neither can we, come to that). I presume that when the approach is applied there are theoretical methods for prior estimation, or is a uniform prior just used as a default? In which case, are there other occasions when frequentist and Bayesian probability estimates differ?

3TobyBartels
Hopefully an AI will be able to get its hands on large amounts of data. Once it has that, it doesn't matter very much what its priors were.
4DSimon
Sure, if the priors are arbitrary, the Bayesian approach's output is arbitrary. But if the priors make sense, the Bayesian approach works. Or in other words: just like any other algorithm good or bad, GIGO.
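A small sketch of TobyBartels's point that enough data washes out the prior (my own example, not from the thread): two agents starting from wildly different conjugate Beta priors over a coin's bias arrive at nearly identical posterior means after sharing the same large dataset.

```python
def posterior_mean(alpha, beta, heads, tails):
    """Posterior mean of a Beta(alpha, beta) prior on a coin's bias
    after observing `heads` successes and `tails` failures
    (the standard conjugate Beta-Bernoulli update)."""
    return (alpha + heads) / (alpha + beta + heads + tails)

heads, tails = 7000, 3000  # shared data, ~70% heads

# Prior means of ~0.98 and ~0.02 respectively.
optimist = posterior_mean(50, 1, heads, tails)
skeptic = posterior_mean(1, 50, heads, tails)
print(optimist, skeptic)  # both within about 0.005 of 0.7
```

Of course, this convergence relies on the data actually swamping the prior; with few observations, or a "changing system" like the one JohnDavidBustard describes above, the choice of prior (and model) continues to matter.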

Thank you for your reply. It really highlights the difficulty of making an appropriate choice. There is also the difficulty that a lot of professions require specialised training before they can be experienced.

I did not find any of the careers guidance information at school or university to be particularly helpful. However after working in games for a number of years it was clear that there were a number of types with very similar backgrounds. I think it would be very valuable to read honest autobiographical accounts of different professions and ideally s... (read more)

I think your very first step, Identify, is the key to all this.

Is it rational to pursue an irrational goal rationally?

Our culture focuses on external validation, achievement and winning. My concern is that this is a form of manipulation, focused on improving a society's economic measures of value over an individual's personal satisfaction.

In contrast, the science of happiness seems like a good start. This work seems to focus on developing techniques for coming to feel satisfaction with one's current state. Perhaps a next step is to look at how communities and or... (read more)

3Perplexed
At the risk of pointing out the obvious, different careers provide satisfaction in different ways. Some jobs, such as that of a beautician, provide the satisfaction of a job well done several times a day. Others, such as computer-game programmer, provide that kind of satisfaction several times a decade. Which appeals to you?

What balance of success and failure do you want? There are jobs in which success is achieved only one time in ten tries, yet the psychic payoff from that one success more than makes up for all the failures. Personally, I couldn't live like that. How about you?

What kinds of social interaction do you want out of your career? There are careers in which you work in almost monastic seclusion and others in which you are in continual interaction with colleagues. How do you feel about interaction with the public as a whole? Repeated contact with complete strangers? Contact restricted to particular age groups or particular social classes? It is up to you.

Are you the kind of person who draws satisfaction out of simply getting the job done, or are you only satisfied when it is done with a certain artistry? Do you want recognition? Do you detest criticism? Would you rather find the cure for disease or cure many patients with various diseases? Prove a theorem or explain a proof? Make a fishing pole, or catch a fish? Or maybe save a species of fish?

The answer to each of these questions may be relevant in making a good career choice. But the trouble is that the typical 20-year-old is completely unequipped to answer them truthfully. Truthful answers to these questions are learned by experience. But the answers that our twenty-year-old actually gives are based on what kind of person he/she wants to be, rather than what kind of person he/she is. For this reason, I would suggest that every young person take a few years out of the standard educational career track to learn something about his/her self, before committing to a difficult and costly period of t
1lukstafi
I see the following occupations as absolutely required to live a happy life: one has to be (1) an Artificial Intelligence researcher, to change the world to be a better place, (2) a dancer, to experience one's own embodiment to the fullest, (3) a writer/poemer, to explore and reify one's understanding of existence. ETA: ;-)

Thank you! That's a great link I'll look into it.

For about 3 years now I've been giving to a number of charities through a monthly standing order. Initially, setting it up was very satisfying, and choosing the charities was a little like purchasing a new gadget: assuming hands-on experience is not available and there are no trusted reviewers, I look at the various options and go with the ones whose advertising most closely reflects my personality and who look the least like charlatans. With gadget purchases I find these indirect signals much more informative of the experience with the product than an... (read more)

5multifoliaterose
Have you seen GiveWell before? If not, it's well worth looking into. If you have seen GiveWell before and don't trust its recommendations, why? They're always looking to improve and would welcome suggestions.

Thanks for the link.

You make a good point about the lack of a clear distinction, and at a fundamental level I believe that our genes and external environment determine our behaviour (I am a determinist, i.e. I don't believe in free will). However, I think that it is also possible to be highly motivated about different things which can cause a lot of mental stress and conflict. I think this occurs because we have a number of distinct evolved motivations which can drive us in opposing ways (e.g. the desire to eat, the status desire of being thin, the moral ... (read more)

Thanks for the link, very interesting.

I've wrestled with this disparity myself, the distance between my goals and my actions. I'm quite emotional, and when my goals and my emotions are aligned I'm capable of rapid and tireless productivity. At the same time my passions are fickle and frequently fail to match what I might reason out. Over the years I've tried to exert my will over them, developing emotionally powerful personal stories and habits to try and control them. But every time I have done so it tends to cause more problems than it fixes. I experience a lot of stress fighting with myself ... (read more)

1snarles
I've had the same experiences re: passion and productivity.

On your last comment: "I think there is a real risk of having ones culture and community define goals for ourselves that are not actually what we want." It's not clear to me what your concern is. You draw a distinction between cultural goals and values, and personal goals and values, but how would you be able to draw the line between the two? (What does it mean to feel something "deep down"?) And even if you could draw that distinction, why is it automatically bad to acquire cultural goals? What would be the consequences of pursuing these "incorrect" goals or values?

The most eye-opening article I've read recently, of possible relation to the subject, is a series on hunter-gatherer tribes by Peter Gray (see http://www.overcomingbias.com/2010/08/school-isnt-about-learning.html). While I'm skeptical of Gray's seemingly oversimplified depiction of hunter-gatherer tribes, the salient point of his argument is that there is a strong anti-authority norm in typical hunter-gatherer tribes. This leads me to think that the "natural" human psyche is resistant to authority, and conformity has to be "beaten in." Some of my own emotional conflicts have been due to conflictedness about obeying authority; it seems to me that the "emotional mind" is more in line with these primal psychologies, which are exhibited more strongly in hunter-gatherer tribes than in modern society.

Certainly I would argue that following the emotional mind is not something everyone should do; it seems like there are a few niches in our society for the totally "free", who have the luxury of being able to make a living while largely ignoring the demand for individuals to find and conform to a specific externally-rewarded role in society. The positive and negative feedback individuals receive for following or ignoring their emotional minds, I would hypothesize, plays a large part in determining how much they ultimately listen to their emotional mi

Thank you, it's such a pleasure to find so many interesting discussions of these ideas.

I’m glad you like it : )

I suppose the question is, to what extent can ideas be separated from social dynamics, such as status and legitimacy, and therefore not carry with them the risk of causing anger and fear.

Well ideas can certainly create positive as well as negative responses. For example, more accurate understanding and the communication of practically useful approaches are often intrinsically enjoyable. As is the communication of experience that might help determine the correct course of action or help avoid problems (i.e. personal stories, news).... (read more)

Thank you for the link.

I think the discussion distinguishing like and want is the beginning of the answer. My view is that there are a number of distinct, complementary motivations which can cause subtly different emotions, each of which can be referred to as happiness, with each motivation having evolved because it contributes towards our survival. These distinctions become clearer when trying to create enjoyable experiences, and I'll elaborate on my take on them in the next article.

What I think is so interesting (and important) about this is that without ... (read more)
