Comment author: michael_vassar 01 August 2008 09:53:18PM 0 points [-]

BTW, I have significant personal experience with one of "the smartest guys in the room," and yes, they were (or at least he is) VERY smart by any normal business-world standard. He's particularly great at giving obvious-in-retrospect answers to marketing-type problems. It would be a somewhat unusual room full of business people or other social elites in which the guy I'm thinking of isn't the smartest guy.

In response to The Meaning of Right
Comment author: michael_vassar 29 July 2008 08:00:00PM 4 points [-]

Caledonian: I can't think of anyone EVER choosing to interpret statements as stupid rather than sensible to the degree that you do on this blog. There is usually NO ambiguity, and you still get things wrong and then blame the authors for being stupid.

In all honesty, why do you post here? On your own blog you are articulate and intelligent. Why not stick with that, and leave commenting to people who want to actually respond to what people say rather than to straw men?

Comment author: michael_vassar 30 June 2008 02:03:00AM 1 point [-]

All: I'm really disappointed that no one else seems to have found my "after the FAI does nothing" frame useful for making sense of this post. Is anyone interested in responding to that version? It seems so much more interesting and complete than the three versions E.C. Hopkins gave.

Dynamically: My "moral philosophy" if you insist on using that term (model of a recipe for generating a utility function considered desirable by certain optimizers in my brain would be a better term) is the main thing that HAS told me to steal, cheat, and murder. Simpler optimization patterns based on herd behavior, operant conditioning, moderately strong typical male primate aversions to violence, projections of parental authority through internalized neural agents etc have told me not to do those things and have won enough attention from the more complex optimizers to convince them (since the complex optimizers can reflect and be convinced of things) not to do so after all, and upon examination those simpler patterns have mostly turned out to be right judged by the standards of the moral philosophy. On a few occasions that I am aware of my conditioned etc morality was very wrong (judged reflectively), and possibly on a few other occasions, but they were much much less wrong than the occasions on which they were right and casual examination of my reflective self was in doubt.

Comment author: michael_vassar 29 June 2008 07:31:00PM 5 points [-]

The way I frame this question is "what if I executed my personal-volition-extrapolating FAI, it ran, created a pretty light show, and then did nothing? Suppose I checked over the code many times with many people who also knew the theory, and we all agreed that it should have worked; then I tried again with completely different code many (maybe 100, or 1,000, or millions of) times, sometimes extrapolating somewhat different volitions with somewhat different dynamics, and each time it produced the same pretty light show and then did nothing. Let's say I have spent a few thousand years on this while running as an upload. Now what?"

In this scenario there's no optimization reason I shouldn't just execute cached thoughts; in fact, that's pretty much what anything I do in this scenario amounts to. Executing cached thoughts does, of course, happen lawfully, so there is a reason to dress in black, etc., in that sense. I used to be pretty good at writing sad but mostly non-gloomy poetry and denouncing people as fools. It might be even more fun to do that with other modified upload copies of myself. When that got old, maybe I'd use my knowledge of FAI theory to build myself a philosophy-of-math oracle neural module; it's hard to guess how my actions would differ once it was brought online. It seems to me that it might add up to normality, because there might be an irreducible difference between utility for me and utility for an external AGI even if the AGI were an extrapolation of my volition, but for now I'm a blind man speculating on the relative merits of Picasso and Van Gogh.

Honestly, I'm much less concerned about this scenario than I once was. I'm pretty convinced that there are ways to extrapolate me that do something even if they discover infinite computing power.

Dynamically Linked: No one but nerds and children cares what moral philosophies say anyway, at least not in a way that affects their actions. You, TGGP, and Unknown are very atypical. Poke is much closer to correct. If anything, when the dust settled, the world would be more peaceful if most people understood the proof.

Eric Mesoy: If utilities = 0, then dying from malnourishment isn't horrible.

Andy M: Your answer sounds more appropriate for someone fairly shallow and twenty years old who discovers that the world, or his life, will end in six months than for someone whose utilities are set to zero or whose morality is lost.

Constant, Pablo, and especially Sebastian: Clearly thought out! I should probably start reading your comments more carefully in the future.

Laura: Why unsympathetic? My guess is that you still confuse my and Eliezer's aspirations with some puerile Nietzschean ambition. I like who I am now too, thank you very much, and if my extrapolated volition does want to replace who I am, it is for reasons that I would approve of if I knew them; that is, what it will replace me with is not "completely different, incomprehensible, and unsympathetic." That's the difference between a positive and a negative singularity. Death isn't abhorrent; rather, life/experience/growth/joy/flourishing/fulfillment is good, and a universe more full of them is more good than one less full, whether viewed from inside or from outside. Math is full of both death and flourishing and is not lessened by the former.

Phil: Very entertaining and thoughtful post.

In response to Timeless Beauty
Comment author: michael_vassar 29 May 2008 03:47:00PM 1 point [-]

Bambi: The particular reason for blogging rather than rum is that the math says he blogs here and now. The future isn't immune to our actions; it is what it is as the result of our actions, which likewise are what they are. We cause it to be in the same manner that the earlier states of a Turing machine cause the later states to be.
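A minimal sketch of that analogy, assuming nothing beyond determinism: the update rule below is an arbitrary illustrative choice, not any particular machine, but once the rule and the initial state are fixed, the whole trajectory is fixed, and yet each step still produces the next.

```python
# An arbitrary deterministic update rule (illustrative only).
def step(state: int) -> int:
    return (3 * state + 1) % 1024

state = 7
history = [state]
for _ in range(5):
    state = step(state)  # the earlier state causes the later one...
    history.append(state)

# ...and the whole trajectory was determined the moment the rule and the
# initial state were chosen.
print(history)
```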

Comment author: michael_vassar 14 April 2008 02:01:13PM 5 points [-]

Eliezer: I really like this post, but it seems to me that, empirically, it was substantially a cultural practice in philosophy, including Kant etc., that enabled those early twentieth-century Germans (and only those people, in that particular culture, with that particular philosophical tradition) to see, vaguely, a significant subset of those assumptions: assumptions that they did know existed but that other philosophers and laypeople didn't know existed. That philosophy also led them down some wrong roads, such as toward thinking mind was fundamental rather than emergent, and it certainly didn't enable them to see all of the assumptions that they didn't know existed, but there seems to be a known partial reason that the quantum revolution was so local, one credited by some of the physicists in question.

For what it's worth, I have only a lay understanding of quantum physics and still don't really know what you mean by configurations and amplitudes, yet I was able to see, fairly easily, the assumption that "Bob" in your example didn't see: basically, that particles are "things" with "properties" attached to them. (This is an assumption that Chalmers, in "The Conscious Mind," seems to know is rejected by physics but finds himself unable to reject, leading him to voice his disturbance at a physical view of particles as "pure causal flux," which I would call "pure relationship" and which at least a few philosophers surely mean by "radical emergence.") I would have described the assumption somewhat differently, though, e.g. without explicit reference to technical information that I didn't have.

I don't think the problem is that it is impossible, with effort and training, to learn to recognize one's blind spots a priori. Rather, I think that philosophy attracts many kinds of people, only one of which is the type of person who has, and wants to develop, a talent for recognizing his blind spots. Philosophy then provides, to different extents in different places and times, some training in this skill and some reward of status for developing it. Currently, it seems to me that neither Analytic nor Continental philosophy provides significant training or status relating to this skill as opposed to other skills. More particularly, both seem to provide far less such training, and far less such reward in status, than contemporary theoretical physics, theoretical computer science, and probably some parts of math.

The main problem, it seems to me, relates to this issue of rewarding with status. In physics, status ultimately goes to those who make the correct predictions, which enables correct beliefs to actually attain dominance in the field even if they are counterintuitive (or too intuitive to qualify as 'deep'). In philosophy, without experiments, correct beliefs always exist at a very low incidence at equilibrium, far less popular or 'official' than clever descriptions of those cognitive illusions, such as empty labels (http://lesswrong.com/lw/ns/empty_labels/), that act as attractors to naive human ontology (in this case, the particle without the mathematical relationships it participates in). As a result, the average physicist is better at this type of philosophy than the average philosopher is, while the average highly esteemed physicist is astronomically better at it than the average highly esteemed philosopher.

BTW, I'm not really convinced that "Bob" would be correct in "any classical universe," or even that classical universes are conceivable rather than merely apparently conceivable.

In response to Hand vs. Fingers
Comment author: michael_vassar 31 March 2008 07:17:00PM 0 points [-]

Poke: Are you sure about mineralogy and physics as foundations of modern geology?

Patrick:

I agree that something roughly along the lines of what you are discussing can be done and is unavoidable. I am primarily attempting to refute the proposal that it is, or can be corrected to become, Bayesian, and hence the proposal that the process we use to do things like this stands on the same sort of logical foundations as Bayesian reasoning does. It definitely seems to me that, strictly speaking, once you remove logical omniscience, unless you replace it with some very specific abstraction (most of which have their own problems), you need to assign probabilities to "the RH can be proved," "the RH can be disproved," "the RH is undecidable from ZFC," "the RH can be proved AND the RH can be disproved," "the RH can be proved AND the RH can be disproved AND the RH is undecidable from ZFC," "the RH can be disproved AND the RH is undecidable from ZFC," etc. In practice we can apportion zero probability to the latter cases, but only conditional upon the quality of our reflection being perfect, which we know to be false, and only after SOME reflection. It seems to me that as we assign probabilities, we have to do reflection that continually moves our estimates.
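As a rough illustration of what that assignment might look like (all the numbers below are placeholders representing one possible bounded reasoner, not claims about the RH), here is a sketch that enumerates every conjunction of the three statuses and keeps a small residual probability on the jointly inconsistent ones, standing in for imperfect reflection:

```python
# Sketch: credences over proof-statuses of the Riemann Hypothesis (RH)
# relative to ZFC, without assuming logical omniscience.
from itertools import combinations

atoms = ["RH is provable", "RH is disprovable", "RH is undecidable"]

# Every non-empty conjunction of the atomic statuses gets SOME probability.
# A logically omniscient agent would zero out the inconsistent conjunctions
# (e.g. "provable AND disprovable" entails ZFC is inconsistent); a bounded
# agent can only do that conditional on its reflection being perfect.
hypotheses = []
for r in range(1, len(atoms) + 1):
    for combo in combinations(atoms, r):
        hypotheses.append(" AND ".join(combo))

# Illustrative credences: most mass on the singletons, small residual mass
# on the "impossible" conjunctions to represent imperfect reflection.
credence = {h: (0.02 if " AND " in h else 0.0) for h in hypotheses}
credence["RH is provable"] = 0.60
credence["RH is disprovable"] = 0.04
credence["RH is undecidable"] = 0.28

assert abs(sum(credence.values()) - 1.0) < 1e-9
for h, p in credence.items():
    print(f"{p:5.2f}  {h}")
```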

Comment author: michael_vassar 14 March 2008 04:14:00AM 0 points [-]

I second tabooing "probability," but I think that we need more than two words to replace it. Offhand, I think that we need, at the least, "quantum measure," "calibrated confidence," and "justified confidence." We have typically been in the habit of calling all of these "Bayesian," but they are very different. Actual humans can try to be better approximations of Bayesians, but we can't be very close. Since we can't be Bayesian, due to our lack of logical omniscience, we can't avoid making stupid bets and being Dutch Booked by smarter minds. It's therefore disingenuous to claim that vulnerability to Dutch Books is a decisive argument against a behavioral strategy. Calibrated confidence is the strategy we can try to use to minimize our vulnerability to being Dutch Booked by people who aren't smarter than we are but who know exploits in our heuristics. Calibrated confidences tend to be much, much closer to 50% than Bayesian confidences, and as a result are pretty much unavoidably subject to some framing-based biases.
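A minimal sketch of that last point, assuming a simple linear shrinkage rule (the self-trust weight is an illustrative placeholder, not a claim about the right amount of self-distrust): a bounded reasoner blends its raw credence with the ignorance prior of 50% before betting, pulling extreme credences toward the middle.

```python
def calibrated(raw_credence: float, self_trust: float = 0.7) -> float:
    """Blend a raw credence with the ignorance prior of 0.5.

    self_trust is how much weight the reasoner puts on its own calculation
    being free of bugs, framing effects, and unknown exploits.
    """
    return self_trust * raw_credence + (1.0 - self_trust) * 0.5

for raw in (0.999, 0.9, 0.5, 0.1):
    print(f"raw {raw:5.3f} -> calibrated {calibrated(raw):5.3f}")
# Extreme raw credences (0.999) come out much closer to 50% (~0.849),
# which limits the damage from heuristic exploits at the cost of some
# sensitivity to how the question is framed.
```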

Comment author: michael_vassar 07 November 2007 04:00:00AM 0 points [-]

To make the simulation really compelling, it has to include some sort of assortative mating.
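A minimal sketch of what that might look like in a toy trait-inheritance simulation (population size, heritability, and noise scale are all illustrative assumptions, not parameters from any real model): pairing sorted neighbors instead of random partners makes mates correlate on the trait, which keeps trait variance higher across generations.

```python
import random
import statistics

def next_generation(traits, assortative, heritability=0.5):
    # Assortative: sort by trait and pair neighbors, so mates correlate.
    # Random: shuffle and pair, so mates are uncorrelated.
    pool = sorted(traits) if assortative else random.sample(traits, len(traits))
    mean = statistics.fmean(traits)
    children = []
    for i in range(0, len(pool) - 1, 2):
        midparent = (pool[i] + pool[i + 1]) / 2
        for _ in range(2):  # two children per pair keeps the population size
            children.append(mean + heritability * (midparent - mean)
                            + random.gauss(0, 0.7))
    return children

for assortative in (False, True):
    pop = [random.gauss(0, 1) for _ in range(1000)]
    for _ in range(20):
        pop = next_generation(pop, assortative)
    print(f"assortative={assortative}: variance={statistics.variance(pop):.2f}")
```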

Comment author: michael_vassar 01 November 2007 05:11:00AM 2 points [-]

No, Mike, your intuition for really large numbers is non-baffling, and probably typical, but it is clearly wrong, as judged by another non-Utilitarian consequentialist (this point is clear even to egoists).

Personally, I'd take the torture over the dust specks even if the number were just an ordinary incomprehensible number, like, say, the number of biological humans who could live in artificial environments that could be built in one galaxy: about 10^46, given a 100-year life span and a 300 W energy budget for each of them (300 W of terminal entropy dumped into a 3 K background from 300 K is a large budget). It's totally clear to me that a second of torture isn't a billion billion billion times worse than getting a dust speck in my eye, and there are only about 1.5 billion seconds in a 50-year period. That leaves about a 10^10 : 1 preference for the torture.
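A worked check of that arithmetic, using only the figures above:

```python
# Figures from the comment above.
SPECKS = 10 ** 46            # dust specks: humans supportable by one galaxy
TORTURE_SECONDS = 50 * 365.25 * 24 * 3600   # ~1.58e9 seconds in 50 years
WORST_CASE_RATIO = 10 ** 27  # a second of torture is clearly LESS than a
                             # billion billion billion times worse than a speck

# Even under this generous upper bound on how bad torture-seconds are,
# the specks dominate by about ten orders of magnitude.
torture_upper_bound = TORTURE_SECONDS * WORST_CASE_RATIO  # ~1.6e36
print(f"seconds of torture: {TORTURE_SECONDS:.2e}")       # ~1.58e+09
print(f"preference for torture: {SPECKS / torture_upper_bound:.1e}")
# ~6.3e9, i.e. roughly the 10^10 : 1 margin stated above.
```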

The only consideration that dulls my certainty here is that I'm not convinced my utility function can even encompass these sorts of ordinary incomprehensible numbers, but it seems to me that there is at least a one-in-a-billion chance that it can.
