I am not entirely sure how this site works (although I have skimmed the "tutorials"), and I am notorious for not picking up systems as quickly and easily as the general public might. At the same time I suspect a place like this is for me, both for what I can offer and for what I can receive (i.e. I intend to fully traverse the various canons).

I also value compression and time in this sense, so I think I can propose a subject that might serve as an "ideal introduction" (I have a precise meaning for this phrase that I won't introduce at the moment).

I've read a lot of posts/blogs/papers whose arguments are founded on certain difficulties, where the observation and admission of the difficulty leads the author and the reader (and perhaps the originator of the problem/solution outline) to defer to some form of (relative to what follows) long-winded solution.

I would like to suggest, as a blanket observation and proposal, that most of the difficult problems described, especially on a site like this, are easily solvable with the introduction of an objective and ultra-stable metric for valuation.


I think at first this will seem like an empty proposal. I also think some will see it as devilry (which I doubt anyone here believes exists). And I think I will be accused of many of the fallacies and pitfalls that have already been warned about in the canons.

That latter point suggests I might learn well and fast from this post, since interested and helpful people can point me to specific articles and I WILL read them with a sincere intent to understand them (so far they are very well written, in the sense that I feel I understand them because they are simple enough), and I will ask questions.

But I also think it will ultimately be shown that my proposal, and my understanding of it, doesn't really fall into any of these traps, and that as I learn the canonical arguments I will be able to show how my proposal properly addresses them.


helpful people can point me to specific articles

I suggest taking a look at the Complexity of Value page on the LW wiki, not because "complexity of value" as defined there is exactly what I think you're missing (it isn't) but because several of the links there will take you to relevant stuff in (as you put it) the canons. The "Fake Utility Functions" post mentioned there and its predecessors are worth a read, for instance. Also "Value is Fragile" and "The Hidden Complexity of Wishes".

(All this talk of "canons"... (read more)

0Flinter7y
Yup, you are speaking perfectly to my point. Thankfully I am familiar with Szabo's works to some degree, which are very relevant and inter-linked with the link you gave. In regard to canons, I don't mean it in the derogatory sense, although I think ultimately it might be shown that they are that too. So I think you are speaking to the problem of creating such a metric. But I am urging us to push past that and take it as a given that we have such a unit of objective value.
2gjm7y
I've no objection to the strategy of decomposing problems into parts ("what notion of value shall we use?" and "now, how do we proceed given this notion of value?") and attacking the parts separately. Just so long as we remember, while addressing the second part, that (1) we haven't actually solved the first part yet, (2) we don't even know that it has a solution, and (3) we haven't ruled out the possibility that the best way to solve the overall problem doesn't involve decomposing it in this fashion after all. (Also, in some cases the right way to handle the second part may depend on the actual answer to the first part, in which case "let us suppose we have an answer" isn't enough.)

But in this case I remark that you aren't in fact proposing "a simpler solution to all these difficult observations and problems", you are proposing that instead of those difficult problems we solve a probably-much-simpler problem, namely "how should we handle AI alignment etc. once we have an agreed-upon notion of value that matches our actual opinions and feelings, and that's precisely enough expressed and understood that we could program it into a computer and be sure we got it right?".
0Flinter7y
Yes, I mean to outline the concept of a stable metric of value, and then I will be able to show how to solve such a problem.

"But in this case I remark that you aren't in fact proposing "a simpler solution to all these difficult observations and problems", you are proposing that instead of those difficult problems we solve a probably-much-simpler problem, namely "how should we handle AI alignment etc. once we have an agreed-upon notion of value that matches our actual opinions and feelings, and that's precisely enough expressed and understood that we could program it into a computer and be sure we got it right?"."

No, I don't understand this, and I suspect you haven't quite understood me (even though I don't think I can be clearer). My proposal, a stable metric of value, immediately resolves many problems that are effectively paradoxes (or rendered such by my proposal, and then re-solved). I'm not sure what you would disagree with; I think maybe you mean to say that the introduction of a stable metric still requires "solutions" to be invented to deal with all of these "problems". I'm not sure I would even agree to that, but it doesn't speak to the usefulness of what I suggest if I can come up with such a metric for stable valuation.
0gjm7y
You have not, so far, proposed an actual "stable metric of value" with the required properties. If you do, and convince us that you have successfully done so, then indeed you may have a "simpler solution to all these difficult problems". (Or perhaps a not-simpler one, but at any rate an actual solution, which is more than anyone currently claims to have.) That would be great.

But so far you haven't done that, you've just said "Wouldn't it be nice if there were such a thing?". (And unless I misunderstood you earlier, you explicitly said that that was all you were doing, that you were not purporting to offer a concrete solution and shouldn't be expected to do so.) In which case, no solution to the original problems is on the table. Only a replacement of the original problems with the easier ones that result when you add "Suppose we have an agreed-upon notion of value that, etc., etc., etc." as a premise.

Or -- this is just another way of putting the same thing, but it may match your intentions better -- perhaps you are proposing not a solution to an easier problem but an incomplete solution to the hard problem, one that begins "Let's suppose we have a metric of value such that ...".

Would you care to be absolutely explicit about the following question? Do you, or do you not, have an actual notion of value in mind, that you believe satisfies all the relevant requirements? Because some of the things you're saying seem to presuppose that you do, and some that you don't.
0Flinter7y
Yes, I understand what you mean to say here, and so I mean to attend to your question:

"Would you care to be absolutely explicit about the following question? Do you, or do you not, have an actual notion of value in mind, that you believe satisfies all the relevant requirements? Because some of the things you're saying seem to presuppose that you do, and some that you don't."

Yes, I do have an actual notion of value in mind, one that does satisfy all of the relevant requirements. But first we have to find a shared meaning for the word "ideal": http://lesswrong.com/r/discussion/lw/ogt/do_we_share_a_definition_for_the_word_ideal/

This is because the explanation of the notion is difficult (which you already know, since you quote the owner of this site as being undecided on it, etc.).
0moridinamael7y
There is some hope that the desires/values of human beings might converge in the limit of time, intelligence, and energy. Prior to such a convergence, globally recognized human value is likely not knowable.
0Flinter7y
It converges on money. And it IS knowable. Nash defined it and gave it to us before he left. Why won't you open the dialogue on the subject?
2gjm7y
At the risk of sounding like a pantomime audience: Oh no it doesn't. A little more precisely: I personally do not find that my values are reducible to money, nor does it appear to me that other people's generally are, nor do I see any good reason to think that they are or should be tending that way. There are some theorems whose formulations contain the words "market" and "optimal", but so far as I can tell it is not correct to interpret them as saying that human values "converge on money". And at the risk of sounding like a cultist, let me point you at "Money: the unit of caring", suggest that the bit about "within the interior of a [single] agent" is really important, and ask whether you are sure you have good grounds for making the further extension you appear to be making.
0Flinter7y
No, you haven't interpreted what I said correctly (and it's a normal mistake), so you haven't spoken to it, but you still might take issue. I am more suggesting that by definition we all agree on money. Money is the thing we most agree on; the nature of it is that it takes the place of complex barter and optimizes trade, and it does so as the introduction of a universally accepted transferable utility. If it didn't do this it would serve no purpose and cease to be money. That you don't value it is probably less true than it is irrational, and speaks more than anything to the lack of quality of the money we are offered (which is something I haven't shown to be true yet).

"And at the risk of sounding like a cultist, let me point you at "Money: the unit of caring", suggest that the bit about "within the interior of a [single] agent" is really important, and ask whether you are sure you have good grounds for making the further extension you appear to be making."

I think you mean that I have extended the principle of caring through money to AI, and you feel that article objects (or perhaps I don't know what you refer to). It is perfectly in line and reasonable to suggest that AI will be a part of us and that money will evolve to bridge the two "entities" so they share values and move forward in a (super) rational manner (one money will allow us to function as a single entity).
0gjm7y
If this is true by definition then, necessarily, the fact can't tell us anything interesting about anything else (e.g., whether an AI programmed in a particular way would reliably act in a way we were happy about). If you mean something with more empirical consequences -- and, now I think about it, even if you really do mean "by definition" -- then I think it would help if you were more explicit about what you mean by "agree on money".

Do you mean we all agree on the appropriate price for any given goods? I think that's the reverse of the truth. The reason why trade happens and benefits us is that different people value different goods differently. In a "good enough" market we will all end up paying the same amount for a given good at a given time, but (1) while there's a sense in which that measures how much "the market" values the good at that time, there's no reason why that has to match up with how any individual feels about it, and (2) as I've pointed out elsewhere in this discussion there are many things people care about that don't have "good enough" markets and surely never will.

I didn't say I don't value money, and if you thought I said that then I will gently suggest that you read what I write more carefully and more charitably. What I said is that my values are not reducible to money, and what I meant is that there are things I value that I have no way of exchanging for money. (And also, though you would have a case of sorts for calling it "irrational", that the values I assign to things that are exchangeable for money aren't always proportional to their prices. If I were perfectly rational and perfectly informed and all markets involved were perfectly liquid for short as well as long trading and buying and selling things carried no overheads of any kind and for some reason I were perfectly insulated from risks associated with prices changing ... then arguably I should value things in proportion to their prices, because if I didn't I could improve my
0Flinter7y
I don't think it's founded in economics or any theory of money to suggest that it is something we don't collectively agree on. It also goes against market theory and the efficient market hypothesis to suggest that the price of a good is not an equilibrium related to the wants of the market, is it not?

"(1) while there's a sense in which that measures how much "the market" values the good at that time, there's no reason why that has to match up with how any individual feels about it"

Yup, you have perfectly highlighted the useful point and also (the end of the quote) shown the perspective that you could continue to argue for no reason against.

"What I said is that my values are not reducible to money"

I can't find it. It was about your wife's love. I think we could simply cut out things not reducible to money from the dialogue, but I also suspect that you would put a value on your wife's love. If it's in relation to USD that doesn't make sense, because USD isn't stable over time. But you could do it in relation to a stable value metric: for example, would you pay for a movie if she expressed her love to you for it?

I'm not sure what the problem here is, but you aren't speaking to any of my argument. A metric for value is super useful for humans, and solves the problem of how to keep AI on the correct track. Why aren't we speaking to Nash's proposal for such a metric? You are fighting a strawman by arguing versus me. And I still think it's BS that the mod buried Nash's works and any dialogue on them. That we aren't attending to them in this dialogue speaks to that.
0gjm7y
Given time, markets reach equilibria related to the wants of the participants. Sure. So far as I know, there are no guarantees on how long they will take to do so (and, e.g., if you're comparing finding market equilibria with solving chess, the sort of market setup you'd need to contrive to get something equivalent to solving chess surely will take a long time to converge, precisely because finding the optimum would be equivalent to solving chess; in the real world, what would presumably actually happen is that the market would settle down to something very much not equivalent to actually solving chess, and maybe a few hedge funds with superpowered computers and rooms full of PhDs would extract a little extra money from it every now and then); and there are any number of possible equilibria related to the wants of the participants.

Well, we could cut out anything from the dialogue. But you can't make bits of human values go away just by not talking about them, and the fact is that lots of things humans value are not practically reducible to money, and probably never will be.

That ... isn't quite how human relationships generally work. But, be that as it may, I'm still not seeing a market here that's capable of assigning meaningful prices to love or even to concrete manifestations of love. I mean, when something is both a monopoly and a monopsony, you haven't really got much of a market.

I don't know why you aren't. I'm not because I have only a hazy idea what it is (and in fact I am skeptical that what he was trying to do was such a metric), and because it's only after much conversation that you've made it explicit that your proposal is intended to be exactly Nash's proposal (if indeed it is). Was I supposed to read your mind? Regrettably, that is not among my abilities.

Perhaps there is some history here of which I'm unaware; I have no idea what you're referring to. I haven't generally found that the moderators here zap things just out of spite, and if somethin
0Flinter7y
That's where poker is relevant. Firstly, I am not speaking to reality; that is your implication. I spoke about a hypothetical future from an asymptotic limit. In regard to poker, I have redesigned the industry to foster a future environment in which the players act like a market that brute-force solves the game. The missing element from chess, or the axiom in regard to poker, is that it should be arranged so players can accurately assess who the skilled players are. So we are saying it theoretically COULD be solved this way, and not speaking to how reality will unfold (yet).

I will show this isn't wholly true, but I cannot do it before understanding Nash's proposal together, therefore I cannot speak to it at the moment. Same as the above: I can't speak to this intelligibly yet.

Yes it was: it's exactly his proposal.

Yes, exactly, the mod messed up our dialogue. You weren't properly introduced, but the introduction was written, and moderated away. The mod said Hayek > Nash. It was irrational.
0gjm7y
I agree that Nash hoped that ("ideal") money could become "a standard of measurement" comparable in some way to scientific units. The question, though, is how broad a range of things he expected it to be able to measure. Nothing in the lecture you linked to a transcript of makes the extremely strong claim you are making, that we should use "ideal money" as a measure for literally all human values.

One of three things [EDITED to add: oops, I meant two things] is true. (1) Your account of what "the mod" did is inaccurate, or grossly misleading by omission, or something. (2) My notion of what the LW moderators do is surprisingly badly wrong. If whoever did what Flinter is describing would care to comment (PM me if you would prefer not to do it in public), I am extremely curious.

(For what it's worth, my money is on "grossly misleading by omission". I am betting that whatever was removed was removed for some other reason -- e.g., I see some signs that you tried to post something in Main, which is essentially closed at present -- and that if indeed the mod "said Hayek > Nash" this was some kind of a joke, and they said some other things that provided an actual explanation of what they were doing and why. But that's all just guesswork; perhaps someone will tell me more about what actually happened.)
0Flinter7y
You are not using the standard accepted definition of the word ideal; please look it up, and/or create a shared meaning with me.

"Nothing in the lecture you linked to a transcript of makes the extremely strong claim you are making, that we should use "ideal money" as a measure for literally all human values."

This is a founded extrapolation of mine and an implication of his. Yes, the mod said Hayek > Nash, and it's not a joke; it's a prevailing ignorant attitude among those who read Hayek but won't read and address Nash. It's not significant if I omitted something. I was told it was effectively petty (my words), but it isn't. It's significant because John Nash said so.
1gjm7y
As I have just said elsewhere in our discussion, I am not using any definition of the word "ideal". I may of course have misunderstood what you mean by "ideal money", but if so it is not because I am assuming it means "money which is ideal" according to any more general meaning of "ideal".

I have so far seen nothing that convinces me that he intended any such implication. In any case, of course the relevant question is not what Nash thought about it but what's actually true; even someone as clever as Nash can be wrong (as e.g. he probably was when he thought he was the Pope) so we could do with some actual arguments and evidence on this score rather than just an appeal to authority.

That depends on what you omitted. For instance, if the person who removed your post gave you a cogent explanation of why and it ended with some jokey remark that "personally I always preferred Hayek anyway", it would be grossly misleading to say what you did (which gives the impression that "Hayek > Nash" was the mod's reason for removing your post). I do not know who removed your post (for that matter I have only your word that anything was removed, though for the avoidance of doubt I would bet heavily that you aren't lying about that) but my impression is that on the whole the LW community is more favourably disposed towards Nash than towards Hayek. Not that that should matter.

In any case: I'm sorry if this is too blunt, but I flatly disbelieve your implication that your post was removed because a moderator prefers Hayek to Nash, and I gravely doubt that it was removed with a given reason that a reasonable person other than you would interpret as being because the moderator prefers Hayek to Nash.
0Flinter7y
Ya, you misunderstood. And you still haven't double checked your definition of ideal. Are you sure it's correct?

Ya, you are a smart person who can completely ignore the argument posed by Nash but can still kinda sorta backhandedly show that he is wrong, without risking your persona... you are a clever arguer, aren't you?

It is the reason, and you would call it grossly misleading. Let's find the significance of Nash's work, and then it will be obvious the mod moderated me because of their own (admitted) ignorance.

So you are stuck in trying to win arguments, which is the root reason why you haven't even heard of the main body of work that Nash was working on nearly his whole life. You are ignorant of the entire purpose of his career and the thesis of his works. It's an advanced strawman to continue to suggest a mod wouldn't mod me the way I said, and not to address Nash's works. Nash is not favored over Hayek; Nash is being ignored here. The most significant work he has produced, nobody here even knows existed (if you find one person that has heard of it here, do you think that would prove me wrong?). Ignorance towards Nash is the reason the mod moved my thread; unsurprisingly they came to the public thread to say Hayek > Nash. You don't know, but that is a theme among many players in regard to their theories on economics and money... but the Hayeks are simply ignorant and wrong. And they haven't traversed Nash's works.
0gjm7y
OK. Would you care to help me understand correctly, or are you more interested in telling me how stupid I am? There is no possible modification I could make to my definition of "ideal" that would make any difference to my understanding of your use of the phrase "ideal money". I have already explained this twice. If you would like to make some actual arguments rather than sneering at me then I am happy to discuss things.

At this point, I simply do not believe you when you say it is the reason. Not because I think you are lying; but it doesn't look to me as if you are thinking clearly at any time when your opinions or actions are being challenged. In the absence of more information about what the moderator said, no possible information about the significance of Nash's work could bring that about.

Er, what? No, I haven't (more accurately, hadn't) heard of it because no one mentioned it to me before. Is that difficult to understand? Nash is not being ignored here. "Ideal money" has not been a topic of conversation here before, so far as I can recall. If your evidence that Nash is "ignored" here is that we have not been talking about "ideal money", you should consider two other hypotheses: (1) that the LW community is interested in Nash but not in "ideal money" and (2) that the LW community is interested in Nash but happens not to have come across "ideal money" before. I think #2 is probably the actual explanation, however preposterous you may find it that anyone would read anything about Nash and not know about your pet topic.

(I think I already mentioned that Nasar's book about Nash doesn't see fit to mention "ideal money" in its index. It's a popular biography rather than an academic study, and the index may not perfectly reflect the text, but I think this is sufficient to show that it's possible for a reasonable person to look quite deeply at Nash's life and not come to the conclusion that "ideal money" is "the main body of work that Nash was working on nearly
0Flinter7y
Existing merely as an image in the mind: I think you erred saying there is no possible modification. Yes and you are going to suggest we are not ignoring Nash, but we are. Yes and in the future everyone is going to laugh at you all for claiming and pretending to be smart, and pretending to honor Nash, when the reality is, Nash spanked you all. Yup she ignored his life's work, his greatest passion, and if you watch his interviews he thinks its hilarious. No, it will be shown they thought it was an inconsequential move because they felt Nash's Ideal Money was insignificant. It was a subjective play.
0gjm7y
Nope. But if you tell me that when you say "ideal money" you mean "a system of money that is ideal in the sense of existing merely as an image in the mind", why then I will adjust my understanding of how you use the phrase. Note that this doesn't involve any change at all in my general understanding of the word "ideal", which I already knew sometimes has the particular sense you mention (and sometimes has other senses); what you have told me is how you are using it in this particular compound term.

In any case, this is quite different from how Nash uses the term. If you read his article in the Southern Economic Journal, you will see things like this, in the abstract: (so he is thinking of this as something that can actually happen, not something that by definition exists merely as an image in the mind). Similarly, later on, (so, again, he is thinking of such systems as potentially actualizable) and, a couple of paragraphs later, (so he says explicitly it could actually happen if we had the right guidance; note also that he here envisages ideal monetary systems, plural, suggesting that in this instance he isn't claiming that there's one true set of value ratios that will necessarily be reached). In between those last two he refers to so he clearly has in mind (not necessarily exclusively) a quite different meaning of "ideal", namely "perfect" or "flawless".

It doesn't look much like that for me. In the future everyone will have forgotten us all, most likely. Do please show me where I either claimed or pretended to be smart. (If you ask my opinion of my intelligence I will tell you, but no such thing has come up in this discussion and I don't see any reason why it should. If my arguments are good, it doesn't matter if I'm generally stupid; if my arguments are bad, it doesn't matter if I'm generally clever.)

So far, you have presented no evidence at all that this is a reasonable description. Would you care to remedy that? In any case, what's relevant right now i
0moridinamael7y
This is a dialogue. We are dialoguing. You're saying value converges on money correct? Money is indeed a notion that humans have invented in order to exchange things that we individually value. Having more money lets you get more of things you value. This is all good and fine. But "converges" and "has converged upon" are very different things. I'm also not sure what it would look like for money to "be" value. A bad person can use money to do something horrible, something that no other person on earth approves of. A bad person can destroy immense value by buying a bomb, for example.
0Flinter7y
This is all the respect Nash's Ideal Money gets on this forum? He spent 20 years on the proposal. I think that is shameful and disrespectful.

Anyways, no. I am saying that "we" all converge on money. We all agree on it; that is the nature of it. And it is perfectly reasonable to suggest that (intelligent (and super bad)) AI would be able to as well. And (so) it would, because that is the obviously rational thing to do (I mean to show that Nash's argument explains why this is).
0moridinamael7y
It would help move things along if you would just lay out your argument rather than intimating that you have a really great argument that nobody will listen to. Just spell it out, or link to it if you've done so elsewhere.
0Flinter7y
It was removed by a mod.
0moridinamael7y
It should still be in your drafts. Just copy it here.
0Flinter7y
Yup so I can get banned. I didn't expect this place to be like this.
0moridinamael7y
Just send it to me as a private message.
0Filipe7y
You mean Money is the Unit of Caring? :)
0Flinter7y
"In our society, this common currency of expected utilons is called "money". It is the measure of how much society cares about something." "This is a brutal yet obvious point, which many are motivated to deny." "With this audience, I hope, I can simply state it and move on." Yes but only to an extent. If we start to spend the heck out of our money to incite a care bear care-a-thon, we would only be destroying what we have worked for. Rather, it is other causes that allow spending to either be a measure or not of caring. So I don't like the way the essay ends. Furthermore, it is more to the point to say it is reasonable that we all care about money and so will AI. That is the nature of money, it is intrinsic to it.

This topic comes up every once in a while. In fact, one of the more recent threads was started by me, though it may not be obvious to you at first how that thread is related to this topic.

I think it's actually fun to talk about the structure of an "ultra-stable metric" or even an algorithm by which some kind of "living metric" may be established and then evolved/curated as the state of scientific knowledge evolves.

0Flinter7y
Yes that is relevant. Now I have said something to this end. It is a stable VALUE metric we should be wanting to levate. And that will allow us to (also) quantify our own sanity. I think I can suggest this and speak to that link.

For a shared and stable value metric to function as a solution to the AI alignment problem it would also need to be:

  • computable;
  • computable in new situations where no comparable examples exist;
  • convergent under self-evaluation.

To illustrate the last requirement, let me make an example. Let's suppose that a new AI is given the task of dividing some fund between the four existing prototypes of nuclear fusion plants. It will need to calculate the value of each prototype and their very different supply chains. But it also needs to calculate the value of th... (read more)
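As an illustrative aside, here is a minimal sketch of the shape such a metric would have to take to satisfy those three requirements. All names here are hypothetical (not from any existing library), and the convergence check is only a crude stand-in for the self-evaluation property the fusion-plant example is pointing at:

```python
from typing import Protocol, Iterable

class ValueMetric(Protocol):
    """Hypothetical interface for an 'objective and ultra-stable' value metric."""

    def value(self, outcome: object) -> float:
        """Requirement 1: computable -- returns a number for any describable
        outcome (e.g. one fusion-plant prototype and its supply chain)."""
        ...

    def value_novel(self, outcome_description: str) -> float:
        """Requirement 2: still returns a number for situations with no
        comparable historical examples."""
        ...

def converges_under_self_evaluation(metric: ValueMetric,
                                    meta_chain: Iterable[object],
                                    tolerance: float = 1e-6) -> bool:
    """Requirement 3 (sketch): valuing the act of valuation must settle down.
    `meta_chain` is the evaluation, the evaluation of that evaluation, and so on;
    here we only check that successive values stay within `tolerance` of each other."""
    previous = None
    for meta_outcome in meta_chain:
        current = metric.value(meta_outcome)
        if previous is not None and abs(current - previous) > tolerance:
            return False
        previous = current
    return True
```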

0Flinter7y
How is it going to calculate such things without a metric for valuation? Yes, so you are seeing the significance of Nash's proposal, but you don't believe he is that smart; who is that on?
0MrMind7y
Sure, I'm just pointing out that objective and stable are necessary but not sufficient conditions for a value metric to solve the FAI problem, it would also need to have the three features that I detailed, and possibly others. It's not a refutation, it's an expansion.
0Flinter7y
Right, but you subtly, backhandedly agree it's a necessary component of AI. If you come back to say "Sure, but it's not necessarily the ONLY missing component" I will think you dumb.

I think what you're missing is that metrics are difficult - I've written about that point in a number of contexts; www.ribbonfarm.com/2016/06/09/goodharts-law-and-why-measurement-is-hard/

There are more specific metric / goal problems with AI; Eliezer wrote this https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/ - and Dario Amodei has been working on it as well; https://openai.com/blog/faulty-reward-functions/ - and there is a lot more in this vein!

1Flinter7y
Ok. I skimmed it, and I think I understand your post well enough (if not I'll read deeper!). What I am introducing into the dialogue is a theoretical and conceptually stable unit of value. I am saying: let's address the problems stated in your articles as if we don't have the problem of defining our base unit, i.e. as if it exists, is agreed upon, and is stable for all time. So here is an example from one of your links:

"Why is alignment hard? Why expect that this problem is hard? This is the real question. You might ordinarily expect that whoever has taken on the job of building an AI is just naturally going to try to point that in a relatively nice direction. They’re not going to make evil AI. They’re not cackling villains. Why expect that their attempts to align the AI would fail if they just did everything as obviously as possible? Here’s a bit of a fable. It’s not intended to be the most likely outcome. I’m using it as a concrete example to explain some more abstract concepts later. With that said: What if programmers build an artificial general intelligence to optimize for smiles? Smiles are good, right? Smiles happen when good things happen."

Do we see how we can solve this problem now? We simply optimize the AI system for value, and everyone is happy. If someone creates "bad" AI we could measure that, and use the measurement for a counter program.
1gjm7y
"Simply"? If we had a satisfactory way of doing that then, yes, a large part of the problem would be solved. Unfortunately, that's because a large part of the problem is that we don't have a clear notion of "value" that (1) actually captures what humans care about and (2) is precise enough for us to have any prospect of communicating it accurately and reliably to an AI.
1Flinter7y
Yes, but my claim was that IF we had such a clear notion of value then most of the problems on this site would be solved (by "this site" I mean, for example, what the popular canons are built around as interesting problems). I think you have simply agreed with me.
2gjm7y
When you say "problem X is easily solved by Y" it can mean either (1) "problem X is easily solved, and here is how: Y!" or (2) "if only we had Y, then problem X would easily be solved". Generally #1 is the more interesting statement, which is why I thought you might be saying it. (That, plus the fact that you refer to "my proposal", which does rather suggest that you think you have an actual solution, not merely a solution conditional on another hard problem.) It transpires that you're saying #2. OK. In that case I think I have three comments. First: yes, given a notion of value that captures what we care about and is sufficiently precise, many of the problems people here worry about become much easier. Thus far, we agree. Second: it is far from clear that any such notion actually exists, and as yet no one has come up with even a coherent proposal for figuring out what it might actually be. (Some of the old posts I pointed you at elsewhere argue that if there is one then it is probably very complicated and hard to reason about.) Third: having such a notion of value is not necessarily enough. Here is an example of a problem for which it is probably not enough: Suppose we make an AI, which makes another AI, which makes another AI, etc., each one building a smarter one than itself. Or, more or less equivalently, we build an AI, which modifies itself, and then modifies itself again, etc., making itself smarter each time. We get to choose the initial AI's values. Can we choose them in such a way that even after all these modifications we can be confident that the resulting AI -- which may work very differently, and think very differently, from the one we start with -- will do things we are happy about?
0Flinter7y
"When you say "problem X is easily solved by Y" it can mean either (1) "problem X is easily solved, and here is how: Y!" or (2) "if only we had Y, then problem X would easily be solved"." Yes I am speaking to (2) and once we understand the value of it, then I will explain why it is not insignificant. "Third: having such a notion of value is not necessarily enough. Here is an example of a problem for which it is probably not enough: Suppose we make an AI, which makes another AI, which makes another AI, etc., each one building a smarter one than itself. Or, more or less equivalently, we build an AI, which modifies itself, and then modifies itself again, etc., making itself smarter each time. We get to choose the initial AI's values. Can we choose them in such a way that even after all these modifications we can be confident that the resulting AI -- which may work very differently, and think very differently, from the one we start with -- will do things we are happy about?" You would create the first AI to seek value, and then knowing that it is getting smarter and smarter, it would tend towards seeing the value I propose and optimize itself in relation to what I am proposing, by your own admission of how the problem you are stating works.
0gjm7y
I am not sure which of two things you are saying.

Thing One: "We program the AI with a simple principle expressed as 'seek value'. Any sufficiently smart thing programmed to do this will converge on the One True Value System, which when followed guarantees the best available outcomes, so if the AIs get smarter and smarter and they are programmed to 'seek value' then they will end up seeking the One True Value and everything will be OK."

Thing Two: "We program the AI with a perhaps-complicated value system that expresses what really matters to us. We can then be confident that it will program its successors to use the same value system, and they will program their successors to use the same value system, etc. So provided we start out with a value system that produces good outcomes, everything will be OK."

If you are saying Thing One, then I hope you intend to give us some concrete reason to believe that all sufficiently smart agents converge on a single value system. I personally find that very difficult to believe, and I know I'm not alone in this. (Specifically, Eliezer Yudkowsky, who founded the LW site, has written a bit about how he used to believe something very similar, changed his mind, and now thinks it's obviously wrong. I don't know the details of exactly what EY believed or what arguments convinced him he'd been wrong.)

If you are saying Thing Two, then I think you may be overoptimistic about the link between "System S follows values V" and "System S will make sure any new systems it creates also follow values V". This is not a thing that reliably happens when S is a human being, and it's not difficult to think of situations in which it's not what you'd want to happen. (Perhaps S can predict the behaviour of its successor T, and figures out that it will get more V-aligned results if T's values are something other than V. I'm not sure that this can be plausible when T is S's smarter successor, but it's not obvious to me that the possibility can be rule
1Flinter7y
I REALLY appreciate this dialogue. Yup, I am suggesting #1. It's observable reality that smart agents converge to value the same thing, yes, but that is the wrong way to say it. "Natural evolution will levate (aka create) the thing that all agents will converge to" -- this is the correct (or more valuable) perspective. Also, I should think that is obvious to most people here. Eliezer Y will rethink this when he comes across what I am proposing.
1gjm7y
This seems to me like a category error. The things produced by natural evolution are not values. (Though natural evolution produces things -- e.g., us -- that produce values.) My guess is that you are wrong about that; in any case, it certainly isn't obvious to me.
0Flinter7y
"This seems to me like a category error. The things produced by natural evolution are not values. (Though natural evolution produces things -- e.g., us -- that produce values.)" I am saying, in a newtonian vs Quantum science money naturally evolves as a thing that the collective group wants, and I am suggesting this phenomenon will spread to and drive AI. This is both natural and a rational conclusion and something favorable the re-solves many paradoxes and difficult problems. But money is not the correct word, it is an objective metric for value that is the key. Because money can also be a poor standard for objective measurement.
0gjm7y
Given the actual observed behaviour of markets (e.g., the affair of 2008), I see little grounds for hoping that their preferences will robustly track what humans actually care about, still less that they will do so robustly enough to answer the concerns of people who worry about AI value alignment.
0Flinter7y
Nash speaks to the crisis of 2008 and explains how it is the lack of an incorruptible standard basis for value that stops us from achieving such a useful market. You can't target optimal spending for optimal caring though; I just want to be clear on that.
0gjm7y
OK. And has Nash found an incorruptible standard basis for value? Or is this meant to emerge somehow from The Market, borne aloft no doubt by the Invisible Hand? So far, that doesn't actually seem to be happening. I'm afraid I don't understand your last sentence.
0Flinter7y
Yes. And why do we ignore him?
0gjm7y
The things I've seen about Nash's "ideal money" proposal -- which, full disclosure, I hadn't heard of until today, so I make no guarantee to have seen enough -- do not seem to suggest that Nash has in fact found an incorruptible standard basis for value. Would you care to say more?
0Flinter7y
Yup. Firstly, you fully admit that you were, previous to my entry, ignorant of Nash's life's work, what he spoke of and wrote about for 20 years, country to country. It is why he fled the US when he was younger, to exchange his USD for the Swiss franc because it was of superior quality, and why the US Navy tracked him down and took him back in chains (this is accepted, not conspiracy).

Nash absolutely defined an incorruptible basis for valuation, and most people have it labeled as an "ICPI", an industrial consumption price index. It is effectively an aggregate of stable prices across global commodities, and it can be said that if our money were pegged to it then our money would be effectively perfectly stable over time. Of course the index would need to be adjusted, which means it is politically corruptible, but Nash's actual proposal solves for this too: Ideal Money is an incorruptible basis for value.

Now it is important you attend to this thread; it's quick, very quick: http://lesswrong.com/lw/ogt/do_we_share_a_definition_for_the_word_ideal/
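For concreteness, here is a minimal sketch of how a fixed-weight commodity index of this kind could be computed. The commodities, weights, and prices below are made up for illustration only; the lecture describes the ICPI qualitatively, so this is not Nash's actual specification:

```python
# Hypothetical illustration of an "industrial consumption price index":
# a fixed-weight aggregate of commodity prices used as a reference
# against which a currency's value can be measured over time.

BASKET_WEIGHTS = {      # made-up weights, summing to 1.0
    "crude_oil": 0.35,
    "copper":    0.25,
    "wheat":     0.20,
    "aluminium": 0.20,
}

def icpi(prices: dict[str, float], base_prices: dict[str, float]) -> float:
    """Index level: weighted average of price relatives vs. a base period.
    An index of 1.0 means the basket costs the same as in the base period."""
    return sum(w * prices[c] / base_prices[c] for c, w in BASKET_WEIGHTS.items())

# Example: made-up base-period and current prices.
base = {"crude_oil": 70.0, "copper": 8000.0, "wheat": 250.0, "aluminium": 2200.0}
now  = {"crude_oil": 80.5, "copper": 8400.0, "wheat": 262.5, "aluminium": 2310.0}
print(round(icpi(now, base), 3))   # -> 1.085: the basket costs ~8.5% more,
                                   # i.e. a currency pegged to it has held value while others inflated.
```

The corruptibility point from the comment above is visible in the sketch too: whoever chooses or later adjusts BASKET_WEIGHTS controls what "stable" means.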
0gjm7y
First of all, we will all do better without the hectoring tone.

But yes, I was ignorant of this. There is scarcely any limit to the things I don't know about. However, nothing I have read about Nash suggests to me that it's correct to describe "ideal money" as "his life's work".

You are not helping your case here. He was, at this point, suffering pretty badly from schizophrenia. And so far as I can tell, the reasons he himself gave for leaving the US were nothing to do with the quality of US and Swiss money.

Let me see if I've understood this right. You want a currency pegged to some basket of goods ("global commodities", as you put it), which you will call "ideal money". You then want to convert everything to money according to the prices set by a perfectly efficient infinitely liquid market, even though no such market has ever existed and no market at all is ever likely to exist for many of the things people actually care about. And you think this is a suitable foundation for the values of an AI, as a response to people who worry about the values of an AI whose vastly superhuman intellect will enable it to transform our world beyond all recognition. What exactly do you expect to happen to the values of those "global commodities" in the presence of such an AI?

(Yep.) But nothing in what you quote does any such thing.

It may be important to you that I do so, but right now I have other priorities. Maybe tomorrow.
0Flinter7y
He lectured and wrote on the topic for the last 20 years of his life, and it is something he had been developing in his 30s.

Yes, he was running around saying the governments are colluding against the people and he was going to be their savior. In Ideal Money he explains how the Keynesian view of economics is comparable to Bolshevik communism. These are facts, and they show that he never abandoned his views when he was "schizophrenic", and that they are in fact based on rational thinking. And yes, it is his own admission that this is why he fled the US and renounced his citizenship.

Yup, exactly, and we are to create AI that bases its decisions on optimizing value in relation to procuring what would effectively be "ideal money".

I don't need to do anything to show Nash made such a proposal of a unit of value except quote him saying it is his intention. I don't need to put the unit in your hand.

It's simple and quick: your definition of ideal is not in line with the standard definition. Google it.
0gjm7y
He was also running around saying that he was Pope John XXIII because 23 was his favourite prime number. And refusing academic positions which would have given him a much better platform for advocating currency reform (had he wanted to do that) on the basis that he was already scheduled to start working as Emperor of Antarctica. And saying he was communicating with aliens from outer space. Of course that doesn't mean that everything he did was done for crazy reasons. But it does mean that the fact that he said or did something at this time is not any sort of evidence that it makes sense.

Could you point me to more information about the reasons he gave for leaving the US and trying to renounce his citizenship? I had a look in Nasar's book, which is the only thing about Nash I have on my shelves, and (1) there is no index entry for "ideal money" (the concept may crop up but not be indexed, of course) and (2) its account of Nash's time in Geneva and Paris is rather vague about why he wanted to renounce his citizenship (and indeed about why he was brought back to the US).

Let me then repeat my question. What do you expect to happen to the values of those "global commodities" in the presence of an AI whose capabilities are superhuman enough to make value alignment an urgent issue? Suppose the commodities include (say) gold, Intel CPUs and cars, and the AI finds an energy-efficient way to make gold, designs some novel kind of quantum computing device that does what the Intel chips do but a million times faster, and figures out quantum gravity and uses it to invent a teleportation machine that works via wormholes? How are prices based on a basket of gold, CPUs and cars going to remain stable in that kind of situation?

[EDITED to add:] Having read a bit more about Nash's proposal, it looks as if he had in mind minerals rather than manufactured goods; so gold might be on the list but probably not CPUs or cars. The point stands, and indeed Nash explicitly said that gold o
0Flinter7y
No, you aren't going to tell Nash how he could have brought about Ideal Money.

In regard to, for example, communicating with aliens, again you are being wholly ignorant. Consider this (from Ideal Money): See? He has been "communicating" with aliens. He was using his brain to think beyond not just nations and continents but worlds. "What would it be like for outside observers?" Is he not allowed to ask these questions? Do we not find it useful to think about how extraterrestrials would have an effect on a certain problem like our currency systems? And you call this "crazy"? Why can't Nash make theories based on civilizations external to ours without you calling him crazy? See, he was being logical, but people like you can't understand him.

This is the most sick (ill) paragraph I have traversed in a long time. You have said "Nash was saying crazy things, so he was sick, therefore the things he was saying were crazy, and so we have to take them with a grain of salt." Nash birthed modern complexity theory at that time and did many other amazing things when he was "sick". He also recovered from his mental illness not because of medication but by willing himself to. These are accepted points in his bio. He says he started to reject politically orientated thinking and return to a more logical basis (in other words, he realized running around telling everyone he is a god isn't helping any argument). "I emerged from a time of mental illness, or what is generally called mental illness..." "...you could say I grew out of it." Those are relevant quotes otherwise.

https://www.youtube.com/watch?v=7Zb6_PZxxA0 It starts at 12:40, but he explains about the francs at 13:27: "When I did become disturbed I changed my money into swiss francs." There is another interview where he explains that the US Navy took him back in chains; I can't recall the video.

You are messing up (badly) the accepted definition of ideal. Nonetheless Nash deals with your concerns: . No, he gets more explicit tho and
0Davidmanheim7y
Another point: "What I am introducing into the dialogue is a theoretical and conceptually stable unit of value." Without a full and perfect system model, I argued that creating perfectly aligned metrics, like a unit of value, is impossible. (To be fair, I really argued that point in the follow-up piece: www.ribbonfarm.com/2016/09/29/soft-bias-of-underspecified-goals/ ) So if our model for human values is simplified in any way, it's impossible to guarantee convergence to the same goal without a full and perfect systems model to test it against.
0Davidmanheim7y
"If someone creates "bad" AI we could measure that, and use the measurement for a counter program." (I'm just going to address this point in this comment.) The space of potential bad programs is vast - and the opposite of a disastrous values misalignment is almost always a different values misalignment, not alignment. In two dimensions, think of a misaligned wheel; it's very unlikely to be exactly 180 degrees (or 90 degrees) away from proper alignment. Pointing the car in a relatively nice direction is better than pointing it straight at the highway divider wall - but even a slight misalignment will eventually lead to going off-road. And the worry is that we need to have a general solution before we allow the car to get to 55 MPH, much less 100+. But you argue that we can measure the misalignment. True! If we had a way to measure the angle between its alignment and the correct one, we could ignore the misaligned wheel angle, and simple minimize the misalignment -which means the measure of divergence implicitly contains the correct alignment. For an AI value function, the same is true. If we had a measure of misalignment, we could minimize it. The tricky part is that we don't have such a metric, and any correct such metric would be implicitly equivalent to solving the original problem. Perhaps this is a fruitful avenue, since recasting the problem this way can help - and it's similar to some of the approaches I've heard Dario Amodei mention regarding value alignment in machine learning systems. So it's potentially a good insight, but insufficient on its own.
0gjm7y
If someone creates "bad" AI then we may all be dead before we have the chance to "use the measurement for a counter program". (Taking "AI" here to mean "terrifyingly superintelligent AI", because that's the scenario we're particularly keen to defuse. If it turns out that that isn't possible, or that it's possible but takes centuries, then these problems are much less important.)
0Flinter7y
That's sort of moot for two reasons. Firstly, what I have proposed would be the game-theoretically optimal approach to solving the problem of a super terrbad AI. There is no better approach against such a player. I would also suggest there is no other reasonable approach. And so this speaks to the speed in relation to other possible proposed solutions. Now of course we are still being theoretical here, but it's relevant to point that out.
0gjm7y
The currently known means for finding game-theoretically optimal choices are, shall we say, impractical in this sort of situation. I mean, chess is game-theoretically trivial (in terms of the sort of game theory I take it you have in mind) -- but actually finding an optimal strategy involves vastly more computation than we have any means of deploying, and even finding a strategy good enough to play as well as the best human players took multiple decades of work by many smart people and a whole lot of Moore's law. Perhaps I'm not understanding your argument, though. Why does what you say make what I say "sort of moot"?
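A rough back-of-envelope on that chess point, using standard, widely quoted figures (the exact numbers are assumptions and don't matter to the argument), showing why "game-theoretically trivial" is very different from "computable in practice":

```python
# Back-of-envelope: size of the chess game tree vs. available computation.
branching_factor = 35        # rough average number of legal moves per position
game_length_plies = 80       # a typical game, counted in half-moves

game_tree_size = branching_factor ** game_length_plies   # ~3e123; Shannon's classic estimate is ~1e120

positions_per_second = 1e9   # a fast machine, generously
machines = 1e9               # say, a billion such machines working in parallel
seconds_per_year = 3.15e7

years_needed = game_tree_size / (positions_per_second * machines * seconds_per_year)
print(f"{years_needed:.1e} years")   # on the order of 1e98 years
```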
0Flinter7y
So let's take poker for example. I have argued (let's take it as an assumption, which should be fine) that poker players never have enough empirical evidence to know their own winrates. It's always a guess, and since the game isn't solved they are really guessing about whether they are profitable and how profitable they are. IF they had a standard basis for value then it could be arranged that players brute-force the solution to poker. That is to say, if players knew who was playing correctly then they would tend towards the correct players' strategy.

So there is an argument, to be explored, that the reason we can't solve chess is that we are not using our biggest computer, which is the entirety of our markets.

The reason your points are "moot", or not significant, is that there is no theoretically possible "better" way of dealing with AI than having a stable metric of value. This happens because objective value is perfectly tied to objective morality. That which we all value is that which we all feel is good.
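The winrate claim can be made concrete with a standard variance estimate. The figures below are typical numbers players assume for no-limit hold'em, not measurements of any particular game:

```python
import math

# How many hands does a poker player need before their observed winrate
# pins down their true winrate?
winrate_bb_per_100 = 5.0      # a solid winner's assumed edge, big blinds per 100 hands
stddev_bb_per_100 = 90.0      # assumed per-100-hand standard deviation (commonly cited range: 60-120)

def hands_for_margin(margin_bb_per_100: float, z: float = 1.96) -> int:
    """Hands needed so that a 95% confidence interval on the winrate
    has half-width `margin_bb_per_100`."""
    blocks = (z * stddev_bb_per_100 / margin_bb_per_100) ** 2   # number of 100-hand blocks
    return int(blocks * 100)

# To know the winrate to within +/- 2.5 bb/100 (i.e. to be reasonably sure a
# 5 bb/100 player is a winner at all) requires on the order of:
print(hands_for_margin(2.5))   # ~500,000 hands
```

Half a million hands is far more than a live player sees in years of play, which is the sense in which a winrate is always partly a guess.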
1gjm7y
"The entirety of our markets" do not have anywhere near enough computational power to solve chess. (At least, not unless someone comes up with a novel way of solving chess that's much cleverer than anything currently known.) It sounds as if this is meant to be shorthand for some sort of argument for your thesis (though I'm not sure exactly what thesis) but if so I am not optimistic about the prospects for the argument's success given that "we all" don't value or feel the same things as one another.
0Flinter7y
It is the opinion of some well-established (and historical) economic philosophers that markets can determine the optimal distribution of our commodities. Such an endeavor is at least several orders of magnitude beyond the computing power required to solve chess.

"It sounds as if this is meant to be shorthand for some sort of argument for your thesis (though I'm not sure exactly what thesis) but if so I am not optimistic about the prospects for the argument's success given that "we all" don't value or feel the same things as one another."

You have stepped outside the premise again, which is a stable metric of value; this implies objectivity, which implies we all agree on the value of it. This is the premise.
0gjm7y
Let me know when they get their Fields Medals (or perhaps, if it turns out that they're right but that the ways in which markets do this are noncomputable) their Nobel prizes, and then we can discuss this further. Oh. Then your premise is flatly wrong, since people in fact don't all agree about value. (In any case, "objective" doesn't imply everyone agrees. Whether life on earth has been around for more than a million years is a matter of objective fact, but people manage to disagree about it.)
0Flinter7y
Well, I am speaking of Hayek, Nash and Szabo (and Smith), and I don't think medals make for a strong argument (especially vs. the stated fellows).

"Oh. Then your premise is flatly wrong, since people in fact don't all agree about value."

By what definition and application of the word premise is it "wrong"? I am suggesting we take the premise as given, and I would like to speak of the implications. Calling it wrong is silly.

"(In any case, "objective" doesn't imply everyone agrees. Whether life on earth has been around for more than a million years is a matter of objective fact, but people manage to disagree about it.)"

The nature of money is such that "everyone agrees"; that is how it becomes money, and it is therefore and thus "objective". But I am not yet speaking to that; I am speaking to the premise, which is a value metric that everyone DOES agree on.
0gjm7y
Maybe you are misunderstanding my argument, which isn't "a bunch of clever people think differently, so Hayek et al must be wrong" but "if you are correctly describing what Hayek et al claim, and if they are right about that, then someone has found either an algorithm worthy of the Fields medal or a discovery of non-algorithmic physics worthy of a Nobel prize".

I am suggesting that if I take at face value what you say about the premise, then it is known to be false, and I am not very interested in taking as given something that is known to be false. (But very likely you do not actually mean to claim what on the face of it you seem to be claiming, namely that everyone actually agrees about what matters.)

I think this is exactly wrong. Prices (in a sufficiently free and sufficiently liquid market) tend to equalize, but not because everyone agrees but because when people disagree there are ways to get rich by noticing the fact, and when you do that the result is to move others closer to agreement.

In any case, this only works when you have markets with no transaction costs, and plenty of liquidity. There are many things for which no such markets exist or seem likely to exist. (Random example: I care whether and how dearly my wife loves me. No doubt I would pay, if need and opportunity arose, to have her love me more rather than less. But there is no market in my wife's love, it's hard to see how there ever could be, if you tried to make one it's hard to see how it would actually help anything, and by trading in such a market I would gravely disturb the very thing the market was trying to price. This is not an observation about the fuzziness of the word "love"; essentially all of that would remain true if you operationalized it in terms of affectionate-sounding words, physical intimacy, kind deeds, and so forth.)
0Flinter7y
Yes, Nash will get the medals for Ideal Money; this is what I am suggesting. I am not proposing something "false" as a premise. I am saying: assume an objective metric for value exists, and then let's tend to the ramifications/implications. There is nothing false about that.

What I am saying about money, which you want to suggest is false, is that it is our most objective valuation metric. There is no more objective device for measuring value in this world.

The rest of what you are suggesting is a way of saying we don't have free markets now, but that if we continue to improve we will asymptotically approach them at the limit. Then you might agree that at the limit our money will be stable in the valuation sense and COULD be such a metric (but its value isn't stable at the present time!).

In regard to your wife's love, the market values it at a constant in relation to this theoretical notion; that your subjective valuation disagrees with the ultimate objective metric (remember, it's a premise that doesn't necessarily exist) doesn't break the standard.
0gjm7y
If, in fact, no objective metric for value exists, then there is something false about it. If, less dramatically, your preferred candidate for an objective metric doesn't exist (or, perhaps better, exists but doesn't have the properties required of such a metric) and we have no good way of telling whether some other objective metric exists, then there's something unsatisfactory about it even if not quite "false" (though in that case, indeed, it might be reasonable to say "let's suppose there is, and see what follows").

Ah, now that's a different claim altogether. Our most objective versus actually objective. Unfortunately, the latter is what we need.

The first part, kinda but only kinda. The second, not so much. Markets can deviate from ideality in ways other than not being "free". For instance, they can have transaction costs. Not only because of taxation, bid-offer spreads, and the like, but also (and I think unavoidably) because doing things takes effort. They can have granularity problems. (If I have a bunch of books, there is no mechanism by which I can sell half of one of them.) They can simply not exist. Hence, "only kinda". And I see no reason whatever to expect markets to move inexorably towards perfect freedom, perfect liquidity, zero transaction costs, infinitely fine granularity, etc., etc., etc. Hence "not so much".

I don't understand your last paragraph at all. "The market values it at a constant in relation with this theoretical notion" -- what theoretical notion? What does it mean to "value it at a constant"? It sounds as if you are saying that I may be wrong about how much I care how much my wife loves me, if "the market" disagrees; that sounds pretty ridiculous but I can't tell how ridiculous until I understand how the market is supposedly valuing it, which at present I don't.
0Flinter7y
I doubt it is accepted logic to suggest a premise is intrinsically false.

"If, less dramatically, your preferred candidate for an objective metric doesn't exist (or, perhaps better, exists but doesn't have the properties required of such a metric) and we have no good way of telling whether some other objective metric exists, then there's something unsatisfactory about it even if not quite "false" (though in that case, indeed, it might be reasonable to say "let's suppose there is, and see what follows")."

Yes, this. I will make it satisfactory, in a jiffy.

No, we need both. They are both useful, and I present both, in the context of what is useful (and therefore wanted).

Yes, all these things, which I would describe as friction and inefficiency, would suggest it is not free, and you speak to all of Szabo's articles and Nash's works, which I am familiar with. But I also say this in a manner such as "provided we continue to evolve rationally" or "provided technology continues to evolve". I don't need to prove that we WILL evolve rationally and that our tech will not take a step back. I don't need to prove that to show in this thought experiment what the end game is.

You aren't expected to understand how we get to the conclusion, just that there is a basis for value, a unit of it, that everyone accepts. It doesn't matter if a person disagrees; they still have to use it because the general society has deemed it "that thing". And "that thing" that we all generally accept is actually called money.

I am not saying anything that isn't completely accepted by society. Go to a store and try to pay with something other than money. Go try to pay your taxes in a random good. They aren't accepted. It's silly to argue you could do this.
0gjm7y
I'm not sure what your objection actually is. If someone comes along and says "I have a solution to the problems in the Middle East. Let us first of all suppose that Israel is located in Western Europe and that all Jews and Arabs have converted to Christianity" then it is perfectly in order to say no, those things aren't actually true, and there's little point discussing what would follow if they were.

If you are seriously claiming that money provides a solution to what previously looked like difficult value-alignment problems because everyone agrees on how much money everything is worth, then this is about as obviously untrue as our hypothetical diplomat's premise. I expect you aren't actually saying quite that; perhaps at some point you will clarify just what you are saying.

Many of them seem to me to have other obvious causes. I don't see much sign that humanity is "evolving rationally", at least not if that's meant to mean that we're somehow approaching perfect rationality. (It's not even clear what that means without infinite computational resources, which there's also no reason to think we're approaching; in fact, there are fundamental physical reasons to think we can't be.)

If you are not interested in explaining how you reach your conclusions, then I am not interested in talking to you. Please let me know whether you are or not, and if not then I can stop wasting my time. You are doing a good job of giving the impression that you are.

There is certainly nothing resembling a consensus across "society" that money answers all questions of value.
0Flinter7y
Yes, exactly. You want to say that because the premise is silly or not reality, it cannot be useful. That is wholly untrue, and I think I recall reading an article here about this. Can we not use premises that lead to useful conclusions that don't rely on the premise? You have no basis for denying that we can. I know this.

Can I ask whether we share the definition of "ideal": http://lesswrong.com/lw/ogt/do_we_share_a_definition_for_the_word_ideal/

Yes, because you don't know that our rationality is tied to the quality of our money in the Nashian sense; in other words, if our money is stable in relation to an objective metric for value, then we become (by definition of some objective truth) more rational. I can't make this point, though, without Nash's works.

Yes, I am in the process of it, and you might likely be near understanding, but it takes a moment to present and the mod took my legs out.

No, that is not what I said or how I said it. Money exists because we need to all agree on the value of something in order to have efficiency in the markets. To say "I don't agree with the American dollar" doesn't change that.
2gjm7y
Not quite. It can be interesting and useful to consider counterfactual scenarios. But I think it's important to be explicit about them being counterfactual. And, because you can scarcely ever change just one thing about the world, it's also important to clarify how other things are (counterfactually) changing to accommodate the main change you have in mind.

So, in this case, if I understand you right what you're actually saying is something like this. "Consider a world in which there is a universally agreed-upon currency that suffers no inflation or deflation, perhaps by being somehow pegged to a basket of other assets of fixed value; and that is immune to other defects X, Y, Z suffered by existing currencies. Suppose that in our hypothetical world there are markets that produce universally-agreed-upon prices for all goods without exception, including abstract ones like "understanding physics" and emotionally fraught ones like "getting on well with one's parents" and so forth. Then, let us consider what would happen to problems of AI value alignment in such a world. I claim that most of these problems would go away; we could simply tell the AI to seek value as measured by this universally-agreed currency."

That might make for an interesting discussion (though I think you will need to adjust your tone if you want many people to enjoy discussions with you). But if you try to start the same discussion by saying or implying that there is such a currency, you shouldn't be surprised if many of the responses you get are mostly saying "oh no there isn't".

Even when you do make it clear that this is a counterfactual, you should expect some responses along similar lines. If what someone actually cares about is AI value alignment in the real world, or at least in plausible future real worlds, then a counterfactual like this will be interesting to them only in so far as it actually illuminates the issue in the real world. If the counterfactual world is too different from the
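A deliberately naive sketch of the counterfactual's punchline (mine, not gjm's): "tell the AI to seek value as measured by this universally-agreed currency" would literally reduce to something like the following, where the `price_of` oracle is a hypothetical stand-in for the agreed-upon metric that the real world lacks.

```python
# Hypothetical only: `price_of` plays the role of the universally-agreed valuation
# oracle assumed by the counterfactual; nothing like it exists today.
from typing import Callable, Iterable

def choose_action(actions: Iterable[str], price_of: Callable[[str], float]) -> str:
    """Pick whichever action the (hypothetical) agreed-upon currency prices highest."""
    return max(actions, key=price_of)

# Made-up prices standing in for the agreed-upon metric.
toy_prices = {"cure_disease": 1_000_000.0, "write_poem": 50.0, "do_nothing": 0.0}
print(choose_action(toy_prices, lambda action: toy_prices[action]))  # -> cure_disease
```

The entire difficulty, as the rest of the exchange argues, is whether anything could ever fill the `price_of` slot for goods like "getting on well with one's parents".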
0Lumifer7y
I have a suspicion that there is a word hanging above your discussion, visible to Flinter but not to you. It starts with "bit" and ends with "coin".
0Flinter7y
Ideal Money is an enthymeme. But Nash speaks FAR beyond the advent of an international e-currency with a stably issued supply.
0gjm7y
Actually, it was visible to me too, but I didn't see any particular need to introduce it to the discussion until such time as Flinter sees fit to do so. (I have seen a blog that I am pretty sure is Flinter's, and a few other writings on similar topics that I'm pretty sure are also his.) (My impression is that Flinter thinks something like bitcoin will serve his purposes, but not necessarily bitcoin itself as it now is.)
0Flinter7y
After painting the picture of what Ideal Money is, Nash explains the intrinsic difficulties of bringing it about. Then he comes up with the concept of "asymptotically ideal money". Nash explains the parameters of gold in regard to why we have historically valued it (he is methodical), and he also explains gold's weaknesses in this context.
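For readers unfamiliar with the term: as I understand the lectures, "asymptotically ideal money" is a currency whose drift against some standard basket of prices is steered toward zero. A minimal sketch with a made-up basket (my illustration, not Nash's actual index specification):

```python
# Made-up basket and prices, purely to illustrate the idea of judging a currency
# against a weighted commodity index rather than against another currency.
# commodity -> (weight, price in the currency being judged)
basket_then = {"copper_kg": (0.4, 6.00), "crude_bbl": (0.4, 50.00), "wheat_bu": (0.2, 4.00)}
basket_now  = {"copper_kg": (0.4, 6.60), "crude_bbl": (0.4, 55.00), "wheat_bu": (0.2, 4.40)}

def index_level(basket):
    """Weighted sum of basket prices -- a crude stand-in for a price index."""
    return sum(weight * price for weight, price in basket.values())

drift = index_level(basket_now) / index_level(basket_then) - 1.0
print(f"index drift: {drift:.1%}")  # ~10.0% here: far from ideal
# A currency approaches "asymptotically ideal" as this drift is targeted toward zero.
```

Which basket to use, who maintains it, and how it resists manipulation are, as I read the lectures, exactly the difficulties Nash acknowledges.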
0Lumifer7y
I'm impatient and prefer to cut to the chase :-)
0Flinter7y
It's too difficult to cut to, because the nature of this problem is such that we all have an incredible cognitive bias towards not understanding it or seeing it.
0Flinter7y
To the first set of paragraphs, i.e.: what if I start by saying there IS such a currency?

What does "ideal" mean to you? I think you aren't using the standard definition: http://lesswrong.com/r/discussion/lw/ogt/do_we_share_a_defintion_for_the_word_ideal/

I did not come here to specifically make claims in regard to AI. What does it mean to ignore Nash's works, his argument, and the general concept of what Ideal Money is, and then to say that my delivery and argument are weak in regard to AI?

No, you have not understood the nature of money. A money is chosen by the general market; it is propriety. This is what I mean to say in this regard, no more, no less. To tell me that because you don't like money, not "everyone" uses it, is petty and simply perpetuates conflict. There is nothing to argue about in regard to pointing out that we converge on it, in the sense that we all socially agree to it. If you want to show that I am wrong by saying that you specifically don't, or that one or two people don't, then you are not interested in dialogue; you are being petty and silly.
0gjm7y
It means, in this context, "the first word of the technical term 'ideal money' which Flinter has been using, and which I am hoping at some point he will give us his actual definition of".

You began by saying this: which, as I said at the time, looks at least as much like "There is such a metric" as like "Let's explore the consequences of having such a metric". Then later you said "It converges on money" (not, e.g., "it and money converge on a single coherent metric of value"). Then when asked whether you were saying that Nash has actually found an incorruptible measure of value, you said yes. I appreciate that when asked explicitly whether such a thing exists you say no. But you don't seem to be taking any steps to avoid giving the impression that it's already around.

Nope. But you introduced this whole business in the context of AI value alignment, and the possible relevance of your (interpretation of Nash's) proposal to the Less Wrong community rests partly on its applicability to that sort of problem.

I'm here discussing this stuff with you. I am not (so far as I am aware) ignoring anything you say. What exactly is your objection? That I didn't, as soon as you mentioned John Nash, go off and spend a week studying his thoughts on this matter before responding to you? I have read the Nash lecture you linked, and also his earlier paper on Ideal Money published in the Southern Economic Journal. What do you think I am ignoring, and why do you think I am ignoring it?

But your question is an odd one. It seems to be asking, more or less, "How dare you have interests and priorities that differ from mine?". I hope it's clear that that question isn't actually the sort that deserves an answer.

I think I understand the nature of money OK, but I'm not sure I understand what you are saying about it. "A money"? Do you mean a currency, or do you mean a monetary valuation of a good, or something else? What is "the general market", in a world where there are lots and lots of
0Flinter7y
Ideal, by the standard definition, implies that it is conceptual.

Yes, he did, and he explains it perfectly. And it is a device I introduced into the dialogue and showed how it is to be properly used. It's conceptual in nature.

Yup, we'll get to that.

Nope, those are past sentiments; my new ones are that I appreciate the dialogue.

Yes, but that is a product of never actually entering sincere dialogue with intelligent players on the topic of Ideal Money, so I have to be sharp when we are not addressing it and are instead addressing a complex subject, AI, in relation to Ideal Money, before understanding Ideal Money (which is FAR more difficult to understand than AI).

Why aren't you using generally accepted definitions? Yes, money can mean many things, but if we think of the purpose of it and how and why it exists, it is effectively that thing which we all generally agree on. If one or two people play a different game, that doesn't invalidate the money. Money serves a purpose that involves all of us supporting it through an unwritten social contract. There is nothing else that serves that purpose better. It is the nature of money.

Money is the generally accepted form of exchange. There is nothing here to investigate; it's a simple statement.

Yes. Money has the quality that it is levated by our collective need for an objective value metric. But if I say "our" and someone says "well you are wrong because not EVERYONE uses money" then I won't engage with them, because they are being dumb.

We all converge to money, and to use a single money; it is the nature of the universe. It is obvious money will bridge us with AI and help us interact. And yes, this convergence will be such that we will solve all complex problems with it, but we need it to be stable to begin to do that. So in the future, you will do what money tells you. You won't say "I'm going to do something that doesn't procure much money", because it will be the irrational thing to do.

Does everyone believe in Christianity? Does ev