Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

The substance of money

2 MrMind 17 January 2017 02:51PM

Not much epistemic effort here: this is just an intuition I have about a vast and possibly charged field. I'm calling upon the powers of crowd-sourcing to clarify my views.

Tl;dr: is the debate in economics over the nature of money really over the definition, or just over politics?

I'm currently reading "Money: An Unauthorised Biography" by Felix Martin. It presents what is to me a beautifully simple and elegant definition of what money is: transferable value over a liquid landscape (these precise words are mine). It also presents many cases where this simple view is contrasted with another view: money as a commodity. This opposition is not merely one of academic definitions; it has important consequences. Policy makers have adopted different points of view and, because of that, have varied their interventions considerably.

I've never been much interested in the field, but this book sparked my curiosity, so I'm starting to look around and I'm surprised to discover that this debate is still alive and well in the 21st century.

Basically, what I've gleaned is that there is this Keynesian school of thought which posits that yes, money is transferable debt, and since money is merely a technology that expresses an agreement, you should intervene in economic matters, especially by printing money when needed.

Then there's an opposite view (does it have a name?) that says that no, money is a commodity and for this reason must be treated as such: its creation is to be carefully controlled by the market and its value tied only to the value of an underlying tradeable asset.
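To make the contrast concrete, here is a toy sketch of the two views. All class and variable names are invented for illustration; this is not a claim about how either school would formalize things. In the credit view, money is just entries in a transferable ledger of obligations; in the commodity view, every unit must be backed by a unit of an underlying asset.

```python
# Toy contrast between the two views of money (illustrative only).

class CreditLedger:
    """Money as transferable debt: a record of who owes what to whom."""
    def __init__(self):
        self.balances = {}  # agent -> net credit (negative = net debtor)

    def pay(self, payer, payee, amount):
        # Value moves by reassigning claims; nothing physical is required,
        # and nothing caps how much credit the system can create.
        self.balances[payer] = self.balances.get(payer, 0) - amount
        self.balances[payee] = self.balances.get(payee, 0) + amount


class CommodityMoney:
    """Money as a claim on a fixed stock of an underlying asset."""
    def __init__(self, gold_reserve):
        self.gold_reserve = gold_reserve
        self.issued = 0

    def issue(self, amount):
        # Creation is constrained: no more notes than the backing asset.
        if self.issued + amount > self.gold_reserve:
            raise ValueError("cannot issue beyond the reserve")
        self.issued += amount


ledger = CreditLedger()
ledger.pay("alice", "bob", 50)       # credit money: created by agreement

mint = CommodityMoney(gold_reserve=100)
mint.issue(100)                      # fine: fully backed
# mint.issue(1) would now fail: the supply is tied to the asset
```

The policy disagreement in the post maps onto which constraint you think is real: the credit ledger can always expand in a crisis, while the commodity mint cannot.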

I think my uncertainty shows how little I know about this field, so apply Crocker's rules at will. Is this a not-completely-inaccurate model of the debate?

If it is so, my second question: is this a debate over substance or over politics?
If I think that money is transferable debt, surely I can recognize the merit of intervention but also understand that liberal use of that tool might breed disaster.
If I think that money is a standard commodity, can I manipulate which commodity it is tied to, in order to increase the availability of money in times of need?
Am I talking nonsense for some technical reason? Am I missing something big? Is economics the mind-killer too?

Elucidate me!

Comment author: MrMind 17 January 2017 01:37:37PM 0 points [-]

Are there any other documents by Nash on the subject other than this lecture?

Comment author: Flinter 17 January 2017 10:09:36AM *  0 points [-]

How is it going to calculate such things without a metric for valuation?

If that's what we would have available, then I think FAI would be mostly solved.

Yes, so you are seeing the significance of Nash's proposal, but you don't believe he is that smart. Who is that on?

Comment author: MrMind 17 January 2017 10:45:33AM *  0 points [-]

How is it going to calculate such things without a metric for valuation?

Sure, I'm just pointing out that objective and stable are necessary but not sufficient conditions for a value metric to solve the FAI problem: it would also need the three features I detailed, and possibly others.
It's not a refutation; it's an expansion.

Comment author: Flinter 17 January 2017 10:07:49AM 0 points [-]

Well, since it's an automated process, it filters everything, be it spam, Nash's argument, or the words of Omega itself. As I said, it's a compromise. The best we could come up with, so far. If you have a better solution, spell it out.

You are defending irrationality. It filters out the one thing it needs to not filter out. A better solution would be to eliminate it.

No, mine was just a suggestion for a way that would allow you to ease the social friction I think you're experiencing here. On my side, I am reading your posts carefully and will reply when I'm done thinking about them.

Sigh, I guess we'll never address Ideal Money, will we? I've already spent all day with like 10 posters who refuse to do anything but attack my character. Not surprising, since the subject was insta-mod'd anyway.

Well, as a last hail mary, I just want to say I think you are dumb for purposefully trolling me like this and refusing to address Nash's proposal. It's John Nash; he spent his life on this proposal, and y'all won't even read it.

There is no intelligence here, just pompous robots avoiding real truth.

Do you know who Nash is? It took 40 years for his equilibrium work to be acknowledged the first time. It's been 20 in regard to Ideal Money...

Comment author: MrMind 17 January 2017 10:36:54AM 1 point [-]

You are defending irrationality. It filters out the one thing it needs to not filter out. A better solution would be to eliminate it.

I wonder where my failure in communicating my idea lies in this case. Let me rephrase my argument in favor of filtering and see if I can get my point across: if we eliminated the filter, the site would be inundated with spam and fake-account posts. By having a filter we block all this, and people willing to pass a small threshold are not prevented from posting their contributions.

Sigh, I guess we never will address Ideal Money will we

In due time, I will.

I've already spent all day with like 10 posters, that refuse to do anything but attack my character.

That is unfortunate, but you must be prepared to carry on these discussions over the long run. There are people who come here only once a week, or only once every three months. A day can be enough to filter out the most visceral reactions, but here discussions can span days, weeks, or years.

Its John Nash, and he spent his life on this proposal, ya'll won't even read it.

I am reading it right now, and precisely because it's Nash I'm reading it as carefully as I can.

But what won't fly here is insulting people. Frustration at not being able to communicate your idea is something we have all felt; after all, communicating clearly is hard. But if you fall below a certain standard of respect, you will be moderated and possibly even banned. That would allow you to communicate your idea even less.

Comment author: Flinter 17 January 2017 10:03:02AM 0 points [-]

The context is "Ideal Money". When I ask if we have a shared meaning, I am asking if we agree on the standard definition of the word. For someone to say "Ideal Money doesn't exist" is to not use the standard definition of the word "ideal".

I have already discussed Ideal Money with people on this forum that have made this error.

That is the context of this thread. But this thread was a sub-point of the main thread; the main thread was moderated away, so no one saw it, and so this thread doesn't make sense, because the context was taken from me.

Comment author: MrMind 17 January 2017 10:27:13AM 0 points [-]

Why do you say that it was moderated away? I still see the "Ideal money" thread.

Comment author: MrMind 17 January 2017 10:03:31AM 0 points [-]

For a shared and stable value metric to function as a solution to the AI alignment problem, it would also need to be:

  • computable;
  • computable in new situations where no comparable examples exist;
  • convergent under self-evaluation.

To illustrate the last requirement, let me give an example. Suppose a new AI is given the task of dividing some fund between the four existing prototypes of nuclear fusion plants. It will need to calculate the value of each prototype and of their very different supply chains. But it also needs to calculate the value of those calculations, since its computational power is not infinite, and decide how much to ponder and to what extent to calculate the details of those simulations. But then it would also need to calculate the value of those calculations, and so on. Only a value that is convergent under self-evaluation can be guaranteed to point to an optimal solution.

If that's what we would have available, then I think FAI would be mostly solved.
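The convergence requirement above can be sketched numerically. This is a minimal toy model, not a proposal: the names and the update rule are invented, and a contraction mapping stands in for "re-valuing one's own valuation". The point is only that if repeated self-evaluation is a contraction, the regress of meta-valuations settles on a fixed point instead of diverging.

```python
# Sketch: a valuation is "convergent under self-evaluation" if repeatedly
# re-valuing its own output approaches a fixed point (toy dynamics).

def self_evaluate(value, cost_of_thinking=0.1):
    # One round of deliberation: refine the estimate, pay a small
    # computation cost. A contraction mapping stands in for the real thing.
    return 0.5 * value + 10 - cost_of_thinking

def iterate_until_stable(v0, tol=1e-9, max_steps=1000):
    v = v0
    for _ in range(max_steps):
        nxt = self_evaluate(v)
        if abs(nxt - v) < tol:
            return nxt
        v = nxt
    raise RuntimeError("valuation did not converge under self-evaluation")

# Very different starting estimates converge to the same value, so adding
# further meta-levels ("the value of valuing the valuation") changes nothing.
print(iterate_until_stable(0.0), iterate_until_stable(1000.0))
```

A non-contractive update (say, a coefficient above 1) would blow up instead, which is the failure mode the comment is worried about.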

Comment author: Flinter 17 January 2017 08:56:58AM 0 points [-]

Thank you. No, see: "ideal" means "conceptual". But you are probably unaware of the importance of pointing this out, because the mod took away my ability to put it all together.

Ideal, example, model refer to something considered as a standard to strive toward or something considered worthy of imitation. An ideal is a concept or standard of perfection, existing merely as an image in the mind, or based upon a person or upon conduct: We admire the high ideals of a religious person.

People are arguing that Nash's Ideal Money can't exist; they don't understand the meaning of "ideal". Might you quickly skim through this thread to understand exactly what I am saying: http://lesswrong.com/r/discussion/lw/ogp/a_proposal_for_a_simpler_solution_to_all_these/

It's a quick skim; you'll figure it out.

Comment author: MrMind 17 January 2017 09:52:45AM 0 points [-]

I think we have a problem of communication.
I thought that your question, "Do we have a shared meaning for this word?", was meant to arrive at a shared meaning of the word through discussion.
Instead I see that you have a fixed meaning in mind, and that you intend to use that meaning solely in your posts. Please confirm that this is indeed the case; if so, I will no longer intervene in this thread.

Comment author: Flinter 17 January 2017 08:53:31AM 0 points [-]

If it filters out Nash's argument, Ideal Money, then it makes no sense and is completely irrational.

Think about what you are saying; it's ridiculous.

Are you also unwilling to discuss the content, and simply are stuck on my posting methods, writing, and character?

Comment author: MrMind 17 January 2017 09:45:46AM *  1 point [-]

If it filters out Nash's argument, Ideal Money, then it makes no sense and is completely irrational.

Well, since it's an automated process, it filters everything, be it spam, Nash's argument, or the words of Omega itself. As I said, it's a compromise. The best we could come up with, so far. If you have a better solution, spell it out.

Are you also unwilling to discuss the content, and simply are stuck on my posting methods, writing, and character?

No, mine was just a suggestion for a way that would allow you to ease the social friction I think you're experiencing here. On my side, I am reading your posts carefully and will reply when I'm done thinking about them.

Comment author: ingive 16 January 2017 12:04:42PM 0 points [-]

How would we go about changing human behavior to be more aligned with reality? I was thinking it is undoubtedly the most effective thing to do: ensure world domination of rationalist, effective altruist, and utilitarian ideas. There are two parts to this. I mention R, EA, and U simply because they resonate well with the types of users here; alignment with reality I explain next. What I expect alignment with reality to be is accepting facts fully, both in thinking and emotionally, including uncertainty about facts (because of facts like an interpretation of QM).

One example is that consciousness, qualia, experience is a tool, not a goal. This is a fact: consciousness arose, or dissociated (monistic idealism), through an evolutionary process. If you deny this, you're denying evolution and are in a death spiral of experience. If you start accepting facts emotionally, rather than fighting reality emotionally, you merge with it and, paradoxically, get what you wanted emotionally. That is an example of aligning with reality. But if you are aware of the paradox you might seek the goal of experience, so be aware.

This is truly the essence of epistemic rationality, and it's hard work. Most of us want to deny that experience is not our goal, but that's why we don't care about anything except endless intellectual entertainment. How do we change human behavior to be more aligned with reality? I'm unsure. I'm thinking about locating specific centers of the brain and reducing certain activities which undoubtedly make us less aligned with reality, while increasing the activation of others.

I think it's important to figure out what drives human behavior away from alignment with reality, and what makes us more aligned. When presented with scientific evidence, why do we not change our behavior? That's the question, and how do we change it?

When we know how to become the most hardcore altruists, then obviously everyone else should as well.

As far as I can tell, P (read sequences) < P (figure this out)

Comment author: MrMind 17 January 2017 09:11:27AM *  0 points [-]

I think the problem you state is unsolvable. The human brain evolved to solve social problems related to survival, not to be a perfect Bayesian reasoner (Bayesian models have a tendency to explode in computational complexity as the number of parameters increases). Short of designing a brain anew, I see no way to turn ourselves into perfect epistemic rationalists besides a lot of effort. That might be a shortcoming of my imagination, though.
There's also a case that we shouldn't be perfect rationalists: possibly the cost of adding a further decimal to a probability is much higher than the utility gained from it, but of course we couldn't know that in advance. Also, sometimes our brain prefers to fool itself so that it is better motivated or happier, although Eliezer has argued at length against this attitude.
So yeah, the landscape of the problem is thorny.
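The claim about Bayesian models exploding can be made concrete with a back-of-the-envelope count: a full joint distribution over n binary variables has 2^n outcomes, hence 2^n - 1 free parameters after normalization. (The function name below is mine, just for the sketch.)

```python
# Back-of-the-envelope: parameters needed for a full joint distribution
# over n binary variables: 2**n outcomes, minus one for normalization.

def joint_parameters(n):
    return 2 ** n - 1

for n in (10, 20, 40):
    print(n, joint_parameters(n))
# Even 40 binary facts already require on the order of 10**12 parameters,
# far beyond anything a brain could represent explicitly.
```

Real Bayesian networks exploit conditional independence to cut this down, but in the worst case exact inference remains intractable, which is the point being made about brains.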

As far as I can tell, P (read sequences) < P (figure this out)

You really meant U(read sequences) < U(figure this out): it's a comparison of utilities, not probabilities.

Comment author: MrMind 17 January 2017 08:51:54AM 0 points [-]

I don't think we have. An ideal solution to a mathematical problem would be a demonstration that is both computationally accessible and gives a necessary and sufficient answer, but an ideal solution to a political problem would be one that uses as few resources as possible and offers a pleasant (or at least reputation-saving) accommodation to all the parties involved. An ideal partner is another thing entirely.
So I don't think the word "ideal" has the same meaning across problem spaces for the same subject, let alone for different people facing different problems.
