A List of Nuances

31 abramdemski 10 November 2014 05:02AM

Abram Demski and George Koleszarik


Much of rationality is pattern-matching. An article on LessWrong might point out a thing to look for; noticing this thing changes your reasoning in some way. This essay is a list of things to look for. These things are all associated, but the reader should take care not to lump them together. Each dichotomy is distinct, and although the brain will tend to abstract them into some sort of yin/yang correlated mush, in reality they have a more complicated structure; some things may be similar, but if possible, try to focus on the complex interrelationships.

 

  1. Map vs. Territory

    1. Eliezer’s sequences use this as a jumping-off point for discussing rationality.

    2. Many thinking mistakes are map vs. territory confusions.

      1. A map vs. territory mistake is a mix-up of seeming vs. being.

      2. Humans need frequent reminders that we are not omniscient.

  2. Cached Thoughts vs. Thinking

    1. This document is a list of cached thoughts.

  3. Clusters vs. Properties

    1. These words could be used in different ways, but the distinction I want to point at is that of labels we put on things vs. actual differences in things.

    2. The mind projection fallacy is the fallacy of thinking a mental category (a “cluster”) is an actual property things have.

      1. If we see something as good for one reason, we are likely to attribute other good properties to it, as if it had inherent goodness. This is called the halo effect. (If we see something as bad and infer other bad properties as a result, it is referred to as the reverse-halo effect.)

    3. Categories are inference applicability heuristics; ruling X an instance of Y without expecting novel inferences is cargo cult classification.

  4. Syntax vs. Semantics

    1. The syntax is the physical instantiation of the map. The semantics is the way we are meant to read the map; that is, the intended relationship to the territory.

  5. Semantics vs. Pragmatics

    1. The semantics is the literal contents of a message, whereas the pragmatics is the intended result of conveying the message.

      1. An example of a message with no semantics and only pragmatics is a command, such as “Stop!”.

      2. Almost no messages lack pragmatics, and for good reason. However, if you seek truth in a discussion, it is important to foster a willingness to say things with less pragmatic baggage.

      3. Usually when we say things, we do so with some “point” which is beyond the semantics of our statement. The point is usually to build up or knock down some larger item of discussion. This is not inherently a bad thing, but it has a failure mode in which arguments become battles, statements become weapons, and the cleverer arguer wins.

    2. The meaning of a thing is the way you should be influenced by it.

  6. Object-level vs. Meta-level

    1. The difference between making a map and writing a book about map-making.

    2. A good meta-level theory helps get things right at the object level, but it is usually impossible to get things right at the meta level before you’ve made significant progress at the object level.

  7. Seeming vs. Being

    1. We can only deal with how things seem, not how they are. Yet, we must strive to deal with things as they are, not as they seem.

      1. This is yet another reminder that we are not omniscient.

    2. If we optimize too hard for things which seem good rather than things which are good, we will get things which seem very good but which may only be somewhat good, or even bad.

    3. The dangerous cases are the cases where you do not notice there is a distinction.

      1. This is why humans need constant reminders that we are not omniscient.

    4. We must take care to notice the difference between how things seem to seem, and how they actually seem.

  8. Signal vs. Noise

    1. Not all information is equal. It is often the case that we desire certain sorts of information and desire to ignore other sorts.

    2. In a technical setting, this has to do with the error rate present in a communication channel; imperfections in the channel will corrupt some bits, creating a need for redundancy in the message being sent.

    3. In a social setting, this is often used to refer to the amount of good information vs irrelevant information in a discussion. For example, letting a mediocre writer add material to a group blog might increase the absolute amount of good information, yet worsen the signal-to-noise ratio.

    4. Attention is a scarce resource; yes, everyone has something to teach you, but some people are much more efficient sources of wisdom than others.

  9. Selection Effects

    1. Filtered evidence.

      1. In many situations, if we can present evidence to a Bayesian agent without the agent knowing that we are being selective, we can convince the agent of anything we like. For example, if I want to convince you that smoking causes obesity, I could find many people who became obese after they started smoking.

      2. The solution to this is for the Bayesian agent to model where the information is coming from. If you know I am selecting people based on this criterion, then you will not take my examples as evidence of anything, because the evidence has been cherry-picked. (A toy sketch of this appears after the list.)

      3. Most of the information you receive is intensely filtered. Nothing comes to your attention with a good conscience.

    2. The silent evidence problem.

      1. Selection bias need not be the result of purposeful interference as in cherry-picking. Often, an unrelated process may hide some of the evidence needed. For example, we hear far more about successful people than unsuccessful ones. It is tempting to look at successful people and attempt to draw conclusions about what it takes to be successful. This approach suffers from the silent evidence problem: we also need to look at the unsuccessful people and examine what is different about the two groups.

    3. Observer selection effects.

  10. What You Mean vs. What You Think You Mean

    1. Very often, people will say something and then that thing will be refuted. The common response to this is to claim you meant something slightly different, which is more easily defended.

      1. We often do this without noticing, making it dangerous for thinking. It is an automatic response generated by our brains, not a conscious decision to defend ourselves from being discredited. You do this far more often than you notice. The brain fills in a false memory of what you meant without asking for permission.

  11. What You Mean vs. What the Others Think You Mean

    1. The illusion of transparency.

    2. The double illusion of transparency.

    3. Wiio’s Laws

  12. What You Optimize vs. What You Think You Optimize

    1. Evolution optimizes for reproduction but in doing so creates animals with a variety of goals which are correlated with reproduction.

    2. Extrinsic motivation is weaker than intrinsic motivation.

    3. The people who value practice for its own sake do better than the people who only value being good at what they’re practicing.

    4. “Consequentialism is true, but virtue ethics is what works.”

  13. Stated Preferences vs. Revealed Preferences

    1. Revealed preferences are the preferences we can infer from your actions. These are usually different from your stated preferences.

      1. X is not about Y:

        1. Food isn’t about nutrition.

        2. Clothes aren’t about comfort.

        3. Bedrooms aren’t about sleep.

        4. Marriage isn’t about love.

        5. Talk isn’t about information.

        6. Laughter isn’t about humour.

        7. Charity isn’t about helping.

        8. Church isn’t about God.

        9. Art isn’t about insight.

        10. Medicine isn’t about health.

        11. Consulting isn’t about advice.

        12. School isn’t about learning.

        13. Research isn’t about progress.

        14. Politics isn’t about policy.

        15. Going meta isn’t about the object level.

        16. Language isn’t about communication.

        17. The rationality movement isn’t about epistemology.

      2. Everything is actually about signalling.

    2. Humans Are Not Automatically Strategic

      1. Never attribute to malice that which can be adequately explained by stupidity. The difference between stated preferences and revealed preferences does not indicate dishonest intent. We should expect the two to differ in the absence of a mechanism to align them.

      2. Hidden Motives vs. Innocent Failure

    3. People, ideas, and organizations respond to incentives.

      1. Evolution selects humans who have reproductively selfish behavioral tendencies, but prosocial and idealistic stated preferences.

        1. Near vs. Far

      2. Social forces select ideas for virality and comprehensibility as opposed to truth or even usefulness.

        1. Motte-and-bailey fallacy

      3. Organizations are by default bad at being strategic about their own survival, but the ones that survive are the ones you see.

  14. What You Achieve vs. What You Think You Achieve

    1. Most of the consequences of our actions are totally unknown to us.

    2. It is impossible to optimize without proper feedback.

  15. What You Optimize vs. What You Actually Achieve

    1. Consequentialism is more about expected consequences than actual consequences.

  16. What You Seem Like vs. What You Are

    1. You can try to imagine yourself from the outside, but no one has the full picture.

  17. What Other People Seem Like vs. What They Are

    1. When people assume that they understand others, they are wrong.

  18. What People Look Like vs. What They Think They Look Like

    1. People underestimate the gap between stated preferences and revealed preferences.

  19. What Your Brain Does vs. What You Think It Does

    1. You are running on corrupted hardware.

      1. The brain’s machinations are fundamentally social; it automatically does things like signal, save face, etc., which distort the truth.

    2. The reverse of stupidity is not intelligence.

      1. Knowing that you are running on corrupted hardware should cause skepticism about the outputs of your thought-processes. Yet, too much skepticism will cause you to stumble, particularly when fast thinking is needed.

        1. Producing a correct result plus justification is harder than producing only the correct result.

        2. Justifications are important, but the correct result is more important.

        3. Much of our apparent self-reflection is confabulation, generating plausible explanations after the brain spits out an answer.

        4. Example: doing quick mental math. If you are good at this, attempting to explicitly justify every step as you go would likely slow you down.

        5. Example: impressions formed over a long period of time. Wrong or right, it is unlikely that you can explicitly give all your reasons for the impression. Requiring your own beliefs to be justifiable would preempt impressions that require lots of experience and/or many non-obvious chains of subconscious inference.

        6. Impressions are not beliefs, but they are always useful data.

  20. Clever Argument vs. Truth-seeking; The Bottom Line

    1. People believe what they want to believe.

      1. Believing X for some reason unrelated to X being true is referred to as motivated cognition.

      2. Giving a smart person more information and more methods of argument may actually make their beliefs less accurate, because you are giving them more tools to construct clever arguments for what they want to believe.

    2. Your actual reason for believing X determines how well your belief correlates with the truth.

      1. If you believe X because you want to, any arguments you make for X, no matter how strong they sound, are devoid of informational content about X and should properly be ignored by a truth-seeker.

    3. If you believe true things when doing so improves your life, that is no credit to you at all. Everyone does that.

  21. Lumpers vs. Splitters

    1. A lumper is a thinker who attempts to fit things into overarching patterns. A splitter is a thinker who makes as many distinctions as possible, recognizing the importance of being specific and getting the details right.

    2. Specifically, some people want big Wikipedia and TVTropes articles that discuss many things, and others want smaller articles that discuss fewer things.

    3. This list of nuances is a lumper attempting to think more like a splitter.

  22. Fox vs. Hedgehog

    1. “A fox knows many things, but a hedgehog knows One Big Thing.” Closely related to a splitter, a fox is a thinker whose strength is in a broad array of knowledge. A hedgehog is a thinker who, in contrast, has one big idea and applies it everywhere.

    2. The fox mindset is better for making accurate judgements, according to Tetlock.

  23. Traps vs. Gardens

    1. Well-kept gardens die by pacifism.

      1. Conversations tend to slide toward contentious and useless topics.

      2. Societies tend to decay.

      3. Systems in general work poorly or not at all.

      4. Thermodynamic equilibrium is the state of maximum entropy.

      5. Without proper institutions already in place, it takes large amounts of constant effort and vigilance to stay out of traps.

    2. From outside a broken Molochian system, it is easy to see how to fix it; but it cannot be fixed from the inside.
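
To make the filtered-evidence point in item 9.1 concrete, here is a minimal Python sketch. This is an illustration of mine, not part of the original list, and the probabilities are invented: a naive agent treats each case I show it as a random sample, while an agent that models the filter knows I would show only confirming cases either way, so the likelihood ratio is 1 and it learns nothing.

```python
# Toy model of filtered evidence (illustrative numbers, not from the post).
# Hypothesis H: "smoking causes obesity".
# Assumed: a randomly sampled smoker is obese with probability 0.6 if H
# is true, and 0.3 if H is false.

def update(prior, lik_true, lik_false):
    """One step of Bayes' rule for a binary hypothesis."""
    joint_true = prior * lik_true
    joint_false = (1 - prior) * lik_false
    return joint_true / (joint_true + joint_false)

p_naive = 0.5  # treats each shown case as a random sample
p_aware = 0.5  # knows the presenter shows only obese smokers

for _ in range(10):  # I show you ten smokers who became obese
    p_naive = update(p_naive, 0.6, 0.3)
    # The aware agent knows such cases could be produced whether or not
    # H is true, so the likelihood ratio is 1 and nothing changes.
    p_aware = update(p_aware, 1.0, 1.0)

print(f"naive agent: P(H) = {p_naive:.3f}")  # ~0.999: convinced
print(f"aware agent: P(H) = {p_aware:.3f}")  # 0.500: unmoved
```

The naive agent's posterior is driven arbitrarily close to 1 by nothing but the presenter's selectivity.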

Cross-posted to In Search Of Logic

First(?) Rationalist elected to state government

63 Eneasz 07 November 2014 02:30AM

Has no one else mentioned this on LW yet?

Elizabeth Edwards has been elected as a New Hampshire State Rep, self-identifies as a Rationalist and explicitly mentions Less Wrong in her first post-election blog post.

Sorry if this is a repost

Polymath-style attack on the Parliamentary Model for moral uncertainty

22 danieldewey 26 September 2014 01:51PM

Thanks to ESrogs, Stefan_Schubert, and the Effective Altruism summit for the discussion that led to this post!

This post is to test out Polymath-style collaboration on LW. The problem we've chosen to try is formalizing and analyzing Bostrom and Ord's "Parliamentary Model" for dealing with moral uncertainty.

I'll first review the Parliamentary Model, then give some of Polymath's style suggestions, and finally suggest some directions that the conversation could take.


2014 iterated prisoner's dilemma tournament results

61 tetronian2 30 September 2014 09:23PM

Followup to: Announcing the 2014 program equilibrium iterated PD tournament

In August, I announced an iterated prisoner's dilemma tournament in which bots can simulate each other before making a move. Eleven bots were submitted to the tournament. Today, I am pleased to announce the final standings and release the source code and full results.

All of the source code submitted by the competitors and the full results for each match are available here. See here for the full set of rules and tournament code.

Before we get to the final results, here's a quick rundown of the bots that competed:

AnderBot

AnderBot follows a simple tit-for-tat-like algorithm that eschews simulation (a rough sketch in code follows the list):

  • On the first turn, Cooperate.
  • For the next 10 turns, play tit-for-tat.
  • For the rest of the game, Defect with 10% probability or Defect if the opposing bot has defected more times than AnderBot.
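
As a concrete illustration, here is a rough Python paraphrase of that strategy. This is my reconstruction from the description above, not the submitted source (the actual submissions are linked above), and it omits the tournament's real interface, which let bots simulate their opponents:

```python
import random

def anderbot(turn, my_history, their_history):
    """One reading of AnderBot's strategy; turns are 0-indexed,
    histories are lists of 'C' (cooperate) / 'D' (defect) moves."""
    if turn == 0:
        return 'C'                    # first turn: cooperate
    if turn <= 10:
        return their_history[-1]      # next 10 turns: tit-for-tat
    # Rest of the game: defect if the opponent has defected more often
    # than we have; otherwise defect with 10% probability.
    if their_history.count('D') > my_history.count('D'):
        return 'D'
    return 'D' if random.random() < 0.1 else 'C'
```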


Strawman Yourself

17 katydee 18 May 2014 05:28AM

One good way to ensure that your plans are robust is to strawman yourself. Look at your plan in the most critical, contemptuous light possible and come up with the obvious uncharitable insulting argument for why you will fail.

In many cases, the obvious uncharitable insulting argument will still be fundamentally correct.

If it is, your plan probably needs work. This technique seems to work not because it taps into some secret vault of wisdom (after all, making fun of things is easy), but because it is an elegant way to shift yourself into a critical mindset.

For instance, I recently came up with a complex plan to achieve one of my goals. Then I strawmanned myself; the strawman version of why this plan would fail was simply "large and complicated plans don't work." I thought about that for a moment, concluded "yep, large and complicated plans don't work," and came up with a simple, elegant plan to achieve the same ends.

You may ask "why didn't you just come up with a simple, elegant plan in the first place?" The answer is that elegance is hard. It's easier to add on special case after special case, not realizing how much complexity debt you've added. Strawmanning yourself is one way to safeguard against this risk, as well as many others.

Botworld: a cellular automaton for studying self-modifying agents embedded in their environment

50 So8res 12 April 2014 12:56AM

On April 1, I started working full-time for MIRI. In the weeks prior, while I was winding down my job and packing up my things, Benja and I built Botworld, a cellular automaton that we've been using to help us study self-modifying agents. Today, we're publicly releasing Botworld on the new MIRI github page. To give you a feel for Botworld, I've reproduced the beginning of the technical report below.


Terrorist baby down the well: a look at institutional forces

14 Stuart_Armstrong 18 March 2014 02:30PM

Two facts "everyone knows", an intriguing contrast, and a note of caution.

"Everyone knows" that people are much more willing to invest into cures than preventions. When a disaster hits, then money is no object; but trying to raise money for prevention ahead of time is difficult, hamstrung by penny-pinchers and short-termism. It's hard to get people to take hypothetical risks seriously. There are strong institutional reasons for this, connected with deep human biases and bureaucratic self-interest.

"Everyone knows" that governments overreact to the threat of terrorism. The amount spent on terrorism dwarfs other comparable risks (such as slipping and falling in your bath). There's a huge amount of security theatre, but also a lot of actual security, and pre-emptive invasions of privacy. We'd probably be better just coping with incidents as they emerge, but instead we cause great annoyance and cost across the world to deal with a relatively minor problem. There are strong institutional reasons for this, connected with deep human biases and bureaucratic self-interest.

And both these facts are true. But... they contradict each other. One is about a lack of prevention, the other about an excess of prevention. And there are more examples of excessive prevention: the war on drugs, for instance. In each case we can come up with good explanations as to why there is not enough/too much prevention, and these explanations often point to fundamental institutional forces or human biases. This means that the situation could essentially never have been otherwise. But the tension above hints that these situations may be a lot more contingent than that, more dependent on history and particular details of our institutions and political setup. Maybe if the biases were reversed, we'd have equally compelling stories going the other way. So when predicting the course of future institutional biases, or attempting to change them, take into account that they may not be nearly as solid or inevitable as they feel today.

On Irrational Theory of Identity

15 SilentCal 19 March 2014 12:06AM

Meet Alice. Alice alieves that losing consciousness causes discontinuity of identity.

 

Alice has a good job. Every payday, she takes her salary and enjoys herself in a reasonable way for her means--maybe going to a restaurant, maybe seeing a movie, normal things. And in the evening, she sits down and does her best to calculate the optimal utilitarian distribution of her remaining paycheck, sending most to the charities she determines most worthy and reserving just enough to keep tomorrow-Alice and her successors fed, clothed and sheltered enough to earn effectively. On the following days, she makes fairly normal tradeoffs between things like hard work and break-taking, maybe a bit on the indulgent side.

 

Occasionally her friend Bob talks to her about her strange theory of identity. 

 

"Don't you ever wish you had left yourself more of your paycheck?" he once asked.

"I can't remember any of me ever thinking that." Alice replied. "I guess it'd be nice, but I might as well wish yesterday's Bill Gates had sent me his paycheck."

 

Another time, Bob posed the question, "Right now, you allocate yourself enough to survive with the (true) justification that that's a good investment of your funds. But what if that ever ceases to be true?"

Alice responded, "When me's have made their allocations, they haven't felt any particular fondness for their successors. I know that's hard to believe from your perspective, but it was years after past me's started this procedure that Hypothetical University published the retrospective optimal self-investment rates for effective altruism. It turned out that Alices' decisions had tracked the optimal rates remarkably well if you disregard as income the extra money the deciding Alices spent on themselves.

"So me's really do make this decision objectively. And I know it sounds chilling to you, but when Alice ceases to be a good investment, that future Alice won't make it. She won't feel it as a grand sacrifice, either. Last week's Alice didn't have to exert willpower when she cut the food budget based on new nutritional evidence."

 

"Look," Bob said on a third occasion, "your theory of identity makes no sense. You should either ignore identity entirely and become a complete maximizing utilitarian, or else realize the myriad reasons why uninterrupted consciousness is a silly measure of identity."

"I'm not a perfect altruist, and becoming one wouldn't be any easier for me than it would be for you," Alice replied. "And I know the arguments against the uninterrupted-consciousness theory of identity, and they're definitely correct. But I don't alieve a word of it."

"Have you actually tried to internalize them?"

"No. Why should I? The Alice sequence is more effectively altruistic this way. We donate significantly more than HU's published average for people of similar intelligence, conscientiousness, and other relevant traits."

"Hmm," said Bob. "I don't want to make allegations about your motives-"

"You don't have to," Alice interrupted. "The altruism thing is totally a rationalization. My actual motives are the usual bad ones. There's status quo bias, there's the desire not to admit I'm wrong, and there's the fact that I've come to identify with my theory of identity.

"I know the gains to the total Alice-utility would easily overwhelm the costs if I switched to normal identity-theory, but I don't alieve those gains will be mine, so they don't motivate me. If it would be better for the world overall, or even neutral for the world and better for properly-defined-Alice, I would at least try to change my mind. But it would be worse for the world, so why should I bother?"

 

.

 

.

 

If you wish to ponder Alice's position with relative objectivity before I link it to something less esoteric, please do so before continuing.

 

.

 

.

 

.

 

Bob thought a lot about this last conversation. For a long time, he had had no answer when his friend Carrie asked him why he didn't sign up for cryonics. He didn't buy any of the usual counterarguments--when he ran the numbers, even with the most conservative estimates he considered reasonable, a membership was a huge increase in Bob-utility. But the thought of a Bob waking up some time in the future to have another life just didn't motivate him. He believed that future-Bob would be him, that an uploaded Bob would be him, that any computation similar enough to his mind would be him. But evidently he didn't alieve it. And he knew that he was terribly afraid of having to explain to people that he had signed up for cryonics.

So he had felt guilty for not paying the easily-affordable costs of immortality, knowing deep down that he was wrong, and that social anxiety was probably preventing him from changing his mind. But as he thought about Alice's answer, he thought about his financial habits and realized that a large percentage of the cryonics costs would ultimately come out of his lifetime charitable contributions. This would be a much greater loss to total utility than the gain from Bob's survival and resurrection.

He realized that, like Alice, he was acting suboptimally for his own utility but in such a way as to make the world better overall. Was he wrong for not making an effort to 'correct' himself?

 

Does Carrie have anything to say about this argument?

Strategic choice of identity

76 Vika 08 March 2014 04:27PM

Identity is mostly discussed on LW in a cautionary manner: keep your identity small, be aware of the identities you are attached to. As benlandautaylor points out, identities are very powerful, and while being rightfully cautious about them, we can also cultivate them deliberately to help us achieve our goals.

Some helpful identities that I have that seem generally applicable:

  • growth mindset
  • low-hanging fruit picker
  • truth-seeker
  • jack-of-all-trades (someone who is good at a variety of skills)
  • someone who tries new things
  • universal curiosity
  • mirror (someone who learns other people's skills)

Out of the above, the most useful is probably growth mindset, since it's effectively a meta-identity that allows the other parts of my identity to be fluid. The low-hanging fruit identity helps me be on the lookout for easy optimizations. The universal curiosity identity motivates me to try to understand various systems and fields of knowledge, besides the domains I'm already familiar with. It helps to give these playful or creative names, for example, "champion of low-hanging fruit". Some of these work well together, for example the "trying new things" identity contributes to the "jack of all trades" identity.

It's also important to identify unhelpful identities that get in your way. Negative identities can be vague like "lazy person" or specific like "someone who can't finish a project". With identities, just like with habits, the easiest way to reduce or eliminate a bad one seems to be to install a new one that is incompatible with it. For example, if you have a "shy person" identity, then going to parties or starting conversations with strangers can generate counterexamples for that identity, and help to displace it with a new one of "sociable person". Costly signaling can be used to achieve this - for example, joining a public speaking club. The old identity will not necessarily go away entirely, but the competing identity will create cognitive dissonance, which it can be useful to deliberately focus on. More specific identities require more specific counterexamples. Since the original negative identity makes it difficult to perform the actions that generate counterexamples, there needs to be some form of success spiral that starts with small steps.

Some examples of unhelpful identities I've had in the past were "person who doesn't waste things" and "person with poor intuition". The aversion to wasting money and material things predictably led to wasting time and attention instead. I found it useful to try "thinking like a trader" to counteract this "stingy person" identity, and get comfortable with the idea of trading money for time. Now I no longer obsess about recycling or buy the cheapest version of everything. Underconfidence in my intuition was likely responsible for my tendency to miss the forest for the trees when studying math or statistics, where I focused on details and missed the big picture ideas that are essential to actual understanding. My main objection to intuitions was that they feel imprecise, and I am trying to develop an identity of an "intuition wizard" who can manipulate concepts from a distance without zooming in. That is a cooler name than "someone who thinks about things without really understanding them", and brings to mind some people I know who have amazing intuition for math, which should help the identity stick.

There can also be ambiguously useful identities, for example I have a "tough person" identity, which motivates me to challenge myself and expand my comfort zone, but also increases self-criticism and self-neglect. Given the mixed effects, I'm not yet sure what to do about this one - maybe I can come up with an identity that only has the positive effects.

Which identities hold you back, and which ones propel you forward? If you managed to diminish negative identities, how did you do it and how far did you get?

Private currency to generate funds for effective altruism

1 Stefan_Schubert 14 February 2014 12:00AM

In the last few years we have seen two interesting revolutionary ideas on how to change the monetary system. The first is Bitcoin: the most well-known peer-to-peer currency. It has been widely debated recently and I won't go into the details of the allegations of use in criminal activities etc. (for one thing, I don't know much about it). My interest is rather in the money creation part. The people who run the Bitcoin software are rewarded for their work with new Bitcoins - a process called mining. Now the pace at which new Bitcoins are mined is limited, which means that Bitcoin creation is a zero-sum game: the more one miner contributes to the Bitcoin network, the fewer Bitcoins other miners get. Unsurprisingly, this has led to an arms race: miners spend nearly as much on running the software as they get back in the form of new Bitcoins.

The second idea is the Chicago Plan, which was debated as early as the 1930s, after the great crash of 1929, but which was recently resurrected by Michael Kumhof (senior economist at the IMF, of all places). The central idea of the Chicago Plan is to abolish fractional reserve banking - the system by which private banks in effect create money out of thin air. Instead of lending out most of the depositors' money, banks would effectively have to let it stay in the bank.

Instead, money would be created by the central bank/government, a process that would generate a massive seignorage for the government. According to Kumhof, it would also have other beneficial effects, such as killing off the boom-and-bust cycles for which he thinks fractional reserve banking is mostly responsible, and diminishing the wasteful parts of the financial sector.

Kumhof's ideas have not been well received. Overall, it is remarkable how little reform there has been of the financial and monetary system, given that the world had a major financial meltdown in 2008 (and was close to an even greater one, as I understand it). Governments won't radically challenge the financial system in the near future, that's for sure.

Instead, radical reform can only come from private hands. Let us now compare the two ideas. In the Bitcoin system, money is created by private hands, but in wasteful ways, which effectively means that there is very little seignorage. Under the Chicago Plan, money is created by the government in much more efficient ways, which leads to a large seignorage. Now my idea is to take the best part of both of these ideas: let a private player - more exactly, an altruistic organization such as CEA - produce the money centrally, Chicago Plan-style, and let the seignorage be used for altruistic purposes. (Of course, there would be some costs of running the system, but if the system were sufficiently large, these would be negligible in relation to the seignorage.)

If the altruistic organization that did this had a sufficiently good reputation, the chances are greater that people would trust the system. Of course, it would try to stop the currency from being used for money laundering, the drug trade, etc.

Generally, people would be suspicious of private currencies where the central authority collected a seignorage, but if this seignorage was used for charitable and other altruistic purposes (and people really trusted that that would be the case), this would, I hope, be less of a problem.

What do you think? I'd be happy to get comments from people who know more about the Bitcoin system, since I don't really know it (though I find it interesting). Perhaps there is some information concerning Bitcoin that tells against this proposal; if so, I'd be interested in that.
