Google lends further legitimacy to Bitcoin

6 SilasBarta 21 March 2011 10:26PM

Though not yet an "official" project, Google has released a Bitcoin client. As you may remember, there were concerns here about what the governmental/legal reaction to Bitcoin [1] would be, and about the significance of certain groups lending their support to it.  EFF and SIAI accept Bitcoin donations, which helps, and this action by Google is another big step.

Previous articles: SIAI accepting Bitcoin donations, Discussion on making money with Bitcoin (Clippy warning on the latter)

[1] In short, it's a pseudonymous P2P crypto-currency with negligible transaction fees, in which new units are generated by spending computer cycles computing hashes until you find one with specific properties.
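The hash-grinding in that footnote can be sketched as follows. This is a toy illustration only: real Bitcoin mining uses double SHA-256 compared against a 256-bit difficulty target, not the hex-prefix check used here, and the block data is a real block header rather than a string.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Try successive nonces until SHA-256(block_data + nonce)
    starts with `difficulty` zero hex digits -- a toy stand-in for
    Bitcoin's proof-of-work ("hashes with specific properties")."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each extra zero digit multiplies the expected work by 16,
# which is how difficulty scales in spirit (not in detail).
nonce = mine("example block", difficulty=4)
```

The point is only that finding such a nonce is expensive while verifying it is one hash, which is what lets "spent computer cycles" serve as proof of work.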

I'll be in NYC from Oct. 30 to Nov. 21

2 SilasBarta 28 October 2010 04:27PM

Sorry for the self-centered post, but I don’t get many chances to be where there are a lot of rationalists.  (We’ve counted about four people in all of Texas who read this site.)

 

Thanks to the Cosmos’s noticing my need for a place to spend my vacation time this year, I will be staying in his NYC apartment while he’s gone.  I’ll definitely be at the NYC meetups.

 

So, if you are anywhere near this area and were interested in meeting me, let me know (either on this thread or privately) and we can work something out.

 

I’ve informed the NYC OB Google group, but figured there would be good opportunities to meet some of you that aren’t on that list or are a bit further away from the city.

Hazing as Counterfactual Mugging?

3 SilasBarta 11 October 2010 02:17PM

In the interest of making decision theory problems more relevant, I thought I'd propose a real-life version of counterfactual mugging.  This is discussed in Drescher's Good and Real, and many places before.  I will call it the Hazing Problem by comparison to this practice (possibly NSFW – this is hazing, folks, not Disneyland).

 

The problem involves a timewise sequence of agents, each of whom decides whether to "haze" (abuse) the next agent.  (They cannot impose any penalty on the previous agent.)  For each agent n, here is the preference ranking:

 

1) not be hazed by n-1

2) be hazed by n-1, and haze n+1

3) be hazed by n-1, do NOT haze n+1

 

or, less formally:

 

1) not be hazed

2) haze and be hazed

3) be hazed, but stop the practice

 

The problem is: you have been hazed by n-1.  Should you haze n+1?

 

As in counterfactual mugging, the average agent ends up with lower utility by conditioning on having been hazed, no matter how big the utility difference between 2) and 3) is.  Also, it involves your having to make a choice from within a "losing" part of the "branching", which has implications for the other branches.
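That claim about average utility can be checked with a small simulation. The utility numbers below are my own; the only constraint taken from the problem is the preference ordering 1) > 2) > 3). Even when hazing-after-being-hazed is worth almost as much as never being hazed at all, a chain of haze-if-hazed agents averages worse than a chain of never-haze agents.

```python
def chain_average(policy, n_agents=1000,
                  u_not_hazed=10.0, u_hazed_and_haze=9.9, u_hazed_stop=0.0):
    """Average utility over a chain of agents who each follow
    `policy(was_hazed) -> hazes next agent?`.  Agent 0 is hazed
    exogenously to start the chain."""
    total = 0.0
    hazed = True
    for _ in range(n_agents):
        hazes = policy(hazed)
        if not hazed:
            total += u_not_hazed          # preference 1)
        elif hazes:
            total += u_hazed_and_haze     # preference 2)
        else:
            total += u_hazed_stop         # preference 3)
        hazed = hazes  # the next agent is hazed iff this one hazes
    return total / n_agents

# CDT-like disposition: haze whenever you were hazed.
cdt_like = chain_average(lambda hazed: hazed)
# UDT-like disposition: never haze, even having been hazed.
udt_like = chain_average(lambda hazed: False)
```

Here the gap between 2) and 3) is enormous (9.9 vs. 0.0) and the gap between 1) and 2) is tiny, yet the never-haze chain still wins on average: one agent eats the 0.0 and everyone downstream gets 10.0, versus everyone getting 9.9 forever.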

 

You might object that the choice of whether to haze is not random, as Omega’s coin flip is in CM; however, there are deterministic phrasings of CM, and your own epistemic limits blur the distinction.

 

UDT sees optimality in returning not-haze unconditionally.  CDT reasons that its having been hazed is fixed, and so hazes.  I *think* EDT would also choose to haze, because it would prefer to learn that, having been hazed, it hazed n+1, but I'm not sure about that.

 

I also think that TDT chooses not-haze, although this is questionable since I'm claiming this is isomorphic to CM.  I would expect TDT to reason: "If agents in my position regarded it as optimal not to haze despite having been hazed, then I would not be in the position of having been hazed, so I zero out the disutility of choosing not-haze."

 

Thoughts on the similarity and usefulness of the comparison?

Morality as Parfitian-filtered Decision Theory?

24 SilasBarta 30 August 2010 09:37PM

Non-political follow-up to: Ungrateful Hitchhikers (offsite)

 

Related to: Prices or Bindings?, The True Prisoner's Dilemma

 

Summary: Situations like the Parfit's Hitchhiker problem select for a certain kind of mind: specifically, one that recognizes that an action can be optimal, in a self-interested sense, even if it can no longer cause any future benefit.  A mind that can identify such actions might put them in a different category which enables it to perform them, in defiance of the (futureward) consequentialist concerns that normally need to motivate it.  Our evolutionary history has put us through such "Parfitian filters", and the corresponding actions, viewed from the inside, feel like "something we should do", even if we don’t do it, and even if we recognize the lack of a future benefit.  Therein lies the origin of our moral intuitions, as well as the basis for creating the category "morality" in the first place.

 

Introduction: What kind of mind survives Parfit's Dilemma?

 

Parfit's Dilemma – my version – goes like this: You are lost in the desert and near death.  A superbeing known as Omega finds you and considers whether to take you back to civilization and stabilize you.  It is a perfect predictor of what you will do, and plans to rescue you only if it predicts that you will, upon recovering, give it $0.01 from your bank account.  If it doesn’t predict you’ll pay, you’re left in the desert to die. [1]

 

So what kind of mind wakes up from this?  One that would give Omega the money.  Most importantly, the mind is not convinced to withhold payment on the basis that the benefit was received only in the past.  Even if it recognizes that no future benefit will result from this decision -- and only future costs will result -- it decides to make the payment anyway.
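The selection effect is stark enough to fit in a few lines. In this sketch, the value placed on survival is my own hypothetical parameter; the only figure from the post is the $0.01 payment, and Omega's perfect prediction is modeled by letting the rescue decision read the agent's disposition directly.

```python
PAYMENT = 0.01
VALUE_OF_LIFE = 1_000_000.0  # hypothetical: anything much larger than PAYMENT

def outcome(pays_when_rescued: bool) -> float:
    """Utility of an agent with the given disposition facing Omega."""
    rescued = pays_when_rescued  # perfect predictor: rescue iff it
                                 # predicts payment afterward
    if not rescued:
        return 0.0               # left in the desert to die
    cost = PAYMENT if pays_when_rescued else 0.0
    return VALUE_OF_LIFE - cost

# The disposition that pays -- despite the payment causing no future
# benefit at the time it is made -- is the one that wakes up at all.
```

The "refuse to pay" disposition never gets to enjoy its saved penny, which is the filter the post is pointing at.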



The role of mathematical truths

14 SilasBarta 24 April 2010 04:59PM

Related to: Math is subjunctively objective, How to convince me that 2+2=3

 

Elaboration of points I made in these comments: first, second

 

TL;DR Summary: Mathematical truths can be cashed out as combined claims about 1) the common conception of the rules of how numbers work, and 2) whether the rules imply a particular truth.  This cashing-out keeps them purely about the physical world and eliminates the need to appeal to an immaterial realm, as some mathematicians feel a need to.

 

Background: "I am quite confident that the statement 2 + 3 = 5 is true; I am far less confident of what it means for a mathematical statement to be true." -- Eliezer Yudkowsky

 

This is the problem I will address here: how should a rationalist regard the status of mathematical truths?  In doing so, I will present a unifying approach that, I contend, elegantly solves the following related problems:

 

- Eliminating the need for a non-physical, non-observable "Platonic" math realm.

- The issue of whether "math was true/existed even when people weren't around".

- Cashing out the meaning of isolated claims like "2+2=4".

- The issue of whether mathematical truths and math itself should count as being discovered or invented.

- Whether mathematical reasoning alone can tell you things about the universe.

- Showing what it would take to convince a rationalist that "2+2=3".

- How the words in math statements can be wrong.



Understanding your understanding

69 SilasBarta 22 March 2010 10:33PM

Related to: Truly Part of You, A Technical Explanation of Technical Explanation

Partly because of LessWrong discussions about what really counts as understanding (some typical examples), I came up with a scheme to classify different levels of understanding so that posters can be more precise about what they mean when they claim to understand -- or fail to understand -- a particular phenomenon or domain.

 

Each level has a description so that you know if you meet it, and tells you what to watch out for when you're at or close to that level.  I have taken the liberty of naming them after the LW articles that describe what such a level is like.

 

Level 0: The "Guessing the Teacher's Password" Stage

 

Summary: You have no understanding, because you don't see how any outcome is more or less likely than any other.



The continued misuse of the Prisoner's Dilemma

29 SilasBarta 23 October 2009 03:48AM

Related to: The True Prisoner's Dilemma, Newcomb's Problem and Regret of Rationality

In The True Prisoner's Dilemma, Eliezer Yudkowsky pointed out a critical problem with the way the Prisoner's Dilemma is taught: the distinction between utility and avoided-jail-time is not made clear.  The payoff matrix is supposed to represent the former, even though its numerical values happen to coincidentally match the latter.  And worse, people don't naturally assign utility as per the standard payoff matrix: their compassion for the friend in the "accomplice" role means they wouldn't feel quite so good about a "successful" backstabbing, nor quite so bad about being backstabbed.  ("Hey, at least I didn't rat out a friend.")

For that reason, you rarely encounter a true Prisoner's Dilemma, even an iterated one.  The above complications prevent real-world payoff matrices from working out that way.
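One way to make this precise: a one-shot payoff matrix is a true Prisoner's Dilemma only if each player's utilities satisfy Temptation > Reward > Punishment > Sucker's payoff. A toy check, where the compassion-adjusted numbers are my own illustration of the effect described above:

```python
def is_true_pd(T, R, P, S):
    """True one-shot Prisoner's Dilemma ordering for a player:
    Temptation > Reward > Punishment > Sucker's payoff.
    (Iterated versions conventionally also require 2*R > T + S.)"""
    return T > R > P > S

# Jail-years naively converted to utility (less jail = more utility):
# this satisfies the PD ordering, which is why the textbook version
# *looks* like a dilemma.
naive_is_pd = is_true_pd(T=5, R=3, P=1, S=0)

# Compassion-adjusted utilities: backstabbing a friend feels worse
# (T drops below R), and staying loyal while betrayed is partly
# redeemed ("at least I didn't rat out a friend", S rises above P).
adjusted_is_pd = is_true_pd(T=2, R=3, P=1, S=1.5)
```

Once the ordering breaks, defection no longer dominates, and the "dilemma" evaporates, which is the sense in which real-world payoff matrices rarely work out to a true PD.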

Which brings us to another unfortunate example of this misunderstanding being taught.



Rationality Quotes - July 2009

5 SilasBarta 02 July 2009 06:35PM

(Last month's thread started a little late, so I thought I'd bring it back to its original schedule.)

A monthly thread for posting any interesting rationality-related quotes you've seen recently on the Internet, or had stored in your quotesfile for ages.

  • Please post all quotes separately (so that they can be voted up (or down) separately) unless they are strongly related/ordered.
  • Do not quote yourself (or your sockpuppets).
  • Do not quote comments/posts on LW/OB - if we do this, there should be a separate thread for it.
  • No more than 5 quotes per person per monthly thread, please.
