
Comment author: RichardKennaway 22 July 2014 11:00:28AM 1 point [-]

Some putatively Knightian uncertainty and ambiguity aversion can be explained as maximising expected utility when playing against an adversary.

For the Ellsberg paradox, the person offering the first bet can minimise his payout by putting no black balls in the urn. If I expect him to do that (and he can do so completely honestly, since he volunteers no information about the method used to fill the urn) then I should bet on red, for a 1/3 chance of winning, and not black, for a zero chance.

The person offering the second bet can minimise his payout by putting no yellow balls in the urn. Then black-or-yellow has a 2/3 chance and red-or-yellow a 1/3 chance and I should bet on black-or-yellow.
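
To put numbers on it, here is a minimal sketch, assuming the standard Ellsberg urn of 30 red balls and 60 balls that are black or yellow in a proportion chosen by the person offering the bets:

```python
# Ellsberg urn: 30 red balls, 60 balls split between black and yellow
# in a proportion chosen by the person offering the bets.
RED, TOTAL = 30, 90

def p_win(bet, black):
    """Chance of winning `bet` when the urn contains `black` black balls."""
    counts = {"red": RED, "black": black, "yellow": TOTAL - RED - black}
    return sum(counts[colour] for colour in bet) / TOTAL

# First bet (red vs. black): the adversary puts in no black balls.
print(p_win(("red",), black=0))              # 0.333... (1/3)
print(p_win(("black",), black=0))            # 0.0

# Second bet (red-or-yellow vs. black-or-yellow): the adversary puts in no yellow balls.
print(p_win(("red", "yellow"), black=60))    # 0.333... (1/3)
print(p_win(("black", "yellow"), black=60))  # 0.666... (2/3)
```

Against such an adversary, betting on red in the first case and black-or-yellow in the second maximises the worst-case chance of winning, which is exactly the ambiguity-averse choice.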

The lesson here is, don't take strange bets from strangers. I'd quote again the lines from Guys And Dolls about this, but the Google box isn't helping me find when it was last in a quotes thread. (Is there some way the side bar could be excluded from Google's search spiders? It's highly volatile content and shouldn't be indexed.)

In the tennis example, someone betting on the mysterious game or the unbalanced game is in the position of someone betting on horse races who knows nothing about horses. He should decline to bet, because while it is possible to beat the bookies, it's a full-time job to maintain the necessary knowledge of horse-racing.

Comment author: Mark_Friedenbach 21 July 2014 06:53:41PM 0 points [-]

How would one assess which of those currencies, if any, has a future? Or whether one should instead invent a 484th? What will it take for a digital currency to succeed, where others fail?

In case it's not clear, if you do not know the answers to these questions, you should not be investing in altcoins.

Comment author: RichardKennaway 21 July 2014 08:54:31PM 0 points [-]

I figured that; I'm wondering whether I should ignore the questions or look for answers.

Comment author: lsparrish 21 July 2014 06:13:50AM *  0 points [-]

I recently made a kind of oddball decision and bought a couple of billion Emoticoins. It was around a thousand dollars' worth at the time (a month ago). The price hasn't changed much since I bought in, but I am technically a whale in that market as a 2% owner. The currency has a couple, well, several, glaring flaws:

  • The founder seems unserious -- maybe even kind of shady.
  • I really dislike the crazy-eyed logo and the official website design in general.
  • The difficulty adjustment only kicks in after a large number of blocks (a day's worth of 1-minute blocks, i.e. 1440), which means the coin gets mined maniacally at some times and very slowly at others (see the toy simulation after this list). People then complain their client doesn't sync on the slow "days", which tend to stretch out for weeks. This can only be fixed with a hardfork.
  • The code appears to have an incorrect address prefix (supposed to be E, but comes up 6).
  • The DNS seeds baked into the code do not work; you have to manually specify a seed as a workaround to get bootstrapped into the network.
  • Several services advertised on the official website are down, e.g. the block explorer is pointed at an orphaned block sequence and hasn't been updated for a couple of months.
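
Here is the toy simulation mentioned above; the parameters are illustrative assumptions on my part (Bitcoin-style single-step retargeting), not the coin's actual code:

```python
# Toy illustration of why a long retarget window hurts a small coin: hit-and-run
# miners pile on while the coin is easy, then leave, and the difficulty only
# catches up every RETARGET blocks.
TARGET_SPACING = 60      # intended seconds per block
RETARGET = 1440          # blocks between difficulty adjustments (one "day")

difficulty, clock, window_start = 1.0, 0.0, 0.0

for block in range(1, 4 * RETARGET + 1):
    # Hashrate alternates: multipools mine hard while it's profitable, then vanish.
    hashrate = 10.0 if ((block - 1) // RETARGET) % 2 == 0 else 1.0
    clock += difficulty * TARGET_SPACING / hashrate   # expected time to solve this block
    if block % RETARGET == 0:
        actual = clock - window_start
        difficulty *= (RETARGET * TARGET_SPACING) / actual   # retarget toward 24h windows
        print(f"blocks {block - RETARGET + 1}-{block}: {actual / 3600:5.1f} hours "
              f"(target 24.0), new difficulty {difficulty:.2f}")
        window_start = clock
```

The fast windows finish in a couple of hours; the slow ones drag out to ten days at one block every ten minutes, which is exactly when the client looks like it has stopped syncing.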

Terrible! But there are some positive features to consider:

  • Already on a decent exchange (mintpal)
  • Limited to 100 billion total coins. (This is due to a hardfork subsequent to the launch; the original number was 1 trillion -- price has not changed much since this news was released.)
  • Seems designed/marketed to encourage low expectations / gambling mindset
  • Trivial inconveniences for running the client or mining may represent a market hole, which could close once those inconveniences are fixed
  • The code problems don't seem too bad to fix, and reference examples abound already.

My plan is to buy another $1k's worth if the price drops by more than half or so (if I can scrape it together), and to keep increasing my ownership slowly with every subsequent halving in price until there is a rebound. Eventually, if things haven't improved, I'll try to take over development. (I could either ally with the founder, or strike out on my own with my own patches and websites.)
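
Very roughly, the averaging-down arithmetic looks like this (a back-of-the-envelope sketch assuming 100 billion total coins, an initial 2 billion coins for about $1000, and a fresh $1000 buy at each halving):

```python
# Averaging down: each time the price halves, $1000 buys twice as many coins,
# so ownership of the 100B supply grows much faster than the dollars at risk.
TOTAL_COINS = 100e9
price = 1000 / 2e9        # implied price of the initial buy, ~$0.0000005 per coin
holdings, spent = 2e9, 1000.0

for halving in range(1, 5):
    price /= 2
    holdings += 1000 / price
    spent += 1000
    print(f"after halving {halving}: {holdings / 1e9:4.0f}B coins "
          f"({100 * holdings / TOTAL_COINS:2.0f}% of supply) for ${spent:.0f}")
```

(Of course, in a market this thin, buys of that size would themselves move the price, so treat this as nothing more than rough arithmetic.)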

In any case, I would then supply the patches necessary for the software to function as it should (which I have pretty much already developed, though proper testing is another story) and also supply the necessary properly functional web resources (block explorer, forum, FAQ, etc.) to lend it legitimacy. This should theoretically trigger a boost in the currency's value, all else equal. There is just so much room for improvement.

Anyway, buying into a failing-for-reasons business with "potential" and then fixing it up is a valid strategy to beat the market, if you have the skills and inclination to do so (I think Buffett used this approach?). The limiting factor is labor/creativity/determination. It sort of resembles the real estate practice of buying broken-down-looking but structurally sound houses and fixing them up to resell.

But this kind of thing isn't really gold rush spotting, I don't think. It relies more on established markets and expectations / spotting existing failures to meet them. Gold rush spotting would be anticipating value in something that has never really been seen before so nobody is quite sure what to expect (but you have enough information to see that it is big, nonetheless).

Like maybe someone will come up with a prediction market + chat + news feed + addictive video game + college credit accumulating strategy + programming language familiarizer + social network + currency that somehow integrates seamlessly and boosts your IQ, productivity, emotional health, political influence, resume, sex appeal, and asset portfolio simultaneously the more you use it.

Of course, it would most likely utterly suck at all these things in the early stages, and look like an ugly, boring curiosity to all but the closest and most thoughtful inspection. Because otherwise, someone with money would have jumped on it already, right? (And probably thereby mutated it into something humdrum, safe, and normal -- but with really good graphics and marketing.)

As a tech geek you'd have an advantage: a temptation to own a small slice of this weird new, oddly useful and underrated thing, or at least to figure out how it works so that you can brag about it or try to do one better on it. And your brain would then trick you into thinking a big slice would be too risky or unnecessary, because it has no clear picture of a world where this kind of thing is valued/cool/normal, and, well, who wants to invest real money in these geeky obsessions anyway...

Comment author: RichardKennaway 21 July 2014 08:03:49AM 0 points [-]

I see that http://crypto-prices.com lists 483 (as of this moment) digital currencies, of which, as it happens, Emoticoin is not one, despite one of those Emoticoin pages purporting to link to its listing there.

How would one assess which of those currencies, if any, has a future? Or whether one should instead invent a 484th? What will it take for a digital currency to succeed, where others fail?

Comment author: RichardKennaway 19 July 2014 07:11:45AM 1 point [-]

The ideal FAI would ignore uncomputable possibilities. Therefore I should too.

You have to think about uncomputable possibilities to know what is computable.

Comment author: roystgnr 18 July 2014 06:33:50PM -1 points [-]

These questions are equivalent in the same sense as "how about just not setting X equal to pi" and "how about just setting X equal to e" are equivalent. Assuming you can do the latter is a prediction; assuming you can do the former is an antiprediction.

To the contrary, "just building the [very specific sort of] whole monster" is what's more equivalent to "just building a [very specific definition of] Friendly AI", an a priori improbable task.

Worse for the basilisk: at least in the case of Friendly AI you might end up stuck with nothing better to do but throw a dart and hope for a bullseye. But in the case of the basilisk, the acausal trade is only rational if you expect a high likelihood of the trade being carried out. If that likelihood is low, then you're just being nutty, which means it's unlikely that the other side of the trade will be upheld in any case (acausally trying to influence Omega's prediction of you may work if Omega is omniscient, but not so well if Omega is irrational). This lowers the likelihood still further... until the only remaining question is simply: what's the fixed point of x_{n+1} = x_n / 2?
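
Spelled out, that fixed point is zero:

```latex
x^* = \frac{x^*}{2} \quad\Longrightarrow\quad x^* = 0 ,
```

so the credibility of the trade, and with it any reason to hold up your end, shrinks to nothing.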

Comment author: RichardKennaway 18 July 2014 06:41:02PM 0 points [-]

These questions are equivalent in the same sense as "how about just not setting X equal to pi" and "how about just setting X equal to e" are equivalent. Assuming you can do the latter is a prediction; assuming you can do the former is an antiprediction.

Consider my parallel changed to "How about, you know, just not building an Unfriendly AI? Uhm... could the solution to the safe AI problem really be so easy?"

Comment author: army1987 18 July 2014 09:28:49AM 0 points [-]

Commercial aviation is just barely economically viable today, and fuel getting much more expensive would make it all but impossible for airlines to make any net profit. But yeah, this might change in the future.

Comment author: RichardKennaway 18 July 2014 05:13:10PM 0 points [-]

Commercial aviation is just barely economically viable today

[citation needed]

Comment author: Viliam_Bur 18 July 2014 12:09:04PM 3 points [-]

Also, in Newcomb's problem, the goal is to walk away with as much money as possible. So it's obvious what to optimize for.

What exactly is the goal with the Basilisk? To give as much money as possible, just to build an evil machine which would torture you unless you gave it as much money as possible; but luckily you did, so you kinda... "win"? You and your five friends are the selected ones who will get the enjoyment of watching the rest of humanity tortured forever? (Sounds like how some early Christians imagined Heaven: only the few most virtuous ones will get saved, and watching the suffering of the damned in Hell will increase their joy in their own salvation.)

Completely ignoring the fact that just throwing a lot of money around doesn't solve the problem of creating a safe recursively self-improving superhuman AI. (Quoting the Sequences: "There's a fellow currently on the AI list who goes around saying that AI will cost a quadrillion dollars—we can't get AI without spending a quadrillion dollars, but we could get AI at any time by spending a quadrillion dollars.") So these guys working on this evil machine... hungry, living in horrible conditions, never having a vacation or going on a date, never seeing a doctor, probably having mental breakdowns all the time, because they are writing the code that would torture them if they did any of that... is this the team we could trust to make sane and good decisions and get all the math right? If not, then we are pretty much fucked regardless of whether we donated to the Basilisk or not, because soon we are all getting transformed into paperclips anyway; the only difference is that 99.9999999% of us will get tortured before that.

How about, you know, just not building the whole monster in the first place? Uhm... could the solution to this horrible problem really be so easy?

Comment author: RichardKennaway 18 July 2014 04:53:31PM *  2 points [-]

How about, you know, just not building the whole monster in the first place? Uhm... could the solution to this horrible problem really be so easy?

This question is equivalent to: "How about, you know, just building a Friendly AI? Uhm... could the solution to the safe AI problem really be so easy?"

Comment author: eli_sennesh 17 July 2014 04:29:46PM 0 points [-]

Well of course it worries people! Precisely the function of consciousness (at least in my current view) is to "paint a picture" of wholeness and continuity that enables self-reflective cognition. Problem is, any given system doesn't have the memory to store its whole self within its internal representational data-structures, so it has to abstract over itself rather imperfectly.

The problem is that we currently don't know the structure, so the discord between the continuous, whole, coherent internal feeling of the abstraction and the disjointed, sharp-edged, many-pieced truth we can empirically detect is really disturbing.

It will stop being disturbing about five minutes after we figure out what's actually going on, when everything will once again add up to normality.

Comment author: RichardKennaway 17 July 2014 07:35:22PM 1 point [-]

Well of course it worries people!

It seems to only worry people when they notice unfamiliar (to them) aspects of the complexity of consciousness. Familiar changes in consciousness, such as sleep, dreams, alcohol, and moods, they never see a problem with.

Comment author: Lightwave 17 July 2014 08:13:54AM 0 points [-]

Sleep might be a Lovecraftian horror.

Going even further, some philosophers suggest that consciousness isn't even continuous: as you refocus your attention, as you blink, there are gaps that you don't notice, just as there are gaps in your vision when you move your eyes from one place to another, yet to you it appears as a continuous experience.

Comment author: RichardKennaway 17 July 2014 11:17:29AM *  2 points [-]

Consciousness is complex. It is a structured thing, not an indivisible atom. It is changeable, not fixed. It has parts and degrees and shifting, uncertain edges.

This worries some people.

Comment author: johnswentworth 16 July 2014 04:38:57PM 0 points [-]

It's not a question of whether the code "was conscious"; it's a question of whether you projected consciousness onto the code. Did you think of the code as something which could be bargained with?

Comment author: RichardKennaway 16 July 2014 07:18:53PM *  0 points [-]

Did you think of the code as something which could be bargained with?

No, if it's been written right, it knows the perfect move to make in any position.

Like the Terminator. "It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead." That's fictional, of course, but is it a fictional conscious machine or a fictional unconscious machine?
