Comment author: topynate 08 January 2016 05:28:15AM 1 point [-]

The first image is a dead hotlink. It's in the Internet Archive, and I've uploaded it to Imgur.

Comment author: Nornagest 25 February 2015 08:09:18PM 2 points [-]

Getting a handgun in Britain is not straightforward, although I suppose wizards would have certain advantages in this department.

Comment author: topynate 25 February 2015 08:15:20PM 3 points [-]

It was considerably easier before the Dunblane massacre (1996).

Comment author: skeptical_lurker 25 January 2015 09:21:54AM 5 points [-]

An unusual feature of an AI of this form is its speed: while the off-the-blockchain subprocesses can run at normal speed, IIRC the blockchain itself is optimistically projected to have a block time of 12 seconds. This means you couldn't have a realtime conversation with the AI as a whole, nor could it drive a car, for instance, although a subprocess might be able to complete such tasks. Overall, it would perhaps be more like a superintelligent ant colony.

Comment author: topynate 27 January 2015 12:11:28AM 2 points [-]

That very much depends on what you choose to regard as the 'true nature' of the AI. In other words, we're flirting with reification fallacy by regarding the AI as a whole as 'living on the blockchain', or even being 'driven' by the blockchain. It's important to fix in mind what makes the blockchain important to such an AI and to its autonomy.

This, I believe, is always the financial aspect. The on-blockchain process is autonomous precisely because it can directly control resources; it loses autonomy in so far as its control of resources no longer fulfils its goals.

If you wish, you can consider the part of the AI which verifies correct computation and interfaces with 'financial reality' as being its real locus of selfhood, but bear in mind that even the goal description/fulfilment logic can be zero-knowledge proofed and so exist off-chain. From my perspective, the on-chain component of such an AI looks a lot more like a combination of robotic arm and error-checking module.

Comment author: rwallace 12 December 2010 12:44:46AM 1 point [-]

Here's one of mine:

Chooser of the Slain

Not actually fan fiction, but inspired by two existing works many readers will recognize.

Comment author: topynate 09 October 2014 08:06:44AM 0 points [-]

That was pretty good, thanks.

Comment author: adam_strandberg 09 July 2014 04:14:07AM *  6 points [-]

(How many different DAGs are possible if you have 600 nodes? Apparently, >2^600.)

Naively, I would expect it to be closer to 600^600 (the number of ways each of 600 nodes could point to one of the others).

And in fact, it is some complicated thing that seems to scale much more like n^n than like 2^n: http://en.wikipedia.org/wiki/Directed_acyclic_graph#Combinatorial_enumeration

Comment author: topynate 09 July 2014 03:31:08PM 8 points [-]

There's an asymptotic approximation in the OEIS: a(n) ~ n!·2^(n(n-1)/2)/(M·p^n), with M and p constants. So log(a(n)) = Θ(n^2), as opposed to log(2^n) = Θ(n), log(n!) = Θ(n log n), and log(n^n) = Θ(n log n).
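For small n, the exact counts from that OEIS entry (A003024, labeled DAGs) can be reproduced with the standard inclusion-exclusion recurrence over the set of source nodes; a minimal Python sketch:

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def labeled_dags(n):
    """Number of labeled DAGs on n nodes (OEIS A003024).

    Uses the inclusion-exclusion recurrence: pick the k nodes with no
    incoming edges, connect them arbitrarily to the remaining n-k nodes
    (2^(k(n-k)) ways), and recurse on the rest.
    """
    if n == 0:
        return 1
    return sum(
        (-1) ** (k - 1) * comb(n, k) * 2 ** (k * (n - k)) * labeled_dags(n - k)
        for k in range(1, n + 1)
    )

print([labeled_dags(n) for n in range(6)])  # [1, 1, 3, 25, 543, 29281]
```

Already at n = 5 the count (29281) far exceeds 2^5 = 32, consistent with the log(a(n)) = Θ(n^2) growth above.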

Comment author: topynate 21 March 2014 11:13:28PM 1 point [-]

I want a training session in Unrestrained Pessimism.

Comment author: topynate 06 March 2014 01:17:46AM 0 points [-]

As someone who moved to Israel at the age of 25 with very minimal Hebrew (almost certainly worse than yours), went to an ulpan for five months and then served in the IDF for 18 months while somehow avoiding the 3 month language course I certainly should have been placed in based on my middle-of-ulpan level of fluency:

Ulpan (not army ulpan, real ulpan) is actually pretty good at doing what it's supposed to. I had a great time; it depends on the ulpan, but I haven't heard of a single one that would be psychologically damaging. Perhaps your experience with a less intensive system as a minor has coloured your views? I know that I got put off Hebrew by the quality of teaching I had around the age of 11-13. I'm not sure if you could get benefits to do a free course (it would depend on your status), but that would certainly take off the pressure to learn Hebrew quickly. You'd have to delay your draft date, which is usually possible.

'Army ulpan' is, according to my friends, a bit of a joke, but that's three months you'd be with a bunch of Anglos, being taught by 19-year-old girls, and going on semi-regular day trips, which is fun, rather than jumping straight into basic training, which sucks. It's also three months less spent bored to tears at the end of your service, doing the same thing you've been doing for the last two years.

You can't learn spoken Hebrew by reading. No way. Not only do you need grammatical knowledge to know which vowels should be used, but the spoken and written forms become quite divergent above the most basic level. You need to speak and hear Hebrew for most of the day, every day, which could be a pretty lonely experience in the US. Think Hebrew pop music, armed with a copy of the lyrics and the translation. Learn the songs and what they mean (it's just repetition) and you'll automatically pick up the most common vocabulary. Hebrew grammar isn't that hard for an English speaker; verb conjugation is traditionally considered the hard part, and that's mostly just memorization. Genders are a pain, but guessing a word's gender wrongly won't impair comprehension.

Comment author: lukeprog 15 January 2014 09:26:54PM 8 points [-]

fairly novel contribution

Eh? People have been discussing this point for at least a decade, and I previously gave it as the main reason I'm not signed up for cryonics.

Comment author: topynate 15 January 2014 10:14:22PM 1 point [-]

Then perhaps my assessment was mistaken! But in any case, I wasn't referring to the broad idea of cryonics patients ending up in deathcubes, but to their becoming open-access in an exploitative society; cf. the Egan short.

Comment author: jowen 13 January 2014 11:21:52PM 1 point [-]

This argument has made me start seriously reconsidering my generally positive view of cryonics. Does anyone have a convincing refutation?

The best I can come up with is that if resuscitation is likely to happen soon, we can predict the values of the society we'll wake up in, especially if recovery becomes possible before more potentially "value disrupting" technologies like uploading and AI are developed. But I don't find this too convincing.

Comment author: topynate 15 January 2014 08:22:37PM 1 point [-]

My attempt at a reply turned into an essay, which I've posted here.

Recreational Cryonics

1 topynate 15 January 2014 08:21PM

We recently saw a post in Discussion by ChrisHallquist, asking to be talked out of cryonics. It so happened that I'd just read a new short story by Greg Egan, which gave me the inspiration to write the following:

 

It is likely that you would not wish for your brain-state to be available to all and sundry, subjecting you to the possibility of being simulated according to their whims. However, you know nothing about the ethics of the society that will exist when the technology to extract and run your brain-state is developed. Thus you are taking a risk of a negative outcome that may be less attractive to you than mere non-existence.

 

I had little expectation of this actually convincing anyone, but thought it was a fairly novel contribution. When jowen's plea for a refutation went unanswered, I began attempting one myself. What I ended up with closes the door on the scenario I outlined, but opens one I find rather more disturbing.
