Comment author: SilasBarta 13 June 2013 09:34:36PM *  7 points [-]

Just thought I'd throw this out there:

TabooBot: Return D if opponent's source code contains a D; C otherwise.

To avoid mutual defection with other bots, it must (like with real prudish societies!) indirectly reference the output D. But then other kinds of bots can avoid explicit reference to D, requiring a more advanced TabooBot to have other checks, like defecting if the opponent's source code calls a modifier on a string literal.
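The idea can be sketched in a few lines (a toy model: the bot names and the source-code-as-string convention are illustrative, not from any actual tournament):

```python
def taboo_bot(opponent_source: str) -> str:
    """Return 'D' if the opponent's source contains the letter D, else 'C'."""
    return "D" if "D" in opponent_source else "C"

# The naive version above contains a literal "D" in its own source, so two
# copies of it would mutually defect. Hence the indirect reference:

def taboo_bot_indirect(opponent_source: str) -> str:
    defect = chr(ord("C") + 1)  # one past "C" in ASCII
    return defect if defect in opponent_source else "C"
```

Note that `taboo_bot_indirect`'s own source never spells out the taboo output, so two copies of it cooperate with each other, while it still defects against anything that mentions the letter explicitly.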

Comment author: [deleted] 11 June 2013 04:08:13AM 3 points [-]

A better summary of Aaronson's paper:

I want to know:

Were Bohr and Compton right or weren’t they? Does quantum mechanics (specifically, say, the No-Cloning Theorem or the uncertainty principle) put interesting limits on an external agent’s ability to scan, copy, and predict human brains and other complicated biological systems, or doesn’t it?

EY is mentioned once, for his work in popularizing cryonics, and not for anything fundamental to the paper. Several other LW luminaries like Silas Barta and Jaan Tallinn show up in the acknowledgements.

If you have followed Aaronson at all in the past couple of years, the new stuff begins around section 3.3, page 36. His definition of "freedom" is interesting at first glance, and may dovetail slightly with the standard reduction of free will.

In response to comment by [deleted] on [link] Scott Aaronson on free will
Comment author: SilasBarta 11 June 2013 06:43:28PM 3 points [-]

Eh, I don't think I count as a luminary, but thanks :-)

Aaronson's crediting me is mostly due to our exchanges on the blog for his paper/class about philosophy and theoretical computer science.

One of them was about Newcomb's problem, where my main criticisms were:

a) he's overstating the level and kind of precision you would need when measuring a human for prediction; and

b) that the interesting philosophical implications of Newcomb's problem follow from already-achievable predictor accuracies.

The other was about average-human performance on 3SAT, where I was skeptical that the average person actually notices global symmetries like the pigeonhole principle. (And, to a lesser extent, whether the order in which you stack objects affects their height...)

Comment author: thomblake 10 June 2013 06:53:25PM 1 point [-]

I think steelmanning would instead be if you listed more realistic dangers of that place rather than more extreme dangers

I think you missed what was going on there. In the hypothetical, Feynman's mom was concerned about the plague and for the steelman Feynman corrected it to TB. The assumption there is that TB is a more realistic threat than the plague.

Comment author: SilasBarta 10 June 2013 09:15:01PM 0 points [-]

I see that now. It didn't help that Luke_A_Somers, in defending what he did as steelmanning, kept insisting that he was "making the original argument worse".

(In any case, I don't think TB was the "steelest" man you could make here, nor the mother's real rejection.)

Comment author: ChristianKl 10 June 2013 05:11:49AM 0 points [-]

I think the 1-character-per-second speed is achieved even with EEGs that are much better than consumer-grade equipment.

It could be possible to do better but it probably won't be easy.

Comment author: SilasBarta 10 June 2013 03:57:25PM *  0 points [-]

Sure, but I don't think EEG quality (in terms of lab vs. consumer grade) is the real bottleneck; I think it's minimizing the amount of input that must be provided at all by exploiting the regularity of the input that will be provided. The techniques available here may have been overlooked.

Comment author: Emile 09 June 2013 09:05:30PM 0 points [-]

I played a bit with Emotiv and found a maximum of one character per second pretty believable - at least, if you stick to actual brain signals and not signals from face muscles (and even with face muscles, one character per second seems in the right ballpark).

Comment author: SilasBarta 10 June 2013 01:58:27AM 0 points [-]

One character is not the same as one byte of (maximally compressed) information. The whole point of programs like Dasher (and word suggestion features in general) is to take advantage of the low entropy of text data relative to its uncompressed representation. Characteristic screenshot

Were you using a static, non-adaptive, on-screen keyboard? If so, that's why I would think connecting it to Dasher should result in a speed greater than one char per second, at least after the training period (both human training, and character-probability-distribution training).
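The entropy point can be made concrete even with a crude zeroth-order estimate (a toy model; the sample string is illustrative, and adaptive predictors like Dasher exploit context to do far better than this - Shannon's experiments put contextual English closer to ~1 bit per character):

```python
import math
from collections import Counter

def bits_per_char(text: str) -> float:
    """Zeroth-order entropy of a text sample, in bits per character."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

sample = "the low entropy of text relative to its uncompressed representation"
uniform = math.log2(27)          # ~4.75 bits/char if 26 letters + space were equiprobable
actual = bits_per_char(sample)   # already noticeably lower, even ignoring context
```

Even this context-free letter-frequency model beats the uniform assumption a static on-screen keyboard implicitly makes, which is why routing the same slow selection bandwidth through a predictive interface should yield more than one character per second.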

Comment author: Luke_A_Somers 09 June 2013 02:36:42AM 0 points [-]

Yes. We both tweaked matters so that the example became a steelmanning. You changed what Richard said. I changed what his mom said. We both changed something, and after either or both of our changes, it was an example of steelmanning.

Comment author: SilasBarta 09 June 2013 02:48:16AM 0 points [-]

Right, except yours missed out on the whole "make it a better argument that you're refuting" thing.

Comment author: ChristianKl 08 June 2013 06:39:41PM 1 point [-]

Brain-computer interfaces for the disabled have been tried. There's plenty of academic work on the topic. For some people who are completely paralyzed the technology allows them to communicate by typing 1 character per second.

Comment author: SilasBarta 08 June 2013 08:19:48PM 1 point [-]

Right, I found that information at the time, but wasn't convinced this was the best achievable performance for such individuals (let alone price-performance), considering what should be possible with consumer-grade BCIs + Dasher.

I still can't convince myself that this is the best they can do. Personal project time?

Comment author: Luke_A_Somers 08 June 2013 12:54:34PM 0 points [-]

We're talking about Feynman steelmanning, not me.

Feynman would have been steelmanning if she had made a worse argument to begin with, yet he responded both to it and to a better one.

Comment author: SilasBarta 08 June 2013 08:13:33PM 0 points [-]

Right, and we're talking about what true steelmanning would be in this case, right?

Comment author: Luke_A_Somers 07 June 2013 10:20:48PM 0 points [-]

That would work too. Note that I was making what he did steelmanning by way of making the original argument worse - we're working on opposite ends but I think we agree on definition.

Comment author: SilasBarta 07 June 2013 11:43:38PM 0 points [-]

I don't think we're agreeing on definition: I thought steelmanning was necessarily making the argument better, not worse.

Comment author: Luke_A_Somers 04 June 2013 08:55:13PM 17 points [-]

As far as I can tell, that's simply taking a facile objection seriously.

Steelmanning would be if she said that he could get, say, bubonic plague from her - and then he addressed not only that but also concerns about tuberculosis.

Comment author: SilasBarta 07 June 2013 08:12:37PM *  1 point [-]

I think steelmanning would instead be if you listed more realistic dangers of that place rather than more extreme dangers: for example, "TB is not a threat, but let's look at what the biggest danger would be, and see if the concern is still justified. How about the danger that people may not want to be around you if you go there too much [probably closer to what she actually had in mind] ..."
