Comment author: NancyLebovitz 20 July 2012 03:07:19PM 7 points

This is the kind of thing which makes me wonder about a community norm of taking psychological research (which may be badly designed or prove less than it seems to) very seriously.

Comment author: printing-spoon 21 July 2012 05:38:54AM 2 points

It's not just a community norm; big chunks of the Sequences seem to be built on small amounts of recent research.

Comment author: JackV 20 July 2012 11:18:15AM 1 point

I don't know if the idea works in general, but if it works as described I think it would still be useful even if it doesn't meet this objection. I don't foresee any authentication system that can distinguish between "user wants money" and "user has been blackmailed to say they want money as convincingly as possible and not to trigger any hidden panic buttons", but even if it can't, a password you can't tell someone would still be more secure because:

  • you're not vulnerable to people ringing you up and asking what your password is for a security audit, unless they can persuade you to log on to the system for them
  • you're not vulnerable to being kidnapped and coerced remotely; you have to be coerced wherever the log-on system is

I think the "stress detector" idea is one that is unlikely to work unless someone works on it specifically to tell the difference between "hurried" and "coerced", but I don't think the system is useless because it doesn't solve every problem at once.

OTOH, there are downsides to being too secure: you're less likely to be kidnapped, but it's likely to be worse if you ARE.
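On the "hidden panic buttons" point: one common framing is a duress code -- a second password that logs you in normally but silently raises an alarm. A minimal sketch, with made-up credentials and a hypothetical alarm hook (a real system would use salted, slow hashes and constant-time comparison):

```python
import hashlib

def digest(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

# Illustrative credentials only.
NORMAL_HASH = digest("correct horse battery staple")
DURESS_HASH = digest("correct horse battery stable")   # deliberately similar

def trigger_silent_alarm() -> None:
    # Hypothetical hook: flag the session for security review without
    # giving the coercer any visible sign that anything happened.
    print("(silent alarm raised)")

def check_login(password: str) -> bool:
    h = digest(password)
    if h == DURESS_HASH:
        trigger_silent_alarm()
        return True          # log in normally so the coercer sees success
    return h == NORMAL_HASH

check_login("correct horse battery stable")  # -> silent alarm, login succeeds
```

Of course, this only helps against coercion if the attacker can't tell which password they were given, which is the same "as convincingly as possible" problem raised above.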

Comment author: printing-spoon 21 July 2012 05:33:54AM 0 points

> you're not vulnerable to people ringing you up and asking what your password is for a security audit, unless they can persuade you to log on to the system for them

Easier to avoid with basic instruction.

> you're not vulnerable to being kidnapped and coerced remotely; you have to be coerced wherever the log-on system is

The enemy knows the system: they can copy the login system in your cell.

Comment author: printing-spoon 06 June 2012 04:11:34AM 0 points

Edit: I suspect it would float, but only for a little bit before the lighter gas diffuses out.

Comment author: gwern 05 June 2012 09:37:07PM 1 point

Maybe this is a trick question, but why wouldn't it float?

Comment author: printing-spoon 06 June 2012 04:10:35AM *  1 point

Because there could still be too much of the solid part for it to have a density less than air's?

Edit: I suspect it would float for a little bit before the lighter gas diffuses out.
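To put rough numbers on the "too much of the solid part" worry (the figures below are illustrative assumptions, not anything from the thread: air at about 1.2 kg/m^3, hydrogen at about 0.09 kg/m^3, and non-porous fused silica at about 2200 kg/m^3 as the solid):

```python
# Rough buoyancy check: does a hydrogen-filled porous solid float in air?
# All material constants are approximate, illustrative values.

AIR_DENSITY = 1.2        # kg/m^3 at roughly 20 C, sea level
H2_DENSITY = 0.09        # kg/m^3 under the same conditions
SOLID_DENSITY = 2200.0   # kg/m^3, e.g. non-porous fused silica

def effective_density(solid_fraction):
    """Bulk density of a composite: solid skeleton plus hydrogen-filled pores."""
    return solid_fraction * SOLID_DENSITY + (1 - solid_fraction) * H2_DENSITY

for f in (0.01, 0.001, 0.0005):
    rho = effective_density(f)
    verdict = "floats" if rho < AIR_DENSITY else "sinks"
    print(f"solid fraction {f:.2%}: {rho:.2f} kg/m^3 -> {verdict}")
```

Even at 0.1% solid by volume the composite still sinks; the skeleton has to be well under about 0.05% of the volume before the bulk density drops below air's, which is the sense in which there can "still be too much of the solid part".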

Comment author: gjm 26 March 2012 12:32:21PM 0 points

I will do so once there's a balancing karma sink :-).

Comment author: printing-spoon 29 March 2012 12:13:19AM 0 points

I don't care if he gets a few meaningless internet points for making a poll.

Comment author: JonathanLivengood 14 January 2012 12:16:13PM *  -1 points

I can try. Or, at least give a sketch. (Hand-waving ahead ...)

The Ants problem -- if I'm understanding it correctly -- is a problem of coordinated action. We have a community of ants, and the community has some goals: collecting food, taking over opposing hills, defending friendly hills. Imagine you are an ant in the community. What does rational behavior look like for you?

I think that is already enough to launch us on lots of hard problems:

  • What does winning look like for a single ant in the Ants game? Does winning for a single ant even make sense or is winning completely parasitic on the community or colony in this case? Does that tell us anything about humans?

  • If all of the ants in my community share the same decision theory and preferences, will the colony succeed or fail? Why?

  • If the ants have different decision theories and/or different preferences, how can they work together? (In this case, working together isn't very hard to describe ... it's not like the ants fight themselves, but we might ask what kinds of communities work well -- i.e. is there an optimal assortment of decision theories and/or preferences for individuals?)

  • If the ants have different preferences, how might we apply results like Arrow's Theorem, or how might we work around it? (See the worked example after this list.)

...
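A tiny worked example of the Arrow's-Theorem worry (the ballots are made up, and the code is just an illustration, not anything from the Ants game itself): three ants each hold a perfectly transitive ranking over targets A, B, and C, yet pairwise majority voting among them produces a cycle (Condorcet's paradox).

```python
# Three "ants" with transitive individual preferences over targets A, B, C;
# pairwise majority voting over their ballots yields a preference cycle.
from itertools import combinations

# Each ballot ranks targets from most to least preferred (illustrative data).
ballots = [
    ("A", "B", "C"),
    ("B", "C", "A"),
    ("C", "A", "B"),
]

def majority_prefers(x, y):
    """True if a strict majority of ballots rank x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

for x, y in combinations("ABC", 2):
    if majority_prefers(x, y):
        print(f"majority prefers {x} over {y}")
    elif majority_prefers(y, x):
        print(f"majority prefers {y} over {x}")
```

Each individual ranking is transitive, but the majority relation cycles (A beats B, B beats C, C beats A), so there is no coherent "colony preference" to read off -- which is exactly the sort of obstruction Arrow's Theorem generalizes.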

So, there's a hand-wavy sketch of what I had in mind. But I don't know, is it still too vague to be useful?

EDIT: I should say that I realize the game works with a bot controlling the whole colony, but I don't think that changes the problems in principle, anyway. But maybe I'm missing something there.

Comment author: printing-spoon 14 January 2012 07:16:07PM *  0 points

> If the ants have different decision theories and/or different preferences, how can they work together?

> EDIT: I should say that I realize the game works with a bot controlling the whole colony, but I don't think that changes the problems in principle, anyway.

What?

The ants are not even close to individuals. They're dots. They're dots that you move around.

Comment author: printing-spoon 13 January 2012 09:39:10PM 0 points

The wormhole-wing-trumpet logo thing is a bit aliased.

Comment author: JonathanLivengood 13 January 2012 06:32:04AM 0 points

I didn't think it sounded all that easy ... :)

Comment author: printing-spoon 13 January 2012 09:36:23PM 0 points

Can you give an example for Ants?

Comment author: TheOtherDave 12 January 2012 03:09:30PM 4 points

Heh. Well, asking them "Of the choices the world faces, which ones seem most important to HS students?" would probably have sounded condescending.

Comment author: printing-spoon 13 January 2012 04:35:42AM 0 points

Do you think this is his real motivation? I can't imagine what he expects to learn.

Comment author: JonathanLivengood 12 January 2012 08:31:00AM 6 points

Just spitballing here:

  • Promote the AI challenge as a rationalist meetup topic with the goal of having several working groups

  • Instead of trying to get one big group with a leader right from the start, appoint (or whatever) several leaders: assign to each leader a small collection of interested people

  • Be clear about what you want the leaders to do: what are the short- and medium-range goals?

  • Put up an early post asking people to express interest and (maybe) skill-sets so that teams could be assembled with some balance / hope of accomplishing something

  • Keep in contact with the various leaders and see where people are getting stuck (I'm assuming that you are ultimately the person in charge of this project); periodically, have the leaders talk to each other -- but not extremely often; post regular discussion threads focused on solving specific "We're stuck on this" problems

  • Try to reframe the problem or parts of the problem in a way that connects to generic rationality, so that non-programmers can contribute something -- looking over the old thread, it seems that a lot of people were intimidated by the threat of having to code stuff, but the programmers might nonetheless get a good idea or two from what non-programmers have to say about generic rationalist-type problems

  • Make some direct suggestions about the "worthwhile things" you mention. For example, apart from the AI project itself, what methods would you suggest site members use to cooperate and why? (Okay, maybe there isn't much more to be said directly about positive publicity and advancing AI ... but then, maybe there is ...)

  • Set benchmarks for when things should be done, even if those benchmarks have to be re-set several times along the way

Comment author: printing-spoon 13 January 2012 04:25:57AM 2 points

> Try to reframe the problem or parts of the problem in a way that connects to generic rationality, so that non-programmers can contribute something

This is harder than it sounds.
