Maybe this is a trick question, but why wouldn't it float?
Because there could still be too much of the solid part for it to have a density less than air's?
edit: I suspect it would float, but only for a little while before the lighter gas diffuses out.
I will do so once there's a balancing karma sink :-).
I don't care if he gets a few meaningless internet points for making a poll.
I can try. Or, at least give a sketch. (Hand-waving ahead ...)
The Ants problem -- if I'm understanding it correctly -- is a problem of coordinated action. We have a community of ants, and the community has some goals: collecting food, taking over opposing hills, defending friendly hills. Imagine you are an ant in the community. What does rational behavior look like for you?
I think that is already enough to launch us on lots of hard problems:
What does winning look like for a single ant in the Ants game? Does winning for a single ant even make sense or is winning completely parasitic on the community or colony in this case? Does that tell us anything about humans?
If all of the ants in my community share the same decision theory and preferences, will the colony succeed or fail? Why?
If the ants have different decision theories and/or different preferences, how can they work together? (In this case, working together isn't very hard to describe ... it's not like the ants fight themselves, but we might ask what kinds of communities work well -- i.e. is there an optimal assortment of decision theories and/or preferences for individuals?)
If the ants have different preferences, how might we apply results like Arrow's Theorem or how might we work around it?
...
So, there's a hand-wavy sketch of what I had in mind. But I don't know, is it still too vague to be useful?
EDIT: I should say that I realize the game works with a bot controlling the whole colony, but I don't think that changes the problems in principle, anyway. But maybe I'm missing something there.
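To make the coordination point concrete, here's a minimal sketch (not the actual AI Challenge engine; the grid, ants, and food positions are made up) of how "each ant acts for itself" differs from a colony-level assignment:

```python
# Toy illustration of coordinated vs. uncoordinated ants gathering food.
# Positions are (row, col) tuples on a grid; distance is Manhattan distance.
# Policy A: each ant selfishly targets its nearest food, so several ants
# may chase the same tile. Policy B: a simple coordinated greedy assignment
# gives each ant a distinct target.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def selfish_targets(ants, food):
    """Each ant independently picks its nearest food tile."""
    return [min(food, key=lambda f: manhattan(ant, f)) for ant in ants]

def coordinated_targets(ants, food):
    """Greedy one-to-one assignment: closest (ant, food) pairs claimed first."""
    pairs = sorted((manhattan(a, f), a, f) for a in ants for f in food)
    taken_ants, taken_food, assignment = set(), set(), {}
    for _, a, f in pairs:
        if a not in taken_ants and f not in taken_food:
            assignment[a] = f
            taken_ants.add(a)
            taken_food.add(f)
    return [assignment[a] for a in ants]

ants = [(0, 0), (0, 1)]
food = [(0, 2), (5, 5)]
print(selfish_targets(ants, food))      # both ants chase (0, 2)
print(coordinated_targets(ants, food))  # each ant gets its own target
```

The interesting question above is exactly the gap between these two policies: under what preferences and decision theories do individually rational ants reproduce (or beat) the coordinated assignment?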
If the ants have different decision theories and/or different preferences, how can they work together?
EDIT: I should say that I realize the game works with a bot controlling the whole colony, but I don't think that changes the problems in principle, anyway.
What?
The ants are not even close to individuals. They're dots. They're dots that you move around.
I didn't think it sounded all that easy ... :)
Can you give an example for Ants?
Heh. Well, asking them "Of the choices the world faces, which ones seem most important to HS students?" would probably have sounded condescending.
Do you think this is his real motivation? I can't imagine what he expects to learn.
Just spitballing here:
Promote the AI challenge as a rationalist meetup topic with the goal of having several working groups
Instead of trying to get one big group with a leader right from the start, appoint (or whatever) several leaders: assign to each leader a small collection of interested people
Be clear about what you want the leaders to do: what are the short and medium range goals
Put up an early post asking people to express interest and (maybe) skill-sets so that teams could be assembled with some balance / hope of accomplishing something
Keep in contact with the various leaders and see where people are getting stuck (I'm assuming that you are ultimately the person in charge of this project); periodically, have the leaders talk to each other -- but not extremely often; post regular discussion threads focusing on solving specific "We're stuck on this" problems
Try to reframe the problem or parts of the problem in a way that connects to generic rationality, so that non-programmers can contribute something -- looking over the old thread, it seems that a lot of people were intimidated by the threat of having to code stuff, but the programmers might nonetheless get a good idea or two from what non-programmers have to say about generic rationalist-type problems
Make some direct suggestions about the "worthwhile things" you mention. For example, apart from the AI project itself, what methods would you suggest site members use to cooperate and why? (Okay, maybe there isn't much more to be said directly about positive publicity and advancing AI ... but then, maybe there is ...)
Set benchmarks for when things should be done, even if those benchmarks have to be re-set several times along the way
Try to reframe the problem or parts of the problem in a way that connects to generic rationality, so that non-programmers can contribute something
This is harder than it sounds.
That actually sounds personally kinda nice. I wish I'd been coerced into seriously reading Dante and so on when I was younger, instead of learning completely false but vaguely-reasonable-sounding stuff about genetics and airplanes and Bernoulli's law.
reasonable-sounding stuff about genetics and airplanes and Bernoulli's law
What is this referring to?
I don't know if the idea works in general, but if it works as described, I think it would still be useful even if it doesn't meet this objection. I don't foresee any authentication system that can distinguish between "user wants money" and "user has been blackmailed to say they want money as convincingly as possible and not to trigger any hidden panic buttons", but even so, a password you can't tell someone would still be more secure because:
I think the "stress detector" idea is unlikely to work unless someone specifically designs it to tell the difference between "hurried" and "coerced", but I don't think the system is useless just because it doesn't solve every problem at once.
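A minimal sketch of the underlying idea, assuming an implicit-sequence-learning scheme (in the spirit of Bojinov et al.'s work on passwords users can't consciously report; this is not their protocol, and all the numbers below are fabricated): the server never checks a secret string, it checks whether the user *performs* measurably better on a covertly trained sequence than on unfamiliar control sequences.

```python
# Toy sketch: authenticate by performance gap, not by a recallable secret.
# `trained_times` are per-keystroke reaction times (seconds) on the sequence
# the user was implicitly trained on; `control_times` are reaction times on
# random control sequences. The 10% margin is an arbitrary made-up threshold.

def mean(xs):
    return sum(xs) / len(xs)

def authenticates(trained_times, control_times, margin=0.10):
    """Accept iff the user is at least `margin` faster on the trained
    sequence than on the unfamiliar control sequences."""
    return mean(trained_times) < (1 - margin) * mean(control_times)

# Fabricated example data:
legit = authenticates([0.41, 0.39, 0.40], [0.52, 0.55, 0.50])     # trained user
imposter = authenticates([0.51, 0.53, 0.52], [0.52, 0.50, 0.53])  # no training
print(legit, imposter)
```

Note that this directly illustrates the objection above: a coerced-but-legitimate user still carries the trained skill, so the performance gap survives coercion; what the scheme buys you is that the secret can't be extracted, written down, or phished.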
OTOH, there are downsides to being too secure: you're less likely to be kidnapped, but it's likely to be worse if you ARE.
Easier to avoid with basic instruction.
If the enemy knows the system, they can copy the login system in your cell.