EDIT: Clarified some things.
Suppose we have a bunch of spherical billiard balls rolling around on an infinite plane. Suppose there is no friction and the collisions are elastic. They don't feel the influence of gravity or any other force except the collisions. At least one ball is moving. Can they ever return to their original positions and velocities?
1) If only positions should match, then the answer is yes. Just roll two identical balls at each other.
2) If both positions and velocities should match, then the answer is no. Here's a sketch of a proof:
Assume without loss of generality that some balls have nonzero velocity along the X axis. Let's define the following function of time: take all balls having nonzero X velocity, and take the lowest X coordinate of their centers. A case analysis shows that the function can change from increasing to decreasing, but never the other way. Therefore it cannot be periodic. But since it's determined from the configuration, that means the configuration can't be periodic, QED.
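For concreteness, the function in question can be written as follows (just a restatement of the definition above, with x_i and v_{x,i} denoting the x-coordinate and x-velocity of ball i's center):

```latex
f(t) = \min \bigl\{\, x_i(t) \;:\; v_{x,i}(t) \neq 0 \,\bigr\}
```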
Note that the case analysis is tricky because the function can be discontinuous. The interesting cases are when a ball's X velocity becomes zero due to a collision, or (more subtly) two balls with only Y velocity gain X velocity due to an off-center collision. But I think the statement about monotonicity still holds.
Yeah, I thought about those two cases as well, and I agree the argument handles them correctly. Perhaps we could make the proof a bit simpler by picking the X direction to be one that no ball ever travels perpendicular to (although in fact I can't even think of a proof that such a direction exists).
That wouldn't help with the first case, because a ball can stop completely. Let's keep the proof as it is :-)
The full answer is: they cannot return if there are only finitely many balls, but they can if there are infinitely many.
Let's first assume that there are finitely many balls. As Thomas pointed out, we can assume that the center of mass is fixed. Let's consider R defined to be the distance from the center of mass to the furthest ball and call that furthest ball B (which ball that is might change over time). R might be decreasing at the start - we might start with B going towards the center of mass. But if R decreased forever then we would know that they never return to their starting location (since R would be different)! So at some point it must become at least as large as it was at the start. At that point either the derivative of R is 0 or it is positive. In either case, R must increase forever onwards - which again shows it can't return to its original starting point. Why is it always increasing from that point onwards? Well, the only way for the ball B to turn around and start heading back towards the center is if there is another ball further away than it to collide with it. But that can't be, since B is the furthest out ball! (Edit: I see now that this is essentially equivalent to cousin_it's argument.)
For infinitely many balls, you can construct a situation where they return to their original position! We're going to put a bunch of balls on a line (you don't even need the whole plane). In the interval [0,1], there'll be two balls heading in towards each other at unit speed, one at the left edge of the interval and one at the right. Then do the same thing for each interval [k,k+1]. When you let them go, each pair in each interval will collide and then head outwards at unit speed. Then each ball will collide at the interval boundary with the ball from the neighboring interval, which sets everything back to the starting configuration. I.e. all balls collide first with their neighbor on one side, then with their neighbor on the other side, returning them to their starting position.
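Here is a minimal sketch of a finite, periodic version of this construction (my own illustration, not part of the original comment): N unit intervals arranged on a circle, so every ball has neighbors on both sides standing in for the infinitely many intervals. For identical point masses an elastic collision just swaps velocities, so the dynamics can be computed exactly.

```python
N = 5  # number of unit intervals on the circle (circumference N)

# Ball at the left edge of each interval moving right, ball at the right
# edge moving left.  Positions are taken modulo N.
positions, velocities = [], []
for k in range(N):
    positions += [float(k), float((k + 1) % N)]
    velocities += [1.0, -1.0]

initial = (list(positions), list(velocities))

def advance(dt):
    # Free flight: each ball moves at constant velocity on the circle.
    for i in range(len(positions)):
        positions[i] = (positions[i] + velocities[i] * dt) % N

def collide():
    # Balls that coincide with different velocities have just met head-on;
    # for equal masses the elastic collision swaps their velocities.
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if abs(positions[i] - positions[j]) < 1e-9 and velocities[i] != velocities[j]:
                velocities[i], velocities[j] = velocities[j], velocities[i]

# First collisions happen at t = 0.5 (interval midpoints), the second round
# at t = 1.0 (interval boundaries), so two half-steps cover one full period.
advance(0.5); collide()
advance(0.5); collide()

print(positions == initial[0] and velocities == initial[1])  # expect True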
Possibly not a rational answer (so possibly not living up to the Less Wrong philosophy!), but given the assumption of an infinite plane I would think the probability of returning to the original position and velocity is vanishingly small.
Something would need to constrain the velocity vectors to prevent any ball from taking off in some direction that could be described as "away from the group". Perhaps that could be understood as being on a path that the path of no other ball can possibly intersect. At that point the ball will never change its current velocity and will never return to its original position.
I cannot offer a proof that such a condition must eventually occur in your experiment, but my intuition is that it will. If so, that vanishingly small probability that everything returns to some original state goes to zero.
Two balls orbiting around their common center of mass do return to a previous position. But if there is no gravity, a finite bunch of rolling balls will never again return to the present state. Never again.
I'm assuming no gravity (and that at least one ball is moving). Do you have a proof for your assertion?
Sure.
If the center of mass moves, it moves with some velocity v. So after time t it will be at position r + v*t, whereas now it is at position r. A different position of the center of mass means a different configuration, for whatever finite t.
In the case where the center of mass doesn't move, you can divide the system into two subsystems whose centers of mass both move. If only one of them moved, the combined center of mass would move and we would be back in the case solved above.
But if both subsystem centers of mass move, they can either move apart and never collide - in which case their position vectors will forever after differ from the present ones - or the subsystems will collide. In that case they reverse their directions after the elastic collision, and we are back in a solved case.
Well, that's an approximate proof.
"But if both subsystem centers of mass move, they can either move apart and never collide - in which case their position vectors will forever after differ from the present ones - or the subsystems will collide. In that case they reverse their directions after the elastic collision, and we are back in a solved case."
I'm not convinced by this bit. Usually we can calculate the results of an elastic collision by using both conservation of energy and conservation of momentum. But we can't know the energy of the sub-compositions based just on the velocity of their centres of mass. They will also have some internal energy. So we can't calculate the results of the collision.
Do we agree that if there is no collision and both centers of mass move, they will never return to the present positions?
Then both centers of mass travel with some velocity each and will collide. How can they return back here? Only if both velocities reverse after the collision. Then they can return, but with the opposite velocities, so it will not be the same state.
Since they always move in straight lines, there will be no further collision, and therefore no return to the present state.
What do you mean by "the" collision? If each part has several balls then there will be multiple collisions.
It doesn't matter how many collisions happen; momentum conservation still holds. Even if only two small balls, one from each subsystem, collide, the sum of the momenta of the two subsystems remains the same. It doesn't matter which partition of the balls we choose, only that it is the same before and after the collision.
After some collisions have happened and the two parts are heading away from each other, the two parts could still overlap, and then some more of their balls could collide. This could lead to the two parts heading back together.
No, that's impossible. However you choose to divide this set of balls and however they later collide, both momenta are still conserved.
It's an old problem, cousin_it has posted:
Here's another problem that might be easier. Make an O(n log n) sorting algorithm that's simple, stable, and in place.
Radix. Except that it's not in place.
I know several reasonable algorithms for stable sorting in O(n log n) time and O(sqrt(n)) extra space, like Mrrl's SqrtSort. That's good enough for all practical purposes, because anyone who wants to sort a billion elements can afford an extra array of 30000 elements. And all known algorithms using less extra space essentially emulate O(sqrt(n)) bits of storage by doing swaps inside the array, which is clearly a hack.
Radix sort has its own rabbit hole. If you're sorting strings that are likely to have common prefixes, comparison sorting isn't the best way, because it needs to look at the initial characters over and over. There are many faster algorithms for string sorting based on ideas from radix sort: American flag sort, three-way radix quicksort, etc. The Haskell package Data.Discrimination generalizes the idea from strings to arbitrary algebraic datatypes, allowing you to sort them in almost linear time (in terms of total size, not number of elements).
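For a concrete flavor, here is a rough sketch of three-way radix quicksort (Bentley and Sedgewick's multikey quicksort) in Python. It partitions on one character position at a time, so a shared prefix is examined once per recursion level rather than once per comparison. This is just an illustration, not a tuned implementation:

```python
def char_at(s, d):
    """Character code at position d, or -1 if past the end of the string."""
    return ord(s[d]) if d < len(s) else -1

def multikey_quicksort(strings, lo=0, hi=None, d=0):
    """Sort strings[lo:hi] in place, assuming they all agree on the first d characters."""
    if hi is None:
        hi = len(strings)
    if hi - lo <= 1:
        return
    pivot = char_at(strings[lo], d)
    lt, gt, i = lo, hi - 1, lo + 1
    while i <= gt:
        c = char_at(strings[i], d)
        if c < pivot:
            strings[lt], strings[i] = strings[i], strings[lt]
            lt += 1; i += 1
        elif c > pivot:
            strings[gt], strings[i] = strings[i], strings[gt]
            gt -= 1
        else:
            i += 1
    multikey_quicksort(strings, lo, lt, d)        # character < pivot
    if pivot >= 0:                                # equal group: move on to the next character
        multikey_quicksort(strings, lt, gt + 1, d + 1)
    multikey_quicksort(strings, gt + 1, hi, d)    # character > pivot

words = ["she", "sells", "seashells", "by", "the", "seashore"]
multikey_quicksort(words)
print(words)  # ['by', 'seashells', 'seashore', 'sells', 'she', 'the']
```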
Crash Course on youtube has a variety of ~10 minute videos on a whole bunch of topics- I haven't watched most of the topics, but History and Literature are pretty decent. The length hurts in a lot of places, but I think it does a good job given that constraint and I'll admit I'm a lot more likely to "one more video" my way through a dozen of those than I am to sit down for a two hour documentary on the Vietnam War or The Great Gatsby, even if I'd feel like I was getting a more in-depth education out of the latter.
I'm currently going through a painful divorce so of course I'm starting to look into dating apps as a superficial coping mechanism.
It seems to me that even the modern dating apps like Tinder and Bumble could be made a lot better with a tiny bit of machine learning. After a couple thousand swipes (which doesn't take long), I would think that a machine learning system could get a pretty good sense of my tastes and perhaps some metric of my minimum standards of attractiveness. This is particularly true for a system that has access to all the swiping data across the whole platform.
Since I swipe completely based on superficial appearance without ever reading the bio (like most people), the system wouldn't need to take the biographical information into account, though I suppose it could use that information as well.
The ideal system would quickly learn my preferences in both appearance and personal information and then automatically match me up with the top likely candidates. I know these apps keep track of the response rates of individuals, so matches who tend not to respond often (probably due to being very generally desirable) would be penalized in your personal matchup ranking - again, something machine learning could handle easily.
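To make the idea concrete, here is a toy sketch of what such a ranking model might look like. Everything in it is made up for illustration: the random feature vectors stand in for whatever photo/profile features the platform could extract, and the response-rate discount is simply multiplied into the score.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each candidate profile is summarized by a feature vector
# (in reality this might come from photo embeddings, bio text, etc.).
n_seen, n_features = 2000, 32
X_seen = rng.normal(size=(n_seen, n_features))

# Your past swipes: 1 = right (liked), 0 = left.  A fake "taste" direction
# gives the labels learnable structure for the demo.
taste = rng.normal(size=n_features)
y_seen = (X_seen @ taste + rng.normal(scale=2.0, size=n_seen) > 0).astype(int)

# Learn your taste from a couple thousand swipes.
model = LogisticRegression(max_iter=1000).fit(X_seen, y_seen)

# Rank new candidates by predicted like-probability, discounted by how
# often each candidate responds to matches (penalizing the very in-demand).
X_new = rng.normal(size=(500, n_features))
response_rate = rng.uniform(0.05, 0.9, size=500)
score = model.predict_proba(X_new)[:, 1] * response_rate
top_candidates = np.argsort(score)[::-1][:10]
print(top_candidates)
```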
I find myself wondering why this doesn't already exist.
I think it's highly likely that an app like Tinder doesn't do the matching completely at random but optimizes for some factor.
Your analysis ignores the fact that the Tinder principle is about women only getting messages from guys on whom they previously swiped right, thus signaling that they want to receive messages from the guy. That ritual has psychological value.
If you do want a more explicit recommendation system, sites like eharmony can provide for that need.
I considered creating something like that to be used with Tinder's (unofficial) API. There are a bunch of freely available algorithms one might use for this purpose. I did not seriously attempt this because it's a hard problem, the algorithms are unreliable and difficult, and I'm not even sure if it's something I want or could profit from.
As for why Tinder hasn't done this: it goes against their business model. They would make less money. Tinder wants to keep you as a user for as long as possible, and the whole process of swiping, always wondering what the next one will be like, is their most addictive feature. Ideally they'll only let you go on dates if it's really necessary to keep you as a user. I'd guess that a significant portion of their users just use the app for swiping.
(Please excuse my incorrect English.)
When talking about the debate “nurture vs nature” I call it:
Brain hardware + Preloaded Software VS. Compatible Software Installation
(A random question - what can be said about the behaviour of a function like y=cos(y1), where y1=cos(y2), ...? Sorry for spam, too much work makes me wonder more and think less.)
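If the question is about the infinitely nested expression cos(cos(cos(...))), the iteration converges, from any real starting value, to the unique fixed point of cosine (the solution of x = cos(x), about 0.739), because |sin(x)| < 1 on the relevant range makes the map a contraction. A quick numerical check in Python:

```python
import math

x = 10.0              # arbitrary real starting value
for _ in range(100):  # repeatedly apply cos
    x = math.cos(x)
print(x)              # ~0.7390851332151607, the fixed point of cos
```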
When I first began working as a bookseller, I had to run to the stores thinking "[Name of the publishing house] - [school subject] - [year] - [kind of workbook, part] - [to what textbook] - [edition] - [amount]". Nine months later, I run to the stores thinking "[this sequence of turns (as a kind of wriggly line)] - [subject] - [year] - [not that one! The other one!] - [unspecified; grab both] - [more]". Must be professional growth...
Question: How do you make the paperclip maximizer want to collect paperclips? I have two slightly different understandings of how you might do this, in terms of how it's ultimately programmed: 1) there's a function that says "maximize paperclips" 2) there's a function that says "getting a paperclip = +1 good point"
Given these two different understandings, though, isn't the inevitable result for a truly intelligent paperclip maximizer to just hack itself? Based on my two understandings: 1) make itself /think/ that it's getting paperclips, because that's what it really wants--there's no way to make it value ACTUALLY getting paperclips as opposed to just thinking that it's getting paperclips; 2) find a way to directly award itself "good points", because that's what it really wants.
I think my understanding is probably flawed somewhere, but I haven't been able to figure out where, so please point it out.
For what it's worth, though, as far as I can tell we don't have the ability to create an AI that will reliably maximize the number of paperclips in the real world, even with infinite computing power. As Manfred said, model-based goals seems to be a promising research direction for getting AIs to care about the real world, but we don't currently have the ability to get such an AI to reliably actually "value paperclips". There are a lot of problems with model-based goals that occur even in the POMDP setting, let alone when the agent's model of the world or observation space can change. So I wouldn't expect anyone to be able to propose a fully coherent complete answer to your question in the near term.
It might be useful to think about how humans "solve" this problem, and whether or not you can port this behavior over to an AI.
If you're interested in this topic, I would recommend MIRI's paper on value learning as well as the relevant Arbital Technical Tutorial.
To our best current understanding, it has to have a model of the world (e.g. as a POMDP) that contains a count of the number of paperclips, and that it can use to predict what effect its actions will have on the number of paperclips. Then it chooses a strategy that will, according to the model, lead to lots of paperclips.
This won't want to fool itself because, according to basically any model of the world, fooling yourself does not result in more paperclips.
"according to basically any model of the world, fooling yourself does not result in more paperclips."
Paul Almond at one time proposed that every interpretation of a real thing is a real thing. According to that theory, fooling yourself that there are more paperclips does result in more paperclips (although not fooling yourself also has that result.)
But what does the code for that look like? It looks like maximize(# of paperclips in world), but how does it determine (# of paperclips in world)? You just said it has a model. But how can it distinguish between real input that leads to the perception of paperclips and fake input that leads to the perception of paperclips?
Well, if the acronym "POMDP" didn't make any sense, I think we should start with a simpler example, like a chessboard.
Suppose we want to write a chess-playing AI that gets its input from a camera looking at the chessboard. And for some reason, we give it a button that replaces the video feed with a picture of the board in a winning position.
Inside the program, the AI knows about the rules of chess, and has some heuristics for how it expects the opponent to play. Then it represents the external chessboard with some data array. Finally, it has some rules about how the image in the camera is generated from the true chessboard and whether or not it's pressing the button.
If we just try to get the AI to make the video feed be of a winning position, then it will press the button. But if we try to get the AI to get its internal representation of the data array to be in a winning position, and we update the internal representation to try to track the true chessboard, then it won't press the button. This is actually quite easy to do - for example, if the AI is a jumble of neural networks, and we have a long training phase in which it's rewarded for actually winning games, not just seeing winning board states, then it will learn to take into account the state of the button when looking at the image.
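Here is a toy rendition of that distinction (my own illustration; the names and numbers are made up): the "naive" agent scores actions by what the camera image will look like, while the model-based agent scores them by its estimate of the real board, which it knows the button doesn't improve.

```python
# The "world" is just a number standing for how good the real board position
# is; the button replaces the camera image with a fake winning position
# without changing the real board.

WINNING_IMAGE = 100  # what the camera shows while the button is pressed

def step(board_value, action):
    """Return (new real board value, camera image) after an action."""
    if action == "press_button":
        return board_value, WINNING_IMAGE          # board unchanged, image faked
    if action == "play_good_move":
        return board_value + 10, board_value + 10  # board improves, image tracks it
    return board_value, board_value                # do nothing

def naive_score(board_value, action):
    # Scores the action by what the camera will show.
    _, image = step(board_value, action)
    return image

def model_based_score(board_value, action):
    # Scores the action by the agent's model of the *real* board.  The model
    # knows the button only affects the image, so the fake image is ignored.
    new_board, _ = step(board_value, action)
    return new_board

board = 0
actions = ["do_nothing", "play_good_move", "press_button"]
print(max(actions, key=lambda a: naive_score(board, a)))        # -> "press_button"
print(max(actions, key=lambda a: model_based_score(board, a)))  # -> "play_good_move"
```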
Why would it hack itself to think it's getting paperclips if it's originally programmed to want real paperclips? It would not be incentivized to make that hack because that hack would make it NOT get paperclips.
As I said, though, how do you program it to want REAL paperclips as opposed to just perceiving that it is getting paperclips?
People who become passionate about meditation tend to say that the hardest part is encountering "dark things in your mind".
What do meditators mean by this?
Possibly they mean more than one thing, but the primary concept that jumps to mind is known as the "dark night". The aim of many meditation practices is to become aware of the contents of consciousness to the extent that those contents lose any emotional valence and become meaningless objects. In the long term this makes the meditator extremely equanimous and calm and detached, in a good way. In the medium term, before the changes have properly sunk in, it can result in a semi-detachment from reality where everything seems meaningless but in a very bad way.
I think I may have touched the edges of such phenomena. It is indeed unpleasant, and probably contributed to my cutting down my meditation by a lot.
There are stages in meditation when painful thoughts and memories might come bubbling up. If you're just sitting still with your mind and have nothing to distract you, you may occasionally end up facing some past trauma, especially if you've previously avoided dealing with it and have e.g. tried to just distract yourself from it whenever it came up.
(This is not necessarily a negative anything in the long run, since facing those negative thoughts can help in getting over them.)
What if you discovered that a part of your brain doesn't like when your friends are happier than you?
What if you discovered a part of your brain just wants to wirehead itself?
What if you discovered a part of your brain that likes to come up with ideas about how horrible you are and then meditation only causes you to pay attention to those thoughts?
Can anyone offer a linguistic explanation for the following phenomenon related to pronoun case and partial determiners:
In (1) the subject is the word "none". The word "us" is part of the prepositional phrase "of us".
Eliezer wrote in http://lesswrong.com/lw/ro/2place_and_1place_words/ :
Sexiness: Admirer, Entity -> [0, ∞) ... Sexiness: Entity -> [0, ∞) ... Fred::Sexiness == Sexiness_20934 ...
Is there a sort of semantic language, code, or something similar for writing pseudocode when talking about concepts?
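Not that I know of a single standard notation, but those type signatures map pretty directly onto ordinary typed code. A possible rendering in Python (the names and the scoring rule are placeholders for illustration):

```python
from functools import partial

# 2-place version:  Sexiness: (Admirer, Entity) -> [0, inf)
def sexiness(admirer: str, entity: str) -> float:
    # Stand-in scoring rule; the point is the type signature, not the values.
    return float(len(entity)) if admirer == "Fred" else 1.0

# 1-place version obtained by fixing the admirer argument, i.e.
# Fred::Sexiness == Sexiness_20934 is just the curried function:
sexiness_20934 = partial(sexiness, "Fred")

print(sexiness("Fred", "entity_x"))   # 2-place call
print(sexiness_20934("entity_x"))     # 1-place call, admirer fixed to Fred
```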
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "