Hmm, interesting. The exact choice of decimal place at which to cut off the comparison is certainly arbitrary, and that doesn't feel very elegant. My thinking is that within the constraint of using floating point numbers, there fundamentally isn't a perfect solution. Floating point representation turns some numbers into slightly different numbers, so there are always going to be some cases where comparisons come out wrong. What we want to do is define a problem domain and check whether floating point will cause problems within that domain; if it doesn't, go for it, and if it does, maybe don't use floating point.

In this case my fix solves the problem for what I think is the vast majority of the most likely inputs (in particular it solves it for all the inputs that my particular program was going to get), and while it's less fundamental than e.g. using arbitrary-precision arithmetic, it does better on the cost-benefit analysis. (Just like how "completely overhaul our company" addresses things on a more fundamental level than just fixing the structural simulation, but may not be the best fix given resource constraints.)

The main purpose of my example was not to argue that my particular approach was the "correct" one, but rather to point out the flaws in the "multiply by an arbitrary constant" approach. I'll edit that line, since I think you're right that it's a little more complicated than I was making it out to be, and "trivial" could be an unfair characterization.

In the general case I agree it's not necessarily trivial, e.g. if your program uses the whole range of decimal places to a meaningful degree, or performs calculations that compound floating point errors up into higher decimal places. (Though I'd argue that in both of those cases pure floating point is probably not the best representation to use.) In my case I knew that the meaningful precision of the input would never be fine enough to overlap with floating point error, so I could just round everything past the 15th decimal place to zero.
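
Concretely, the comparison I mean looks something like this (an illustrative Python sketch, not my actual code; the helper name is made up):

```python
# Illustrative sketch of the rounding-before-comparison idea described above.
# Assumes the meaningful precision of the inputs never reaches the 15th
# decimal place, so anything beyond it can be treated as float noise.

def effectively_equal(a: float, b: float, places: int = 15) -> bool:
    """Compare two floats after discarding digits past `places` decimals."""
    return round(a, places) == round(b, places)

# 0.1 + 0.2 is stored as 0.30000000000000004, so a plain == check fails...
assert 0.1 + 0.2 != 0.3
# ...while the rounded comparison treats the two values as equal.
assert effectively_equal(0.1 + 0.2, 0.3)
```

(Python's `math.isclose` gives a relative-tolerance version of the same idea, which is the more standard way to get this behavior.)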

If we figure out how to build GAI, we could build several with different priors, release them into the universe, and see which ones do better. If we give them all the same metric to optimize, they will all agree on which of them did better, thus determining which prior is the best one to have for this universe.
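
As a toy illustration of the shared-metric point (a made-up coin-flip example, nothing like an actual GAI): agents that differ only in their prior, all scored by the same average log loss, necessarily produce the same ranking of who did best.

```python
import math
import random

# Toy illustration (made up for this comment): agents differ only in their
# prior over a coin's bias and never update, but they all score their
# predictions with the same shared metric (average log loss), so they all
# agree on which prior did best in this particular "universe".

def log_loss(p_heads: float, outcome: int) -> float:
    p = p_heads if outcome == 1 else 1.0 - p_heads
    return -math.log(p)

def average_loss(prior_heads: float, flips: list) -> float:
    return sum(log_loss(prior_heads, f) for f in flips) / len(flips)

random.seed(0)
true_bias = 0.7                                    # the "universe"
flips = [1 if random.random() < true_bias else 0 for _ in range(10_000)]

priors = {"agent_a": 0.5, "agent_b": 0.7, "agent_c": 0.9}
scores = {name: average_loss(p, flips) for name, p in priors.items()}
print(min(scores, key=scores.get))                 # same winner whoever computes it
```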

I don't understand what "at the start" is supposed to mean for an event that lasts zero time.

Ok now I'm confused about something. How can it be the case that an instantaneous perpendicular burn adds to the craft's speed, but a constant burn just makes it go in a circle with no change in speed?

...Are you just trying to point out that thrusting in opposite directions will cancel out? That seems obvious, and irrelevant. My post and all the subsequent discussion are assuming burns of epsilon duration.

I don't understand how that can be true. Vector addition is associative; it can't be the case that adding many small vectors behaves differently from adding a single large vector equal to the small vectors' sum. Throwing one rock off the side of the ship followed by another rock has to do the same thing to the ship's trajectory as throwing both rocks at the same time.

How is that relevant? In the limit where the retrograde thrust is infinitesimally small, it also does not increase the length of the main vector it is added to. Negligibly small thrust results in negligibly small change in velocity, regardless of its direction.
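
To make the disagreement concrete, here's a rough numerical sketch (my own illustrative Python, not anything from the thread): it applies one finite burn perpendicular to the initial velocity, and then the same total delta-v as many tiny burns, each perpendicular to the current velocity.

```python
import math

# Rough numerical sketch of the scenario under discussion (illustrative only):
# (a) one finite burn perpendicular to the *initial* velocity, versus
# (b) the same total delta-v split into many tiny burns, each applied
#     perpendicular to the craft's *current* velocity.

def speed(vx: float, vy: float) -> float:
    return math.hypot(vx, vy)

v0x, v0y = 1.0, 0.0      # initial velocity, speed 1
total_dv = 0.5           # total delta-v budget

# (a) single instantaneous perpendicular burn
ax, ay = v0x, v0y + total_dv

# (b) many tiny burns, each perpendicular to the instantaneous velocity
n = 100_000
bx, by = v0x, v0y
for _ in range(n):
    s = speed(bx, by)
    px, py = -by / s, bx / s            # unit vector perpendicular to current velocity
    bx += px * (total_dv / n)
    by += py * (total_dv / n)

print(speed(ax, ay))     # sqrt(1 + 0.25) ~ 1.118
print(speed(bx, by))     # ~ 1.000: the tiny increments don't all point the same way
```

Vector addition is of course still associative in the sketch; the two totals differ because the small increments in (b) aren't all parallel to each other, so their sum isn't a single length-0.5 vector perpendicular to the initial velocity.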

Unfortunately I already came across that paradox a day or two ago on Stack Exchange. It's a good one though!

Yeah, my numerical skill is poor, so I try to understand things via visualization and analogies. It's more reliable in some cases, less in others.
