Comment author: Thomas 12 July 2016 04:45:09PM 1 point [-]

> parallel genetic algorithm

I wouldn't call it a "genetic algorithm"; I prefer the term "evolutionary algorithm".

> In what way is it more automatic than existing similar apps?

We did some testing. For example, we took some existing schedules and optimized them with our tool. The difference was substantial.

We also packed circles inside a square and spheres inside a cube more densely than had previously been achieved.

We have built some 3D crosswords, 8 by 8 by 8 letters, with no black squares at all, filled with English words.

I don't know if OptaPlanner can do the same. I think not.

> spreadsheet, some soft, some hard and it spits out a few of the top solutions

Every constraint has its own user-specified weight, any integer from 0 to 10^12. This is the measure of how soft or hard a constraint is.
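
A hypothetical sketch of how such weights could combine into one objective (my own illustration, not Thomas's actual code): each constraint reports how often it is violated, and its user-chosen integer weight scales that count, so a weight near 10^12 makes the constraint effectively hard while a small weight keeps it a soft preference.

```python
# Hypothetical illustration of weighted soft/hard constraints (all names made up).
def weighted_penalty(schedule, constraints):
    """constraints: list of (weight, violation_fn) pairs; lower total is better."""
    return sum(weight * violation_fn(schedule) for weight, violation_fn in constraints)

constraints = [
    (10**12, lambda s: s.count("clash")),  # overlapping lessons: effectively hard
    (5,      lambda s: s.count("late")),   # avoid late time slots: soft preference
]
print(weighted_penalty(["ok", "clash", "late", "late"], constraints))  # 1000000000010
```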

Comment author: Vitor 12 July 2016 09:56:20PM 2 points [-]

Did you also test what other software (OptaPlanner as mentioned by HungryHobo, any SAT solver or similar tool) can do to improve those same schedules?

Did you run your software on some standard benchmark? There exists a thing called the International Timetabling Competition, with publicly available datasets.

Sorry to be skeptical, but scheduling is an NP-hard problem with many practical applications, and tons of research has already been done in this area. I will grant that many small organizations don't have the know-how to set up an automated tool, so there may still be a niche for you, especially if you target a specific market segment and focus on making it as painless as possible.

Comment author: HungryHobo 12 July 2016 01:50:44PM *  2 points [-]

If true this has some spectacular implications for computing (long term).

http://phys.org/news/2016-07-refutes-famous-physical.html

"Now, an experiment has settled this controversy. It clearly shows that there is no such minimum energy limit and that a logically irreversible gate can be operated with an arbitrarily small energy expenditure. Simply put, it is not true that logical reversibility implies physical irreversibility, as Landauer wrote."

Some of the limits of computation, such as how much you could theoretically do with a certain amount of energy, are based on what appear to have been incorrect beliefs about information processing and entropy.

It will push the research towards "zero-power" computing: the search for new information processing devices that consume less energy. This is of strategic importance for the future of the entire ICT sector that has to deal with the problem of excess heat production during computation.

It will call for a deep revision of the "reversible computing" field. In fact, one of the main motivations for its own existence (the presence of a lower energy bound) disappears.

Comment author: Vitor 12 July 2016 09:34:13PM *  3 points [-]

This will not have any practical consequences whatsoever, even in the long term. It is already possible to perform reversible computation (paper by Bennett linked in the article), for which such lower bounds don't apply. The idea is very simple: just make sure that your individual logic gates are reversible, so you can uncompute everything after reading out the results. This is most easily achieved by writing the gate's output to a separate wire. For example, instead of an OR gate mapping 2 inputs to 1 output like

(x, y) --> (x OR y),

you would map 3 inputs to 3 outputs like

(x, y, z) --> (x, y, z XOR (x OR y)),

making the gate its own inverse.
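
A minimal sketch of that construction (my own illustration, not from the comment): with bits as 0/1 integers, the extra wire z picks up x OR y via XOR, and applying the gate twice restores the original triple.

```python
def reversible_or(x, y, z):
    """Map (x, y, z) -> (x, y, z XOR (x OR y)) on 0/1 bits."""
    return x, y, z ^ (x | y)

# The gate is its own inverse: applying it twice returns the inputs unchanged,
# and with z initialized to 0 the third output wire carries x OR y.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            assert reversible_or(*reversible_or(x, y, z)) == (x, y, z)
```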

Secondly, I understand that the Landauer bound is so extremely small that worrying about it in practice is like worrying about the speed of light while designing an airplane.
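
For a rough sense of scale (my own back-of-the-envelope numbers, not from the comment): the Landauer bound is kT ln 2 per erased bit, only a few zeptojoules at room temperature.

```python
import math

k_B = 1.380649e-23               # Boltzmann constant, J/K
T = 300.0                        # roughly room temperature, K
print(k_B * T * math.log(2))     # ~2.9e-21 J per erased bit
```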

Finally, I don't know how controversial the Landauer bound is among physicists, but I'm skeptical in general of any experimental result that violates established theory. Recall that just a while ago there were experiments that appeared to show FTL communication, but the result was ultimately traced to a sensor/timing problem. I can imagine many ways in which measurement errors could sneak their way in, given the very small amount of energy being measured here.

Comment author: Furcas 02 May 2016 05:52:07PM 3 points [-]

Yes, I acknowledge all of that. Do you understand the consequence of not doing those things, if Christianity is true?

Eternal torment, for everyone you failed to convert.

Eternal. Torment.

Comment author: Vitor 04 May 2016 01:11:45PM 0 points [-]

The real danger, of course, is being utterly convinced Christianity is true when it is not.

The actions described by Lumifer are horrific precisely because they are balanced against a hypothetical benefit, not a certain one. If there is only an epsilon chance of Christianity being true, but the utility loss of eternal torment is infinite, should you take radical steps anyway?

In a nutshell, Lumifer's position is just hedging against Pascal's mugging, and IMHO any moral system that doesn't do so is not appropriate for use out here in the real world.

Comment author: Stefan_Schubert 22 March 2016 02:14:30PM 2 points [-]

I have a maths question. Suppose that we are scoring n individuals on their performance in an area where there is significant uncertainty. We are categorizing them into a low number of categories, say 4. Effectively we're thereby saying that, for the purposes of our scoring, everyone with the same score performs equally well. Suppose we say that this means that all individuals with a given score get assigned the mean actual performance of the individuals with that score. For instance, if there were three people who got the highest score, and their performances were 8, 12 and 13 units, the assigned performance is 11 units.

Now suppose that we want our scoring system to minimise information loss, so that the assigned performance is on average as close as possible to the actual performance. The question is: how do we achieve this? Specifically, how large a proportion of all individuals should fall into each category, and how does that depend on the performance distribution?

It would seem that if performance is linearly increasing as we go from low to high performers, then all categories should have the same number of individuals, whereas if the increase is exponential, then the higher categories should have a smaller number of individuals. Is there a theorem that proves this, and which exactly specifies how large the categories should be for a given shape of the curve? Thanks.

Comment author: Vitor 22 March 2016 04:18:04PM 10 points [-]

Your problem is called a clustering problem. First of all, you need to decide how you measure your error (information loss, as you call it). Typical error norms are l1 (sum of absolute errors), l2 (sum of squared errors, which penalizes larger errors more) and l-infinity (maximum error).

Once you select a norm, there always exists a partition that minimizes your error, and there are a bunch of heuristic algorithms to find it, e.g. k-means clustering. Luckily, since your data is one-dimensional and you have very few categories, you can just brute-force it: for 4 categories you need to correctly place 3 boundaries, and naively trying all possible positions takes only about n^3 time.
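
A minimal sketch of that brute force (my own illustration, using the l2 / sum-of-squared-errors measure; the naive per-split evaluation below adds an extra factor of n, which prefix sums would remove to reach the n^3 bound):

```python
from itertools import combinations

def best_split(values, k=4):
    """Brute-force the k-way split of 1-D data minimizing total squared error."""
    xs = sorted(values)
    n = len(xs)

    def sq_error(group):
        mean = sum(group) / len(group)
        return sum((v - mean) ** 2 for v in group)

    best = (float("inf"), None)
    # Choose k-1 cut positions between consecutive sorted values.
    for cuts in combinations(range(1, n), k - 1):
        bounds = (0,) + cuts + (n,)
        groups = [xs[bounds[i]:bounds[i + 1]] for i in range(k)]
        cost = sum(sq_error(g) for g in groups)
        if cost < best[0]:
            best = (cost, groups)
    return best

# e.g. best_split([8, 12, 13, 20, 21, 35, 40, 41]) returns the 4 categories with
# the smallest total squared deviation from each category's assigned mean.
```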

Hope this helps.