There are mosquito populations that you shouldn't try to exterminate, because they are important to their ecosystem. If you get rid of them, a bunch of birds lose their food supply and disappear too, and so on. But those populations are up in the Arctic. Getting rid of all the tropical mosquitoes is good for everyone and has no great effect on any ecosystem: everything that eats mosquitoes there also has other insects it prefers to eat.
There are about 3,200 species of mosquito. Fewer than 200 bite humans, and perhaps a dozen are major disease vectors for humans.
We drive about 150 species extinct per day without really trying. Increasing the number of species we push to extinction by 10% for a single day would save half a million lives per year.
Oxitec already has a PR problem with its current approach, even though they can show that their modified mosquitoes leave no descendants, and even though they focus on disease-carrying mosquitoes that are invasive species.
See http://www.dailymail.co.uk/news/article-3425381/Are-scientists-blame-Zika-virus-Researchers-released-genetically-modified-mosquitos-Brazil-three-years-ago.html http://naturalsociety.com/outrage-oxitecs-gm-moths-are-released-in-new-york/
According to Oxitec:
The economic cost of dengue is phenomenal and was estimated to have cost the global economy over US$39 billion in 2011 alone.
Spending a few billion on eliminating disease-carrying mosquitoes would be okay.
Even if, over the long term, gene drive technology is the best way to go, I don't think it's the best way to open the discussion when the idea of eliminating mosquito species first enters public consciousness.
Oh my god those articles are stupid.
"Oxitec’s GM mosquitoes have a genetic ‘kill switch’ but no one is sure if it will work on just the GM variety or also on the bugs that interbreed with the GM ‘test’ insects. "
If only there were some way to physically scream "THAT'S THE FUCKING POINT!" at the author. The whole point is to spread the "kill switch" to the wild mosquitoes. To kill them.
The Daily Mail article appears to be referring to the speculation where people started pointing to GM mosquitoes having been released in areas where Zika has been spreading.
Never mind that the areas where mosquitoes are the biggest problem are exactly the areas where you try mosquito control. In a similar vein, it's suspicious that most malaria deaths are in areas where bednets have previously been distributed. There can be only one conclusion: bednets cause malaria.
The best way to win over principals is to show them that a ridiculously complex constraint can be applied and computed automatically.
- 4.5 school hours of S per week (4 hours on odd weeks and 5 hours on even weeks)
- when there is a fifth hour in the week, that hour may be the second hour of subject S on that day
- if it is on the same day, it should come immediately after the previous hour of subject S
- in that case, it must also be the last hour of the day for the teacher
- three classes of students are divided into 5 groups for the subject S
- there are 4 teachers for those 5 groups, one teacher teaches groups number 2 and 4
- there is a given list of students for groups 1, 3 and 5, and a combined list of students for groups 2 and 4
- the computer should divide the combined list into two separate lists (2 and 4), but they must not differ in size by more than 4 students
- since one of those groups (2 or 4) is always idle, subject M, which is divided the same way, must be taught then - or S must be the first hour of the day
- but there are only 4 hours of subject M per week
- there are only 3 teachers of M
- there are also 3 hours of subject A per week for those same students, in 5 differently composed groups
- there are 5 teachers of A, but one of them also teaches the group number 1 of S
- it would be nice, but not mandatory, if students had zero waiting hours
This is a real-life example; I discussed it an hour ago with a math teacher at one of our schools. It is not the most complex demand we have had, by far.
S = Slovenian language, M = Math, A = Angleščina (guess what that is)
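To make the flavor of these constraints concrete, here is a minimal sketch (the function names are invented for illustration; this is not the actual system, which is evolutionary) of how two of the rules above could be scored as penalty functions, where a total penalty of 0 means the rule is satisfied:

```python
def split_penalty(group2, group4, max_diff=4):
    """Penalty for the rule that the combined list must be split into
    two groups (2 and 4) whose sizes differ by at most max_diff students."""
    diff = abs(len(group2) - len(group4))
    return 0 if diff <= max_diff else diff - max_diff

def weekly_hours_penalty(hours_scheduled, week_is_even):
    """Penalty for the '4.5 hours of S per week' rule:
    4 hours on odd weeks, 5 hours on even weeks."""
    target = 5 if week_is_even else 4
    return abs(hours_scheduled - target)

# A candidate schedule's score sums such penalties over all rules.
print(split_penalty(["s"] * 14, ["s"] * 12))        # sizes differ by 2, OK
print(weekly_hours_penalty(4, week_is_even=True))   # one hour short
```

A scheduler can then treat hard constraints as penalties that must reach 0 and soft ones (like zero waiting hours) as terms to minimize.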
Fair enough. I was underwhelmed by your initial post describing it, but I agree that showing your system can handle weird constraints in real examples is an excellent demonstration.
The record thing, to me, just happens to be a good demonstration that you're not just another little startup with some crappy scheduling software; you're actually at the top of the field in some areas.
North America's schools as a big market
The thing is, it's a very fragmented market. US schools are run locally, basically at the town level, so for you it is essentially a retail market with a large number of customers, each of which buys little. I'm guessing you'll need a large sales organization to break in.
Or possibly find an existing company selling office/organization/planning software that already has a big share of the market, and sell them a license to the tech.
We did some benchmarks. Sometimes we did it well, sometimes not that well.
For example, there are records waiting to be broken in the Job Shop Scheduling (JSS) area, but we were unable to break a single one.
But we still hold some (years-old) packing records right now.
One may say that JSS is the basis of all scheduling and that packing is not. In fact, real-life scheduling is more complicated than either of those benchmarks: we have many more constraints in real life. And it turns out that having many constraints somehow helps the evolution find trade-offs.
If you're the holders of some records for certain problem types, then that grabs my interest.
I'd suggest leading with that since it's a strong one.
As far as I can see, the experiment has shown that what was considered to be the lower bound is actually not.
However, I don't understand how the claim of "no lower bound at all" necessarily follows. For all we know there is just a different, lower, lower bound.
I found it odd as well, but I think it's because the experiment implies that the theoretical reason for that lower bound may be invalid.
There's likely going to turn out to be a different theoretical lower bound for some other reason, but right now we don't have that theoretical reason.
If true this has some spectacular implications for computing (long term).
http://phys.org/news/2016-07-refutes-famous-physical.html
"Now, an experiment has settled this controversy. It clearly shows that there is no such minimum energy limit and that a logically irreversible gate can be operated with an arbitrarily small energy expenditure. Simply put, it is not true that logical reversibility implies physical irreversibility, as Landauer wrote."
Some of the limits of computation (how much you could theoretically do with a certain amount of energy) are based on what appear to have been incorrect beliefs about information processing and entropy.
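For reference, the contested bound is easy to compute: Landauer's principle says erasing one bit must dissipate at least k_B * T * ln(2). A quick sketch using standard physical constants:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact, SI 2019)

def landauer_limit(temp_kelvin):
    """Minimum energy in joules to erase one bit, per Landauer's principle."""
    return K_B * temp_kelvin * math.log(2)

# At room temperature (300 K) the bound is roughly 2.87e-21 J per bit.
print(f"{landauer_limit(300.0):.3e} J per bit erased")
```

Tiny as that number is, it was held to be a fundamental floor for irreversible logic gates; this experiment is claiming gates can be operated below it.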
It will push the research towards "zero-power" computing: the search for new information processing devices that consume less energy. This is of strategic importance for the future of the entire ICT sector that has to deal with the problem of excess heat production during computation.
It will call for a deep revision of the "reversible computing" field. In fact, one of the main motivations for its own existence (the presence of a lower energy bound) disappears.
Some interesting news: the first autonomous soft-tissue surgery. It sounds like a notable breakthrough in machine vision was involved, for distinguishing all the messy, fleshy internals of the (porcine) patient.
http://www.popularmechanics.com/science/health/a20718/first-autonomous-soft-tissue-surgery/
Evolution. Schedules compete to survive. Every second, 10,000 or so are born, and most are killed by the control program, which lets only the top schedules live according to the 30+ criteria set in the script.
Random (but perhaps clever) mutation and non-random selection, that's under the hood.
At first, the top schedule is a random one and not feasible at all. After a million (or a billion, depending) generations, the first feasible one appears, and from there on evolution produces better and better schedules.
For every processor core, at least one evolution is running, each at least slightly different. The program can spread across many computers, so there may be 100 or more parallel evolutions going on. They talk occasionally (via the internet) and exchange their champions.
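The structure described above (mutation, harsh selection, parallel populations trading their best individuals) is a classic island-model evolutionary algorithm. A toy sketch of the idea, with an invented fitness function standing in for the real 30+ scheduling criteria:

```python
import random

def mutate(schedule):
    """Flip one randomly chosen slot (toy mutation; real operators
    would be schedule-aware)."""
    s = schedule[:]
    s[random.randrange(len(s))] = random.randrange(10)
    return s

def evolve_island(fitness, pop, generations=50, keep=10):
    """One 'island': breed mutants, then keep only the top schedules."""
    for _ in range(generations):
        pop = pop + [mutate(p) for p in pop]
        pop.sort(key=fitness)   # lower penalty = better schedule
        pop = pop[:keep]
    return pop

def exchange_champions(islands):
    """Occasionally, islands replace their worst individual with a
    neighboring island's champion."""
    champs = [isl[0] for isl in islands]
    for i, isl in enumerate(islands):
        isl[-1] = champs[(i + 1) % len(islands)]
    return islands

# Toy fitness: total penalty of an 8-slot schedule, 0 means perfect.
fitness = lambda s: sum(s)
islands = [[[random.randrange(10) for _ in range(8)] for _ in range(10)]
           for _ in range(4)]
for _ in range(5):
    islands = [evolve_island(fitness, isl) for isl in islands]
    islands = exchange_champions(islands)
print(min(fitness(isl[0]) for isl in islands))  # best penalty found
```

The champion exchange is what lets one core's lucky discovery propagate to the other evolutions instead of each one climbing alone.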
It has been a 10-year real-life experiment, and it went very well: a lot of schools, teachers, and students involved, and some academic papers published. Now it's time to spread it.
So, a parallel genetic-algorithm-based scheduling app with (ranked?) constraints?
In what way is it more automatic than existing similar apps?
Presumably you still need to give it a list of constraints (say, a few thousand), possibly in a spreadsheet, some soft, some hard, and it spits out a few of the top solutions, or an error if the hard constraints cannot be met?
What can it do that, say, OptaPlanner can't do?
Note that with the goal of eliminating a species completely, the longer you wait to gain experience and perfect the technology, the better.
A major screw-up in such a case would be some random factor (a mutation, etc.) preventing us from wiping out all mosquitoes, leaving behind a group resistant to the current gene-drive technology.
I don't know enough about gene-drives to suggest how it might happen - but the point is that there are always "unknown unknowns".
That smaller group would then quickly spread and replace the previous population, and would be harder to deal with.
Repeat a few times, and you have gradually nudged the population of mosquitoes to be resistant to our attempts to eliminate it.
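This "gradual nudging" is the same selection dynamic as antibiotic resistance. A toy simulation (all numbers invented for illustration) of repeated imperfect culls shows how fast a rare resistant variant takes over:

```python
def cull_and_regrow(susceptible, resistant, kill_rate=0.99,
                    carrying_capacity=1_000_000):
    """One imperfect eradication attempt: kill_rate of susceptible
    mosquitoes die, resistant ones survive untouched, then survivors
    regrow to carrying capacity in proportion to their numbers."""
    s = susceptible * (1 - kill_rate)
    r = resistant  # unaffected by this drive
    scale = carrying_capacity / (s + r)
    return s * scale, r * scale

s, r = 999_999.0, 1.0  # a one-in-a-million resistant mutant
for attempt in range(4):
    s, r = cull_and_regrow(s, r)
    print(f"after attempt {attempt + 1}: resistant fraction = {r/(s+r):.4f}")
```

Even with a 99%-effective drive, four attempts take the resistant fraction from one in a million to essentially the whole population, which is the argument for getting the first strike right.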
It's possible that waiting longer and using a better technology in the first strike would have solved the problem cleanly.
I remember having a similar discussion about HIV and anti-retroviral drugs.
In short, it's an easy position to take if you and the people you care about aren't currently in the firing line, and making policy choices based on assumptions about future discoveries that we can't guarantee is ethically problematic.