If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


I have some software I am thinking about packaging up and releasing as open-source, but I'd like to gauge how interesting it is to people other than me.

The software is a highly usable implementation of arithmetic encoding. AE completely handles the problem of encoding, so in order to build a custom compressor for some data set, all you have to do is supply a probability model for the data type(s) you are compressing (I call this "BYOM" - Bring Your Own Model).

One of the key technical difficulties of data compression is that you need to keep the ...
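(For readers wondering what "BYOM" might look like in practice, here is a minimal sketch of such an interface. All names are invented for illustration and are not the actual API of the library under discussion; the point is that the library owns the bit-level coding machinery while the user supplies only the model.)

```java
// Hypothetical "Bring Your Own Model" interface; names are invented for
// illustration, not taken from the actual library under discussion.
public interface ProbabilityModel<T> {
    // Current probability distribution over the next symbol; must sum to 1.
    double[] predictNext();

    // Map a symbol to its index in the distribution, and back.
    int indexOf(T symbol);
    T symbolAt(int index);

    // Update internal state after a symbol is processed. As long as the
    // encoder and decoder observe the same symbol stream, they stay in sync.
    void observe(T symbol);
}
```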

2Gunnar_Zarncke
Summary after your explanations: Yes, I'm really interested in an open-source packaging of your library. Preferably on github.
2ChristianKl
I read a bit of what you previously wrote about your approach but I didn't read your full book. I think a bunch of Quantified Self applications would profit from good compression. For example, it's relatively interesting to sample galvanic skin response at very short time intervals of 5 ms. Similar things go for accelerometer data. It would be interesting to see what kind of data you can draw from the noisy heart-rate data on smartwatches at shorter time intervals. Smartwatches could easily gather that data at a finer time resolution than they currently do, but they have relatively limited space. In practice I think it will depend a lot on how easy it is to use your software. Maybe you could also have a gamified version. You have a website, and every week a dataset gets published. Only half of the data is released. Every participant can enter their own model via the website, and the person whose model compresses the unreleased part of the data the best wins.
0Daniel_Burfoot
Thanks for the feedback. A couple years ago (wow, is LessWrong really that old?) I challenged people to Compress the GSS, but nobody accepted the offer...
0ChristianKl
The minimum time investment to participate in the GSS challenge might be hours. For most people it's not even clear what steps are involved in building a model for compressing a dataset. It's not really gamified. I think it would be possible to have a website that allows people to make up a model in a minute to take part in the tournament. A one-minute model might be bad, but it might get people into the mood for engaging with the game. I also think that a QS dataset might be more interesting than compressing the GSS. Promotion-wise I think it could be promoted via the QS website (I might still have posting privileges, or could simply ask; I doubt people would have a problem). Of course it might be that I misunderstand the issue and it's not possible to build the website in a way that allows people to provide 1-minute models.
0gwern
I dunno if it would be all that interesting. If someone wants to work on predictive modeling of datasets every week or month in a tournament format, they can just use Kaggle (and win with XGBOOST or a residual network, likely). I have fat/muscle/weight data on myself from an Omron scale going back 2 years with multiple measurements on most days; this is a reasonably interesting dataset because one can quantify measurement error, the variables are interrelated with one or two latent variables, there are definite nontrivial time trends, and it's easy to generate hold-out data (if the tournament runs 1 month, then there's an additional 1 month of data which no one, including the organizer, had access to, to score contributions with at the end) - but I doubt anyone would bother participating. I have an even bigger QS dataset incorporating all my recorded data of all kinds at a daily granularity, somewhere around 100+ summary variables, but the missingness is so high that it would be unpleasant to work with (I've been having a great deal of difficulty just getting lavaan/blavaan to run on it) and likewise I doubt there would be much interest in a competition. There needs to be some sort of incentive: either prizes, inherently interesting data, or some important intellectual/scientific point to it. Kaggle competitions with a lot of participation have big prizes or sexy datasets like the Higgs boson or whales.
0ChristianKl
I think there's a scientific point for those QS datasets that can be automatically measured at high granularity. Very frequently people measure less data because they don't want to store all the data that a single sensor can produce. Currently accelerometer data gets compressed into the variable of "steps". That variable has the advantage of an intuitive meaning, but it's likely not the best possible variable to gather when doing scientific work about how Pokemon Go leads people to do more exercise.
0gwern
Doesn't that have as much to do with battery life and software engineering effort as anything? Those sensors could already log data in much more detail by streaming into an off-the-shelf compressor like xz, but they don't, because good compression inherently requires a lot of computation/battery-life and adds complexity compared to naive methods. There don't seem to be many use-cases where people have already plugged in zpaq and it just isn't enough, so they need even better compression.
0ChristianKl
I think translating accelerometer data into steps is effectively a form of data compression. But it's a form of data compression that's optimized not for leaving important features of the data intact but for giving users a variable they think they understand.
2Gunnar_Zarncke
Is it something like this? http://www.cipr.rpi.edu/research/SPIHT/EW_Code/FastAC_Readme.pdf
4Daniel_Burfoot
Thanks for posting this link; it contains a good illustration of the problem of using separate encoder/decoder implementations. See how they have separate encoder and decoder implementations on pages 8/9 of the document? That strategy is very, very error-prone. It is very hard for the programmer to ensure that the encoder and decoder are performing exactly the same updates, and even the slightest off-by-one error will cause the process to fail completely (I spent many hours trying to debug sync problems like this). This problem becomes more painful as you attempt to build more and more sophisticated compressors. With my library, there is no separation of encoder and decoder logic; it is effectively the same code. That basically guarantees there will be no sync problems. Since I developed this technique I haven't had any sync problems.
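(A minimal sketch of the single-code-path idea, building on the hypothetical ProbabilityModel interface above; again, invented names, not the actual library code. The trick is that one routine drives the model in both directions, so encoder and decoder state cannot diverge.)

```java
// Hypothetical low-level coder; the real library's API may differ.
interface ArithmeticCoder {
    void encode(int symbolIndex, double[] dist); // narrow the coding interval
    int decode(double[] dist);                   // recover a symbol index
}

// One code path for both directions: the model update is literally the
// same statement whether we are encoding or decoding.
final class Codec<T> {
    private final ProbabilityModel<T> model;
    private final ArithmeticCoder coder;

    Codec(ProbabilityModel<T> model, ArithmeticCoder coder) {
        this.model = model;
        this.coder = coder;
    }

    // When encoding, 'symbol' is the input and is echoed back.
    // When decoding, 'symbol' is ignored and the decoded symbol is returned.
    T step(T symbol, boolean encoding) {
        double[] dist = model.predictNext();
        T result = encoding
                ? symbol
                : model.symbolAt(coder.decode(dist));
        if (encoding) {
            coder.encode(model.indexOf(symbol), dist);
        }
        model.observe(result); // identical update on both sides, by construction
        return result;
    }
}
```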
2Gunnar_Zarncke
Which language?
0Daniel_Burfoot
Java.
2Gunnar_Zarncke
Great. I'm interested. Performance-wise it may not be the best choice, but for reusability it's good. I wonder about the overhead of your abstraction.
0Daniel_Burfoot
Thanks for the feedback! Re: performance, my implementation is not performance optimized, but in my experience Java is very fast. According to this benchmark Java is only about 2x slower than pure C (also known as "portable assembly").
2Gunnar_Zarncke
Yeah, the benchmark game. But arithmetic coding and the implied bit-twiddling aren't exactly Java's strength. On the other hand, in this case the overhead of your in-sync de/encoding abstraction may be decisive.
0akvadrako
Just post it on github with no effort. If you start getting pull requests or issues logged, you'll have your answer.
0Lumifer
Two basic questions: (1) What are the immediate practical applications? (2) How qualified must the user be? (The "all you have to do is supply a probability model" part is worrying :-/)
4Daniel_Burfoot
Basically, if you have your own dataset that you want to compress with a special-purpose model, you could try doing that. You could try out compression-based tricks for computer vision, like in this paper. You could use it as part of an information theory course if you wanted to show students a real example of compression in practice. In my view it is quite easy to use, but you still need to be a programmer with some knowledge of stats and information theory.
2Lumifer
So it's more of a library and less of an application?
2Daniel_Burfoot
Yes.

Some interesting news: the first autonomous soft-tissue surgery. It sounds like a notable breakthrough in machine vision was involved, for distinguishing all the messy, fleshy internals of the (porcine) patient.

http://www.popularmechanics.com/science/health/a20718/first-autonomous-soft-tissue-surgery/

I've written a summary of 'result-blind peer review' with all the references I could find: https://en.wikipedia.org/wiki/Scholarly_peer_review#Result-blind_peer_review Anyone know of more?

Growing and interlinking neurons

First ever optic nerves regrown in a mammal.

http://sciencebulletin.org/archives/3026.html

and neurons bathed in IGF interconnect more densely and more often

http://www.kurzweilai.net/neurons-grown-from-stem-cells-in-a-dish-reveal-clues-about-autism

Texting changes brain waves to a new pattern

http://sciencebulletin.org/archives/2623.html

How does being nervous influence your ability stats? Being nervous improves my mental abilities (I usually did better on standardized tests than I did on practice ones and I can tell that my recall is much better when I'm nervous), but I get clumsier and less articulate. Interestingly, when I'm nervous I come across as being far less intelligent than I normally do, even though the reverse is true.

4Gram_Stone
Adrenaline can improve function in a number of domains, so it might be that anyone with test anxiety or some other performance anxiety could, in certain situations they perceive as threatening, have a concentration of adrenaline that improves performance rather than impairing it; this is never recognized as occurring by the same mechanism as test or performance anxiety, because the amount of adrenaline doesn't cause detrimental symptoms. Conceivably the amount of adrenaline and its effects could differ across domains. Maybe that happens to you. What do you think? EDIT: This might lead to empirical evidence. Anxiety may decrease performance when attention has to be switched between tasks, but may improve performance when the task is difficult and singular. Think social situations vs. exams.
0James_Miller
Yes, this could be what's happening with me.
4niceguyanon
The same happens with certain physical activities; race-day magic is very common.
0Unnamed
See: Yerkes-Dodson law and research on "optimal level of arousal".

"In deep learning, architecture engineering is the new feature engineering"

Trying to design architectures so they are not limited by the developers' hand-crafted feature sets, or by the databases.

http://smerity.com/articles/2016/architectures_are_the_new_feature_engineering.html

An AI was used to optimize and align a set of lasers used to produce Bose-Einstein condensates, solving the optimization problem in one hour.

"Fast machine-learning online optimization of ultra-cold-atom experiments"

http://www.nature.com/articles/srep25890

SETI research organizational optimization, and exam...

By now I have read (or skimmed) so many reviews of Age of Em that I probably could have read the book myself...

Anyway.

I thought about the two future paths: a Hansonian Em future and a machine AI future. I wondered how to reconcile the (seeming?) contradiction between them. Then the idea occurred to me that maybe both can be (mostly? partly?) true:

If FAI is possible it will likely make Ems possible shortly after. If it is friendly as assumed, then an exploitation of the Ems (loss of human value), as feared in many comments, is ruled out by construction....

0MrMind
I now regard reading a book as a not-so-trivial investment of time and energy, given the huge number of possible books I could be reading right now. Is there any particular reason to believe Hanson's beliefs, so that it makes sense to anticipate the future the way he does?
2pcm
There's no particular reason to believe all of his predictions. But that's also true of anyone else who makes as many predictions as the book does (on similar topics). When you say "anticipate the future the way he does", are you asking whether you should believe there's a 10% chance of his scenario being basically right? Nobody should have much confidence in such predictions, and when Robin talks explicitly about his confidence, he doesn't sound very confident. Good forecasters consider multiple models before making predictions (see Tetlock's work). Reading the book is a better way for most people to develop an additional model of how the future might be than reading new LW comments.
0MrMind
If your model doesn't even get to 10%, then I say: unless you have hundreds of competing models in your mind (who does?), do not even bother. Your comment helped me reach the conclusion that reading AoE would be a waste of time.

I went to a party recently, and the host provided the food. At the end of the party, there was an awful lot left over, and my understanding is that most of it went to waste.

I had a thought when this was happening: if I was the host, why not keep track of how much food my guests actually ate, and try adjusting the amount of food at my next party to match?

The host was not a rationalist (as I suspect most hosts aren't), but upon researching the issue, there doesn't seem to be a widespread solution.

There are charities that focus on "recycling" f...

The reason parties are oversupplied with food is that the incentives are asymmetrical. Specifically, the loss from having too much food is considerably smaller than the loss from having too little food.

Having insufficient food is a significant loss of status since you failed as a host to provide proper hospitality. There are a bunch of obvious historical and cultural reasons why not being able to feed your guests is a bad thing, status-wise.

Having too much food is just a matter of some wasted money and/or having to eat leftovers for a few days. Not a big deal at all nowadays.

2MrMind
That calories are used as social lubricant irks me a lot. I understand why it was so in the past, but we live in a world filled to the brim with food; do we really need tens of thousands of calories at every social gathering? The answer is obviously not; indeed it would be beneficial to lower the amount circulating... But as Lumifer spotted and wannabe rationalists often overlook, what appears as waste and irrationality is actually a situation optimized for status. Ignoring status is almost always a bad idea, BUT: we can always treat it as just another constraint. Given that we need to optimize for status and waste reduction, what could we do?

* coordinate with a charity to pick up the leftovers
* use food that can be easily refrigerated and consumed gradually later
* have food in stages, so that variety masks lack of abundance (and pressures people into eating leftovers)
* repackage leftovers and offer them as parting gifts
* ...

These are just from a less-than-five-minute brainstorming session; I'm sure someone invested in this would come up with much more interesting and creative ideas.
5Lumifer
In the Western world where obesity is rampant, why do you want to pressure people into eating more? Generally speaking, the party-leftovers issue doesn't strike me as much of a problem. I suggest doing a back-of-the-envelope calculation of the harm it causes.
0MrMind
Well, because that's what the problem statement asked for! But yeah, it's probably a forgotten purpose: what should be optimized is the amount of food not wasted, not how much food remains at the end of the party. Indeed it's not! But it's a nice simple little world; I took it as an exercise in rationality.
4Gunnar_Zarncke
I like your suggestions. Asking people whether they want to take leftovers is an option I have seen used a lot.
3gjm
That doesn't sound to me like it's compatible with "optimizing for status".
0MrMind
The sentence was perhaps ambiguous: I meant that the pressure for eating leftovers derives from the stages, from the fact that that particular food will no longer be available in x minutes. You know, the usual scarcity trick. Not that the patron should encourage attendees to finish their plates :)
0ChristianKl
I think that frequently people don't want to eat the last thing because it means that others can't eat the last thing, but social norms might vary.
2jsteinhardt
I don't think this is really a status thing, more a "don't be a dick to your guests" thing. Many people get cranky if they are hungry, and putting 30+ cranky people together in a room is going to be a recipe for unpleasantness.
1Gunnar_Zarncke
But there is a difference between having an amount appropriate to avoid crankiness and more than can be eaten.
1jsteinhardt
But like, there's variation in how much food people will end up eating, and at least some of that is not variation that you can predict in advance. So unless you have enough food that you routinely end up with more than can be eaten, you are going to end up with a lot of cranky people a non-trivial fraction of the time. You're not trying to peg production to the mean consumption, but (e.g.) to the 99th percentile of consumption.
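(A toy illustration of the percentile point, with made-up numbers: if each of 30 guests eats a noisy amount around one "meal", provisioning the mean total runs out roughly half the time, while provisioning the 99th percentile almost never does. A minimal Monte Carlo sketch:)

```java
import java.util.Arrays;
import java.util.Random;

// Toy model: 30 guests, each eating a noisy amount around 1.0 "meals".
// Compares mean-based provisioning with 99th-percentile provisioning.
public class PartyFood {
    public static void main(String[] args) {
        Random rng = new Random(42);
        int guests = 30, trials = 100_000;
        double[] totals = new double[trials];
        for (int t = 0; t < trials; t++) {
            double total = 0.0;
            for (int g = 0; g < guests; g++) {
                // per-guest consumption: mean 1.0, sd 0.5, never negative
                total += Math.max(0.0, 1.0 + 0.5 * rng.nextGaussian());
            }
            totals[t] = total;
        }
        Arrays.sort(totals);
        double mean = Arrays.stream(totals).average().orElse(0.0);
        double p99 = totals[(int) (0.99 * trials)];
        System.out.printf("mean total: %.1f meals; 99th percentile: %.1f meals%n", mean, p99);
        // Buying the mean amount runs out roughly half the time; buying the
        // 99th-percentile amount runs out ~1% of the time, at the cost of
        // (p99 - mean) expected leftovers.
    }
}
```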
-1Gunnar_Zarncke
You seem to think that people who are not completely satiated are automatically cranky. That doesn't match my observation. Also, you may have multiple dishes. For example, we mostly start with a collaboratively prepared soup - which will thereby be the right size by construction. Later we have some snacks or sweets or fruits. First the fresh ones; later, if needed, packaged ones.
1jsteinhardt
I don't think I need that for my argument to work. My claim is that if people get, say, less than 70% of a meal's worth of food, an appreciable fraction (say at least 30%) will get cranky.
2Gunnar_Zarncke
Then maybe we have different experience. Or differently selected people around us.
7Dagon
For most problems like this, it's worth solving once or twice at small scale before you look for general solutions. How many parties have you thrown (or guided the food procurement for), and what have you found that makes for better estimation of needs? Have you talked with caterers or other experts in such estimation? It would be interesting to learn how they decide when to risk too little vs too much, and the clever tricks they have to control consumption (which will make the estimates more accurate). For instance, having lots of cheap starches and limited meat, along with explicit or subtle rationing, can lead to high waste measured by weight or calories, but fairly low waste measured by cost.
2Lumifer
Not sure caterers will be helpful since they're paid for what they bring to the party and they don't care at all whether it gets eaten or not. Similarly, the all-you-can-eat buffets have lots of data from which to estimate how much an average customer eats, and they have the law of large numbers on their side, too. For the house parties the usual answer is just experience. After a few missteps most people can learn to have a workable idea of the amount of food needed without formulating a full Bayesian model or even without a simple spreadsheet. Of course there is some uncertainty and the incentives make the host provide the amount at the top end of the reasonable estimate interval.
5Gunnar_Zarncke
Just to provide a data point: I regularly host get-togethers of friends that you might call parties, and usually nothing or only a very small amount is thrown away. I'm wondering whether this might be specific to Germany. Here there is some social pressure to avoid wasting stuff (together with a strong trend toward sustainability).
2entirelyuseless
"why not keep track of how much food my guests actually ate, and try adjusting the amount of food at my next party to match?" Because the amount of food that people eat is not an absolute value, but a function of how much is there. If you do that adjustment, and then continue to do that adjustment, you will end with a situation without any food. That is true both at parties and in any other situation, like meals served to people who otherwise will have nothing to eat, at least to a first approximation -- if the last situation is absolute, you will get people eating some food, but it will not be enough to live on.
0Viliam
I guess there is not a fixed amount of food brought per guest, but rather a random distribution. The host's goal is not to make sure that the average "food brought" equals the average "food desired", but rather that with, say, 95% probability the current "food brought" is at least 90% of "food desired" (feel free to change the numbers to fit your experience). Also, the host is hedging against the possibility that the few guests who usually come with hands full of food, suddenly can't come or for some random reason come empty-handed. I guess the best way to improve the world is to have a list of such charities in your neighborhood ready in a printed form, and give it to the host if they are interested.
4gjm
I agree with all that and would add:

* "Too much food" is a much less fun-killing failure mode than "Not enough food".
* You'd like guests to have a decent choice of things to eat even at the start when not so much has been brought and at the end when lots has been eaten. In particular, plenty of choice at the end of the party => lots of food left over.
* At least some party food keeps well and serves nicely as snack food, so if you have too much you just eat it later. (Or maybe bring it to another party. Check those best-before dates!)
* Having too much food kinda suggests "this person has lots of generous friends and/or limitless resources" whereas having too little kinda suggests "this person has no generous friends and is in financial trouble". Which message would you rather be sending to your party guests?
* The wastage isn't super-expensive anyway. What fraction of your income do you spend on party food?

If true this has some spectacular implications for computing (long term).

http://phys.org/news/2016-07-refutes-famous-physical.html

"Now, an experiment has settled this controversy. It clearly shows that there is no such minimum energy limit and that a logically irreversible gate can be operated with an arbitrarily small energy expenditure. Simply put, it is not true that logical reversibility implies physical irreversibility, as Landauer wrote."

Some of the limits of computation, how much you could theoretically do with a certain amount of ene...

6Vitor
This will not have any practical consequences whatsoever, even in the long term. It is already possible to perform reversible computation (paper by Bennett linked in the article), for which such lower bounds don't apply. The idea is very simple: just make sure that your individual logic gates are reversible, so you can uncompute everything after reading out the results. This is most easily achieved by writing the gate's output to a separate wire. For example, an OR gate, instead of mapping 2 inputs to 1 output like (x,y) --> (x OR y), would map 3 inputs to 3 outputs like (x,y,z) --> (x, y, z XOR (x OR y)), causing the gate to be its own inverse. Secondly, I understand that the Landauer bound is so extremely small that worrying about it in practice is like worrying about the speed of light while designing an airplane. Finally, I don't know how controversial the Landauer bound is among physicists, but I'm skeptical in general of any experimental result that violates established theory. Recall that just a while ago there were some experiments that appeared to show FTL communication, but it was ultimately a sensor/timing problem. I can imagine many ways in which measurement errors sneak their way in, given the very small amount of energy being measured here.
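(Vitor's construction is easy to verify directly. A tiny sketch of my own, just to illustrate the point: applying the 3-wire gate twice restores the input on all 8 input combinations, so no information is ever erased.)

```java
// Reversible OR gate: (x, y, z) -> (x, y, z XOR (x OR y)).
// Because it is its own inverse, no information is erased, which is
// why the Landauer erasure argument does not apply to it.
public class ReversibleOr {
    static boolean[] gate(boolean x, boolean y, boolean z) {
        return new boolean[] { x, y, z ^ (x | y) };
    }

    public static void main(String[] args) {
        for (int bits = 0; bits < 8; bits++) {
            boolean x = (bits & 1) != 0, y = (bits & 2) != 0, z = (bits & 4) != 0;
            boolean[] once = gate(x, y, z);   // with z = false, wire 3 carries x OR y
            boolean[] twice = gate(once[0], once[1], once[2]);
            if (twice[0] != x || twice[1] != y || twice[2] != z) {
                throw new AssertionError("gate is not self-inverse");
            }
        }
        System.out.println("gate verified as its own inverse on all 8 inputs");
    }
}
```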
4Gunnar_Zarncke
While you can always make the computation reversible, it comes at a price: carrying around larger and larger numbers of bits, which take space and time to communicate and store.
2Douglas_Knight
I think that the Landauer limit is controversial. But if it's wrong, one should be able to explain why on the level of theory. What ordinary models of physics say about their gate is much more convincing than an experiment. How did they design their gate if they don't have a competing theory?
2Lumifer
As far as I can see, the experiment has shown that what was considered to be the lower bound is actually not. However I don't understand how the claim of "no lower bound at all" necessarily follows. For all we know there is just a different, lower (lower bound).
0HungryHobo
I found it odd as well but I think it's because it implies that the theoretical reason for that lower bound may be invalid. There's likely going to turn out to be a different theoretical lower bound for some other reason but right now we don't have that theoretical reason.

Found this great youtube channel by a guy named Isaac Arthur, covering a variety of space topics. Has videos on Dyson Spheres, colonizing the Moon, and even concepts for very long term survival of civilizations and people past the heat death of the universe. Very rational and comprehensive.

[Link] Slashdot "New Study Shows Why Big Pharma Hates Medical Marijuana"

Christopher Ingraham writes in the Washington Post that a new study shows that painkiller abuse and overdose are significantly lower in states with medical marijuana laws and that when medical marijuana is available, pain patients are increasingly choosing pot over powerful and deadly prescription narcotics.

--

0root
I've read (mostly things by Ron Maimon) that marijuana* can actually impair your ability to do calculations (and by extension, I'd also assume, your ability to make decisions) and I'm curious if there's any truth to that. * Is there a difference between marijuana, medical marijuana, weed, insert_name_here? They seem to be used interchangeably. At least they seem to cause a similar if not the exact same effect.
2Gunnar_Zarncke
There are long-term effects, but the impact seems to be not fully clear (Wikipedia). On the other hand, there are many known side effects of the normal drugs used for the same purpose. Besides the medical properties there are also the social properties of a drug. See also the AMS report Brain science, addiction and drugs.

Now you can raise your neural networks at home, and then send 'em to school in the cloud, on GPUs.

"Today, researchers and developers can train their neural nets locally, and deploy them to Algorithmia’s scalable, cloud infrastructure, where they become smart API endpoints for other developers to use."

http://blog.algorithmia.com/2016/07/cloud-hosted-deep-learning-models/

A course in machine learning, printed or electronic

http://ciml.info/

I've been reading a slice of Neoreactionary - Anti-Neoreactionary discussions on Slate Star Codex.
A problem I've seen is that people are too hung up on a positive / negative affiliation with the passage of time. The controversy seems to revolve mostly around "the past was good / the past was bad".
Who cares how the past was?
Just tell me what your values are and what political / social system you think serves them best!
It doesn't matter if it comes from the past, the Bible, Lord of the Rings or utopian literature. Just discuss the model! It's mostly fiction anyway.

(this mini-rant is directed at nobody in particular. I'll likely never have the occasion to discuss with a Neoreactionary)

9Viliam
past = outside view

For example, if in the past people have repeatedly suggested a plan to create a paradise on Earth, and the plan, when realized, repeatedly ended with bloodshed and poverty, and now someone suggests the same plan again... I guess that's a reason to suspect it probably wouldn't end well. At the very least, the proponent should explain why exactly the previous instances have failed and what exactly they are planning to do differently today to avoid that specific failure. But there is a difference between using the past as an outside view, i.e. conservatism; and worshipping the "past as my modern mind imagines it", i.e. neoconservatism / neoreaction. The latter is, ironically, in some aspects similar to the progressives who are worshipping the fictional future -- similar approach to modelling society, different aesthetics (or as you called it "positive / negative affiliation with the passage of time").
0MrMind
I would be a little more radical, but you said what I thought better than I could.
6Vaniver
I think a lot of political questions hinge on what's possible, and also what the consequences of policies are. If someone says "I think we should arrange marriages instead of letting individuals pick," then the immediate questions to settle are 1) will people allow such a policy to be put in place / comply with it, and 2) what will the consequences be? (There's also the "does this align with principles" deontological question, but this is relatively easy to answer without looking at the past or present so I'll ignore it.) And the past provides our primary data source to answer those sorts of questions. Yes, we can imagine multiple different causal effects of attempting to arrange marriages, but how those interplay with each other and shake out is hard to know. But other people tried that for us, and so we can investigate their experiments and come to a judgment.
5MrMind
The problem I see in using the past as evidence is that the further we go from our era, the more what we know is mostly made up. True, we have documents and evidence and so on, but they only paint a relatively sketchy picture of what the society was, we mostly made up the details in a reasonable manner. Plus we don't get any statistical data on things like happiness, income, etc. The risk of mistaking noise for signal is so high that it's probably worth throwing it all away, especially when the starting point of the conversation is "People were happier / sadder in xth century, so we should / shouldn't do as they did". How can you possibly know?
5Vaniver
Sure, quality of data degrades with distance, both in space and time. But I don't think it degrades to the point where it actually is worth throwing it all away. Is this a serious question, or a statement of anti-epistemology? (That is, all knowledge is uncertain, and so the right question is "how did you get to the level of uncertainty you have" rather than "how do you justify pretending that there is no uncertainty?")
0MrMind
It's not only that data becomes more scarce. It's also that it becomes noisier. Case in point: many people believe the Gospels to be a semi-accurate narration of what happened during that era, but actually they were compiled centuries later, and historically contemporary sources are both scarce and paint a completely different picture. The further back we go, the higher the possibility of having bogus evidence. A bit of both, I guess. A cautionary tale, but also a question I would definitely ask if I were discussing with someone with that point of view.
4ChristianKl
I very much prefer people who base their political beliefs on empirics about the real world to people who just base their political beliefs on made-up fantasy. I don't think there's a good reason to treat both the same.

I have a neat software solution for something. Is it kosher to discuss it here, or would it be considered just another spamming attempt?

16Elo

Try us. Are you selling to us? If yes then maybe not so great to do (however Squirrelinhell released hasteworm just recently and no one complained). If no, then idea sharing is good.

5Thomas
Say that you have a school with about 100 teachers, 1000 students, 25 rooms... each with their own demands and constraints. Now you want an optimal schedule - who doesn't. For that I have software to do it automatically. Not semi-automatically like everyone else. I want to test it on several real-life examples from North American and Australian primary and secondary schools. For free, of course. I am looking for a principal or their assistant to try this together over Skype.
2sdr
Heads up about the business side of this: selling to primary & secondary schools, esp outside of the US, is 8/10 difficult. Specifically, even if the teachers are fully championing your solution, they do not wield any sort of purchasing authority (and sure as hell won't pay from their own wallet). Purchasing authority's incentive-structure does not align with "teacher happiness", "optimal schedule", or most things one would imagine being the mission of the school. It is, however, critical for them to control all sw used inside the school, and might actively discourage using non-approved vendors.
2Viliam
Whose job is it typically to create the schedule? Do those people have political power in schools? If your marketing point is "better schedules", then yes, it is about the benefit for teachers and students, and no one important cares about that. However, if your marketing point is "easier to make schedules", suddenly the school administration has an incentive to care.
0Thomas
Purely economically driven decisions should win eventually. For example, we once reduced the number of school buses from 4 to 3. 20%, or 160 students, come by bus. That's 3 full buses or 4 not-so-full buses. It's important, however, that every arriving student has a class right away. Otherwise he may want to come on a later bus, overcrowding it. Getting those students to arrive just in time with only 3 buses was a logistical nightmare. But it's just a constraint for the digital evolution of the school schedule. Another big saving is to eliminate the afternoon school shift. We have 2 such cases already evolved.
5Lumifer
Only in the realm of spherical cows in vacuum. Also known as "The markets can stay irrational longer than you can stay solvent".
2Vaniver
What optimization method are you using under the hood, if you don't mind me asking?
5Thomas
Evolution. Schedules are competing for being there. Every second 10000 or so are born and are mostly killed by the control program, which lets live only the top schedules according to the 30+ criteria set in the script. Random (but perhaps clever) mutation and non-random selection - that's under the hood. At first, the top schedule is a random one and not feasible at all. After a million (or a billion, that depends) generations the first feasible one appears, and from there on, evolution produces more and more perfect schedules. For every processor core, at least one evolution is going on, each at least slightly different. The program can spread across many computers, and there may be as many as 100 or more parallel evolutions going on. They talk occasionally (via the internet) and exchange their champions. It has been a 10-year-long real-life experiment, which went very well. A lot of schools were involved, teachers and students, and some academic papers were published. Now it's time to spread it.
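(For the curious, the loop described above has roughly this shape - a generic sketch of weighted-penalty evolutionary search, not Thomas's actual code; every name and number here is a placeholder. The per-constraint weights he mentions below slot in as multipliers on the violation counts.)

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

// Generic shape of the search described above: breed mutated copies of the
// current schedules, score them by a weighted sum of constraint violations,
// and keep only the top ones. A placeholder sketch, not the product code.
interface Constraint {
    long weight();                    // e.g. 0 .. 10^12, soft to hard
    int violations(int[] schedule);   // how badly this schedule breaks the rule
}

class Evolver {
    static int[] evolve(List<Constraint> constraints, int slots, int values,
                        int population, long generations, Random rng) {
        List<int[]> pool = new ArrayList<>();
        for (int i = 0; i < population; i++) {
            pool.add(randomSchedule(slots, values, rng));
        }
        for (long gen = 0; gen < generations; gen++) {
            // Random (but possibly clever) mutation: breed one child per parent.
            int parents = pool.size();
            for (int i = 0; i < parents; i++) {
                int[] child = pool.get(i).clone();
                child[rng.nextInt(slots)] = rng.nextInt(values);
                pool.add(child);
            }
            // Non-random selection: only the top schedules survive.
            pool.sort(Comparator.comparingLong((int[] s) -> cost(s, constraints)));
            pool.subList(population, pool.size()).clear();
        }
        return pool.get(0); // current champion
    }

    static long cost(int[] schedule, List<Constraint> constraints) {
        long total = 0;
        for (Constraint c : constraints) {
            total += c.weight() * c.violations(schedule);
        }
        return total;
    }

    static int[] randomSchedule(int slots, int values, Random rng) {
        int[] s = new int[slots];
        for (int i = 0; i < slots; i++) s[i] = rng.nextInt(values);
        return s;
    }
}
```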
0HungryHobo
so, parallel genetic algorithm based scheduling app with (ranked?) constraints? In what way is it more automatic than existing similar apps? presumably you still need to give it a list of constraints (say a few thousand constraints), possibly in a spreadsheet, some soft, some hard and it spits out a few of the top solutions or presumably an error if the hard constraints cannot be met? What can it do that, say, optaplanner can't do?
2Thomas
I wouldn't say it's a "genetic algorithm"; I prefer the term "evolution algorithm". We did some testing. For example, we took some existing schedules and optimized them with our tool. The difference was substantial. We also did some packings of circles inside a square and spheres inside a cube, denser than had been previously achieved. We have built some 3D crosswords, 8 by 8 by 8 letters with no black field at all - filled with English words. I don't know if optaplanner can do the same. I think not. Every constraint has its own user-specified weight, from 0 to 10^12, and every integer inside this interval. This is the measure of how soft or hard a constraint is.
4Vitor
Did you also test what other software (optaplanner as mentioned by HungryHobo, any SAT solver or similar tool) can do to improve those same schedules? Did you run your software on some standard benchmark? There exists a thing called the international timetabling competition, with publicly available datasets. Sorry to be skeptical, but scheduling is an NP-hard problem with many practical applications, and tons of research has already been done in this area. I will grant that many small organizations don't have the know-how to set up an automated tool, so there may still be a niche for you, especially if you target a specific market segment and focus on making it as painless as possible.
0Thomas
We did some benchmarks. Sometimes we did well, sometimes not that well. For example, in the case of the Job Shop Scheduling benchmark we were unable to break a single record. There are records waiting to be broken in the JSS area, but we haven't broken a single one. But we are still holding some (years-old) packing records right now. One may say that JSS is the base of every scheduling problem and that packing is not. In fact, real-life scheduling is more complicated than either one of those benchmarks. We have many more constraints in real life. And it turns out that many constraints somehow help the evolution to find trade-offs.
1HungryHobo
if you're the holders of some records for certain problem types then that grabs my interest. I'd suggest leading with that since it's a strong one.
5gjm
Not necessarily for their target market.
0NancyLebovitz
I believe that being flexible about target markets is one of the major ways businesses grow.
0Thomas
The best way to win over principals is to show them that a ridiculously complex constraint may be applied and calculated automatically.

* 4.5 school hours of S per week (4 hours on odd weeks and 5 hours on even weeks)
* when there is a fifth hour in the week, this hour may be the second hour of the subject S on that day
* if it is on the same day, it should be immediately after the previous hour of the subject S
* in the above case, it must be the last hour for the teacher
* three classes of students are divided into 5 groups for the subject S
* there are 4 teachers for those 5 groups; one teacher teaches groups number 2 and 4
* there is a given list of students for groups 1, 3 and 5, and a combined list of students for groups 2 and 4
* the computer should divide the combined list into two separate lists (2 and 4), but they must not differ in size by more than 4 students
* as one of those groups (2 or 4) is always idle, the subject M, which is equally divided, must be taught then - or else S should be the first hour of the day
* for there are only 4 hours of subject M per week
* there are only 3 teachers of M
* there are also 3 hours of subject A per week for those same students, in 5 differently set groups
* there are 5 teachers of A, but one of them also teaches group number 1 of S
* it would be nice, but not mandatory, if the number of waiting hours for students were 0

This is a real-life example I discussed an hour ago with one of the teachers (a math teacher) in one of our schools. It is not the most complex demand we have had, by far. S = Slovenian language, M = Math, A = Anglescina (guess what that is)
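(To make the flavor concrete: one of the simpler rules above - groups 2 and 4 must not differ in size by more than 4 students - could be written as a weighted constraint against the sketch interface from the earlier comment. Purely illustrative; the schedule encoding is a hypothetical simplification.)

```java
// Purely illustrative: the "groups 2 and 4 must not differ in size by more
// than 4 students" rule as a weighted constraint. The schedule encoding
// (one group assignment per student) is a hypothetical simplification.
class GroupBalanceConstraint implements Constraint {
    private final long weight;

    GroupBalanceConstraint(long weight) { this.weight = weight; }

    @Override public long weight() { return weight; }

    @Override public int violations(int[] groupAssignments) {
        int diff = Math.abs(sizeOfGroup(groupAssignments, 2)
                          - sizeOfGroup(groupAssignments, 4));
        return Math.max(0, diff - 4); // penalty grows with the excess imbalance
    }

    private static int sizeOfGroup(int[] groupAssignments, int group) {
        int n = 0;
        for (int g : groupAssignments) if (g == group) n++;
        return n;
    }
}
```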
1HungryHobo
Fair enough. I was underwhelmed by your initial post describing it, but I agree that showing that your system can handle weird constraints in real examples is an excellent demonstration. The record thing to me just happens to be a good demonstration that you're not just another little startup with some crappy scheduling software - you're actually at the top of the field in some areas.
0Lumifer
If your algorithm is actually the best-of-class for this problem, there are serious applications for it outside of schools.
0Thomas
I know that. But my focus in this thread is North America's schools as a big market. But yes - how good is this algorithm really? Where is its optimal domain? I guess evolving algorithms is the best usage: either from a previously known algorithm, or from scratch, or from data. Like evolving Kepler's laws from planetary data. I wrote a post about that here, a few years ago. http://lesswrong.com/lw/9pl/automatic_programming_an_example/
0Lumifer
The thing is, it's a very fragmented market. The US schools are local, basically run at the town level, so for you it is essentially a retail market with a large number of customers each of which buys little. I'm guessing that you'll need a large sales organization to break in.
0HungryHobo
Or possibly to find an existing company selling office/organization/planning software that's already got a big share of the market, and sell them a license to the tech.
0Viliam
Does the solution space support this? I can imagine a schedule that violates only 1 criterion, but where the nearest correct solution is far away from it. (Seems to me the schedules are similar to 3-SAT in this aspect.)
0Thomas
This is indeed a big and fundamental problem. If only 1 criterion is violated and this persists for many millions of generations, the control program sees this semi-solution as worse and worse - much worse than a miss of 2 or 4 criteria. So it's then killed. It's even more complicated than that. Several such tricks are employed, and this problem almost vanishes.
2Viliam
I used to make schedules with aSc TimeTables years ago. There is a free demo available. Could you compare your approach with this app? The application is fully automatic, in the sense that you first enter all the data and constraints, and then you run the computation. (There is an option to put some items on specific place and "lock" them there, but this is strongly discouraged unless you really know what you are doing. Essentially, you can do it to speed up the computation if you have a logical proof that certain things must be done some way, but the application can't notice that and keeps wasting CPU time with alternative approaches.) As far as I know, the numbers of teachers / students / rooms / subjects are not limited, but of course their number has an impact on the complexity of the computation.
4Thomas
The complexity of the constraints for each student or teacher is much greater with our software than with aSc. The scripting language is much more complex and enables you to describe pretty much every whim one might have - like the different speeds teachers have between two locations, or after how many classes a break is mandatory for a specified teacher... and many more. It's primarily student-oriented, and every student can have a very different curriculum than everybody else. Still, all this will be automatically calculated and then optimized. Now we want to see how it will behave in practice for North America and Australia.
4Viliam
Sounds great! I hope you don't expect average teachers to write the scripts though.
4Thomas
Not an average teacher, no. But at every other school there is at least one teacher who is able to do it (for the entire school, of course). Some like to work in pairs when scripting it. I thought I might find some among the readers and contributors here as well. Looking for people with this (hard) problem.

I've been thinking about belief as anticipation versus belief as association.

Some people associate with beliefs like they associate with sports teams. Asking them to provide evidence for their belief is like asking them to provide evidence for their sports team being "the best."

And beliefs as anticipation you know, I'm sure.

My question is: What are signs of a "belief" being an anticipation versus it being a mere association (or other non-anticipating belief)?

One is the attempt to defend against falsification: "If you REALLY believe...

0Viliam
This is further complicated by the fact that even the anticipation-beliefs are probabilistic. So you can have a "belief" that says "I love my sports team", and a belief that says "I expect my team to win (probability 80%)". So in both cases it is possible for the team to lose and the person to keep their belief.
0ChristianKl
I don't think the test does what you propose. I can both strongly identify with a belief and at the same time make anticipations based on the belief.
0Brillyant
Three types exist:

1) Belief as association AND belief as anticipation
2) Belief as anticipation ONLY
3) Belief as association ONLY

Only type #3 beliefs would leave the believer making excuses in advance. They don't actually believe a claim to be true (anticipation), but they believe that assenting to the belief is important (association). See Dennett's Belief in Belief and Sagan's Garage Dragon for more info. I don't think it's quite as cut and dried as this, by the way. People have their personal probabilities in regard to how strongly they hold anticipatory beliefs. It's not all or nothing.
1ChristianKl
Empirically I don't find this to be the case. I think most skeptics do have beliefs of anticipation that various paranormal effects won't happen. At the same time, put a skeptic in a situation where his beliefs about the domain might reasonably get challenged, and he might make excuses in advance. Most people don't use probability for their beliefs. They use mental processes such as the availability heuristic, which doesn't correspond directly to probabilities. Neither Dennett nor Sagan is a psychologist or has similar experience with working with beliefs in other contexts. If you use their discussions, which are essentially about ontology, as discussions about how humans reason, you are going to make mistakes.
0Brillyant
I meant "personal probability" as the confidence at which people intuit a belief as actually anticipatory (vs. a belief they merely assent to as an association.) This level of confidence is on a sliding scale (vs. all or nothing).
0ChristianKl
Moat-and-bailey. I don't think there was a suggestion in the above post that you meant by probability something that doesn't follow Kolmogorov's axioms and where you can't directly apply Bayes' rule. Especially on LW I think it's valuable not to call things that don't follow those axioms, and therefore aren't what's usually meant by 'probability', 'probability'.
5gjm
I don't normally point out typos (and it's probably better on balance for LW not to be the sort of nitpicky place where everyone does) but this one is (1) almost exactly backwards and (2) sufficiently plausible-sounding to be dangerous :-). It's motte and bailey. The motte is the raised mound with a fortification on it. The moat is the big ditch around the castle, usually filled with water.
3ChristianKl
Thanks.
2Lumifer
The bailiffs got drunk on Baileys, crossed the moat, and demolished the motte leaving nothing but bay leaves and motes of dust floating in the air...
0Brillyant
Okay. My point was only that there is a spectrum. Some beliefs are anticipatory (i.e. people actually believe them) and others are just associations (i.e. people don't believe them, but they find the idea of saying they believe in them to be so important they swear up and down they believe in them)... But most beliefs are somewhere in the grey middle, with people assigning a "gut feeling probability" to each belief, without doing any math.
0ChristianKl
With those semantics people not only have a "gut feeling probability" but also a "heart feeling probability" and various similar "probabilities". Those don't have to be the same and depending on the context the person is going to use a different one.
0Brillyant
Meh. Not really. There is a strong connotation in American English for "gut feeling" that means essentially instinct or intuition. Here's a definition I found via Google's first page results: "an instinct or intuition; an immediate or basic feeling or reaction without a logical rationale" This is what I meant. I think that would be clear to a high percentage of readers.
0ChristianKl
Here again is the problem that you don't look at the way humans reason but at the abstract concepts defined in the dictionary. The way terms are defined in the dictionary has little to do with the empirical reality that some people give different intuitive answers when they feel into their gut than when they feel into their heart.
0Jiro
I can guess that if you were to meet a flat-earther with the intent of engaging with his ideas, you would start thinking of what things he might show you and why those things wouldn't actually demonstrate a flat earth. That does not mean you are making "excuses in advance". "He's probably going to show me how ships disappear on the horizon, but I know that is affected by air refraction." "Oh, you're just making an excuse in advance."
1ChristianKl
What empiric standard would you use to classify things as making excuses in advance?
1Jiro
I don't know, but I'm pretty sure that "I can respond to any claim he's likely to make" isn't it. I'm not sure there is such a thing at all, short of having your idea be outright unfalsifiable.
1ChristianKl
It seems like there's something the OP means by "making excuses in advance". It might not be what you think would rightly be called "making excuses in advance". I don't think that category exists in a way where it can be successfully used to distinguish people who have anticipations and are identified with a belief from people who are just identified with it.

"Superintelligence cannot be contained: Lessons from Computability Theory" http://arxiv.org/pdf/1607.00913.pdf

"Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In light of recent advances in machine intelligence, a number of scientists, philosophers and technologists have revived the discussion about the potential catastrophic risks entailed by such an entity. In this article, we trace the origins and development of the neo-fear of superintelligence, and s... (read more)

16gjm

This paper frames the problem as "look at a program and figure out whether it will be harmful" and correctly observes that there is no way to solve that problem with perfect accuracy if the programs being analysed are arbitrary. But its arguments have nothing to say about, e.g., whether there's some way of preventing harm as it's about to happen; nor about whether it is possible to construct a program that provably does something useful without harming humans.

E.g., imagine a world where it is known that the only way to harm humans is to press a certain big red button labelled "Harm the Humans". The arguments in this paper show that there is no general procedure for deciding whether a computer with the ability to press this button will do so. But they don't rule out the possibility that you can make a useful machine with no access to the button, or a useful machine with a little bit of hardware in it that blows it up if it gets too close to the button.

(There are reasons to be concerned about such machines because in practice you probably can't causally isolate them from the button in the way required. The paper's introductory material discusses some such reasons. But they play no role in the technical argument of the paper, at least on the cursory reading I've given it.)

0turchin
I think that it is difficult, but may be possible, to create a superintelligent program which will provably do some formally specified thing. But the main problem is that we can't formally specify what "harming humans" is. Or we can, but we can't be sure that it is a safe definition. So it results in some kind of circularity: we could prove that the machine will do X, but we can't prove that X is actually good and safe. We may try to return the burden of proof to the machine: we must prove that it will prove that X is really good and safe. I have bad feelings about the computability of this task. That is why I am generally skeptical of the idea of a mathematical proof of AI safety. It doesn't provide 100 percent safety, because the proof can have holes in it and the task is too complex to be solved in time.
7gjm
This is a real and important difficulty, but it isn't what the paper is about -- they assume one can always readily tell whether people are being harmed.
0torekp
What is the notion of "includes" here? Edit: from pp 4-5:

Did some rationality-informed commenting for my university television about guns and racism.