For the past few months we've had three developers (Eric Rogstad, Oliver Habryka, and Harmanas Chopra) working on LW 2.0. I haven't talked about it publicly much because I didn't want to make promises that I wasn't confident could be kept. (Especially since each attempt to generate enthusiasm about a revitalization that isn't followed by a revitalization erodes the ability to generate enthusiasm in the future.)
We're far enough along that the end is in sight; we're starting alpha testing, and I'm going to start posting a status update in the Open Thread each Monday to keep people informed of how it's going.
New research out of the Stanford / Facebook AI labs: They train an LSTM-based system to construct logical programs that are then used to compose a modular system of CNNs that answers a given question about a scene.
This is very important for the following reasons:
This is really exciting, and I'm glad we're moving further in the direction of "neural networks being used to construct interpretable programs."
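To make the idea concrete, here is a toy sketch (not the paper's actual code, and with hypothetical module and scene names) of what "a program composing modules to answer a question about a scene" means: the predicted program is a sequence of named modules, and executing them in order yields the answer.

```python
# A symbolic stand-in for a visual scene; the real system operates on images
# via CNN modules, but the composition logic is the same.
SCENE = [
    {"shape": "cube", "color": "red"},
    {"shape": "sphere", "color": "red"},
    {"shape": "cube", "color": "blue"},
]

def filter_color(objs, color):
    return [o for o in objs if o["color"] == color]

def filter_shape(objs, shape):
    return [o for o in objs if o["shape"] == shape]

def count(objs, _):
    return len(objs)

MODULES = {"filter_color": filter_color,
           "filter_shape": filter_shape,
           "count": count}

def execute(program, scene):
    """Run a list of (module_name, argument) steps, threading state through."""
    state = scene
    for name, arg in program:
        state = MODULES[name](state, arg)
    return state

# "How many red things are there?" -> a (hypothetical) predicted program:
program = [("filter_color", "red"), ("count", None)]
print(execute(program, SCENE))  # -> 2
```

The interpretability win is that the program itself is readable: you can inspect which modules were chosen and in what order, rather than staring at one opaque end-to-end network.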
I have a neat idea for a smartphone app, but I would like to know if something similar exists before trying to create it.
It would be used to measure various things in one's life without having to fiddle with spreadsheets. You could create documents of different types, each type measuring something different. Data would be added via simple interfaces that fill in most of the necessary information. Reminders based on time, location and other factors could be set up to prompt for data entry. The gathered data would then be displayed using various graphs and c...
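A minimal sketch of the data model you're describing, in case it helps clarify whether an existing app matches. Every name here (Document, Entry, etc.) is hypothetical, not an existing app's API:

```python
from dataclasses import dataclass, field
from datetime import datetime
from statistics import mean

@dataclass
class Entry:
    value: float
    when: datetime = field(default_factory=datetime.now)

@dataclass
class Document:
    """One 'document' tracks one kind of measurement, e.g. hours of sleep."""
    name: str
    unit: str
    entries: list = field(default_factory=list)

    def add(self, value):
        # A real app would pre-fill the timestamp, location, etc. automatically,
        # and reminders would prompt for this call.
        self.entries.append(Entry(value))

    def average(self):
        return mean(e.value for e in self.entries)

sleep = Document("sleep", "hours")
sleep.add(7.5)
sleep.add(6.0)
print(sleep.average())  # -> 6.75
```

The graphing/reminder layers would sit on top of something like this, but the core is just typed documents of timestamped entries.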
There's a weird cold war in software design, where everyone knows that they can use 'security' to win any argument, but we must all refrain from doing so, because that ratchet only goes one way.
The deal is that no one can ever argue against 'security', so you always win if you bring it up, but if you use that against me I'll retaliate, and the project will fail (very very securely).
Also, unrelated: if you ever hear someone bragging about their amazing release process, just nod and ask them about the emergency release process. That's what they ACTUALLY use.
Why do you care if you're running in a simulation? Well, if you're a piece of malware, like a computer virus or Internet worm, you care whether you're being analyzed by an anti-malware researcher. If you're running in a researcher's lab instead of actually infecting target systems, you'd want to shut down so the researcher can't observe your real behavior and find out how to exterminate you. However, this means that if you can be fooled into believing you're running in a simulation when you're not, you can be made to shut down in the wild.
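As an illustrative (defensive-research) sketch of what "checking whether you're in a simulation" looks like in practice: malware often probes for well-known virtualization artifacts. The heuristics below are simplified real examples (the x86 "hypervisor" CPU flag, and MAC address prefixes assigned to VM vendors like VirtualBox and VMware), not a complete detection suite.

```python
def looks_like_simulation(cpuinfo_text, mac_address):
    """Guess whether we're running inside a VM/sandbox from two cheap signals."""
    # x86 guests typically expose a "hypervisor" flag in /proc/cpuinfo.
    if "hypervisor" in cpuinfo_text:
        return True
    # Some VM vendors use well-known MAC prefixes
    # (08:00:27 = VirtualBox; 00:05:69 and 00:0c:29 = VMware).
    vm_mac_prefixes = ("08:00:27", "00:05:69", "00:0c:29")
    return mac_address.lower().startswith(vm_mac_prefixes)

# Simulated inputs for illustration (a QEMU guest, then bare metal):
print(looks_like_simulation("flags : fpu vme hypervisor", "52:54:00:12:34:56"))  # -> True
print(looks_like_simulation("flags : fpu vme", "3c:22:fb:aa:bb:cc"))  # -> False
```

The anti-malware trick in the paragraph above is exactly to spoof these signals on real machines, so malware that runs this kind of check shuts itself down in the wild.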
Can we make a list of all the best (maybe not the best, but the ones people use) implementation intentions/TAPs for rationality? That would be instantly useful to anyone who encounters it.
A list of the general TAPs/implementation intentions LWers find useful in their lives would also be very helpful to everyone.
I don't have enough karma to even make a post in discussion, so can someone take up my quest?
Do you find it demotivating to do mathematics that's assigned to you in school, compared to doing mathematics on your own? I'm currently having difficulty getting myself to do the mathematics that's assigned to me.
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "