For the past few months we've had three developers (Eric Rogstad, Oliver Habryka, and Harmanas Chopra) working on LW 2.0. I haven't talked about it publicly much because I didn't want to make promises that I wasn't confident could be kept. (Especially since each attempt to generate enthusiasm about a revitalization that isn't followed by a revitalization erodes the ability to generate enthusiasm in the future.)
We're far enough along that the end is in sight; we're starting alpha testing, and I'm going to start posting a status update in the Open Thread each Monday to keep people informed of how it's going.
New research out of the Stanford / Facebook AI labs: They train an LSTM-based system to construct logical programs that are then used to compose a modular system of CNNs that answers a given question about a scene.
This is exciting for several reasons; above all, I'm glad we're moving further in the direction of "neural networks being used to construct interpretable programs."
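A toy sketch of the program-then-modules idea, just to make the composition concrete. The module names, scene encoding, and question are all made up for illustration; in the paper the modules are CNNs operating on image features, and an LSTM emits the program from the question text.

```python
# Toy illustration: a generated "program" is a sequence of named modules,
# each a function from the current state to a new state, composed in order.

def find_red(scene):
    # Select objects whose color attribute is "red".
    return [o for o in scene if o["color"] == "red"]

def filter_sphere(objs):
    # Keep only sphere-shaped objects from the current selection.
    return [o for o in objs if o["shape"] == "sphere"]

def count(objs):
    # Reduce the selection to an answer.
    return len(objs)

MODULES = {"find_red": find_red, "filter_sphere": filter_sphere, "count": count}

def execute(program, scene):
    # Thread the scene through each module named in the program.
    state = scene
    for name in program:
        state = MODULES[name](state)
    return state

scene = [{"color": "red", "shape": "sphere"},
         {"color": "red", "shape": "cube"},
         {"color": "blue", "shape": "sphere"}]

# In the paper an LSTM would emit this program from a question like
# "How many red spheres are there?"; here it's hard-coded.
program = ["find_red", "filter_sphere", "count"]
print(execute(program, scene))  # → 1
```

The interpretability win is that the intermediate program is human-readable: you can inspect which modules were chosen and in what order, independently of the learned weights inside each module.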
I have a neat idea for a smartphone app, but I would like to know if something similar exists before trying to create it.
It would be used to measure various things in one's life without having to fiddle with spreadsheets. You could create documents of different types, each type measuring something different. Data would be added via simple interfaces that fill in most of the necessary information. Reminders based on time, location, and other factors could be set up to prompt for data entry. The gathered data would then be displayed using various graphs and charts.
There's a weird cold war in software design, where everyone knows that they can use 'security' to win any argument, but we must all refrain from doing so, because that ratchet only goes one way.
The deal is that no one can ever argue against 'security', so you always win if you bring it up, but if you use that against me I'll retaliate, and the project will fail (very very securely).
Also, unrelated: if you ever hear someone bragging about their amazing release process, just nod and ask them about the emergency release process. That's what they ACTUALLY use.
Why do you care if you're running in a simulation? Well, if you're a piece of malware, like a computer virus or Internet worm, you care whether you're being analyzed by an anti-malware researcher. If you're running in a researcher's lab instead of actually infecting target systems, you'd want to shut down so the researcher can't observe your real behavior and find out how to exterminate you. However, this means that if you can be fooled into believing you're running in a simulation when you're not, you can be made to shut down in the wild.
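A crude sketch of the kind of heuristic this describes, assuming nothing about any real malware family; the specific checks and thresholds here are arbitrary and purely illustrative of "am I being analyzed?" reasoning.

```python
import os
import time

def looks_like_sandbox():
    """Return a list of hints that the current environment might be an
    analysis sandbox. Illustrative heuristics only; thresholds are arbitrary."""
    hints = []
    # Analysis VMs are often provisioned with very few CPU cores.
    if (os.cpu_count() or 0) < 2:
        hints.append("few CPUs")
    # Heavily instrumented environments can make a busy loop take far
    # longer in wall-clock time than it would on bare metal.
    start = time.perf_counter()
    for _ in range(10**6):
        pass
    if time.perf_counter() - start > 1.0:  # arbitrary threshold
        hints.append("slow/instrumented execution")
    return hints
```

The flip side, as the comment notes, is that a defender who can cheaply make a real machine exhibit these hints (few visible cores, slowed clocks) can trick such malware into shutting itself down in the wild.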
Can we make a list of all the best (maybe not the best, but the ones people use) implementation intentions/TAPs for rationality? That would be instantly useful to anyone who encounters it.
A list of general TAPs/implementation intentions that LWers find useful in their lives would also be very helpful to everyone.
I don't have enough karma to even make a post in discussion, so can someone take up my quest?
Do you find it demotivating to do mathematics that's assigned to you in school, compared to doing mathematics on your own? I'm currently having difficulty getting myself to do mathematics that's assigned to me.
Short enough to just post here rather than linking:
Imagine that you are an intelligent bird with a 360-degree panoramic view, flying over a plane equipped with orthogonal x and y axes, clearly visible and, what a coincidence, intersecting just 10 meters beneath you.
I argue that, due to the well-known phenomenon of geometrical perspective, you see the line going North as, in the distance, parallel to the line going West. In fact, every direction seems parallel to all three other directions.
Is that right, and if so, why? How could this be?
Is there an unstated assumption that the panoramic view is accomplished by mapping to a human-evolved ~135 degree field of view? I don't think this would happen in a brain evolved and trained on panoramic eyes/sensors. It doesn't happen in reality, where panoramic views exist everywhere and are generally accessed by turning our heads.
Closer objects must appear bigger, and this kind of perspective is inescapable, whether for us, for cameras, or for birds.
From this, the apparent parallelism of two lines going out from a single point follows. How, then, does a creature with 360-degree vision handle this? The straight road going North appears parallel to another straight road going West, which appears parallel to yet another straight road going South, at least at some distance and then out to the horizon.
I have an idea, but first I am asking you. How?
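One way to make the geometry concrete, assuming the 10-meter height from the setup: compute the angle below the horizontal at which a point on the road appears, as a function of its ground distance from the bird. Every road's points sink toward the horizon (elevation angle → 0) as distance grows, while each road keeps its own fixed compass direction, so each converges to its own vanishing point on the horizon rather than to a common one.

```python
import math

H = 10.0  # bird's height above the intersection, from the puzzle

def elevation_deg(d):
    """Angle below the horizontal (in degrees) at which a road point
    at ground distance d appears to the bird."""
    return math.degrees(math.atan2(H, d))

# Works identically for the North road and the West road; only the
# azimuth (compass direction) differs between them.
for d in (10, 100, 1000, 10000):
    print(f"d = {d:6} m  ->  {elevation_deg(d):.3f} degrees below horizontal")
```

At d = 10 m the point appears 45 degrees below the horizontal; by d = 10 km it is only about 0.06 degrees below, i.e. essentially on the horizon. This is the sense in which, to any eye (panoramic or not), all the roads "flatten" toward the horizon while still pointing in different compass directions.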
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "