LIST: I can't vote Karma on some people, some contexts.
What's up with that?
In a discussion thread, I can karma vote on anyone.
But if I select a person to see all their posts, karma voting is disabled for some people and enabled for others. The same thing happens when I look at their posts under the list of all my posts.
Iterated Prisoner's Dilemma in software patents
This post contains some thoughts around software-patent strategies for large tech companies, in particular how the ability to block others' applications seems to set up an Iterated Prisoner's Dilemma and may change the strategic landscape for patents entirely.
Joel Spolsky writes of recent successes in blocking bad patent applications:
Micah showed me a document from the USPTO confirming that they had rejected the patent application, and the rejection relied very heavily on the document I found. This was, in fact, the first “confirmed kill” of Ask Patents, and it was really surprisingly easy.
and suggests that this may lead to a "Mexican Standoff" among major software companies:
My dream is that when big companies hear about how friggin’ easy it is to block a patent application, they’ll use Ask Patents to start messing with their competitors. How cool would it be if Apple, Samsung, Oracle and Google got into a Mexican Standoff on Ask Patents? If each of those companies had three or four engineers dedicating a few hours every day to picking off their competitors’ applications, the number of granted patents to those companies would grind to a halt. Wouldn’t that be something!
It seems to me that this would be something of a Prisoner's Dilemma situation for the companies: Presumably, each of them is best off if it is the only one that can get any software patents (it defects by blocking the others, they cooperate by not setting up a patent-blocking team), better off if everyone can get patents (everyone cooperates by not having a blocking team), and worst off if nobody can get patents (everyone has a blocking team which they have to pay for). It is Iterated because the decision to block or not block can be made anew every month, or quarter, or whatever. So the question is, will these companies filled with smart people be able to recognise an IPD, and will they cooperate?
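The setup above can be sketched as a toy iterated game. This is a minimal illustration, not a claim about the real payoffs: the numbers below are my own assumptions and only preserve the ordering described (defecting alone is best, mutual cooperation is better than mutual blocking), and `tit_for_tat` and `always_defect` are stock IPD strategies.

```python
# Toy model of the patent-blocking game as an Iterated Prisoner's Dilemma.
# Payoff numbers are illustrative assumptions; only their ordering matters.
# C = don't run a patent-blocking team, D = run one.
PAYOFFS = {
    ("C", "C"): (3, 3),  # everyone can get patents
    ("C", "D"): (0, 5),  # only the defector can get patents
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # nobody gets patents, and both pay for teams
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds):
    """Total payoffs after `rounds` monthly block-or-don't decisions."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Sustained mutual cooperation beats sustained mutual blocking:
print(play(tit_for_tat, tit_for_tat, 12))     # (36, 36)
print(play(always_defect, always_defect, 12)) # (12, 12)
```

The interesting empirical question is whether real companies behave more like `tit_for_tat` (retaliate, then forgive) or lock into mutual defection once anyone blocks.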
Some factors to consider: Setting up a patent-blocking team requires some small amount of effort, so inertia is in favour of cooperation. On the other hand, many individual engineers at these places are likely out of sympathy with the patents that their managers insist on, and may be delighted to push the 'D' button under the guise of sabotaging their competitors. (And at least some of the major tech companies have 20% time or equivalents, so there wouldn't even be much inertia to overcome - just decide to do it!)
Another point is that this is a multiplayer game, but it only takes two companies to block everyone: For example, Google blocks everyone except Google, and then exactly one company needs to retaliate to make the block complete. This does of course raise the question of who is going to step forward and pay for the retaliation; but on the other hand, the cost appears small. The free-rider problem exists, but it does not seem to be large.
Another point: The ease of patent-blocking may change the strategic landscape entirely, by making it not worth the effort to file for patents in the first place. It appears to me that everyone involved knows that these patents are worthless. They file them for some mix of prestige, "everyone does it", and the ability to retaliate if someone else sues using _their_ worthless overbroad patents. Presumably it is only worth expending engineer time on this because the patents are very likely to be granted; conversely, it's only worth having patent-blocking teams if a lot of worthless applications are filed. The equilibrium is not clear to me, but it seems that it will have to shift at least slightly in the direction of having engineers do more bug-fixing and less patent-filing.
Best causal/dependency diagram software for fluid capture?
I've found most graphing software too clunky, with too much mental friction, for my purposes: creating graphically represented plans, converting written diagrams into digital form, or doing preference inference based on the structure of my goals (amongst other things).
So far the only tool that I've seen that reduces this friction is GraphViz [1], since I think I can literally just list down connection after connection in markup, with no care for structure or reasonableness, and then prune connections after I see how the entire thing looks. Point and click is for suckers.
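The "just list connection after connection" workflow looks something like this in practice: dump edges as they come to mind, emit DOT markup, and let GraphViz worry about layout. The edge list below is invented purely for illustration.

```python
# Sketch of the low-friction GraphViz workflow: list raw connections,
# generate DOT source, render later, prune afterwards.
edges = [
    ("learn_statistics", "finish_thesis"),
    ("finish_thesis", "get_job"),
    ("get_job", "save_money"),
    ("save_money", "travel"),
]

lines = ["digraph goals {"]
for cause, effect in edges:
    lines.append(f"    {cause} -> {effect};")
lines.append("}")
dot_source = "\n".join(lines)

print(dot_source)
```

Save the output and run it through `dot -Tpng -o goals.png` to see how the whole thing looks before pruning.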
However, I also like the approach of Freemind that quickly outputs a visual map that is easily traversable; but it doesn't do much for me when the causality is more involved.
Are there any alternatives that anyone is aware of?
[1] If you are not familiar with GraphViz, see this amusing introduction that maps the social network in R. Kelly's hit hip hopera, "Trapped in the Closet".
Idea: Self-Improving Task Management Software
So what the world needs is yet another task management program, right?
My idea is software which automatically implements productivity strategies, measures the effectiveness of those strategies, and analyses which strategies work best for you. Hopefully, using the software would result in a sustained increase in your productivity over time.
By "productivity strategies" I mean things like: the recommendations in the anti-procrastination algorithm, the pomodoro technique, exercising regularly, pre-commitment, experimenting with sleep patterns, gamifying your tasks and so forth.
In practical terms, what I'm envisioning is an extensible software framework. The core program would be a simple task list manager: add tasks to be done in the future, check off items as done when completed and send notifications to the user.
This core framework would then be extended by plugins, which represented different productivity strategies. For example, the pomodoro plugin might make your first task at 9am each morning to review your task list and choose the most important three tasks (MITs), your second task to set and begin a timer for 30 minutes and your third task to complete that top MIT you chose. After 30 minutes, it would add a new task of taking a five minute relaxation break and send you a notification to let you know. Five minutes later, it would notify you again to finish your relaxation break task, with a fresh task to re-start the timer and then back to your MITs for a further 30 minutes.
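A rough sketch of what that core-plus-plugin split might look like. Every name here (`TaskList`, `PomodoroPlugin`, the hook methods) is invented for illustration; nothing like this exists yet, and a real framework would need timers, notifications and persistence.

```python
# Minimal sketch of the plugin framework: a core task list plus a
# simplified pomodoro plugin that injects tasks via hook methods.
from dataclasses import dataclass, field

@dataclass
class TaskList:
    tasks: list = field(default_factory=list)

    def add(self, description):
        self.tasks.append({"description": description, "done": False})

    def complete(self, description):
        for task in self.tasks:
            if task["description"] == description:
                task["done"] = True

class PomodoroPlugin:
    """Injects the morning pomodoro routine described above."""
    WORK_MINUTES = 30
    BREAK_MINUTES = 5

    def on_day_start(self, task_list):
        task_list.add("Review task list and choose three MITs")
        task_list.add(f"Start a {self.WORK_MINUTES}-minute timer")
        task_list.add("Work on top MIT until the timer rings")

    def on_timer_end(self, task_list):
        task_list.add(f"Take a {self.BREAK_MINUTES}-minute break")

todo = TaskList()
PomodoroPlugin().on_day_start(todo)
print([t["description"] for t in todo.tasks])
```

The point of the hook-method design is that the core never knows what a "pomodoro" is; plugins only see the task list.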
The software could independently activate and deactivate the plugins in order to collect sufficient data to suggest which strategies were most effective for you. Over time, more plugins would be written as people made further suggestions. Existing plugins could be potentially improved and automatically reviewed using A/B testing.
By an "effective" strategy, I mean one where a large number of tasks are completed, the number of tasks remaining on the list is small, and the age of those tasks is not too great. However, the criteria could be extended to ask the user for an indication of mood, to allow for low-stress optimisation, for example. Perhaps stochastic self-sampling would work well here.
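One possible way to turn those three criteria into a single score. The weights are arbitrary assumptions for illustration; a real system would tune them, or A/B test them like any other plugin parameter.

```python
# Toy effectiveness metric: reward completed tasks, penalise a long
# backlog and penalise stale tasks. Weights are illustrative only.
def effectiveness(completed, backlog_ages_days,
                  w_done=1.0, w_backlog=0.5, w_age=0.1):
    """Higher is better. `backlog_ages_days` lists each open task's age."""
    backlog_penalty = w_backlog * len(backlog_ages_days)
    age_penalty = w_age * sum(backlog_ages_days)
    return w_done * completed - backlog_penalty - age_penalty

# A week with 20 tasks done and a small, fresh backlog scores higher
# than one with 20 done but a large, stale backlog:
print(round(effectiveness(20, [1, 2]), 2))        # 18.7
print(round(effectiveness(20, [30, 45, 60]), 2))  # 5.0
```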
If users were willing to opt into providing anonymous data, the software could automate a community review of the strategies: which strategies seem to be most commonly effective? Affinity analysis could even be used to recommend plugins that were helpful to other people who responded to similar strategies as you.
What are your comments, and specifically criticisms, of this idea? Would you try using software like this if it existed? Would you like to assist in writing software like this?
Is there an automatic Chrome-to-Anki-2 extension or solution?
I'd like to be able to click unfamiliar words in Chrome and automatically create notes in Anki 2 using an online dictionary. It'd also be nice to have an automatic method for sending text and images to Anki notes straight from Chrome. For example, if I read an article here that I want to remember, I'd be able to highlight the title, send it to Anki, and when I review, I'd see the title on the card's front with the reverse being a link to the source if I forgot what the post was about.
I found some Chrome extensions that purport to do this sort of thing, but didn't get any of them to work with Anki 2. Is anyone currently doing this, and if so, what is the solution?
Mailing List for Digitized Belief Network Discussion
Hi all,
This is a follow-up to a previous post of mine - 'A digitized belief network?'.
I have now created a discussion group for anyone who wants to discuss the problems involved in creating a digital representation of a human's beliefs. Anyone who is interested in joining us can sign up here.
See you all around the list,
Avi
A digitized belief network?
Hello to all,
Like the rest of you, I'm an aspiring rationalist. I'm also a software engineer, so designing software solutions comes automatically to me; it's the first place my mind goes when thinking about a problem.
Today's problem is the fact that our beliefs all rest on beliefs that rest on beliefs. Each one has a <100% probability of being correct. Thus, each belief built on it has an even smaller chance of being correct.
When we discover a belief is false (or less dramatically, revise its probability of being true), it propagates to all other beliefs that are wholly or partially based on it. This is an imperfect process and can take a long time (less in rationalists, but still limited by our speed of thought and inefficiency in recall).
I think that software can help with this. If a dedicated rationalist spent a large amount of time committing each belief of theirs to a database (including a rational assessment of its probability overall and given that all other beliefs that it rests on are true) as well as which other beliefs their beliefs rest on, you would eventually have a picture of your belief network. The software could then alert you to contradictions between your estimate of a belief's probability of being true and its estimate based on the truth estimate of the beliefs that it rests on. It could also find cyclical beliefs and other inconsistencies. Plus, when you update a belief based on new evidence, it can spit out a list of beliefs that should be reconsidered.
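As a rough illustration of the contradiction check (a toy, not a design proposal): the belief names, probabilities, dependency structure, and the naive rule that a belief can be no more probable than its conditional probability times the probability of the beliefs it rests on are all my own invented assumptions.

```python
# Toy belief network: flag beliefs whose stated probability exceeds
# what the beliefs they rest on can support.
beliefs = {
    # name: (P(belief | parents all true), stated overall P, parents)
    "sensors_reliable": (0.95, 0.95, []),
    "data_accurate":    (0.90, 0.90, ["sensors_reliable"]),
    "model_correct":    (0.80, 0.95, ["data_accurate"]),  # overconfident
}

def implied_probability(name):
    """Naive chain rule: conditional P times each parent's implied P."""
    cond_p, _, parents = beliefs[name]
    p = cond_p
    for parent in parents:
        p *= implied_probability(parent)
    return p

def contradictions(tolerance=0.05):
    """Beliefs stated as more probable than their parents support."""
    flagged = []
    for name, (_, stated, _) in beliefs.items():
        implied = implied_probability(name)
        if stated - implied > tolerance:
            flagged.append((name, stated, round(implied, 3)))
    return flagged

print(contradictions())  # flags "model_correct"
```

Even this toy version shows the "spit out a list of beliefs to reconsider" behaviour: lower the stated probability of `sensors_reliable` and everything downstream becomes suspect.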
Obviously, this would only work if you are brutally honest about what you believe and fairly accurate about your assessments of truth probabilities. But I think this would be an awesome tool.
Does anyone know of an effort to build such a tool? If not, would anyone be interested in helping me design and build such a tool? I've only been reading LessWrong for a little while now, so there's probably a bunch of stuff that I haven't considered in the design of such a tool.
Yours rationally,
Avi
[Website usability] Scroll to new comments (v0.3)
I wrote a short userscript1 that allows for jumping to the next (or previous) new comment in a page (those marked with green). I have tested it on Firefox nightly with the Greasemonkey addon and Chromium. Unfortunately, I think that user scripts only work in Chromium/Google Chrome and Firefox (with Greasemonkey).
Download here (Clicking the link should offer an install prompt, and that is all the work that needs to be done.)
It inserts a small box in the lower right-hand corner that indicates the number of new messages and has a "next" and a "previous" link like so:

Clicking either link should scroll the browser to the top of the appropriate comment (wrapping around at the top and bottom).
The "!" link shows a window for error logging. If a bug occurs, clicking the "Generate log" button inside this window will create a box with some information about the running of the script2; copying and pasting that information here will make debugging easier.
I have only tested on the two browsers listed above, and only on Linux, so feedback about any bugs/improvements would be useful.
(Technical note: It is released under the MIT License, and this link is to exactly the same file as above but renamed so that the source can be viewed more easily. The file extension needs to be changed to "user.js" to be able to run as a user script properly.)
Changelog
v0.1 - First version
v0.2 - Logging & indication of number of new messages
v0.3 - Correctly update when hidden comments are loaded (and license change). NOTE: Upgrading to v0.3 on Chrome is likely to cause a "Downgrading extension error" (I'd made a mistake with the version numbers previously), the fix is to uninstall and then reinstall the new version. (uninstall via Tools > Extensions)
1 A segment of javascript that runs in the web browser as the page is loaded. It can modify the page, e.g. inserting a bit of html as this script does.
2 Specifically: the url, counts of different sets of comments, some info about the new comments, and also a list of the clicks on "prev" and "next".
When programs have to work-- lessons from NASA
They Write the Right Stuff is about software which "never crashes. It never needs to be re-booted. This software is bug-free. It is perfect, as perfect as human beings have achieved. Consider these stats: the last three versions of the program -- each 420,000 lines long -- had just one error each. The last 11 versions of this software had a total of 17 errors. Commercial programs of equivalent complexity would have 5,000 errors."
The programmers work from 8 to 5, with occasional late nights. They wear dressy clothes, not flashy or grungy. I assume there's a dress code, but I have no idea whether conventional clothes are actually an important part of the process. I'm sure that working reasonable numbers of hours is crucial, though I also wonder whether those hours need to be standard office hours.
"And the culture is equally intolerant of creativity, the individual coding flourishes and styles that are the signature of the all-night software world. "People ask, doesn't this process stifle creativity? You have to do exactly what the manual says, and you've got someone looking over your shoulder," says Keller. "The answer is, yes, the process does stifle creativity." " I have no idea what's in the manual, or if there can be a manual for something as new as self-optimizing AI. I assume there could be a manual for some aspects.
What follows are the main points, quoted from the article:
1. The important thing is the process: The product is only as good as the plan for the product. About one-third of the process of writing software happens before anyone writes a line of code.
2. The best teamwork is a healthy rivalry. The central group breaks down into two key teams: the coders - the people who sit and write code -- and the verifiers -- the people who try to find flaws in the code. The two outfits report to separate bosses and function under opposing marching orders. The development group is supposed to deliver completely error-free code, so perfect that the testers find no flaws at all. The testing group is supposed to pummel away at the code with flight scenarios and simulations that reveal as many flaws as possible. The result is what Tom Peterson calls "a friendly adversarial relationship."
I note that it's rivalry between people who are doing different things, not people competing to get control of a project.
3. The database is the software base.
One is the history of the code itself -- with every line annotated, showing every time it was changed, why it was changed, when it was changed, what the purpose of the change was, what specifications documents detail the change. Everything that happens to the program is recorded in its master history. The genealogy of every line of code -- the reason it is the way it is -- is instantly available to everyone.
The other database -- the error database -- stands as a kind of monument to the way the on-board shuttle group goes about its work. Here is recorded every single error ever made while writing or working on the software, going back almost 20 years. For every one of those errors, the database records when the error was discovered; what set of commands revealed the error; who discovered it; what activity was going on when it was discovered -- testing, training, or flight. It tracks how the error was introduced into the program; how the error managed to slip past the filters set up at every stage to catch errors -- why wasn't it caught during design? during development inspections? during verification? Finally, the database records how the error was corrected, and whether similar errors might have slipped through the same holes.
The group has so much data accumulated about how it does its work that it has written software programs that model the code-writing process. Like computer models predicting the weather, the coding models predict how many errors the group should make in writing each new version of the software. True to form, if the coders and testers find too few errors, everyone works the process until reality and the predictions match.
4. Don't just fix the mistakes -- fix whatever permitted the mistake in the first place.
The process is so pervasive, it gets the blame for any error -- if there is a flaw in the software, there must be something wrong with the way it's being written, something that can be corrected. Any error not found at the planning stage has slipped through at least some checks. Why? Is there something wrong with the inspection process? Does a question need to be added to a checklist?
Importantly, the group avoids blaming people for errors. The process assumes blame - and it's the process that is analyzed to discover why and how an error got through. At the same time, accountability is a team concept: no one person is ever solely responsible for writing or inspecting code. "You don't get punished for making errors," says Marjorie Seiter, a senior member of the technical staff. "If I make a mistake, and others reviewed my work, then I'm not alone. I'm not being blamed for this."
Emotional Installation of Software
I have recently been thinking about this question: "what is it exactly that helps install religious software so deeply and dogmatically into the brain?" Often those who are strongly religious fall into a few categories: (1) They were trained to believe in specific aspects of religion as children; (2) They entered into a very destitute part of their lives (e.g. severe depression, midlife crisis, loss of a job, death in the family, cancer, alcoholism, or other existential problems).
What strikes me about these situations is that emotion generally dominates the decision-making process. I remember when I was a child and attended church camp at the encouragement of my family I was heavily pressured by the camp counselors to "accept Christ" and I saw that there was a positive correlation between my willingness to accept Christ, memorize Bible verses, and say certain statements about behavior in the context of Christian morals and the way that the camp counselors, my extended family, and other adults would treat me. As a result, it was not until many years later that my preference for rationalism and science was able to fully crack that emotionally-founded religious belief installed in me as a child. I know many people for whom a similar narrative is true regarding experiences with alcohol, etc., though it seems to be rare for someone to completely dismiss deeply and emotionally held beliefs from their youth.
Emotion is something we have evolved to utilize. Generally speaking, we need emotion because we have to make split-second decisions sometimes in life and we don't have the opportunity to integrate our decision process on data. If someone attacks me I will become angry because anger will raise my adrenaline levels, temporarily reduce other biological needs like hunger or waste removal, and enable me to fight for survival. Essentially emotion is just a recorded previous decision that works on stereotypical data, or in probabilistic terms it is like basing a quick decision on solely the first moment of a bunch of previously experienced data. The first moment might not be the best descriptor of the data... but if you're in a computational bind you might not be able to do a whole lot better and you'll be biologically penalized for spending your CPU time trying to compute better descriptors of the data. But it is undeniable that decisions we all make based upon emotion are often some of the most powerful and deepest-seated beliefs that we have.
With religion this is especially true. Very religious people, in my view, have this software installed emotionally and then spend years practicing the art of pushing the installed software ever closer to the very act of perception itself, until at some point it is almost the case that sensory data is literally passed through a religious filter before it is even processed and presented for perception. A sunset becomes a symbol of God's love so much so that there is (almost) no physical distinction between the literal viewing of photons depicting the sunset scene and the thinking of the thought "This shows that God loves me." Emotionally installed software presents a very difficult problem. Depending on how close to the act of perception that it has been pushed, it implies there is a remarkably tiny window of opportunity for the presentation of data that could convincingly demonstrate that rational alternatives are better in a number of important senses.
I'm sure many of you have had debates where you've run into circular logic and unavoidable walls that stifle all useful discussion. Can we as a community come up with a good theory on how sensory data can help to uninstall deep emotionally installed software in someone's brain? I really feel that this is an area that deserves some philosophical attention. Is it the case that software installed in someone's brain in conjunction with emotion (and by this I literally mean that the cyclic AMP cycles and other biological processes used for memory formation are made stronger, and that synaptic connections related to the library of belief concepts (e.g. religious) are reinforced by chemicals released in conjunction with the emotive force of the experience in which they are formed) can only be uninstalled by a similarly impactful emotional experience? It appears that slow-moving rationality and logical discussion are almost physically powerless to succeed as convincing mechanisms. And if this is the case, what should rationalists do to promote their ideas (aside from the obvious social pressure to stop installing religious software in the minds of children, etc.)?
Note that in the discussion above I use 'religion' as a specific example, but any irrationally held belief that derives from an emotionally impactful experience would serve the same purpose. And also, here we can assume 'religious' refers to ontological claims unsupported by any evidence and then purported to have day-to-day impacts on life and decision-making. I would be very grateful for any thoughts the community has and hopefully we can generate some useful techniques for understanding how to appropriately uninstall emotional software (in the instances when it's useful to do so)... even the kinds of emotional software that we ourselves (rationalists) often fall victim to in our own imperfect understanding of the world.
Software for Critical Thinking, Prof. Geoff Cumming
Prof. Geoff Cumming has done some interesting work. Of particular relevance to the LW community, he has studied software for enhancing critical thinking.
My past research: I worked on Computer tools for enhancing critical thinking, with Tim van Gelder. We studied argument mapping, and Tim’s wonderful Reason!Able software for critical thinking. This has proved very effective in university and school classrooms as the basis for effective enhancement of critical thinking. In an ARC-funded project we evaluated the software and Tim’s related educational materials. We found evidence that a one semester critical thinking course, based on Reason!Able, gives a very substantial increase—considerably greater than reported in previous evaluations of critical thinking courses—in performance on standardised tests.
Tim’s software has been further developed by his company Austhink Software, and is now available commercially as Rationale and bCisive: both are fabulous! http://www.austhink.org/ http://bcisive.austhink.com/