Recognizing memetic infections and forging resistance memes
What does a memetic infection look like? Well, you encounter something (probably on the internet) that seems very compelling. You think intensely about it for a while, and it spurs you to do something - probably to post something related on the internet. After a while, the meme may not seem that compelling to you anymore, and you wonder why you invested that time and energy. The meme has reproduced itself. For example, Bruce Sterling's response to the 'New Aesthetic' is a paradigmatic example of memetic infection: he encountered it, he found it compelling, he wrote about it, I read what he wrote, and now I know about it. (Note that the word 'infection' has a stigma to it, but I don't mean it to be necessarily a bad thing. I will use 'disease' to mean 'infection with bad consequences'.)
Now, let me jump to an apparently unrelated concept - Viral Eukaryogenesis. If I understand correctly, Viral Eukaryogenesis is the theory that eukaryotes (including you and me) are inheritors of a bargain between two kinds of life - metabolic life and viral life, something like the way lichens are a bargain between fungi and algae. The nucleus that characterizes eukaryotes is supposed to be descended from a virus protein shell, and the membrane-fusion proteins that we use for gamete fusion (crucial for sex) are supposed to be descended from viral infection proteins. I am not a biologist, but my understanding of the state of biology is that it is an interesting hypothesis, as yet neither proven nor disproven. However, I'm going to talk as if it were true, because I'm actually trying to make an analogy with memes.
Evaluating weather forecast accuracy: an interview with Eric Floehr
Eric Floehr has a business that "holds a mirror up" to weather forecasters, aggregating and evaluating forecasts for weather forecast consumers. Rationalists interested in improving our society's truth orientation might be mildly interested.
http://www.johndcook.com/blog/2011/04/12/weather-forecast-accuracy/
Failure Modes sometimes correspond to Game Mechanics
If you want to carry a brimming cup of coffee without spilling it, you may do better to "change" your goal and concentrate primarily on humming. This is an instance of a general pattern: it sometimes helps to focus on a nearby artificial goal rather than your actual goal. Let me call that strategy "gamification". There is a business strategy, also named "gamification", of adding game mechanics to a website in order to achieve various business goals. That is related but different; here I'm referring to a strategy for problem solvers.
We sometimes fail, and sometimes one failure is very similar to another failure. That is, there are characteristic ways that we fail. One of the primary ways that we can improve is to learn our failure modes and create external structures (pieces of paper, software tools) that check, protect against, or head off those forms of failure.
For example, imagine this plan for checklist-based improvement:
- Change your normal way of working to include an explicit checklist (which starts empty).
- When you make a mistake:
  - Analyze what went wrong.
  - Try to generalize the particular incident to a category.
  - Add an item to your checklist.
This is very simple and generic, but it is reasonable to believe that if you carefully and diligently followed this plan, your reliability would go up (with diminishing returns because your errors are also your opportunities for improvement). I have not read Mayo, but her "error-theoretic" philosophy of science might be applicable here.
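As a very rough sketch, the plan above could even be mechanized; everything in the snippet below (the class name, its fields, and the example mistake) is hypothetical, just to make the loop concrete.

```python
# Minimal sketch of the checklist-improvement loop described above.
# All names here are hypothetical illustrations, not a real tool.

class Checklist:
    def __init__(self):
        self.items = []          # the checklist starts empty
        self.failure_log = []    # raw record of mistakes

    def record_mistake(self, what_went_wrong, generalization):
        """Analyze a mistake, generalize it, and add a check for it."""
        self.failure_log.append(what_went_wrong)
        if generalization not in self.items:
            self.items.append(generalization)

    def run(self):
        """Walk through every check before (or after) doing the work."""
        for item in self.items:
            print(f"[ ] {item}")

checklist = Checklist()
checklist.record_mistake(
    "sent the report without the attachment",
    "before sending, confirm every promised attachment is present",
)
checklist.run()
```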
We can try to build a correspondence between failure modes and game mechanics that attempt to compensate for those failure modes.
The null model of science
Jonah Lehrer wrote about the (surprising?) power of publication bias.
http://m.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer?currentPage=all
Cosma Shalizi (I think) said something, or pointed to something, about the null model of science - what science would look like if there were no actual effects, just statistical anomalies that look good at first. I can't find the reference, though.
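To make the "null model" concrete, here is a toy simulation, under the assumption that every studied effect is exactly zero and that only statistically significant results get written up; the parameter values are arbitrary.

```python
# Toy "null model of science": no real effects exist, but noise plus a
# publish-only-if-significant filter still produces a literature full of
# apparent effects. Parameter values are arbitrary illustrations.
import random
import statistics

def run_study(n=30):
    """Measure a truly-zero effect with noise; return (estimate, significant?)."""
    samples = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean = statistics.mean(samples)
    se = statistics.stdev(samples) / n ** 0.5
    significant = abs(mean / se) > 1.96      # roughly alpha = 0.05, two-sided
    return mean, significant

random.seed(0)
published = [effect for effect, sig in (run_study() for _ in range(10_000)) if sig]
print(f"published {len(published)} of 10000 null studies")
print(f"mean |effect size| among published results: "
      f"{statistics.mean(abs(e) for e in published):.2f}")
```

Even though every true effect is zero, roughly 5% of studies clear the significance bar, and the published "effects" are all comfortably far from zero.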
What are Arguments, from an Agorics point of view?
Background on Agorics:
The idea of software agents cooperating in an open market or "agora". Described by Mark Miller and Eric Drexler here: http://e-drexler.com/d/09/00/AgoricsPapers/agoricpapers.html Depicted by Greg Egan in his novel "Diaspora", excerpt here: http://gregegan.customer.netspace.net.au/DIASPORA/01/Orphanogenesis.html
Background on Argument: http://en.wikipedia.org/wiki/Argument
Let's start by supposing that an argument is a variety of persuasive message. If Bob trusts Alice, though, Bob could be persuaded simply by receiving a claim from Alice. That is a kind of persuasive message, but it's not an argument. If Bob is insecure, then Bob's mind could be hacked and thereby changed. However, that's not an argument either. (The "Buffer Overflow Fallacy"?)
Possibly arguments are witnesses (or "certificates"), as used in computational complexity. Alice could spend exponential time solving an instance of an NP-complete problem, then send a small witness to Bob, who can then spend polynomial time verifying it. The witness would be an argument.
I'm not sure if that's a definition, but we have an overgeneral category (persuasive messages) that is a superset of arguments, two subcategories of persuasive messages that are specifically excluded, and one subcategory that is specifically included, which seems like enough to go on with.
We know what witnesses to SAT problems look like - they look like satisfying assignments. That is, if Bob were considering a SAT problem, and Alice sent Bob a putative satisfying assignment, and Bob verified it, then Bob ought (rationally) to be convinced that the problem is satisfiable.
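To make Bob's side of this concrete, here is a minimal sketch of witness verification for SAT in Python; the DIMACS-style encoding (clauses as lists of signed variable indices) is just one common convention, not anything the argument depends on.

```python
# Minimal sketch of polynomial-time witness checking for SAT.
# A formula is a list of clauses; a clause is a list of signed variable
# indices (DIMACS-style): 3 means x3, -3 means NOT x3.

def verify_sat_witness(clauses, assignment):
    """Return True iff `assignment` (dict var -> bool) satisfies every clause."""
    for clause in clauses:
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False          # this clause has no true literal
    return True

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
formula = [[1, -2], [2, 3], [-1, -3]]
witness = {1: True, 2: True, 3: False}        # Alice's proposed assignment
print(verify_sat_witness(formula, witness))   # Bob's cheap check: True
```

Finding the witness may be exponentially hard, but checking it is linear in the size of the formula, which is exactly the asymmetry that makes it work as a persuasive message.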
What do other kinds of witnesses look like? What about probabilistic computation? What if Alice and Bob may have different priors?
Move meetups to the sidebar?
The number of meetup announcements on the main blog has been increasing. Though it's reasonable to give meetup announcements high visibility, to increase the chance that people who are nearby see them, the posts themselves are content-free.
How difficult would it be to, instead of promoting meetup announcements, tag them "meetup" and put a "meetups" section in the sidebar, similar to "recent comments" or "recent posts"?
Kevin T. Kelly's Ockham Efficiency Theorem
There is a game studied in Philosophy of Science and in Probably Approximately Correct (machine) learning. It's a cousin to the Looney Labs game "Zendo", but less fun to play with your friends. http://en.wikipedia.org/wiki/Zendo_(game) (By the way, playing this kind of game is excellent practice at avoiding confirmation bias.) The game has two players, whose roles are asymmetric. One player plays Nature, and the other plays Science. First Nature makes up a law, a specific Grand Unified Theory, and then Science tries to guess it. Nature provides some information about the law, and then Science can change their guess, if they want to. Science wins if it converges to the law that Nature made up.
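As a rough illustration of the game loop (not of Kelly's actual framework), here is a toy sketch; the hypothesis class - laws of the form "every observation is a multiple of k" - is invented purely for this example.

```python
# Toy sketch of the Nature-vs-Science guessing game. The hypothesis class
# ("the law is: multiples of k") is invented purely for illustration.
import random

random.seed(1)
secret_k = random.choice([2, 3, 5, 7])        # Nature makes up a law

def nature_reports(n):
    """Nature reveals the first n observations consistent with its law."""
    return [secret_k * i for i in range(1, n + 1)]

def science_guess(data):
    """Science guesses the first law (in a fixed simplicity ordering) consistent with the data."""
    for k in [2, 3, 5, 7]:                    # Ockham: try simpler laws first
        if all(x % k == 0 for x in data):
            return k
    return None

for n in range(1, 6):
    data = nature_reports(n)
    print(f"after {n} observations, Science guesses k = {science_guess(data)}")
```

Kelly's result, roughly, is that always guessing the simplest hypothesis consistent with the data minimizes the number of times Science has to retract its guess before converging.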
30th Soar workshop
This is a report, from a LessWrong perspective, on the 30th Soar workshop. Soar is a cognitive architecture that has been in continuous development for nearly 30 years, and is in a direct line of descent from some of the earliest AI research (Newell and Simon's Logic Theorist and General Problem Solver). Soar is interesting to LessWrong readers for two reasons:
- Soar is a cognitive science theory, and has had some success at modeling human reasoning - this is relevant to the central theme of LessWrong, improving human rationality.
- Soar is an AGI research project - this is relevant to the AGI risks sub-theme of LessWrong.
Stigmergy and Pickering's Mangle
Stigmergy is the notion that an agent's behavior is sometimes best understood as coordinated by the agent's environment. In particular, social insects build nests, which have a recognizable standard pattern (different patterns for different species). Does the wasp or termite have an idea of what the standard pattern is? Probably not. Instead, the computation inside the insect is a stateless stimulus/response rule set. The partially-constructed nest catalyzes the next construction step.
An unintelligent "insect" clambering energetically around a convoluted "nest", with the insect's local perceptions driving its local modifications is recognizably something like a Turing machine. The system as a whole can be more intelligent than either the (stateless) insect or the (passive) nest. The important computation is the interactions between the agent and the environment.
The Argument from Witness Testimony
(Note: This is essentially a rehash/summarization of Jordan Sobel's Lotteries and Miracles - you may prefer the original.)
George Mavrodes wrote an interesting analogy. Scenario 1: Suppose you read a newspaper report claiming that a particular individual (say, Henry Plushbottom of Topeka, Kansas) has won a very large lottery. Before reading the newspaper, you would have given quite low odds that Henry in particular had won the lottery. However, the newspaper report flips your beliefs quite drastically. Afterward, you would give quite high odds that Henry in particular had won the lottery. Scenario 2: You have read various claims that a particular individual (Jesus of Nazareth) arose from the dead. Before hearing those claims, you would have given quite low odds of anything so unlikely happening. However (since you are reading LessWrong) you presumably do not give quite high odds that Jesus arose from the dead.
What is it about the second scenario which makes it different from the first?
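One way to make the difference concrete (roughly the Bayesian line Sobel takes) is to compare the prior odds against the likelihood ratio of the testimony; the numbers below are illustrative assumptions only, not measurements.

```python
# Back-of-the-envelope Bayes for the two scenarios. All numbers are
# illustrative assumptions.

def posterior_odds(prior_odds, likelihood_ratio):
    """Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Scenario 1: the lottery. Prior odds that Henry in particular won are tiny
# (say 1 in 10 million), but newspapers almost never misreport a winner's
# name, so the report is vastly more likely if Henry won than if he didn't.
lottery = posterior_odds(prior_odds=1e-7, likelihood_ratio=1e9)
print(f"lottery posterior odds: {lottery:.0f} : 1")      # ~100 : 1, belief flips

# Scenario 2: the resurrection. The prior odds are astronomically smaller,
# and enthusiastic-but-mistaken testimony is not rare, so the likelihood
# ratio of the reports is nowhere near large enough to compensate.
miracle = posterior_odds(prior_odds=1e-20, likelihood_ratio=1e3)
print(f"miracle posterior odds: {miracle:.0e} : 1")      # still ~1e-17 : 1
```

On this picture, the asymmetry is not in the logic of testimony but in how far the prior odds sit below what the testimony's likelihood ratio can plausibly overcome.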