Identity is mostly discussed on LW in a cautionary manner: keep your identity small, be aware of the identities you are attached to. As benlandautaylor points out, identities are very powerful, and while it's right to be cautious about them, we can also cultivate them deliberately to help us achieve our goals.
Some helpful identities that I have that seem generally applicable:
- growth mindset
- low-hanging fruit picker
- jack-of-all-trades (someone who is good at a variety of skills)
- someone who tries new things
- universal curiosity
- mirror (someone who learns other people's skills)
Out of the above, the most useful is probably growth mindset, since it's effectively a meta-identity that allows the other parts of my identity to be fluid. The low-hanging fruit identity helps me be on the lookout for easy optimizations. The universal curiosity identity motivates me to try to understand various systems and fields of knowledge, besides the domains I'm already familiar with. It helps to give these playful or creative names, for example, "champion of low-hanging fruit". Some of these work well together, for example the "trying new things" identity contributes to the "jack of all trades" identity.
It's also important to identify unhelpful identities that get in your way. Negative identities can be vague like "lazy person" or specific like "someone who can't finish a project". With identities, just like with habits, the easiest way to reduce or eliminate a bad one seems to be to install a new one that is incompatible with it. For example, if you have a "shy person" identity, then going to parties or starting conversations with strangers can generate counterexamples for that identity, and help to displace it with a new one of "sociable person". Costly signaling can be used to achieve this - for example, joining a public speaking club. The old identity will not necessarily go away entirely, but the competing identity will create cognitive dissonance, which it can be useful to deliberately focus on. More specific identities require more specific counterexamples. Since the original negative identity makes it difficult to perform the actions that generate counterexamples, there needs to be some form of success spiral that starts with small steps.
Some examples of unhelpful identities I've had in the past were "person who doesn't waste things" and "person with poor intuition". The aversion to wasting money and material things predictably led to wasting time and attention instead. I found it useful to try "thinking like a trader" to counteract this "stingy person" identity, and get comfortable with the idea of trading money for time. Now I no longer obsess about recycling or buy the cheapest version of everything. Underconfidence in my intuition was likely responsible for my tendency to miss the forest for the trees when studying math or statistics, where I focused on details and missed the big picture ideas that are essential to actual understanding. My main objection to intuitions was that they feel imprecise, and I am trying to develop an identity of an "intuition wizard" who can manipulate concepts from a distance without zooming in. That is a cooler name than "someone who thinks about things without really understanding them", and brings to mind some people I know who have amazing intuition for math, which should help the identity stick.
There can also be ambiguously useful identities, for example I have a "tough person" identity, which motivates me to challenge myself and expand my comfort zone, but also increases self-criticism and self-neglect. Given the mixed effects, I'm not yet sure what to do about this one - maybe I can come up with an identity that only has the positive effects.
Which identities hold you back, and which ones propel you forward? If you managed to diminish negative identities, how did you do it and how far did you get?
I like posts that are concise and to the point. Posts like that maximize my information/effort ratio. I would really like to see experienced rationalists simply post a list of things they believe on any given subject with a short explanation for why they believe each of those things. Then I could go ahead and adjust my beliefs based on those lists as necessary.
Sadly I don’t see any posts like this. Presumably this is because of the social convention where you’re expected to back up any public belief with arguments, so that other people can attempt to poke holes in them. I find this strange because the arguments people present rarely have anything to do with why they believe those things, which makes the whole exercise a giant distraction from the main point that the author is trying to bring across. In order to prevent this kind of derailment, posters tend to cover their arguments with endless qualifications so that their sentences read like this: “I personally believe that, in cases X Y Z and under circumstances B and C, ceteris paribus and barring obvious exceptions, it seems safe to say that murder is wrong, though of course I could be mistaken.” The problems with such excessive argumentation and qualification are threefold:
- The post becomes less readable: The information/effort ratio is lowered.
- It becomes much more difficult to tell what the author genuinely believes: Are they really unsure or just trying to appear humble? Is that their true objection, or just an argument?
- Despite everything, someone is STILL going to miss the point and reply that sometimes killing people is ok in certain situations, and then the next 100 comments will be about that.
By contrast, terseness makes posts more readable and makes it less likely that the main point is misunderstood. So if we as a community could just relax the demand for argumentation and qualification somewhat, and we all focused on debating the main points of posts instead of getting sidetracked, then perhaps experienced rationalists here could write concise posts that give short, clear answers to complicated questions. Instead, some of the sequences are so long and involve so many arguments, counter-arguments and disclaimers that I feel the point is lost entirely.
Discussion prompt: Nick Szabo's essay on judging tradition, "Objective Versus Intersubjective Truth".
1 PM (remember daylight saving time!)
Nam Phuong at 11th and Broad St. This is a Vietnamese restaurant which is good, cheap, quiet, and on mass transit.
In the previous post I defined an intelligence metric solving the duality (aka naturalized induction) and ontology problems in AIXI. That model formalized UDT via Benja's model of logical uncertainty. In the current post I am going to:
- Explain some problems with my previous model (that section can be skipped if you don't care about the previous model and only want to understand the new one).
- Formulate a new model solving these problems. Incidentally, the new model is much closer to the usual way UDT is represented. It is also based on a different model of logical uncertainty.
- Show how to define intelligence without specifying the utility function a priori.
- Outline a method for constructing the utility functions the new model requires: functions formulated with abstract ontology, i.e. well-defined on the entire Tegmark level IV multiverse. These are generally difficult to construct (the ontology problem resurfaces here in a different form).
Problems with UIM 1.0
The previous model postulated that naturalized induction uses a version of Solomonoff induction updated in the direction of an innate model N with a temporal confidence parameter t. This entails several problems:
- The dependence on the parameter t whose relevant value is not easy to determine.
- Conceptual divergence from the UDT philosophy that we should not update at all.
- Difficulties with counterfactual mugging and acausal trade scenarios in which G doesn't exist in the "other universe".
- Once G discovers even a small violation of N at a very early time, it loses all ground for trusting its own mind. Effectively, G would find itself in the position of a Boltzmann brain. This is especially dangerous when N over-specifies the hardware running G's mind. For example assume N specifies G to be a human brain modeled on the level of quantum field theory (particle physics). If G discovers that in truth it is a computer simulation on the merely molecular level, it loses its epistemic footing completely.
I now propose the following intelligence metric (the formula goes first and then I explain the notation):
I_U(q) := E_T[E_D[E_L[U(Y(D)) | Q(X(T)) = q]] | N]
- N is the "ideal" model of the mind of the agent G. For example, it can be a universal Turing machine M with special "sensory" registers e whose values can change arbitrarily after each step of M. N is specified as a system of constraints on an infinite sequence of natural numbers X, which should be thought of as the "Platonic ideal" realization of G, i.e. an imagery realization which cannot be tempered with by external forces such as anvils. As we shall see, this "ideal" serves as a template for "physical" realizations of G which are prone to violations of N.
- Q is a function that decodes G's code from X, e.g. the program loaded in M at time 0. q is a particular value of this code whose (utility-specific) intelligence I_U(q) we are evaluating.
- T is a random (as in random variable) computable hypothesis about the "physics" of X, i.e. a program computing X implemented on some fixed universal computing model (e.g. universal Turing machine) C. T is distributed according to the Solomonoff measure; however, the expectation value in the definition of I_U(q) is conditional on N, i.e. we restrict to programs which are compatible with N. From the UDT standpoint, T is the decision algorithm itself and the uncertainty in T is "introspective" uncertainty, i.e. the uncertainty of the putative precursor agent PG (the agent creating G, e.g. an AI programmer) regarding her own decision algorithm. Note that we don't actually need to postulate a PG which is "agenty" (i.e. use for N a model of AI hardware together with a model of the AI programmer programming this hardware); we can be content to remain in a more abstract framework.
- D is a random computable hypothesis about the physics of Y, where Y is an infinite sequence of natural numbers representing the physical (as opposed to "ideal") universe. D is distributed according to the Solomonoff measure and the respective expectation value is unconditional (i.e. we use the raw Solomonoff prior for Y which makes the model truly updateless). In UDT terms, D is indexical uncertainty.
- U is a computable function from infinite sequences of natural numbers to [0, 1] representing G's utility function.
- L represents logical uncertainty. It can be defined by the model explained by cousin_it here, together with my previous construction for computing logical expectation values of random variables in [0, 1]. That is, we define E_L(d_k) to be the probability that a random string of bits p encodes a proof of the sentence "Q(X(T)) = q implies that the k-th digit of U(Y(D)) is 1" in some prefix-free encoding of proofs, conditional on p encoding a proof of either that sentence or the sentence "Q(X(T)) = q implies that the k-th digit of U(Y(D)) is 0". We then define
E_L[U(Y(D)) | Q(X(T)) = q] := Σ_k 2^{-k} E_L(d_k). Here, the sentences and the proofs belong to some fixed formal logic F, e.g. Peano arithmetic or ZFC. (A rough code sketch of this computation appears after this list.)
- G's mental architecture N is defined in the "ideal" universe X where it is inviolable. However, G's utility function U inhabits the physical universe Y. This means that a highly intelligent q is designed so that imperfect realizations of G inside Y generate as many utilons as possible. A typical T is a low Kolmogorov complexity universe which contains a perfect realization of G. Q(X(T)) is L-correlated to the programming of imperfect realizations of G inside Y because T serves as an effective (approximate) model of the formation of these realizations. For abstract N, this means q is highly intelligent when a Solomonoff-random "M-programming process" producing q entails a high expected value of U.
- Solving the Loebian obstacle requires a more sophisticated model of logical uncertainty. I think I can formulate such a model. I will explain it in another post after more contemplation.
- It is desirable that the encoding of proofs p satisfies a universality property so that the length of the encoding can only change by an additive constant, analogously to the weak dependence of Kolmogorov complexity on C. It is in fact not difficult to formulate this property and show the existence of appropriate encodings. I will discuss this point in more detail in another post.
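As promised above, here is a rough Python sketch of estimating E_L[U(Y(D)) | Q(X(T)) = q] by Monte Carlo sampling. It only illustrates the bookkeeping: the proof checker proves() is a hypothetical placeholder for a real checker of F-proofs in a prefix-free encoding, and sampling finite random bit strings is a crude stand-in for the measure on proofs used in the definition.

```python
import random

def proves(p: str, sentence: str) -> bool:
    # Hypothetical placeholder: in the construction above this would check
    # whether the bit string p encodes (in some fixed prefix-free encoding)
    # a valid proof of `sentence` in the formal logic F (e.g. PA or ZFC).
    return False  # replace with a real proof checker for F

def logical_digit_probability(k: int, q: str, n_samples: int = 10_000,
                              max_len: int = 64) -> float:
    """Monte Carlo estimate of E_L(d_k): the probability that a random bit
    string proves "digit k of U(Y(D)) is 1", conditional on it proving
    either the digit-1 or the digit-0 sentence."""
    s1 = f"Q(X(T)) = {q} implies that the {k}-th digit of U(Y(D)) is 1"
    s0 = f"Q(X(T)) = {q} implies that the {k}-th digit of U(Y(D)) is 0"
    hits1 = hits0 = 0
    for _ in range(n_samples):
        p = "".join(random.choice("01") for _ in range(random.randint(1, max_len)))
        if proves(p, s1):
            hits1 += 1
        elif proves(p, s0):
            hits0 += 1
    if hits1 + hits0 == 0:
        return 0.5  # no proof found either way: fall back to maximal uncertainty
    return hits1 / (hits1 + hits0)

def logical_expected_utility(q: str, n_digits: int = 32) -> float:
    """E_L[U(Y(D)) | Q(X(T)) = q] ~ sum over k of 2^{-k} E_L(d_k), truncated."""
    return sum(2.0 ** -k * logical_digit_probability(k, q)
               for k in range(1, n_digits + 1))
```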
It seems conceptually desirable to have a notion of intelligence independent of the specifics of the utility function. Such an intelligence metric can be constructed in a way analogous to what I did in UIM 1.0; however, it is no longer a special case of the utility-specific metric.
Assume N to consist of a machine M connected to a special storage device E. Assume further that at X-time 0, E contains a valid C-program u realizing a utility function U, but that this is the only constraint on the initial content of E imposed by N. Define
I(q) := E_T[E_D[E_L[u(Y(D); X(T)) | Q(X(T)) = q]] | N]
Here, u(Y(D); X(T)) means that we decode u from X(T) and evaluate it on Y(D). Thus utility depends both on the physical universe Y and on the ideal universe X. This means G is not precisely a UDT agent but rather a "proto-agent": only when a realization of G reads u from E does it know which other realizations of G in the multiverse (the Solomonoff ensemble from which Y is selected) should be considered the "same" agent UDT-wise.
Incidentally, this can be used as a formalism for reasoning about agents that don't know their own utility functions. I believe this has important applications in metaethics, which I will discuss in another post.
Utility Functions in the Multiverse
UIM 2.0 is a formalism that cures the diseases of UIM 1.0, at the price of no longer being able to use N as the ontology for utility functions. We need the utility function to be defined on the entire multiverse, i.e. on any sequence of natural numbers. I will outline a way to extend "ontology-specific" utility functions to the multiverse through a simple example.
Suppose G is an agent that cares about universes realizing the Game of Life, its utility function U corresponding e.g. to some sort of glider maximization with exponential temporal discount. Fix a specific way DC to decode any Y into a history of a 2D cellular automaton with two cell states ("dead" and "alive"). Our multiversal utility function U* assigns to Ys for which DC(Y) is a legal Game of Life the value U(DC(Y)). All other Ys are treated by dividing the cells into cells O obeying the rules of Life and cells V violating the rules of Life (a code sketch of this construction appears below). We can then evaluate U on O only (assuming it has some sort of locality) and assign V utility by some other rule, e.g.:
- zero utility
- constant utility per V cell with temporal discount
- constant utility per unit of surface area of the boundary between O and V with temporal discount
- The construction of U* depends on the choice of DC. However, U* only depends on DC weakly since given a hypothesis D which produces a Game of Life wrt some other low complexity encoding, there is a corresponding hypothesis D' producing a Game of Life wrt DC. D' is obtained from D by appending a corresponding "transcoder" and thus it is only less Solomonoff-likely than D by an O(1) factor.
- Since the utility contributions of O and V combine additively rather than e.g. multiplicatively, a U*-agent doesn't behave as if it a priori expects the universe to follow the rules of Life, but it may have strong preferences about whether the universe actually does.
- This construction is reminiscent of Egan's dust theory in the sense that all possible encodings contribute. However, here they are weighted by the Solomonoff measure.
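Here is a minimal Python sketch of the extension described above. It is illustrative only, under assumptions of my own: the history is taken to be already decoded by DC into a list of 2D boolean grids with toroidal wrap-around, a simple count of live law-abiding cells stands in for glider maximization, and V is handled by the "zero utility" rule from the list above.

```python
from typing import List

Grid = List[List[bool]]  # one time slice of the decoded history DC(Y)

def life_successor(grid: Grid, i: int, j: int) -> bool:
    """What the Game of Life rules say cell (i, j) should be at the next step."""
    n_rows, n_cols = len(grid), len(grid[0])
    live_neighbours = sum(
        grid[(i + di) % n_rows][(j + dj) % n_cols]
        for di in (-1, 0, 1) for dj in (-1, 0, 1)
        if (di, dj) != (0, 0)
    )
    return live_neighbours == 3 or (grid[i][j] and live_neighbours == 2)

def multiversal_utility(history: List[Grid], discount: float = 0.9) -> float:
    """A toy U*: reward live cells that obey the Life rules (the set O),
    assign zero utility to rule-violating cells (the set V), and discount
    exponentially in time."""
    total = 0.0
    for t in range(1, len(history)):
        prev, cur = history[t - 1], history[t]
        for i in range(len(cur)):
            for j in range(len(cur[0])):
                obeys = cur[i][j] == life_successor(prev, i, j)
                if obeys and cur[i][j]:   # live cell in O
                    total += discount ** t
                # cells in V contribute nothing under the "zero utility" rule
    return total
```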
This summary was posted to LW main on February 28th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
- Boston - Optimizing Empathy Levels: 02 March 2014 02:00PM
- Hamburg - Structure: 04 March 2014 05:00PM
- Munich Meetup: 08 March 2014 02:00PM
- Saint Petersburg sunday meetup: 01 March 2014 04:00PM
- Sydney Meetup - March: 26 March 2014 06:30PM
- Berkeley: Implementation Intentions: 05 March 2014 07:00PM
- [Berlin] Community Weekend in Berlin: 11 April 2014 04:00PM
- Brussels - Calibration and other games: 08 March 2014 01:00PM
- London Games Meetup 09/03, + Socials 02/03 and 16/02 : 09 March 2014 02:00PM
- NYC Rationality Megameetup and Unconference: April 5-6: 05 April 2014 11:00AM
- Salt Lake City UT — Open Possibilities and Improv Skills: 09 March 2014 02:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Brussels, Cambridge, MA, Cambridge UK, Columbus, London, Madison WI, Melbourne, Mountain View, New York, Philadelphia, Research Triangle NC, Salt Lake City, Seattle, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.
A serious possibility is that the first AGI(s) will be developed in a Manhattan Project style setting before any sort of friendliness/safety constraints can be integrated reliably. They will also be substantially short of the intelligence required to exponentially self-improve. Within a certain range of development and intelligence, containment protocols can make them safe to interact with. This means they can be studied experimentally, and the architecture(s) used to create them better understood, furthering the goal of safely using AI in less constrained settings.
Setting the Scene
Technological and/or political issues could force the development of AI without the theoretical safety guarantees we'd certainly like, but there is a silver lining.
A lot of the discussion around LessWrong and MIRI that I've seen (and I haven't seen all of it, please send links!) seems to focus very strongly on the situation of an AI that can self-modify or construct further AIs, resulting in an exponential explosion of intelligence (FOOM/Singularity). The focus of FAI work is on finding an architecture that can be explicitly constrained (and a constraint set that won't fail to do what we desire).
My argument is essentially that there could be a critical multi-year period preceding any possible exponentially self-improving intelligence during which a series of AGIs of varying intelligence, flexibility and architecture will be built. This period will be fast and frantic, but it will be incredibly fruitful and vital both in figuring out how to make an AI sufficiently strong to exponentially self-improve and in how to make it safe and friendly (or develop protocols to bridge the even riskier period between when we can develop FOOM-capable AIs and when we can ensure their safety).
The requirement for a hard singularity, an exponentially self-improving AI, is that the AI can substantially improve itself in a way that enhances its ability to further improve itself, which requires the ability to modify its own code; access to resources like time, data, and hardware to facilitate these modifications; and the intelligence to execute a fruitful self-modification strategy.
The first two conditions can (and should) be directly restricted. I'll elaborate more on that later, but basically any AI should be very carefully sandboxed (unable to affect its software environment), and should have its access to resources strictly controlled. Perhaps no data goes in without human approval or while the AI is running. Perhaps nothing comes out either. Even a hyperpersuasive hyperintelligence will be slowed down (at least) if it can only interact with prespecified tests (how do you test AGI? No idea, but it shouldn't be harder than friendliness). This isn't a perfect situation. Eliezer Yudkowsky presents several arguments for why an intelligence explosion could happen even when resources are constrained (see Section 3 of Intelligence Explosion Microeconomics), not to mention ways that those constraints could be defied even if engineered perfectly (by the way, I would happily run the AI box experiment with anybody, I think it is absurd that anyone would fail it! [I've read Tuxedage's accounts, and I think I actually do understand how a gatekeeper could fail, but I also believe I understand how one could be trained to succeed even against a much stronger foe than any person who has played the part of the AI]).
But the third emerges from the way technology typically develops. I believe it is incredibly unlikely that an AGI will develop in somebody's basement, or even in a small national lab or top corporate lab. When there is no clear notion of what a technology will look like, it is usually not developed. Positive, productive accidents are somewhat rare in science, but they are remarkably rare in engineering (please, give counterexamples!). The creation of an AGI will likely not happen by accident; there will be a well-funded, concrete research and development plan that leads up to it: the AI Manhattan Project described above. But even when a good plan is successfully executed, prototypes are slow, fragile, and poor-quality compared to what is possible even with approaches using the same underlying technology. It seems very likely to me that the first AGI will be a Chicago Pile, not a Trinity; recognizably a breakthrough, but with proper consideration not immediately dangerous or unmanageable. [Note: you don't have to believe this to read the rest of this post. If you disagree, consider the virtues of redundancy and the question of what safety measures an AI development effort should implement if it can't be persuaded to delay long enough for theoretically sound methods to become available.]
A Manhattan Project style effort makes a relatively weak, controllable AI even more likely: not only can such a project implement substantial safety protocols that are explicitly researched in parallel with primary development, but the total resources, in hardware and brainpower, devoted to the AI will be much greater than in a smaller project, setting a correspondingly higher bar the AGI thus created must clear in order to self-modify exponentially and break the security procedures.
Strategies to handle AIs in the proto-Singularity, and why they're important
First, take a look at the External Constraints Section of this MIRI Report and/or this article on AI Boxing. I will be talking mainly about these approaches. There are certainly others, but these are the easiest to extrapolate from current computer security.
These AIs will provide us with the experimental knowledge to better handle the construction of even stronger AIs. If careful, we will be able to use these proto-Singularity AIs to learn about the nature of intelligence and cognition, to perform economically valuable tasks, and to test theories of friendliness (not perfectly, but well enough to start).
"If careful" is the key phrase. I mentioned sandboxing above. And computer security is key to any attempt to contain an AI. Monitoring the source code, and setting a threshold for too much changing too fast at which point a failsafe freezes all computation; keeping extremely strict control over copies of the source. Some architectures will be more inherently dangerous and less predictable than others. A simulation of a physical brain, for instance, will be fairly opaque (depending on how far neuroscience has gone) but could have almost no potential to self-improve to an uncontrollable degree if its access to hardware is limited (it won't be able to make itself much more efficient on fixed resources). Other architectures will have other properties. Some will be utility optimizing agents. Some will have behaviors but no clear utility. Some will be opaque, some transparent.
All will have a theory of how they operate, which can be refined by actual experimentation. This is what we can gain! We can set up controlled scenarios like honeypots to catch malevolence. We can evaluate our ability to monitor and read the thoughts of the AGI. We can develop stronger theories of how damaging self-modification actually is to imposed constraints. We can test our ability to add constraints even to the base state. But do I really have to justify the value of experimentation?
I am familiar with criticisms based on absolutely incomprehensibly perceptive and persuasive hyperintelligences being able to overcome any security, but I've tried to outline above why I don't think we'd be dealing with that case.
Right now AGI is really a political non-issue: blue sky even compared to space exploration and fusion, both of which actually receive substantial government funding. I think that this will change in the period immediately leading up to my hypothesized AI Manhattan Project. The AI Manhattan Project can only happen with a lot of political will behind it, which will probably mean a spiral of scientific advancements, hype, and threat of competition from external unfriendly sources. Think space race.
So suppose that the first few AIs are built under well controlled conditions. Friendliness is still not perfected, but we think/hope we've learned some valuable basics. But now people want to use the AIs for something. So what should be done at this point?
I won't try to speculate what happens next (well you can probably persuade me to, but it might not be as valuable), beyond extensions of the protocols I've already laid out, hybridized with notions like Oracle AI. It certainly gets a lot harder, but hopefully experimentation on the first, highly-controlled generation of AI to get a better understanding of their architectural fundamentals, combined with more direct research on friendliness in general would provide the groundwork for this.
Discussion article for the meetup : West LA—Expert At Vs. Expert On
How to Find Us: Go into this Del Taco. I will bring a Rubik's Cube. The presence of a Rubik's Cube will be strong Bayesian evidence of the presence of a Less Wrong meetup.
Parking is completely free. There is a sign that claims there is a 45-minute time limit, but it is a lie.
Discussion: Expert at vs. expert on is a fairly important distinction. It's also a really simple one, which makes it conceptual low-hanging fruit. It's not totally without nuance; for example the terminology implies either total mastery or encyclopedic knowledge, but it applies just as well at any level of competence.
- Expert At Versus Expert On. I know of no other writing that is explicitly on this topic. Robin Hanson emphasizes the signaling aspect (of course he does), but I do not.
- It is well-known that you learn to play baseball by playing baseball, not by reading essays about baseball. However, it is not usually made explicit that the former makes you an expert at baseball, and the latter makes you an expert on baseball.
- Another nuance: Being an expert at something helps you become an expert on it, and the converse may also be true. For example, you are probably a better linguist if you speak many languages.
NB: No prior knowledge of or exposure to Less Wrong is necessary; this will be generally accessible. Also, we may or may not play a card game.
In late December 2013, Jonah, my collaborator at Cognito Mentoring, announced the service on LessWrong. Information about the service was also circulated in other venues with high concentrations of gifted and intellectually curious people. Since then, we've received ~70 emails asking for mentoring from learners across all ages, plus a few parents. At least 40 of our advisees heard of us through LessWrong, and the number is probably around 50. Of the 23 who responded to our advisee satisfaction survey, 16 filled in information on where they'd heard of us, and 14 of those 16 had heard of us from LessWrong. The vast majority of student advisees with whom we had substantive interactions, and the ones we felt we were able to help the most, came from LessWrong (we got some parents through the Davidson Forum post, but that's a very different sort of advising).
In this post, I discuss some common themes that emerged from our interaction with these advisees. Obviously, this isn't a comprehensive picture of the LessWrong community the way that Yvain's 2013 survey results were.
- A significant fraction of the people who contacted us via LessWrong aren't active LessWrong participants, and many don't even have user accounts on LessWrong. The prototypical advisees we got through LessWrong don't have many distinctive LessWrongian beliefs. Many of them use LessWrong primarily as a source of interesting stuff to read, rather than a community to be part of.
- About 25% of the advisees we got through LessWrong were female, and a slightly higher proportion of the advisees with whom we had substantive interaction (and subjectively feel we helped a lot) were female. You can see this by looking at the sex distribution of the public reviews of us from students.
- Our advisees included people in high school (typically, grades 11 and 12) and college. Our advisees in high school tended to be interested in mathematics, computer science, physics, engineering, and entrepreneurship. We did have a few who were interested in economics, philosophy, and the social sciences as well, but this was rarer. Our advisees in college and graduate school were also interested in the above subjects but skewed a bit more in the direction of being interested in philosophy, psychology, and economics.
- Somewhat surprisingly and endearingly, many of our advisees were interested in effective altruism and social impact. Some had already heard of the cluster of effective altruist ideas. Others were interested in generating social impact through entrepreneurship or choosing an impactful career, even though they weren't familiar with effective altruism until we pointed them to it. Of those who had heard of effective altruism as a cluster of ideas, some had either already consulted with or were planning to consult with 80,000 Hours, and were connecting with us largely to get a second opinion or to get opinions on matters other than career choice.
- Some of our advisees had had some sort of past involvement with MIRI/CFAR/FHI. Some were seriously considering working in existential risk reduction or on artificial intelligence. The two subsets overlapped considerably.
- Our advisees were somewhat better educated about rationality issues than we'd expect others of similar academic accomplishment to be, and more than the advisees we got from sources other than LessWrong. That's obviously not a surprise at all.
- We hadn't been expecting it, but many advisees asked us questions related to procrastination, social skills, and other life skills. We were initially somewhat ill-equipped to handle these, but we've built a base of recommendations, with some help from LessWrong and other sources.
- One thing that surprised me personally is that many of these people had never spent time exploring Quora. I'd have expected Quora to be much more widely known and used by the sort of people who were sufficiently aware of the Internet to know LessWrong. But it's possible there's not that much overlap.
My overall takeaway is that LessWrong seems to still be one of the foremost places that smart and curious young people interested in epistemic rationality visit. I'm not sure of the exact reason, though HPMOR probably gets a significant fraction of the credit. As long as things stay this way, LessWrong remains a great way to influence a subset of the young population today that's likely to be disproportionately represented among the decision-makers a few years down the line.
It's not clear to me why they don't participate more actively on LessWrong. Maybe no special reasons are needed: the ratio of lurkers to posters is huge for most Internet fora. Maybe the people who contacted us were relatively young and still didn't have an Internet presence, or were being careful about building one. On the other hand, maybe there is something about the comments culture that dissuades people from participating (this need not be a bad feature per se: one reason people may refrain from participating is that comments are held to a high bar and this keeps people from offering off-the-cuff comments). That said, if people could somehow participate more, LessWrong could transform itself into an interactive forum for smart and curious people that's head and shoulders above all the others.
PS: We've now made our information wiki publicly accessible. It's still in beta: a lot of content is incomplete, and there are links to as-yet-uncreated pages all over the place. But we think it might still be interesting to the LessWrong audience.
Many of you here have likely heard of Bitcoin, and maybe know something about it.
Earlier today, a story broke that a reporter had apparently tracked down the real Satoshi Nakamoto, the elusive creator of the Bitcoin protocol.
This seems like an excellent opportunity to practice our Bayesian updating!
So, how likely do you think it is that this man is the founder of Bitcoin? What do you believe and why?
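If it helps, here is a minimal sketch of odds-form Bayesian updating. The prior and likelihood ratio below are made-up numbers for illustration only, not claims about the actual evidence in this case.

```python
def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds: float) -> float:
    return odds / (1.0 + odds)

# Illustrative numbers only: suppose you start at 1:1000 odds that this
# particular man is the Bitcoin creator, and judge the reported evidence
# 200 times more likely if he is than if he isn't.
posterior = odds_to_probability(update_odds(1 / 1000, 200))
print(f"posterior probability ~ {posterior:.2%}")  # about 16.67%
```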
Discussion article for the meetup : March Meetup: Body Hacking!
An overview of body hacking, what's possible, what's known, what needs more exploration, and what tools are available to you.
Presenters needed! Do you have expertise on any of this? Lemme know and you can do anything from a full presentation with slides and handouts to leading a discussion on a particular topic.
Also, please check out our facebook group here: https://www.facebook.com/groups/Atlanta.Lesswrong/