All of j_andrew_rogers's Comments + Replies

Apropos the Wikipedia article, in what way is grey goo a "transhumanist theory"?

Grey goo scenarios are relatively straightforward extrapolations of mundane technological progress and complex system dynamics with analogues in real biological systems. Subscribing to transhumanism is not a prerequisite to thinking that grey goo is a plausible region of the technological development phase space.

Your mention of Zipcar in the context of Netflix is an astute point. Zipcar has a very nice and well-developed infrastructure that would be nearly ideal for the transition. The question is whether or not Zipcar is thinking that far ahead, and I do not know the answer.

Many people do not know that even though Netflix has only been streaming video for a few years, they were very actively building their business around that transition over a decade ago, pretty much from their inception. They built out all of the elements required to take advantage of that tra... (read more)

2Miller
Yeah, Hastings was fond of saying 'That's why we called it NETflix, not DVDs-by-mail.' Although I think even in the late 90s there were some weak attempts at video on demand over the web, so the vision wasn't nearly as advanced as I think it would be in Zipcar's case. One of the major problems in the analogy is that the capital investment to replace cars is so ridiculously enormous that it's difficult to imagine one company capturing a large chunk of it. The precise details of how driverless cars come to be used will be fascinating. Urban or rural first? Taxi replacement or ownership first? Will there be restricted areas? Who are the major players? Does it kill existing mass transit (I think so)? What will be the dominant fueling model? What will NYC do with the subway (make it a high-speed expressway for the cars, perhaps)? Will Webvan make a comeback (snicker)?

Everyone is over-thinking this. I used to live in Nevada, and the political process there is driven by the unusual history and heuristics of the state.

The politicians do not care about technology, safety, or even being first per se. Nevada has very successfully built a political economy based on doing legislative and regulatory arbitrage against neighboring states, particularly California. If they think there is a plausible way to drive revenue by allowing things that other states do not allow, it is a surprisingly easy sale. The famous liberalism of the state, where... (read more)

I have typically sought advice (and occasionally received unsolicited advice) from fashion-aware women, most of whom are happy to demonstrate their domain expertise. This has proven to be an efficient strategy that produces good results for relatively low cost. Most of the men I know that dress well rely on a similar strategy; the dearth of men who are savvy at this suggests a somewhat complex signaling game at work.

Take advantage of specialization. It is no different than when individuals solicit advice from me on a matter about which I am perceived as knowledgeable. People enjoy demonstrating their expertise.

There is no reason we cannot massively parallelize algorithms on silicon; it just requires more advanced computer science than most people use. Brains have a direct-connect topology, while silicon uses a switch fabric topology. An algorithm that parallelizes on the former may look nothing like one that parallelizes on the latter. Most computer science people never learn how to do parallelism on a switch fabric, and it is rarely taught.

Tangentially, this is why whole brain emulation on silicon is a poor way of doing things. While you can map the wetware, the ... (read more)
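The switch-fabric point above can be made concrete with a minimal sketch. This is an illustrative example, not anything from the original comment: a recursive-doubling ("butterfly") reduction organizes communication into rounds where each node exchanges with exactly one partner per round, a pattern that maps onto a switch fabric without link contention, unlike a brain-style direct-connect algorithm that assumes simultaneous any-to-any neighbor messaging.

```python
# Hypothetical sketch: a reduction organized for a switch fabric.
# Recursive doubling runs in log2(n) rounds; in each round every node
# exchanges with a single partner, so rounds are contention-free on a
# switched network. A direct-connect (brain-like) algorithm would
# instead assume all neighbor links can be driven at once.

def butterfly_sum(values):
    """Return (sum, rounds) for a power-of-two node count."""
    vals = list(values)
    n = len(vals)
    assert n & (n - 1) == 0, "sketch assumes a power-of-two node count"
    stride = 1
    rounds = 0
    while stride < n:
        # each node i exchanges with its partner at distance `stride`
        vals = [vals[i] + vals[i ^ stride] for i in range(n)]
        stride <<= 1
        rounds += 1
    return vals[0], rounds  # every node ends holding the full sum

total, rounds = butterfly_sum([1, 2, 3, 4, 5, 6, 7, 8])
```

Eight nodes reach the full sum in three rounds; the communication schedule, not the arithmetic, is what changes between topologies.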

A question that needs to be asked is where are you willing to go to find a job? San Jose? The best choices are somewhat context dependent.

Seaside's economy is based on a military post and agriculture, neither of which are conducive to an intellectually interesting job scene. There is a shortage of good computer people an hour north, so if you are looking up there and having trouble then there is probably a presentation gap. At the same time, I would not be surprised at all if you found the options in your area to be unsatisfactory.

0zntneo
Yeah, Seaside is where I am looking, mostly because my wife is at DLI right now.

The ASVAB is not an exemplar of careful correctness, and it is not targeted at people for whom that would be beneficial. When I took it many years ago there were a few questions with glaring ambiguities and questionable assumptions; I simply picked the answer that I thought they would want me to pick if I were ignorant of the subject matter.

I maxed the test.

The test is not aimed at intelligent, educated people. It is designed to filter out people of low intelligence. I've met many people that struggled to achieve 50%, something I used to find shocking. If ... (read more)

Define "top 1%". Many programmers may be "top 1%" at some programming domain in some sense but they will not be "top 1%" for every programming domain. It is conceivable that there are enough specializations in software such that half of all programmers are "top 1%" at something, even if that something is neither very interesting nor very important in any kind of grand sense. It is not just by domain either, many employers value a particular characteristic within that niche e.g. speed versus thoroughness versus optim... (read more)

What would a survey of a cross-section of "computer experts" have looked like predicting the Internet in 2005 from 1990? The level of awareness required to make that prediction accurately is not generally found; people who did understand it well enough to make an educated guess would be modeled as outliers. The above survey is asking people to make a similar type of prediction.

An important aspect of AI predictions like the above is that it is asking people who do not understand how AI works. They are definitely experts on the history of past atte... (read more)

A problem is that karma attempts to capture orthogonal values in a single number. Even though you can reduce those values to a single number, they still need to be captured as separate values, e.g. the Slashdot karma system, for a half-assed example.

Karma seems to roughly fall into one of three buckets. The first is entertainment value e.g. a particularly witty comment that nonetheless does not add material value to the discussion. The second is informational value e.g. posting a link to particularly relevant literature of which many people are unaware. The thir... (read more)

3Armok_GoB
I can think of at least 2 kinds of value you missed: Artistic value, for things like well-written stories and non-humorous but inspirational images. Implied effort value, for things like summaries that required reading through some huge number of articles but aren't that impressive in themselves, other than letting people know that those hundreds of articles didn't contain anything interesting and saving them the trouble of reading them.
5Raemon
Upvoted because it was well reasoned (if lacking in information I didn't already know), and because the last line is funny.
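The bucketed-karma idea above can be sketched as separate counters that are still reducible to one displayed number. The bucket names and weights below are illustrative assumptions, not a description of any existing system:

```python
# Hypothetical sketch: karma captured as orthogonal values that can
# still be collapsed into a single display score, Slashdot-style.
from dataclasses import dataclass

@dataclass
class Karma:
    entertainment: int = 0   # witty but non-substantive comments
    informational: int = 0   # relevant links and literature pointers
    insight: int = 0         # original material contributions

    def total(self, weights=(1, 2, 3)):
        # one number for display, without discarding the buckets
        w_ent, w_inf, w_ins = weights
        return (w_ent * self.entertainment
                + w_inf * self.informational
                + w_ins * self.insight)

k = Karma(entertainment=4, informational=2, insight=1)
score = k.total()
```

The point is that the reduction to a scalar happens at display time; the underlying buckets remain queryable, so "funny" and "insightful" votes are never conflated in storage.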

Gamification is essentially the art of exploiting human cognitive biases so it is very meta to use gamification to teach rationality.

2Risto_Saarelma
An important thing with games is that all the participants are supposed to share the understanding that they are participating voluntarily in a game with specific rules. As long as gamification sticks to this, it's already somewhat separable from the field of cognitive bias exploitation in general, where all sorts of subterfuge is generally the rule. Games are obviously compelling even with the players being aware that they are participating in a voluntary game. But does gamification work even when the ulterior goal the game is supposed to serve and the cognitive biases that it works on are made clear to the player? Jane McGonigal has described running gamification on herself, and apparently being successful with it.
0Psy-Kosh
Well, arguably "advertising" is that art. Gamification is rather more specific, though in some cases it may involve exploiting biases. But hey, twisted meta is always fun. :)

Chomsky? He is something of a bellwether for specious reasoning, which is a contribution of sorts. The obviously inconsistent logic of the various beliefs he holds makes his philosophy, such as it is, seem disjointed and arbitrary.

As a philosopher, he plays a "crazy uncle" character.

-3scientism
Chomsky's beliefs include the following: that empirical research has no place in linguistics, that linguistic problems should be solved by expert intuition, that biology is inapplicable to linguistics and psychology, and that language did not evolve. All this and more can be found in his collection of papers "New Horizons in the Study of Language and Mind." I don't think people realise how far his work is from the cognitive revolution he is credited with having founded.
1Desrtopa
I've often wondered whether he was worth my time to investigate further, since on one hand I regularly hear him cited as a remarkable and inspiring figure, and on the other, I've never read an argument of his that I found enlightening or unusually well formulated.

It is more or less what khafra stated. I'm not saying it is true in your case (hint: winking smiley), but it is very common for people to evaluate their life choices as you did without regard for the evidence. To put it another way, your statement would only be distinguishable from the ubiquitous life-choice confirmation bias if you stated it had made your life much worse.

I can imagine several places worse than the Bay Area for many people (and several places better), so it is not as though your statement was not plausible on its face. :-)

This story is about rapid iteration rather than quantity. The "quantity" is the detritus of evolution created while learning to produce a perfect pot. If a machine was producing pots it would generate great quantity but the quality would not vary from one iteration to the next.

There are many stories and heuristics in engineering lore suggesting rapid iteration converges on quality faster than careful design. See also: OODA loops, the equivalent military heuristic.

2atucker
That's true. Originally, this post was part of "Don't Fear Failure". I intended for it to talk about low-cost failures, how practicing helps, and then do a bit of talking about how rationalists should be able to pay mindful attention to their mistakes in order to learn from them before they get right back up and try again. So basically advocating rapid iteration after desensitizing people to failing. However, I wasn't quite able to tie it all together and it just felt like it dragged on. So instead I split it up into a post which says that failing isn't that bad, and another about how practice pays off. I could follow up with another post which more clearly spells out rapid iteration, but that might be a bit much. I'd rather move on to talking about perfectionism and unduly favoring the status quo. Could be wrong though.

That sounds an awful lot like confirmation bias. ;-)

4moshez
It's not confirmation bias when two people agree. It's confirmation bias when one refuses to take into account contradictory evidence. Do you have any?

I have always had this problem in a bad way, but the above prescription strikes me as flimsy. What is to prevent me from disabling the technical device so that I can get my pellet of rat food? What if I need to dig through a bunch of links for whatever work it is I am supposed to be doing? It does not structurally modify incentives or behaviors.

To put it another way, if it is a huge waste of time when you are supposed to be working on something else, is it ever not a waste of time?

The best solution to the problem of wasting time for myself is something t... (read more)

0Vaniver
It's not prevention. It's increasing the cost. If the startup cost of going to CNN is about the same as the startup cost of pulling out a book from my stack of books to read, and I prefer the second to the first, this will mean I always choose to read a book rather than go to CNN. However, without this fix, going to CNN is cheaper, and so I might choose it. Having to disable it (assuming that takes as long as the startup of something you want to do more) serves the same function.

As a general comment based on my own experience, there is an enormous value in studying existing art to know precisely what science and study has actually been done -- not what people state has been done. And at least as important, learning the set of assumptions that have driven the current body of evidence.

This provides an enormous amount of context as to where you can actually attack interesting problems and make a difference. Most of my personal work has been based on following chains of reasoning that invalidated an ancient assumption that no one had revisited in decades. I wasn't clever, it was really a matter of no one asking "why?" in many years.

Some of these hobbies are not like the others. I would classify hobbies based on whether or not rationalism is an essential prerequisite for engaging in the hobby. Programming and poker make sense to me but the rationales for the rest seem to be thinner, ascribing lessons that could be ascribed to almost any activity.

The distinction, as I see it, is that both programming and poker require rationalist discipline in depth that must be internalized to be effective. I can play video games or read/watch science fiction and benefit from the entertainment value ... (read more)

At a very high level, the problem is almost intrinsic; it is very difficult to stop a determined attacker given the current balance between defensive and offensive capabilities. A strong focus on hardening only makes it expensive, not impossible.

That said, most security breaches like the above are the result of incompetence, negligence, ignorance, or misplaced trust. In other words, human factors. Humans will continue to be a weak link across all of the components involved in security. There comes a point where systems are sufficiently hardened at a technical level that it is almost always easiest to attack the people that have access to them rather than the systems themselves.

For #1, having to drive, work, go to another important function, or being required to drink more later at some other function seems to be an acceptable occasional excuse but not a permanent one.

On #3, many cultures have sayings and aphorisms that share the idea that people who do not drink are not as trustworthy in various contexts. Much of it seems to follow from the idea that people are more honest when they've had a drink or two, and therefore people who do not drink are hiding their true character. The display of honesty is considered a trust-buildin... (read more)

8gwern
At the very least, refusing to drink can be seen as an attempt to empower yourself - let other people get stupider and more irrational (drunk) while remaining in full possession of your own mental powers. I recall at least one PUA page which described how to get the bartender to serve you watered down drinks (or water period) while your female targets get the full-strength alcohol.
4Raemon
It did occur to me soon after posting that drinking slowly (to the point that it's basically not at all) would allow me to give the impression that I'm drinking, an honest answer when people ask "what are you drinking?" with minimal side effects. The drinks I'm able to mostly enjoy tend to be "fruity" drinks that have... signaling issues.

On the other side of that argument, a fetus does not have the higher brain function or consciousness that would allow it to experience pain. When an adult is put under general anesthesia for surgery we do not generally consider them to be "experiencing pain" even though the body is still reacting to the damage as though they were conscious. They still have brain function; they temporarily lack the higher brain function required for the meaningful experience of pain. A very similar argument could be applied to a fetus.

There is another effective framing technique that I almost never see used that might be worth considering because I've seen it used well in the past.

Most people think evolution is a purely biological concept and it is virtually always framed in such ways. This runs headlong into the mystical beliefs many people attach to living organisms. Making evolution a property of an organism is no different than making a "soul" the property of an organism to them, and fits in the same cognitive pigeonhole. A lot of the jumbled chemistry and thermodynamic a... (read more)

0Tiiba
The problem with some creationists (the ones who get the basics), as I understand it, is not that they don't think evolution is happening, but that they don't think it's fast enough to transform proto-bacterial zero-cellular balls of chemicals into people in a mere three billion years. Although, personally, I think it's a really long time.
2Raemon
This was definitely what solidified my position a few years ago, changing my stance from "evolution is very likely to be right" to "it's basically impossible for it to NOT be right." The defining moment was learning about the Avida code, which demonstrate that "irreducible complexity" was basically inevitable. For this e-mail, I'm thinking of glossing over most of the pre-genetic evidence. Point out that it DID heavily lean towards evolution being true, but the ultimate test was genetics. Evolutionary theory made predictions about how genetics would turn out to work, and if that prediction had turned out to be wrong, we would have had to make major changes to the theory or scrap it completely. But it didn't. Related species shared the percentage of genes we'd have expected them to, and the rate of mutation that we've observed demonstrates that it's mathematically inevitable. I'm not sure that's the "best" argument in an absolute sense, but I think it makes the most sense to focus on in the allotted space/attention-span.

Tangentially, the fact that she is arguing with a person that believes in evolution could itself be a problem that changes the dynamic.

I've often observed that most people believe in evolution in essentially the same way a creationist believes in creationism. They did not reason themselves into that position nor do they really understand evolution in any significant way, it was a position they were told all right-thinking people should believe and so that is why they do so. The charge often forwarded by creationists that evolution is merely another quasi-r... (read more)

A related empirical data point is that we already see strong light cone effects in electronic markets. The machine decision speeds are so fast that it is not possible to usefully communicate with similarly fast machines outside of a radius of some small number of kilometers because the state of reality at one machine changes faster than it can propagate that information to another due to speed of light limitations. The diminishing ability to influence decisions as a function of distance raises questions about the relevancy of most long haul communication b... (read more)

-1jacob_cannell
Good points. Looking at how the effect is already present in high speed digital trading makes it more immediately relevant, and perhaps we could generalize from some of those lessons for futures dominated by high speed intelligences. Yes, this is a related divergent effect. The idea of copying the internet into local caches to reduce latency is an example.
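The market light-cone effect described above admits a back-of-the-envelope calculation. The microsecond decision latency below is an illustrative assumption; the speed of light in optical fiber (roughly two-thirds of c) is what actually bounds propagation:

```python
# Back-of-the-envelope light-cone radius for electronic markets.
# decision_time_s is an assumed machine decision latency, not a
# measured figure; fiber propagation at ~2/3 c bounds the reach.
C_VACUUM_KM_S = 299_792.458            # speed of light in vacuum, km/s
C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3   # rough speed of light in fiber

def reachable_radius_km(decision_time_s, c=C_FIBER_KM_S):
    """Farthest peer that can learn your state before it changes again:
    one-way distance light covers within one decision interval."""
    return c * decision_time_s

# at ~1 microsecond decision speeds, the useful radius is ~200 meters
r = reachable_radius_km(1e-6)
```

At microsecond decision speeds the useful radius is a couple hundred meters, which is why co-location inside the exchange data center dominates and long-haul links carry increasingly stale state.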

It is analogous to how you can implement a hyper-cube topology on a physical network in normal 3-space, which is trivial. Doing it virtually on a switch fabric is trickier.

Hyper-dimensionality is largely a human abstraction when talking about algorithms; a set of bits can be interpreted as being in however many dimensions is convenient for an algorithm at a particular point in time, which follows from fairly boring maths e.g. Morton's theorems. The general concept of topological computation is not remarkable either, it has been around since Tarski, it just... (read more)

You are missing the point. There are hyper-dimensional topological solutions that can be efficiently implemented on vanilla silicon that obviate your argument. There is literature to support the conjecture even if there is not literature to support the implementation. Nonetheless, implementations are known to have recently been developed at places like IBM Research that have been publicly disclosed to exist (if not the design). (ObDisclosure: I developed much of the practical theory related to this domain -- I've seen running code at scale). Just because ... (read more)

0jacob_cannell
I'm totally missing how a "hyper-dimension topological solution" could get around the physical limitation of being realized on a 2D printed circuit. I guess if you use enough layers? Do you have a link to an example paper about this?

There is a subtle point I think you are missing. The problem is not one of processing power or even bandwidth but one of topology. Increasing the link bandwidth does not solve any problems nor does increasing the operations retired per clock cycle.

In parallel algorithms research, the main bottleneck is that traditional computer science assumes that the communication topology is a directly connected network -- like the brain -- but all real silicon systems are based on switch fabrics. For many years computer science simplified the analysis by treating thes... (read more)

3jacob_cannell
Topology is the central ultimate scalability problem; it manifests in multiple forms such as interconnect, the memory bottleneck, and so on. If you could magically increase the memory/link bandwidth and operations retired per clock cycle to infinity, that would solve the hard problems. 2D topology and the wide separation of memory and computation limit the link bandwidth and operations per clock. The brain employs massive connectivity in 3D but is still obviously not fully connected, and even it has to employ some forms of switching/routing for selective attention and other component algorithms. The general topological problem is that a von Neumann design on a 2D topology with separate logic and memory, divided into N logic gates and M memory gates, can access about sqrt(M) of its memory bits per clock and has similar sublinear scaling in bit ops/clock. Then factor in a minimum core logic size to support desired instruction capability and the effects of latency and you get our current designs. If you are willing to make much smaller very limited instruction set ASICs and mix them in with memory modules you can maximize the performance of a 2D design for some problems, but it's still not amazingly better. I don't see this as something that you can magically solve with a new body of computer science. The theoretical world does need to factor in this additional complexity, but in the real world engineers already design to the real bandwidth/link constraints. A switch fabric is a necessity with the very limited scaling you get in 2D (where memory/computation scales in 2D but transit scales in 1D). It's a geometry issue. The ultimate solution is to start moving into 3D so you can scale link density with surface area instead of perimeter. Of course then the heat issue is magnified, but that research is already under way. A complete theoretical framework of parallel computation should be geometric, and deal with mapping abstract dimensionless algorithms onto physical 3D co
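The sqrt(M) claim in the reply above follows from simple geometry: in a 2D layout, memory area grows quadratically while the perimeter carrying the links grows linearly. A tiny illustrative model (the constant k is an arbitrary stand-in for process technology, not a real figure):

```python
# Illustrating the 2D perimeter-vs-area scaling argument above:
# memory capacity M fills an area, but link bandwidth crosses a
# perimeter, so accessible bits per clock scale like sqrt(M).
import math

def bits_per_clock(M, k=1.0):
    """Accessible memory bits per clock under the 2D perimeter model.
    k is an arbitrary technology constant (assumption)."""
    return k * math.sqrt(M)

# quadrupling the memory only doubles the per-clock accessible bits
ratio = bits_per_clock(4 * 2**20) / bits_per_clock(2**20)
```

This is why the reply argues for moving into 3D: link density then scales with surface area rather than perimeter, changing the exponent rather than the constant.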

Of course, it turns out I'll be in London on that day...

Elysian Pub is a good spot. Conveniently located too, as far as I am concerned.
