All of Daniel_Burfoot's Comments + Replies

First, I appreciate the work people have done to make LW 2 happen. Here are my notes:

  1. Strong feeling - the links and descriptions of the Sequences, the Codex, and HPMOR (while good) should not be at the top of the page. The top should be the newest material.
  2. Please please please include a "hide subthread" option to collapse a comment and all its responses. That is a dealbreaker for me: if a site doesn't have that feature, I won't read the comments.
  3. Current LW has a really nice alternating color scheme for comment/reply. One comment will have a
... (read more)
3Raelifin
Issue 2 is about to be fixed: https://github.com/Discordius/Lesswrong2/pull/188
6Habryka
Strongly agree with 1. I have a plan for a separate thing at the top of the frontpage for logged-in users that takes up much less space and is actually useful for multiple visits. Here is a screenshot of my current UI mockup for the frontpage: https://imgur.com/a/GXjTY The emphasis continues to be on historical rather than recent content, with the frontpage emphasizing reading for logged-in users. If you don't have anything in your reading queue, the top part disappears completely and you just have the recent discussion (though by default HPMOR, The Sequences, and The Codex are in your reading queue).
5casebash
I agree that people should not be able to upvote or downvote an article without having clicked through to it. I also find the comments hard to parse because the separation is less explicit than on either Reddit or here.
1Adam Zerner
For 6, I think users who want to minimize temptation should at least have the option of disabling this. Relevant: http://www.timewellspent.io/.
0Adam Zerner
2, 3 and 7 all seem like pretty noncontroversial and doable things.
4Raemon
Much of this is stuff that's on the development team's agenda (either to change or to think about). One significant, very intentional change is opening up with the Sequences, the Codex, and HPMOR (albeit with a lot less certainty about HPMOR being included there). We do plan to have an "All Posts" page that ends up being the primary way you consume the site (with newest content first), and for people who've already read the Sequences et al., that'll be the preferred way to interact with the site. But for newcomers, a major shift with Lesserwrong is essentially: since the glory days that most good content comes from were a while back, and since understanding the Sequences really is important for engaging productively with the site, we want to encourage newcomers to first engage with that content rather than treating the site as a forum where they can show up and start posting immediately. Without that, the discussion won't really have the elements that make Less Wrong particularly valuable.

Why do people see Mars as a better target for human colonization than the Moon? Most comments on lunar colonization seem to refer to two facts:

  1. the Moon has quite low gravity, so it cannot maintain an atmosphere for a long period of time.
  2. the Moon has no magnetic field, so it will not protect us from solar radiation.

In my mind, both of these problems can be solved by a ceiling or dome structure. The ceiling both retains the atmosphere and also blocks harmful radiation. Note that a failure in the ceiling won't be catastrophic: the atmosphere won't drain rapidly, and the amount of radiation exposure per unit time isn't disastrously high even without the ceiling.

0morganism
Mars has pretty low gravity too; maybe Luna has enough to protect health. Mars's atmosphere is less than 1% of Earth's. Mars has pretty much no magnetic field either, just a few (unexplained) loops that look like solar prominences. Luna is much easier to supply.
2turchin
The great thing about a Moon colony is that its ruins could survive a billion years, and would be found by the next civilization on Earth if one appears. They would find our DNA and data and return humanity to life. There are also ways to attract the attention of the next civilization to the site of our former colony. On Mars, a colony's remains can't survive for so long, as Phobos will collide with Mars in about 50 million years, and weathering is also stronger. A self-sustaining colony on Mars is probably not possible without self-replicating robotics. If such robotics were created, they would create new space risks and new opportunities for colonization and interstellar travel. This would make a Mars colony less relevant for survival.
5turchin
I uploaded a new presentation based on our article (with Brian Green) in Futures about surviving global risks using already-existing nuclear-powered submarines. They are robust, military-grade survivors and could be converted into refuges at low cost. https://www.slideshare.net/avturchin/nuclear-submarines-as-global-risk-shelters Nuclear subs could provide the same level of protection as Moon or Mars colonies for most of the catastrophes where life on Earth survives, for a fraction of the cost, starting from $1 million, compared with trillions for a Mars colony.
2Dagon
I think there's a ton of overlap in the problems faced in colonizing anywhere off-planet, so I strongly expect that colonizing either implies colonizing the other pretty quickly (half a century or less). IMO, for pre-colony habitation (not self-sufficient, not going for exponential growth), the Moon is so much closer that it's almost guaranteed to be the starter and test location, with Mars and then maybe the Jovian moons trailing by a few dozen years. At that point, it may turn out that one of the other places has enough more starter atmosphere and ready raw materials than the Moon that it's better to make the base-to-colony transition somewhere other than Earth's moon. Or maybe we'll collapse under the singularity or decide to fill the oceans with people before we deal with space.
2Thomas
There is a mountain on the Moon's south pole, where the Sun is always shining. Except when it's covered by Earth, which is rare and not for a long time. A great place for a palace of the Solar System's Emperor.
9[anonymous]
People usually point to there actually being hydrogen and carbon accessible on the surface of Mars, in the form of widespread permafrost/humidity and the CO2 atmosphere, whereas the only biomass/fuel precursor element that exists in large quantities on the Moon is oxygen (in the rock, along with various metals and ions, just like rock on Earth, requiring interesting chemistry and/or molten-rock electrolysis to get it out). Not much in the way of precursors to organic material on the Moon. Personally I think both places are kind of absolute shit-holes for canned monkeys. Both are science bonanzas, the Moon for information on the proto-Earth, and Mars for looking at a body which has had much less geological recycling since Hadean times and an ancient second hydrosphere and, for all we know, biosphere.
6Thomas
Can't use the Moon. It's already booked and reserved for a computronium.

Very nice, thanks. Ahh... Haskell really is quite pretty.

Good analysis, thanks. I buy the first two points. I'd be shocked to see an implementation that actually makes use of the lower metadata requirements. Are there languages that provide a boolean primitive that uses a single bit of memory instead of a full byte? Also I don't understand what you mean by persistence.
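On the single-bit question: some languages and libraries do pack booleans one per bit rather than one per byte; C++'s `std::vector<bool>` specialization and C bit-fields are the standard examples. The idea can be sketched in Python (a toy illustration, not tied to any implementation discussed here):

```python
# Sketch of bit-packed booleans: one bit per flag instead of one byte,
# as C++'s std::vector<bool> specialization does natively.
class BitArray:
    def __init__(self, n):
        self.buf = bytearray((n + 7) // 8)  # 8 flags per byte

    def get(self, i):
        return bool((self.buf[i // 8] >> (i % 8)) & 1)

    def set(self, i, value):
        if value:
            self.buf[i // 8] |= 1 << (i % 8)
        else:
            self.buf[i // 8] &= ~(1 << (i % 8))
```

Ten flags fit in two bytes instead of ten; the cost is a shift and a mask on every access, which is why most languages default to byte-sized booleans.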

2gjm
Suppose you're making tree structures in a pure functional language, where there is no mutable state. Then what you need is functions that e.g. take a tree and a new element and return a new tree, sharing as much of its structure as possible with the old for efficiency's sake, that has the new element in it. These are sometimes called persistent data structures because the old versions stick around (or at least might; they might get garbage-collected once the runtime can prove nothing will ever use them).
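The structure sharing described above can be sketched outside Haskell too. A minimal persistent BST insert in Python, with tuples standing in for immutable nodes (illustrative, not from any code referenced here):

```python
# Persistent (immutable) BST insert: returns a NEW tree that shares
# untouched subtrees with the old one; the old tree stays valid.
# A node is a tuple (left, value, right); an empty tree is None.
def insert(t, x):
    if t is None:
        return (None, x, None)
    left, v, right = t
    if x < v:
        return (insert(left, x), v, right)   # right subtree is shared
    if x > v:
        return (left, v, insert(right, x))   # left subtree is shared
    return t  # already present: the whole tree is shared

t1 = insert(insert(insert(None, 2), 1), 3)
t2 = insert(t1, 4)
```

After the last line, `t1` is unchanged and `t2`'s left subtree is literally the same object as `t1`'s, which is the structure sharing that makes persistent updates cheap.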

-1, this is pointlessly negative. There's a disclaimer at the top (so it's not like he's claiming false authority), the title is appropriate (so it's not like you were tricked into clicking on the article), and it's reasonably on-topic because LW people are in the software/AI/entrepreneurship space. Sure, maybe most of the proposals are far-fetched, but if one of the ideas sparks an idea that sparks an idea, the net value could be very positive.

1Dagon
No, it's pointedly negative. This post doesn't belong on LW.
0Lumifer
I'm not saying it's misleading. I'm saying it's stupid.

Has anyone studied the Red Black Tree algorithms recently? I've been trying to implement them using my Finite State technique that enables automatic generation of flow diagrams. This has been working well for several other algorithms.

But the Red Black tree rebalancing algorithms seem ridiculously complicated. Here is an image of the deletion process (extracted from this Java code) - it's far more complicated than an algorithm like MergeSort or HeapSort, and that only shows the deletion procedure!

I'm weighing two hypotheses:

  1. Keeping a binary tree balance
... (read more)
6pangel
An intuition is that red-black trees encode 2-3-4 trees (B-trees of order 4) as binary trees. For a simpler case, 2-3 trees (i.e., B-trees of order 3) are either empty, a (2-)node with 1 value and 2 subtrees, or a (3-)node with 2 values and 3 subtrees. The idea is to insert new values in their sorted position, expand 2-nodes to 3-nodes if necessary, and bubble up the extra values when a 3-node would need to be expanded. This keeps the tree balanced. A 2-3-4 tree just generalises the above. Now the intuition is that red means "I am part of a bigger node." That is, red nodes represent the values contained in some higher black node. If the black node represents a 2-node, it has no red children. If it represents a 3-node, it has one red child, and if it represents a 4-node, it has 2 red children. In this context, the "rules" of red-black trees make complete sense. For instance, we only count black nodes when comparing branch heights because those represent the actual B-tree nodes. I'm sure that with a bit of work, it's possible to make complete sense of the insertion/deletion rules through the B-tree lens but I haven't done it. Edit: I went through the insertion rules and they do make complete sense if you think about a B-tree while you read them.
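This decoding can be made concrete. A rough sketch (assuming a simple `(color, left, value, right)` tuple representation, which is an assumption of this sketch and not from any code referenced above) that collapses red children into their black parent to recover the 2-3-4 node:

```python
RED, BLACK = "R", "B"

def to_234(node):
    # Decode a red-black tree into the 2-3-4 tree it represents:
    # red children are absorbed into their black parent's node.
    # A node is (color, left, value, right); an empty tree is None.
    if node is None:
        return None
    _, left, value, right = node
    values, children = [value], [left, right]
    if left is not None and left[0] == RED:      # absorb red left child
        _, ll, lv, lr = left
        values.insert(0, lv)
        children[:1] = [ll, lr]
    if right is not None and right[0] == RED:    # absorb red right child
        _, rl, rv, rr = right
        values.append(rv)
        children[-1:] = [rl, rr]
    return (values, [to_234(c) for c in children])

# A black root with two red children decodes to a single 4-node [1, 2, 3]:
tree = (BLACK, (RED, None, 1, None), 2, (RED, None, 3, None))
```

A black node with zero, one, or two red children decodes to a 2-, 3-, or 4-node respectively, which is exactly the correspondence described above.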
6gjm
There are other kinds of binary tree with simpler rebalancing procedures, most notably the AVL tree mentioned by cousin_it. I think red-black tends to dominate for some combination of these reasons: * Tradition. Some influential sources (e.g., Sedgewick's algorithms book[1], SGI's STL implementation) used, or gave more visibility to, red-black trees, and others copied them. * Fewer rotations in rebalancing. In some circumstances (certainly deletion; I forget whether it's true for insertion too) AVL trees may need to do Theta(log n) rotations, whereas red-black trees never need more than O(1). * Does this mean an actual advantage in performance? Maaaaybe. Red-black trees are, in the worst case at least, worse-balanced, which may actually matter more. Such benchmarks as I've seen don't suggest a very big advantage for either red-black or AVL over the other. * Persistence. If you want to make a persistent data structure out of a binary tree, whether for practical reasons or just to show your students in Haskell, it's easier with a red-black tree. * Metadata requirements. A red-black tree needs one bit per node, to store the redness/blackness. An AVL tree needs one and a half bits :-) to store the -1/0/+1 height difference. Perhaps in some implementations it's OK to "waste" one bit per node but not two. [1] I think. I don't have a copy myself. Surely it must at least mention AVL trees too, but my hazy recollection is that the balanced-tree algorithm Sedgewick gives most space to is red-black.
2cousin_it
I always felt that AVL trees were easier to understand than red-black. Just wrote some Haskell code for you. As you can see, both insertion and deletion are quite simple and rely on the same rebalancing operation.
0[anonymous]
AVL trees always felt simpler to me than red-black. Here's a quick implementation in Haskell, adapted from some code I found online. Delete seems to be about as complex as insert. It might have bugs, but I'm pretty sure the correct version would be about as long. Edit: it seems someone has implemented deletion for RB trees in Haskell as well, and it doesn't look too complicated. I haven't checked it carefully though.
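The Haskell code linked in these comments isn't reproduced here, but the AVL idea they describe, immutable nodes with one rebalancing step shared by all operations, can be sketched in Python (a toy persistent version, insertion only):

```python
# Persistent AVL tree, insertion only. A node is (key, left, right, height).
def node(key, left=None, right=None):
    return (key, left, right, 1 + max(height(left), height(right)))

def height(n):
    return n[3] if n is not None else 0

def bal(n):
    return height(n[1]) - height(n[2])

def rot_left(n):
    k, l, r, _ = n
    rk, rl, rr, _ = r
    return node(rk, node(k, l, rl), rr)

def rot_right(n):
    k, l, r, _ = n
    lk, ll, lr, _ = l
    return node(lk, ll, node(k, lr, r))

def rebalance(n):
    # The single rebalancing step: at most two rotations restore the
    # AVL invariant |height(left) - height(right)| <= 1.
    if bal(n) > 1:
        if bal(n[1]) < 0:                          # left-right case
            n = node(n[0], rot_left(n[1]), n[2])
        return rot_right(n)
    if bal(n) < -1:
        if bal(n[2]) > 0:                          # right-left case
            n = node(n[0], n[1], rot_right(n[2]))
        return rot_left(n)
    return n

def insert(n, key):
    if n is None:
        return node(key)
    k, l, r, _ = n
    if key < k:
        return rebalance(node(k, insert(l, key), r))
    if key > k:
        return rebalance(node(k, l, insert(r, key)))
    return n

def inorder(n):
    return [] if n is None else inorder(n[1]) + [n[0]] + inorder(n[2])
```

Inserting 1..7 in order, the worst case for a plain BST, yields a perfectly balanced tree of height 3, and because nodes are immutable the structure is persistent for free.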

Can anyone offer a linguistic explanation for the following phenomenon related to pronoun case and partial determiners:

  1. None of us want to go to school tomorrow.
  2. None of we want to go to school tomorrow (**).
  3. We want to go to school tomorrow.
  4. Us want to go to school tomorrow (**).
3lmn
In (1) the subject is the word "none". The word "us" is part of the prepositional phrase "of us".

Theory of programming style incompatibility: it is possible for two or more engineers, each of whom is individually highly skilled, to be utterly incapable of working together productively. In fact, the problem of style incompatibility might actually increase with the skill level of the programmers.

This shouldn't be that surprising: Proust and Hemingway might both be gifted writers capable of producing beautiful novels, but a novel co-authored by the two of them would probably be terrible.

1Lumifer
That seems rather obvious to me.
0WalterL
Kind of... Like, part of being 'highly skilled' as a programmer is being able to work with other people. I mean, I get what you are saying, but working with assholes is part of the dev's tool bag, or he hasn't been a dev very long.

I haven't written it up, though you can see my parser in action here.

One key concept in my system is the Theta Role and the associated rule. A phrase can only have one structure for each role (subject, object, determiner, etc).

I don't have much to say about teaching methods, but I will say that if you're going to teach English grammar, you should know the correct grammatical concepts that actually determine English grammar. My research is an attempt to find the correct concepts. There are some things that I'm confident about and some areas where the syst... (read more)

0Valerio
Daniel, I'm curious too. What do you think about Fluid Construction Grammar? Can it be a good theory of language?

Against Phrasal Taxonomy Grammar, an essay about how any approach to grammar theory based on categorizing every phrase in terms of a discrete set of categories is doomed to fail.

0Wei Dai
I'm curious about your "system that doesn’t require a strict taxonomy". Is that written up anywhere? Also, does your work have any relevance to how children should be taught grammar in school?

In terms of strategy, I recommend thinking about going to work at the Montreal Institute for Learning Algorithms. They recently received a grant from OpenPhil to do AI safety research. I can personally recommend the two professors at McGill (Joelle Pineau and Doina Precup). Since you are Russian, you should be able to handle the cold :-)

Continuing with Adams' theme of congratulating himself on making correct predictions, I'll point out that I correctly predicted both that Adams did in fact want Trump to win a year ago, and also planned to capitalize on the prediction if it came true, by writing a book:

My guess is that Adams is hoping that Trump wins the election, because he will then write a book about persuasion and how Trump's persuasion skills helped him win. He already has a lot of this material on his blog. In that scenario he can capitalize on his correct prediction, which seemed

... (read more)
3fortyeridania
I hope you do, so I can capitalize on my knowledge of your longstanding plan to capitalize on your knowledge of Adams' longstanding plan to capitalize on his knowledge that Trump would win with a book with a book with a book.

Does anyone have good or bad impressions of Calico Labs, Human Longevity, or other hi-tech anti-aging companies? Are they good places to work, are they making progress, etc?

2Lumifer
I expect them to be nice places to work (because they are not subject to the vulgar and demeaning necessity to turn a profit); I also don't expect them to be making much progress in the near future.

I agree with you in the context of entrepreneurship, but the OP was talking about self improvement. The best strategy for learning or self-improving may be very different from the best strategy for building a company.

2username2
Your post said: Maybe we disagree on what it means to "lone wolf." If I try to steel-man your position, I can come up with a weak and a strong interpretation: The weak interpretation is that being an autodidact (capable of learning things on your own) will bring you higher chances of success. Being an autodidact myself, I agree from anecdotal experience. Also, just being an expert in your field means developing autodidact skills at some point, because eventually you surpass the level of all available classes and have to learn from the latest research journals and technical reports. However, I would argue that this should still remain a social activity where you continue to interact with collaborators and bounce ideas off of trusted colleagues in order to avoid many of the pitfalls that come from truly working alone. This isn't a lone wolf so much as a free-thinking pack wolf, to carry the metaphor, that enjoys the best of both worlds. The strong interpretation is that you will or even can be successful by truly embarking on a lone quest all by yourself. It is this interpretation that I disagree with so strongly, for the reasons given. In my experience, smart people who go the "lone wolf" route inevitably end up in crackpot/crank territory as they accumulate bad ideas in their personal blind spots, assuming they don't fall prey to akrasia in the first place. In this sense I agree with the OP: glorifying the "lone wolf" path has done a lot of harm to a lot of LWers.

This is a mean vs median or Mediocristan vs Extremistan issue. Most people cannot do lone wolf, but if you can do lone wolf, you will probably be much more successful than the average person.

Think of it like this. Say you wanted to become a great writer. You could go to university and plod through a major in English literature. That will reliably give you a middling good skill at writing. Or you could drop out and spend all your time reading sci-fi novels, watching anime, and writing fan fiction. Now most people who do that will end up terrible writers. B... (read more)

5plethora
I think this discussion is somewhat confused by the elision of the difference between 'autodidact' and 'lone wolf'. 'Autodidact', in internet circles, is generally used to mean 'anyone who learns things primarily outside a formalized educational environment'; it's possible to be an autodidact while still being heavily engaged with communities and taking learning things as a social endeavor and so on, and in fact Eliezer was active in communities related to LW's subject matter for a long time before he started LW. By the same token, one of the main things I took from reading Ben Franklin's autobiography was that, despite having little formal schooling and being solely credited for many of his innovations, he didn't actually do it alone. I doubt he would've been even a tenth as successful as he was without something like his Junto. Some people will get more out of formal education than others, although getting things out of formal education is itself a skill that can be learned. (It seems to require an ability to buy into institutions on an emotional level that many of us lack. I saw college as an obnoxious necessity rather than a set of opportunities, and as a result got much less out of it than I could have. This seems to be a common mistake.) But I just don't think it's possible to become a spectacular writer, or even a middling one, as a lone wolf. If nothing else, you need feedback from a community in order to improve. Look at lone-wolf outsider art -- it's frequently unusual, but how much of it is good?
2aqsalose
You could have found a more convincing example. Objective metrics of literary quality are hard to come by, but HPMOR does suffer from many stereotypical sins of fanfic / bad genre writing and makes for a tiresome read. (One that I found especially grating and that finally made me drop the story altogether could be described as "my main character is not OP because I set up these plainly arbitrary obstacles as a 'balance'". Please, no. There's more to writing enjoyable, interesting characters in meaningful stories than such naive "balancing". The preferable end result is a piece of fiction that has more going on than a surface-level entertainment plot amenable to being measured in terms such as "is my character OP".) However, I did not register an account just to lambaste Eliezer's fiction. Here are a couple of points that hopefully tie this comment to the main thread of discussion (so that this contribution provides some signal instead of pure noise): 1. Taking a year-long course in lit, or at least getting some input from the tradition of literature, might have improved Eliezer's writing. A class isn't the only way to attain that input, but it certainly helps in finding out if you have missed something vital in your self-study. (After finding out those pieces of information you are free to judge and dismiss them, too, if you want, but you are now dismissing that information with the knowledge that it exists, which is prone to make your act of dismissal more intelligent and productive.) 2. I don't have an exhaustive collection of biographies at hand, but I believe the general trend in "successful writing" is that a significant portion (probably a majority) of successful writers (including, but not limited to, authors included in the Western canon as creators of "good literature") read a lot, wrote a lot, and had a lot of corrective input to improve their writing during their careers. Actually, I wouldn't be surprised if prior to starting

Taking classes is a relatively Mediocristan-style way to work with others, but there are other ways that get you Extremistan-style upside.

One way is to find a close collaborator or two. Amos Tversky and Daniel Kahneman had an extremely close collaboration, doing most of their thinking in conversation as they were developing the field of heuristics and biases research (as described in The Undoing Project). It's standard startup advice to have more than one founder so that you'll have someone "to brainstorm with, to talk you out of stupid decisions, and... (read more)

This is a mean vs median or Mediocristan vs Extremistan issue. Most people cannot do lone wolf, but if you can do lone wolf, you will probably be much more successful than the average person.

I cannot disagree with this more strongly. I am a serial entrepreneur, and a somewhat successful one. Still chasing the big exit, but I've built successful companies that are still private. Besides myself, I've met many other people in this industry whom you'd be excused for thinking are lone wolves. But the truth is the lone wolves don't make it as they build things t... (read more)

0[anonymous]
You can also have impact by being okay at several things. For example, I'm okay at both algorithms and UI, so I can get hired easily and then have my pick of projects (because most programmers prefer backend work). If I was also okay at writing or management, I'd probably be rich by now. I think Eliezer's success was also due to being good at several things, not being the best at one thing.
1Viliam
Why not both? The English literature lessons and sci-fi novels / anime / fan fiction. I don't know much about writing, but e.g. studying computer science at universities does not seem to prevent people from creating open source software.

I am working on a software tool that allows programmers to automatically extract FSM-like sequence diagrams from their programs (if they use the convention required by the tool).

Here is a diagram expressing the Merge Sort algorithm

Here is the underlying source code.

I believe this kind of tool could be very useful for code documentation purposes. Suggestions or improvements welcome.
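The tool's actual convention isn't shown here, but the general idea of extracting a diagrammable record from a program can be illustrated with a toy instrumented merge sort. The `trace` log below is a stand-in invented for this sketch, not the tool's real format:

```python
trace = []  # (event, payload) log that a diagram generator could consume

def merge_sort(xs):
    trace.append(("split", list(xs)))     # record entry into the split state
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    merged = merge(merge_sort(xs[:mid]), merge_sort(xs[mid:]))
    trace.append(("merge", merged))       # record the merge transition
    return merged

def merge(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]
```

Rendering the accumulated `trace` as boxes and arrows would give a crude split/merge diagram of a single run; the convention-based approach presumably generalizes this to arbitrary state machines.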

0jackk
You might be interested in Conal Elliott's work on Compiling to Categories, which enables automatic diagram extraction (among a bunch of other things) for Haskell.
2ChristianKl
Most code documentation happens in text files. Maybe it's worth drawing the diagram in ASCII or Unicode characters?

There are lots of cacti that are mostly hairy/fuzzy instead of pointy.

In terms of air flow protection purchased vs biological effort expended, I'm not sure a leaf is better than a spike.

0Elo
The fur/spike can also be at specific widths to block certain wavelengths of light

For a long time it was odd to me that cacti have lots of spikes and big thorns. I supposed that the goal was to ward off big ruminants like cows, but that doesn't really make much sense, since the desert isn't really overflowing with big animals that eat a lot of plants.

It turns out that protection from predators is only a secondary goal. The main goal is protection from the environment. The spikes capture and slow the air moving around the plant, to preserve moisture and protect against the heat.

3cousin_it
Hang on, I'm not sure I buy it. Why are they so thin, hard and sharp then? Some kind of fuzz or flat leaves would work better.

Given that many of the most successful countries are small and self-contained (Singapore, Denmark, Switzerland, Iceland, arguably the other Scandinavian countries), and also the disasters visited upon humanity by large unified nation-states, why are people so attached to the idea of large-scale national unity?

0ChristianKl
Large nation-states have a lot of power and that power allows them to convince people that large-scale national unity is important. It's easier to lobby large states than to lobby a bunch of small states and that means the think tanks prefer centralization.
0Viliam
Maybe the countries you named also have something else in common, for example geography that makes them easier to defend (I am just guessing here), which allows them to use strategies that are not possible for most other countries. The obvious answer to why some people like the idea of a big country is that being a small country next to a big country may be very unfortunate if the big country decides to expand its territory in your direction. More soldiers, a greater military budget, a greater propaganda budget, more brainpower to develop strategy and tactics, etc. How do you survive this? Saying "because you are clearly superior, e.g. technologically" does not quite explain how you survived to the point where you became superior. Possible answer: your territory sucks; no natural resources, not even enough food. The big country never actually wanted your territory, or at least always had a more attractive alternative. No one was messing with you from the outside, and you succeeded in gradually developing a highly functional society inside. But if your neighbors are repeatedly trying to take pieces of your territory, the idea of uniting with someone stronger seems attractive.
0gilch
I'm not sure what idea you're talking about. Are you talking about intranational unity or international unity? Can you give examples?

I really don't think you should try to convince mid-career professionals to switch careers to AI safety research. Instead, you should focus on recruiting talented young people, ideally people who are still in university or at most a few years out.

3[anonymous]
I agree. I must admit that the "convince academics" part of the plan is still a bit vague. It's unclear to me how new fields become fashionable in academia. How does one even figure that out? I'd love to know. The project focuses on the "create a MOOC" part right now, which is plenty of value in itself.

Does anyone follow the academic literature on NLP sentence parsing? As far as I can tell, they've been writing the same paper, with minor variations, for the last ten years. Am I wrong about this?

0MrMind
I'm not following NLP per se, but lately I've seen papers on grammar analysis based on the categorical semantics of quantum mechanics (that is, dagger-compact categories). Search the latest papers by Coecke on the arXiv.
2Darklight
Well, as far as I can tell, the latest progress in the field has come mostly through throwing deep learning techniques like bidirectional LSTMs at the problem and letting the algorithms figure everything out. This obviously is not particularly conducive to advancing the theory of NLP much.

We're neither Athenians nor Spartans. Athens and Sparta were city-states. Greek culture thrived because Greece is a mountainous archipelago that prevented large empires from forming. The Greek city-states were constantly at war with each other and with the outside world, and so they had to develop strong new ideas to survive.

You mentioned the Netherlands, which is quite similar in the sense that it was a small country with strong threatening neighbors, but still became successful because of its good social technology. The story of Europe in general is bas... (read more)

0wubbles
But we are a community that faces a choice about what values we want: insularity and strong group membership, or openness and intellectualism. This seems fairly analogous to me, and after all don't we need strong new ideas to stop the AI apocalypse or improve lives all over the world? Perhaps the Amish vs. liberal German Judaism would be a better analogy.

Yes, definitely. There is something about the presence of other agents with differing beliefs that changes the structure of the mathematics in a deep way.

P(X) is somehow very different from P(X|another agent is willing to take the bet).

How about using a "bet" against the universe instead of other agents? This is easily concretized by talking about data compression. If I do something stupid and assign probabilities badly, then I suffer from increased codelengths as a result, and vice versa. But nobody else gains or loses because of my success or failure.
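The compression framing can be made concrete with Shannon codelengths (the probabilities below are toy values chosen for illustration):

```python
import math

def codelength_bits(model, data):
    # Shannon code: encoding an event of probability p costs -log2(p) bits,
    # so a model that assigns probabilities badly pays in total codelength.
    return sum(-math.log2(model[x]) for x in data)

data = "aabaa"                      # 'a' is genuinely more common
good = {"a": 0.8, "b": 0.2}         # roughly matches the data
bad  = {"a": 0.2, "b": 0.8}         # assigns probabilities badly
```

Here `codelength_bits(good, data)` is smaller than `codelength_bits(bad, data)`: the "bet against the universe" pays off or costs you in bits, with no counterparty needed.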

0cousin_it
I think the idea in the post works for all bets including those offered by smart agents, stupid agents, and nature.

Can someone give me an example problem where this particular approach to AI and reasoning hits the ball out of the park? In my mind, it's difficult to justify a big investment in learning a new subfield without a clear use case where the approach is dramatically superior to other methods.

To be clear, I'm not looking for an example of where the Bayesian approach in general works, I'm looking for an example that justifies the particular strategy of scaling up Bayesian computation, past the point where most analysts would give up, by using MCMC-style inference.

(As an example, deep learning advocates can point to the success of DL on the ImageNet challenge to motivate interest in their approach).
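For readers unfamiliar with the technique in question, here is a minimal random-walk Metropolis sampler, a toy sketch targeting a standard normal, not an example of the large-scale inference the question asks about:

```python
import math
import random

def metropolis(log_density, x0, steps, scale=1.0, seed=0):
    # Minimal random-walk Metropolis sampler (a toy illustration:
    # no tuning, burn-in, or convergence diagnostics).
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, scale)
        # accept with probability min(1, p(proposal) / p(x))
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, log-density known only up to an additive constant
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
```

The point of MCMC is that `log_density` only needs to be known up to a constant, which is what lets Bayesian analysts sample posteriors whose normalizing integrals are intractable.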

3jsalvatier
There aren't that many that I know of. I do think it's much more intuitive and lets you build more nuanced models that are useful for the social sciences. You can fit the exact model that you want instead of needing to fit your case into a preexisting box. However, I don't know of too many examples where this is hugely practically important. The lack of obviously valuable use cases is part of why I stopped being that interested in MCMC, even though I invested a lot in it. There is one important industrial application of MCMC: hyperparameter sampling in Bayesian optimization (Gaussian processes + priors for hyperparameters). And the hyperparameter sampling does substantially improve things.

Most of the pessimistic people I talk to don't think the government will collapse. It will just get increasingly stagnant, oppressive and incompetent, and that incompetence will make it impossible for individual or corporate innovators to do anything worthwhile. Think European-style tax rates, with American-style low quality of public services.

There will also be a blurring of the line between the government and big corporations. Corporations will essentially become extensions of the bureaucracy. Because of this they will never go out of business and they will also never innovate. Think of a world where all corporations are about as competent as Amtrak.

4tukabel
hmm, blurred lines between corporations and political power... are you suggesting EU is already a failed state? (contrary to the widespread belief that we are just heading towards the cliff damn fast) well, unlike Somalia, where no government means there is no border control and you can be robbed, raped or killed on the street anytime.... in civilized Europe our eurosocialist etatists achieved that... there are no borders for invading millions of crimmigrants that may rob/rape/kill you anytime day or night... and as a bonus we have merkelterrorists that kill by hundreds sometimes (yeah, these uncivilized Somalis did not even manage this... what a shame, they certainly need more cultural marxist education)

LessWrong: kind of an odd place to find references to Christian ethical literature.

0Screwtape
You'd think that, but rationalist spaces are pretty much the only places where people recognize what I'm referencing.

and that's about it.

We can agree to disagree, but my view is that the US has dozens or hundreds of problems we can't solve - education, criminal justice, the deficit, the military-industrial complex - because the government is paralyzed because of partisan hatred.

4Lumifer
True Not true. The government is paralyzed (see the grandparent: "ossified and sclerotic") because people and institutions which find the status quo convenient and profitable are powerful and able to block changes. And if you want a dominant active government, well, be careful of what you wish for.
0MrMind
Eh, no. Not in the case of USA. Republicans have locked the Congress, and they have a (theoretically) Republican president. It should be smooth sailing if it were only for partisan hatred.
  1. I live in Berkeley, where there are literally armed gangs fighting each other in the streets.
  2. Stability isn't intrinsically valuable. The point is that we know our current civilizational formula is a pretty good one for innovation and most others aren't, so we should stick to the current formula more or less.
  3. My recommendation is a political ceasefire. Even if we could just decrease the volume of partisan hate speech, without solving any actual problems, that seems like it would have a lot of benefits.
0lmn
More like one armed gang, and a group of people who have finally had enough and decided to stand up for themselves.

Claim: EAs should spend a lot of energy and time trying to end the American culture war.

America, for all its terrible problems, is the world's leading producer of new technology. Most of the benefits of the new technology actually accrue to people who are far removed from America in both time and space. Most computer technology was invented in America, and that technology has already done worlds of good for people in places like China, India, and Africa; and it's going to continue help people all over the world in the centuries and millennia to come. Like... (read more)

2woodchopper
I think it's an interesting point about innovation actually being very rare, and I agree. It takes a special combination of things for it to happen and that combination doesn't come around much. Britain was extremely innovative a few hundred years ago. In fact, they started the industrial revolution, literally revolutionising humanity. But today they do not strike me as particularly innovative even with that history behind them. I don't think America's ability to innovate is coming to an end all that soon. But even if America continues to prosper, will that mean it continues to innovate? It takes more than prosperity for innovation to happen. It takes a combination of factors that nobody really understands. It takes a particular culture, a particular legal system, and much more.
1pcm
You write about its importance, yet I suspect EAs mostly avoid it due to doubts about tractability and neglectedness.
1ChristianKl
France is at place 18 in the global innovation index with a score of 54.04 while the US is at place 4 with a score of 61.40. Given that you live in Berkeley, US innovation is more visible to you than French innovation. You don't see the French trains that are much better than anything that the US has at present. The US is a bit more innovative than France, but if you say that France isn't innovating at all today while the US is, that produces a flawed view.
0Lumifer
True. Not true. There's rationale to help America continue to be inventive, but that's not the same thing at all as "continue to prosper" since the US looks at the moment like an empire in decline -- one that will continue to prosper for a while, but will be too ossified and sclerotic to continue innovating. Note that it's received wisdom in Silicon Valley (and elsewhere) that you need to innovate in the world of bits because the world of atoms is too locked-down. There are some exceptions (see e.g. Musk), but overall the difference between innovations in bits and innovations in atoms is huge and stark. Not true at all. Even in Berkeley what you have is young males playing political-violence LARP games (that's how you get laid, amirite?) and that's about it. Read less media -- it optimizes for outrage.
0tristanm
First you will have to fight against the current trend of rationalists avoiding even discussing culture-war topics. SSC is currently edging towards more limitations on what can be discussed, where culture-war topics can be banned and are at least siloed into separate discussion areas. I think we should try to keep LessWrong an area where there are no limitations on topics that can be discussed - although we might try to enforce the level and quality of discussion to a certain standard. Politics is the mind-killer, but that doesn't mean you can't avoid being mind-killed when you talk about it.
0Dagon
Please expand on "Currently the most serious threat to the stability of American society is the culture war", and provide some reasoning for "stability" being a driver of producing beneficial technology. I dispute (or perhaps just don't understand) both premises. I also am not sure if you mean "end the culture war" or "win the culture war for my side". Is surrendering your recommended course of action?

I really want self-driving cars to be widely adopted as soon as possible. There are many reasons, the one that occurred to me today while walking down the street is : look at all the cars on the street. Now imagine all the parked cars disappear, and only the moving cars remain. A lot less clutter, right? What could we do with all that space? That's the future we could have if SDCs appear (assuming that most people will use services like Lyft/Uber with robotic drivers instead of owning their own car).

0WalterL
Nah. We'll own our own SDCs, and they'll wait for us like the existing ones do. Uber/Lyft would need to be HUGELY cheaper than owning a car to make up for having to wait for them to arrive to go anywhere.
0Brendan Long
I sometimes wonder if there is more low hanging fruit in lives that could be saved if car safety was improved. Self driving cars are obviously one way to do that, but I worry that we're ignoring easier solutions because self driving cars will solve the problem eventually (not that I know what those easier solutions are).
0Lumifer
I don't know, what? Nothing particularly exciting comes to my mind...
2Kallandras
The improvement in human productivity would be substantial, just in terms of the time saved while not driving, not to mention the extra man-hours from people not dying in preventable collisions. I've also been thinking that it could cause a big shakeup in the housing market, as living in suburbs would be more appealing when your hour-long commute is reading/working time instead of driving time.

I agree with the broad sentiment, but I think it's increasingly unrealistic to believe that the liberal/conservative distinction is based on a fundamental philosophical difference instead of just raw partisan tribal hatred. In theory people would develop an ethical philosophy and then join the party that best represents the philosophy, but in practice people pick a tribe and then adopt the values of that tribe.

2Viliam
I think it's both. My model is that people join "tribes" that attract them psychologically, because they reflect either their traits or experience, but often also because of peer pressure. And then, the "tribes" create political coalitions to get more power, and this is where many "strange bedfellows" and dogmatism happens. -- In other words, that there are natural "clusters in the opinion-space", and also historical coalitions of clusters based on random events. The difficult part is to find out, when we talk about a group, how much of its constitution is a natural cluster, and how much is a historically evolved coalition of potentially unrelated clusters. For example, it is natural for a person to enjoy the idea of a world where their specific traits are highly rewarded, and the skills they miss are considered irrelevant. It is also natural, for people who feel oppressed by a group X, to make "fighting against X" a part of their identity. But whether groups A and B make a coalition against a coalition of C and D, or whether A and C make a coalition against B and D, that mostly depends on history. Maybe A and B originally had nothing in common, but they joined forces because their common enemy C was too strong at some moment of history; and now it may be different, but the idea that A and B should be allies is already considered common sense between the members of both groups, so C chose D as an ally, despite having nothing else in common. Talking about "Republicans" and "Democrats" is likely too far on the coalition-making level. Not sure how we operationally define e.g. "conservatives" -- for example, would that include the communists in the former communist countries (people who want to "make Soviet Russia great again")? Because "clinging to the past" seems like a psychological trait, but whether the past happens to be capitalist or communist or islamic or whatever, that's a historical accident.

If there's anything we can do now about the risks of superintelligent AI, then OpenAI makes humanity less safe.

I feel quite strongly that people in the AI risk community are overly affected by the availability or vividness bias relating to an AI doom scenario. In this scenario some groups get into an AI arms race, build a general AI without solving the alignment problem, the AGI "fooms" and then proceeds to tile the world with paper clips. This scenario could happen, but some others could also happen:

  • An asteroid is incoming and going to des
... (read more)
1Wei Dai
I think Eliezer wrote this in part to answer your kind of argument. In short, aside from your first scenario (which is very unlikely since the probability of an asteroid coming to destroy Earth is already very small, and then the probability of a narrow AI making a difference is even smaller) none of the others constitute a scenario where a narrow AI provides a permanent astronomical benefit, to counterbalance the irreversible astronomical damage that would be caused by an unaligned AGI.

Good catch. Adverbial attachment is really hard, because there aren't a lot of rules about where adverbs can go.

Actually, Ozora's parse has another small problem, which is that it interprets "complex" as an NN with a "typeadj" link, instead of as a JJ with an "adject" link. The typeadj link is used for noun-noun pairings such as "police officer", "housing crisis", or "oak tree".

For words that can function as both NN and JJ (eg "complex"), it is quite hard to disambiguate the two patterns.

0Douglas_Knight
Some things are really hard, but if everyone else can get this adverb correct, maybe it isn't that hard.

Why is it so hard to refrain from irrational participation in political arguments? One theory is that in the EEA, if you overheard some people talking covertly about political issues, there was a good chance that they were literally plotting against you. In a tribal setting, if you're being left out of the political conversation, you're probably going to be the victim of the political change being discussed. So we've probably evolved a mental module that causes us to be hyperaware of political talk, and when we hear political talk we don't like, to jump in and try to disrupt it.

Anyone have any good mind hacks to help stay out of political conversations?

7Viliam
When people are plotting, there is going to be an "inner group". And your winning choices are either to join the "inner group" (if you predict it will win) or express disinterest publicly (if you predict it will lose). This is true both in EEA and at high school. In other environments, people overestimate their importance essentially for two reasons: First, with larger numbers of people in general, each individual matters less. A marginal new ally is more important to a group with ten members, than to a group with a thousand members. Second, mere numbers of people matter less than their power. For a high-school clique another average person can be a valuable asset, but for a political party some people are worth orders of magnitude more than an average person. I try to remind myself of (what I believe to be) the big picture. If you are going to participate in online political debates, you essentially have two choices to make: (1) is this going to be a casual opinion expressing, or are you going to play it like a pro? and (2) are you going to present sane opinions, or will you make yourself into a two-dimensional caricature of a human being? I believe that if you are not playing it like a pro, you are just wasting your time, achieving nothing good (neither for you, nor for the world in general). I also believe that unless you are already very famous, presenting sane opinions is a losing strategy (because sane opinions are suboptimal for signalling loyalty to a tribe). Therefore, most likely the only winning strategy for you is in the "insane pro" quadrant. Now the question is whether you are going to do it, or if it seems like too much work and too little fun. For me, laziness usually wins at this point. To explain, "doing it like a pro" means that instead of commenting on other people's websites or social networks, you will make your own trademarked content. You will post articles on your own website (where you have absolute moderator powers), and where you will build y
0tristanm
I see no issue with engaging in rational political discussion. The key is avoiding the overly tribal arguments that proliferate throughout social media. I think those are a lot like sports arguments - you want to join in just to root for your team. I doubt that it has to do with the kind of social gossip that was used to determine the status hierarchy in our early tribal environments - that still exists in almost the same form as it did then I think.

Sorry to hear that, I know a lot of LW-adjacent people were involved.

Is there a postmortem discussion or blog post anywhere?

positive-sum information-conveying component and a zero-sum social-control/memetic-warfare component.

Style complaint: did you really need to use five hyphenated words in one line in the first sentence?

4Zack_M_Davis
Yes.

A lesson on the linguistic concept of argument structure, with special reference to observational verbs (see/hear/watch/etc) and also the eccentric verb "help".

The more biased away from neutral truth, the better the communication functions to affirm coalitional identity, generating polarization in excess of actual policy disagreements. Communications of practical and functional truths are generally useless as differential signals, because any honest person might say them regardless of coalitional loyalty.

2TheAncientGeek
ie, people use strange beliefs as tribal shibboleths.

Peter McCluskey wrote a review of my book, and I wrote a response here. Thanks to Peter for writing the review!

If you really believe in this allegory, you should try to intervene before people choose what research field to specialize in. You are not going to convince people to give up their careers in AI after they've invested years in training. But if you get to people before they commit to advanced training, it should be pretty easy to divert their career trajectory. There are tons of good options for smart idealistic young people who have just finished their undergraduate degrees.

1Lumifer
I would like to see a single real-life example where this worked. ("single" not as in "a single person", but as in "a single field to avoid")

"But, Bifur, the prophecies are not that clear. It's possible the Balrog will annihilate us, but it's also possible he will eradicate poverty, build us dwarf-arcs to colonize other planets, and grant us immortality. Our previous mining efforts have produced some localized catastrophes, but the overall effect has been fantastically positive, so it's reasonable to believe continued mining will produce even more positive outcomes."

0scarcegreengrass
"Yes, but while i'd pay a million diamonds for immortality, i'd pay a thousand million to save the dwarven race. The two paths are wildly disproportionate."

always regarded Taleb as a half-crackpot

My guess is Taleb wouldn't be offended by this, and would in fact argue that any serious intellectual should be viewed as a half-crackpot.

Serious intellectuals get some things right and get some things wrong, but they do their thinking independently and therefore their mistakes are uncorrelated with others'. That means their input is a valuable contribution to an ensemble. You can make a very strong aggregate prediction by calling up your half-crackpot friends, asking their opinion, and forming a weighted average.... (read more)
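A quick simulation of the ensemble claim (the noise model and numbers are made up purely for illustration): if each forecaster is unbiased but noisy, and their errors are independent, averaging 25 of them cuts the error by roughly a factor of sqrt(25) = 5.

```python
import random

random.seed(0)
truth, n_experts, n_trials = 10.0, 25, 2000

individual_err = 0.0
ensemble_err = 0.0
for _ in range(n_trials):
    # Each forecaster is unbiased but noisy, with errors drawn independently.
    guesses = [truth + random.gauss(0.0, 3.0) for _ in range(n_experts)]
    individual_err += abs(guesses[0] - truth)
    ensemble_err += abs(sum(guesses) / n_experts - truth)
individual_err /= n_trials
ensemble_err /= n_trials
# ensemble_err comes out near individual_err / 5.
```

If the forecasters instead copy each other (correlated errors), the averaging benefit largely disappears, which is exactly why independence is the valuable property here.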

0BiasedBayes
That's way too simplistic a way to think about this. One has to stand on the shoulders of giants to be an intellectual in the first place. Also there is this thing called scientific consensus, and there are reasons why it's usually rational to lean one's opinions in line with the scientific consensus - not because other people think so too, but because it's usually the most balanced view of the current evidence. Taleb's argument about being an IYI is pretty ridiculous and includes stuff like not deadlifting, not cursing on Twitter and not drinking white wine with steak, while naming some attributes of the IYI based on people he does not like. I get that it's partly satire, but he fails to make any sharp arguments; it's mostly sweeping generalisation, while generating these heuristics around the concept of the IYI that are grossly simplistic. Come on: ”The IYI has been wrong, historically, on Stalinism, Maoism, GMOs, Iraq, Libya, Syria, lobotomies, urban planning, low carbohydrate diets, gym machines, behaviorism, transfats, freudianism, portfolio theory, linear regression, Gaussianism, Salafism, dynamic stochastic equilibrium modeling, housing projects, selfish gene, election forecasting models, Bernie Madoff (pre-blowup) and p-values. But he is convinced that his current position is right.” OK.

Request for programmers: I have developed a new programming trick that I want to package up and release as open-source. The trick gives you two nice benefits: it auto-generates a flow-chart diagram description of the algorithm, and it gives you steppable debugging from the command line without an IDE.

The main use case I can see is when you have some code that is used infrequently (maybe once every 3 months), and by default you need to spend an hour reviewing how the code works every time you run it. Or maybe you want to make it easier for coworkers to get... (read more)

8Lumifer
Converting local time to UTC and back. Time zones, daylight savings times, etc. are very messy.

reduction in male female differences in lifespan

The lifespan gap may be enforced by biology, but it seems wildly unjust to me that retirement-related social programs like Social Security and Medicare do not take the lifespan expectancy gap into account. For example, if the life expectancy gap is 5 years, the Medicare age of eligibility should be 68 for women and 63 for men, so that both sexes get the same number of years of expected coverage.

0DryHeap
I would wager that the majority of gender inequalities in the Western world are reinforced by biology.

How do you weight the opinion of people whose arguments you do not accept? Say you have 10 friends who all believe with 99% confidence in proposition A. You ask them why they believe A, and the arguments they produce seem completely bogus or incoherent to you. But perhaps they have strong intuitive or aesthetic reasons to believe A, which they simply cannot articulate. Should you update in favor of A or not?
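One way to formalize the question, under the strong (and probably false) assumption that the friends' endorsements are independent signals: treat each friend's belief as weak evidence and update in odds form. The 60%/40% likelihoods below are arbitrary illustrative values.

```python
def posterior_after_testimony(prior, n_friends, p_endorse_if_true, p_endorse_if_false):
    """Odds-form Bayes update, treating each friend's endorsement of A
    as an independent signal with the given likelihoods."""
    odds = prior / (1.0 - prior)
    odds *= (p_endorse_if_true / p_endorse_if_false) ** n_friends
    return odds / (1.0 + odds)

# Even weak signals (60% vs 40%) from ten friends move a 50% prior a lot:
p = posterior_after_testimony(0.5, 10, 0.6, 0.4)   # ~0.98
# The catch: if the friends copied their beliefs from each other, the
# endorsements aren't independent and this update badly overcounts the evidence.
```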

0ChristianKl
If I don't understand a topic well I'm likely to simply copy the beliefs of friends who seem to have delved deep into an issue, even if they can't tell me exactly why they believe what they believe. If I on the other hand already have a firm opinion, and especially if the reasons for my opinions can't easily be communicated, I don't update much.
1TheAncientGeek
Trying to steelman arguments by talking to people you know in real life isn't a good method. You will find the best arguments in books and papers written by people who have acquired the rare skill of articulating intuitions.
0Dagon
What's your prior for A, and what was your prior for their confidence in A? Very roughly speaking, updates feel like surprise.

Everyone has every right to feel as pissed off and angry at this bullshit that’s coming down the pike as they want.

This really is not true. You have a right to be annoyed, but if your ideology causes you to actually hate millions of your fellow American citizens, then I submit you have an ethical obligation to emigrate.

Rationality principle, learned from strategy board games:

In some games there are special privileged actions you can take just once or twice per game. These actions are usually quite powerful, which is why they are restricted. For example, in Tigris and Euphrates, there is a special action that allows you to permanently destroy a position.

So the principle is: if you get to the end of the game and find you have some of these "power actions" left over, you know (retrospectively) that you were too conservative about using them. This is true even if... (read more)

2ChristianKl
Relationships also get built by asking for favors and by giving favors. Social capital isn't necessarily used up by asking for favors.
4Gunnar_Zarncke
Another example is the question: "When have you failed the last time?" Because when you don't fail you are not advancing as fast as you could (and don't learn as much). On the other hand: Running on full energy may drain your power over the long run.
2WalterL
Sure. If you never miss a plane then you are getting to the airport too early.
5Dagon
This isn't true in all games, and doesn't generalize to life. There are lots of "power moves" that just don't apply to all situations, and if you don't find yourself in a situation where they help, you shouldn't use them just because they're limited. It doesn't even generalize completely to those games where a power move is always helpful (but varies in how helpful it is). It's perfectly reasonable to wait for a better option, and then be surprised when a better option doesn't occur before the game ends. See also https://en.wikipedia.org/wiki/Secretary_problem - the optimal strategy for determining the best candidate for a one-use decision ends up with the last random-valued option 1/e of the time. (edit: actually, it only FINDS the best option 1/e of the time. close to 2/3 of the time it fails.)
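The 1/e success rate of that stopping rule is easy to check by simulation (the candidate count of 50 and the trial count are arbitrary choices): skip the first n/e candidates, then take the first one better than everything seen so far.

```python
import random, math

def secretary_success_rate(n=50, trials=20000, seed=1):
    """Simulate the classic secretary rule: skip the first n/e candidates,
    then take the first candidate better than everything seen so far;
    count how often that candidate is the best overall."""
    random.seed(seed)
    cutoff = round(n / math.e)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))      # rank 0 is the best candidate
        random.shuffle(ranks)
        best_seen = min(ranks[:cutoff])
        # First later candidate beating the benchmark, else stuck with the last.
        chosen = next((r for r in ranks[cutoff:] if r < best_seen), ranks[-1])
        wins += (chosen == 0)
    return wins / trials

rate = secretary_success_rate()   # comes out close to 1/e ~ 0.368
```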

Evicted, by Matthew Desmond, is an amazing work of ethnographic research into the lives of the urban poor and in particular their experiences with housing. Most importantly to me it feels real: nothing is sugarcoated. The poor people are incredibly irresponsible, but also the landlords are greedy, and the government agencies are incompetent and counterproductive. One typical event sequence goes something like this: a tenant living in a decrepit unit calls the building inspector to report some egregious violation. The inspector arrives and promptly demands t... (read more)

Five Factor Model (FFM) ... the model is founded on the lexical hypothesis:

I notice I am confused. I was sure that the FFM came out of doing the following simple procedure:

  1. Give people a many-item personality survey
  2. Do a PCA of the resulting data
  3. Keep the top 5 eigenvectors
  4. Label them with reasonably accurate adjectives that seem to describe the general drift of the vector

How wrong is this? How important is the "lexical hypothesis" part?
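For reference, steps 1-4 above can be sketched on synthetic data. The "survey" here is fake, generated from 5 latent traits purely to illustrate the procedure; a real FFM analysis would start from actual questionnaire responses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 stand-in: a fake survey of 500 people on 40 items, secretly
# generated from 5 latent traits plus per-item noise.
n_people, n_items, n_traits = 500, 40, 5
loadings = rng.normal(size=(n_traits, n_items))
traits = rng.normal(size=(n_people, n_traits))
answers = traits @ loadings + 0.3 * rng.normal(size=(n_people, n_items))

# Steps 2-3: PCA via SVD of the centered data; keep the top 5 components.
centered = answers - answers.mean(axis=0)
_, svals, components = np.linalg.svd(centered, full_matrices=False)
top5 = components[:5]   # step 4 would be labeling these with adjectives
explained = (svals[:5] ** 2).sum() / (svals ** 2).sum()
# With 5 true latent traits, the top 5 PCs capture nearly all the variance.
```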

4Douglas_Knight
That's right. The lexical hypothesis only comes in at step 1 by including questions like "I am [adjective]." We start with a vague theory in the questionnaire and apply dimension reduction. The lexical hypothesis is that language gives us a vague theory. We want as broad a theory as possible, so it is useful to combine questionnaires. Some sources claim that the original questionnaire was generated from language without questions from explicit theories, but I don't think that's correct.