Why do people see Mars as a better target for human colonization than the Moon? Most comments on lunar colonization seem to refer to two facts: the Moon has no atmosphere, and its surface is exposed to harmful radiation.
In my mind, both of these problems can be solved by a ceiling or dome structure. The ceiling both retains the atmosphere and also blocks harmful radiation. Note that a failure in the ceiling won't be catastrophic: the atmosphere won't drain rapidly, and the amount of radiation exposure per unit time isn't disastrously high even without the ceiling.
Good analysis, thanks. I buy the first two points. I'd be shocked to see an implementation that actually makes use of the lower metadata requirements. Are there languages that provide a boolean primitive that uses a single bit of memory instead of a full byte? Also I don't understand what you mean by persistence.
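To answer my own question about single-bit booleans: some languages do pack them, at least in container types (C++'s std::vector&lt;bool&gt; and Java's java.util.BitSet store one bit per flag). Here's a minimal sketch of the idea in Python; the BitSet class is my own illustration, not a standard library type:

```python
class BitSet:
    """Store n boolean flags in n bits (rounded up to whole bytes),
    instead of the usual one-byte-per-bool representation."""

    def __init__(self, n):
        self.n = n
        self.bits = bytearray((n + 7) // 8)  # one byte holds 8 flags

    def set(self, i, value):
        byte, mask = i // 8, 1 << (i % 8)
        if value:
            self.bits[byte] |= mask   # turn the bit on
        else:
            self.bits[byte] &= ~mask  # turn the bit off

    def get(self, i):
        return bool(self.bits[i // 8] & (1 << (i % 8)))
```

So 16 flags fit in 2 bytes rather than 16, which is where the lower metadata overhead would come from.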
-1, this is pointlessly negative. There's a disclaimer at the top (so it's not like he's claiming false authority), the title is appropriate (so it's not like you were tricked into clicking on the article), and it's reasonably on-topic because LW people are in the software/AI/entrepreneurship space. Sure, maybe most of the proposals are far-fetched, but if one of the ideas sparks an idea that sparks an idea, the net value could be very positive.
Has anyone studied the Red Black Tree algorithms recently? I've been trying to implement them using my Finite State technique that enables automatic generation of flow diagrams. This has been working well for several other algorithms.
But the Red Black tree rebalancing algorithms seem ridiculously complicated. Here is an image of the deletion process (extracted from this Java code) - it's far more complicated than an algorithm like MergeSort or HeapSort, and that only shows the deletion procedure!
I'm weighing two hypotheses:
Theory of programming style incompatibility: it is possible for two or more engineers, each of whom is individually highly skilled, to be utterly incapable of working together productively. In fact, the problem of style incompatibility might actually increase with the skill level of the programmers.
This shouldn't be that surprising: Proust and Hemingway might both be gifted writers capable of producing beautiful novels, but a novel co-authored by the two of them would probably be terrible.
I haven't written it up, though you can see my parser in action here.
One key concept in my system is the Theta Role and the associated rule. A phrase can only have one structure for each role (subject, object, determiner, etc).
I don't have much to say about teaching methods, but I will say that if you're going to teach English grammar, you should know the correct grammatical concepts that actually determine English grammar. My research is an attempt to find the correct concepts. There are some things that I'm confident about and some areas where the syst...
Against Phrasal Taxonomy Grammar, an essay about how any approach to grammar theory based on categorizing every phrase in terms of a discrete set of categories is doomed to fail.
In terms of strategy, I recommend you think about going to work at the Montreal Institute for Learning Algorithms. They recently received a grant from OpenPhil to do AI Safety research. I can personally recommend the two professors at McGill (Joelle Pineau and Doina Precup). Since you are Russian, you should be able to handle the cold :-)
Continuing with Adams' theme of congratulating himself on making correct predictions, I'll point out that I correctly predicted both that Adams did in fact want Trump to win a year ago, and also planned to capitalize on the prediction if it came true, by writing a book:
...My guess is that Adams is hoping that Trump wins the election, because he will then write a book about persuasion and how Trump's persuasion skills helped him win. He already has a lot of this material on his blog. In that scenario he can capitalize on his correct prediction, which seemed
Does anyone have good or bad impressions of Calico Labs, Human Longevity, or other hi-tech anti-aging companies? Are they good places to work, are they making progress, etc?
This is a mean vs median or Mediocristan vs Extremistan issue. Most people cannot do lone wolf, but if you can do lone wolf, you will probably be much more successful than the average person.
Think of it like this. Say you wanted to become a great writer. You could go to university and plod through a major in English literature. That will reliably give you a middling good skill at writing. Or you could drop out and spend all your time reading sci-fi novels, watching anime, and writing fan fiction. Now most people who do that will end up terrible writers. B...
Taking classes is a relatively Mediocristan-style way to work with others, but there are other ways that get you Extremistan-style upside.
One way is to find a close collaborator or two. Amos Tversky and Daniel Kahneman had an extremely close collaboration, doing most of their thinking in conversation as they were developing the field of heuristics and biases research (as described in The Undoing Project). It's standard startup advice to have more than one founder so that you'll have someone "to brainstorm with, to talk you out of stupid decisions, and...
This is a mean vs median or Mediocristan vs Extremistan issue. Most people cannot do lone wolf, but if you can do lone wolf, you will probably be much more successful than the average person.
I cannot disagree with this more strongly. I am a serial entrepreneur, and a somewhat successful one. Still chasing the big exit, but I've built successful companies that are still private. Besides myself, I've met many other people in this industry whom you'd be excused for thinking are lone wolves. But the truth is the lone wolves don't make it, as they build things t...
I am working on a software tool that allows programmers to automatically extract FSM-like sequence diagrams from their programs (if they use the convention required by the tool).
Here is a diagram expressing the Merge Sort algorithm
Here is the underlying source code.
I believe this kind of tool could be very useful for code documentation purposes. Suggestions or improvements welcome.
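To give a flavor of the convention (the names here are invented for illustration; the actual tool's convention may differ): functions are tagged with named states, transitions between states are recorded at runtime, and the recorded pairs can then be rendered as a flow diagram. A minimal sketch, using merge sort as in the example above:

```python
# Hypothetical sketch of a state-annotation convention. Each decorated
# function represents a named state; calling it records a transition edge,
# and the collected edges could be fed to a diagram renderer.

transitions = []        # (from_state, to_state) pairs recorded at runtime
_current = ["start"]    # mutable cell holding the current state name

def state(name):
    """Decorator: record a transition into `name` whenever the function runs."""
    def wrap(fn):
        def inner(*args):
            transitions.append((_current[0], name))
            _current[0] = name
            return fn(*args)
        return inner
    return wrap

@state("split")
def merge_sort(xs):
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    return merge(merge_sort(xs[:mid]), merge_sort(xs[mid:]))

@state("merge")
def merge(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]
```

Running `merge_sort` then leaves edges like `("start", "split")` and `("split", "merge")` in `transitions`, which is the raw material for the diagram.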
There are lots of cacti that are mostly hairy/fuzzy instead of pointy.
In terms of air flow protection purchased vs biological effort expended, I'm not sure a leaf is better than a spike.
For a long time it was odd to me that cacti have lots of spikes and big thorns. I supposed that the goal was to ward off big ruminants like cows, but that doesn't really make much sense, since the desert isn't really overflowing with big animals that eat a lot of plants.
It turns out that protection from predators is only a secondary goal. The main goal is protection from the environment. The spikes capture and slow the air moving around the plant, to preserve moisture and protect against the heat.
Given that many of the most successful countries are small and self-contained (Singapore, Denmark, Switzerland, Iceland, arguably the other Scandinavian countries), and also the disasters visited upon humanity by large unified nation-states, why are people so attached to the idea of large-scale national unity?
We're neither Athenians nor Spartans. Athens and Sparta were city-states. Greek culture thrived because Greece is a mountainous archipelago that prevented large empires from forming. The Greek city-states were constantly at war with each other and with the outside world, and so they had to develop strong new ideas to survive.
You mentioned the Netherlands, which is quite similar in the sense that it was a small country with strong threatening neighbors, but still became successful because of its good social technology. The story of Europe in general is bas...
Yes, definitely. There is something about the presence of other agents with differing beliefs that changes the structure of the mathematics in a deep way.
P(X) is somehow very different from P(X|another agent is willing to take the bet).
How about using a "bet" against the universe instead of other agents? This is easily concretized by talking about data compression. If I do something stupid and assign probabilities badly, then I suffer from increased codelengths as a result, and vice versa. But nobody else gains or loses because of my success or failure.
Can someone give me an example problem where this particular approach to AI and reasoning hits the ball out of the park? In my mind, it's difficult to justify a big investment in learning a new subfield without a clear use case where the approach is dramatically superior to other methods.
To be clear, I'm not looking for an example of where the Bayesian approach in general works, I'm looking for an example that justifies the particular strategy of scaling up Bayesian computation, past the point where most analysts would give up, by using MCMC-style inference.
(As an example, deep learning advocates can point to the success of DL on the ImageNet challenge to motivate interest in their approach).
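(For readers unfamiliar with the strategy in question, a minimal Metropolis-Hastings sketch is below. The target here is just an unnormalized standard normal, chosen so the example is self-contained; it is meant only to show the mechanics of sampling past the analytic wall, not a case where MCMC is dramatically superior.)

```python
import random, math

def log_target(x):
    """Log of an unnormalized N(0, 1) density - stands in for a posterior
    we can evaluate pointwise but cannot integrate analytically."""
    return -0.5 * x * x

def metropolis(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0, step)
        # Accept with probability min(1, p(proposal) / p(x)), in log space.
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        out.append(x)
    return out

samples = metropolis(20000)
mean = sum(samples) / len(samples)  # should be near 0 for N(0, 1)
```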
Most of the pessimistic people I talk to don't think the government will collapse. It will just get increasingly stagnant, oppressive and incompetent, and that incompetence will make it impossible for individual or corporate innovators to do anything worthwhile. Think European-style tax rates, with American-style low quality of public services.
There will also be a blurring of the line between the government and big corporations. Corporations will essentially become extensions of the bureaucracy. Because of this they will never go out of business, and they will also never innovate. Think of a world where all corporations are about as competent as Amtrak.
Claim: EAs should spend a lot of energy and time trying to end the American culture war.
America, for all its terrible problems, is the world's leading producer of new technology. Most of the benefits of that new technology actually accrue to people who are far removed from America in both time and space. Most computer technology was invented in America, and that technology has already done worlds of good for people in places like China, India, and Africa; and it's going to continue to help people all over the world in the centuries and millennia to come. Like...
I really want self-driving cars to be widely adopted as soon as possible. There are many reasons; the one that occurred to me today while walking down the street is: look at all the cars on the street. Now imagine all the parked cars disappear, and only the moving cars remain. A lot less clutter, right? What could we do with all that space? That's the future we could have if SDCs appear (assuming that most people will use services like Lyft/Uber with robotic drivers instead of owning their own car).
I agree with the broad sentiment, but I think it's increasingly unrealistic to believe that the liberal/conservative distinction is based on a fundamental philosophical difference instead of just raw partisan tribal hatred. In theory people would develop an ethical philosophy and then join the party that best represents the philosophy, but in practice people pick a tribe and then adopt the values of that tribe.
If there's anything we can do now about the risks of superintelligent AI, then OpenAI makes humanity less safe.
I feel quite strongly that people in the AI risk community are overly affected by the availability or vividness bias relating to an AI doom scenario. In this scenario some groups get into an AI arms race, build a general AI without solving the alignment problem, the AGI "fooms" and then proceeds to tile the world with paper clips. This scenario could happen, but some others could also happen:
Good catch. Adverbial attachment is really hard, because there aren't a lot of rules about where adverbs can go.
Actually, Ozora's parse has another small problem, which is that it interprets "complex" as an NN with a "typeadj" link, instead of as a JJ with an "adject" link. The typeadj link is used for noun-noun pairings such as "police officer", "housing crisis", or "oak tree".
For words that can function as both NN and JJ (e.g. "complex"), it is quite hard to disambiguate the two patterns.
Why is it so hard to refrain from irrational participation in political arguments? One theory is that in the EEA, if you overheard some people talking covertly about political issues, there was a good chance that they were literally plotting against you. In a tribal setting, if you're being left out of the political conversation, you're probably going to be the victim of the political change being discussed. So we've probably evolved a mental module that causes us to be hyperaware of political talk, and when we hear political talk we don't like, to jump in and try to disrupt it.
Anyone have any good mind hacks to help stay out of political conversations?
A lesson on the linguistic concept of argument structure, with special reference to observational verbs (see/hear/watch/etc) and also the eccentric verb "help".
The more biased away from neutral truth, the better the communication functions to affirm coalitional identity, generating polarization in excess of actual policy disagreements. Communications of practical and functional truths are generally useless as differential signals, because any honest person might say them regardless of coalitional loyalty.
If you really believe in this allegory, you should try to intervene before people choose what research field to specialize in. You are not going to convince people to give up their careers in AI after they've invested years in training. But if you get to people before they commit to advanced training, it should be pretty easy to divert their career trajectory. There are tons of good options for smart idealistic young people who have just finished their undergraduate degrees.
"But, Bifur, the prophecies are not that clear. It's possible the Balrog will annihilate us, but it's also possible he will eradicate poverty, build us dwarf-arcs to colonize other planets, and grant us immortality. Our previous mining efforts have produced some localized catastrophes, but the overall effect has been fantastically positive, so it's reasonable to believe continued mining will produce even more positive outcomes."
always regarded Taleb as a half-crackpot
My guess is Taleb wouldn't be offended by this, and would in fact argue that any serious intellectual should be viewed as a half-crackpot.
Serious intellectuals get some things right and get some things wrong, but they do their thinking independently and therefore their mistakes are uncorrelated with others'. That means their input is a valuable contribution to an ensemble. You can make a very strong aggregate prediction by calling up your half-crackpot friends, asking their opinion, and forming a weighted average....
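The uncorrelated-errors point can be sketched as a toy simulation (all numbers here are made up for illustration): each forecaster is individually noisy, but because their errors are independent, a plain average of the group tracks the truth much better than any one forecaster.

```python
import random

def simulate(n_friends=10, n_questions=200, noise=5.0, seed=1):
    """Average absolute error of one noisy forecaster vs. an ensemble of
    forecasters whose errors are independent (uncorrelated)."""
    rng = random.Random(seed)
    indiv_err = ens_err = 0.0
    for _ in range(n_questions):
        truth = rng.gauss(0, 10)
        # Each friend's mistake is their own - no shared bias.
        guesses = [truth + rng.gauss(0, noise) for _ in range(n_friends)]
        indiv_err += abs(guesses[0] - truth)              # one friend alone
        ens_err += abs(sum(guesses) / n_friends - truth)  # the ensemble
    return indiv_err / n_questions, ens_err / n_questions
```

With ten friends, the ensemble's error shrinks by roughly a factor of sqrt(10) relative to a lone forecaster; correlated mistakes (everyone reading the same pundits) would destroy exactly this benefit.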
Request for programmers: I have developed a new programming trick that I want to package up and release as open-source. The trick gives you two nice benefits: it auto-generates a flow-chart diagram description of the algorithm, and it gives you steppable debugging from the command line without an IDE.
The main use case I can see is when you have some code that is used infrequently (maybe once every 3 months), and by default you need to spend an hour reviewing how the code works every time you run it. Or maybe you want to make it easier for coworkers to get...
reduction in male-female differences in lifespan
The lifespan gap may be enforced by biology, but it seems wildly unjust to me that retirement-related social programs like Social Security and Medicare do not take the life expectancy gap into account. For example, if the life expectancy gap is 5 years, the Medicare age of eligibility should be 68 for women and 63 for men, so that both sexes get the same number of years of expected coverage.
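To spell out the arithmetic (using assumed round numbers for life expectancy - women 81, men 76 - which are illustrative only, not official figures):

```python
def equalized_eligibility(base_age, le_women=81, le_men=76):
    """Shift each sex's eligibility age by half the lifespan gap so that
    expected years of coverage (life expectancy minus eligibility age)
    come out equal for both sexes."""
    gap = le_women - le_men
    women_age = base_age + gap / 2
    men_age = base_age - gap / 2
    return women_age, men_age, le_women - women_age, le_men - men_age
```

With a midpoint of 65.5 and a 5-year gap, this gives eligibility at 68 for women and 63 for men, and 13 expected years of coverage for each.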
How do you weight the opinion of people whose arguments you do not accept? Say you have 10 friends who all believe with 99% confidence in proposition A. You ask them why they believe A, and the arguments they produce seem completely bogus or incoherent to you. But perhaps they have strong intuitive or aesthetic reasons to believe A, which they simply cannot articulate. Should you update in favor of A or not?
Everyone has every right to feel as pissed off and angry at this bullshit that’s coming down the pike as they want.
This really is not true. You have a right to be annoyed, but if your ideology causes you to actually hate millions of your fellow American citizens, then I submit you have an ethical obligation to emigrate.
Rationality principle, learned from strategy board games:
In some games there are special privileged actions you can take just once or twice per game. These actions are usually quite powerful, which is why they are restricted. For example, in Tigris & Euphrates, there is a special action that allows you to permanently destroy a position.
So the principle is: if you get to the end of the game and find you have some of these "power actions" left over, you know (retrospectively) that you were too conservative about using them. This is true even if...
Evicted, by Matthew Desmond, is an amazing work of ethnographic research into the lives of the urban poor, and in particular their experiences with housing. Most importantly, to me it feels real: nothing is sugarcoated. The poor people are incredibly irresponsible, but the landlords are also greedy, and the government agencies are incompetent and counterproductive. One typical event sequence goes something like this: a tenant living in a decrepit unit calls the building inspector to report some egregious violation. The inspector arrives and promptly demands t...
Five Factor Model (FFM) ... the model is founded on the lexical hypothesis:
I notice I am confused. I was sure that the FFM came out of doing the following simple procedure:
How wrong is this? How important is the "lexical hypothesis" part?
First, I appreciate the work people have done to make LW 2 happen. Here are my notes: