
Great post.

I don't think communicating trades is the only issue. Even if we could communicate with ants, e.g. "Please clean this cafeteria floor and we'll give you 5 kg of sugar" "Sure thing, human", I think there are still barriers.

  • Can the ants formulate a good plan for cleaning the floor?
  • Can the ants tell when the floor is clean enough?
  • Can the ants motivate their team?
  • Can the ants figure out where to deposit debris, and can they adapt if a human janitor accidentally leaves the bin in a different place than yesterday?

There's a lot to the task of cleaning the cafeteria floor beyond whether it's mechanically possible for the worker and whether the worker can speak English well enough to articulate a trade.

A spatial framing:

(1) All objects have positions in space
(2) The desire by people to consume and use objects is not uniform over space (cars are in more demand in Los Angeles than in Antarctica)
(3) The productive capacity to create and improve objects is not uniform over space (it's easier to extract iron ore at an Australian mine, or to build a car at a Detroit factory)
(4) Efficiently satisfying the distribution of desires over space by the distribution of productive capacity over space necessarily involves linking separate points in space through transportation of goods
(5) Owning an object is easier when it is near you and harder when it is far from you

Summing up, satisfying preferences requires transportation, and transportation is easier if ownership is transferred along with the physical object. Therefore it is advantageous to trade.

I spent years trading in prediction markets so I can offer some perspective.

If you step back and think about it, the question 'How well can the long-term future be forecasted?' doesn't really have an answer, because it depends entirely on the domain of the forecasts. Consider all facts about the universe. Some facts are very, very predictable: in 10 years, I predict the Sun will exist with 99.99%+ probability. Some facts are very, very unpredictable: in 10 years, I have no clue whether the coin you flip will come up heads or tails. So you cannot really say the future is predictable or not predictable; it depends on which aspect of the future you are predicting. And even if you grant that it depends and ask for the average answer, the only way to arrive at some unbiased global sense of whether the future is predictable is to come up with some way of enumerating and weighing all possible facts about the future universe... which is an impossible problem. So we're left with the unsatisfying truth that the future is neither predictable nor unpredictable - it depends on which features of the future you are considering.

So when you show the plot above, you have to realize it doesn't generalize very well to other domains. For example, if the questions were about certain things - e.g., will the sun exist in 10 years - it would look high and flat. If the questions were about fundamentally uncertain things - e.g., what will the coin flip be 10 years from now - it would look low and flat. The slope we observe in that plot is less a property of how well the future can be predicted and more a property of the limited set of questions that were asked. If the questions were about uncertain near-term geopolitical events, then that graph shows the rate at which information came into the market consensus. It doesn't really tell us about the bigger picture of predicting the future.
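A toy simulation can make the point concrete: the same forecaster looks brilliant or mediocre purely as a function of which questions are in the pool. The question types, forecasts, and numbers below are all hypothetical, just a sketch of the argument, not a model of any real market.

```python
import random

random.seed(0)

def brier(forecast, outcome):
    """Squared error of a probability forecast (0 = perfect, 1 = worst)."""
    return (forecast - outcome) ** 2

def avg_brier(coin_fraction, n=10_000):
    """Average Brier score over a question pool that mixes two types:
    near-certain questions (e.g. 'will the Sun exist?', forecast 0.9999)
    and coin-flip questions (forecast 0.5, outcome a fair flip)."""
    total = 0.0
    for _ in range(n):
        if random.random() < coin_fraction:
            total += brier(0.5, random.random() < 0.5)  # coin-flip question
        else:
            total += brier(0.9999, True)                # near-certain question
    return total / n

# Same forecaster, different question pools, wildly different "skill":
print(avg_brier(0.0))  # ~0.00000001 (all near-certain questions)
print(avg_brier(1.0))  # ~0.25       (all coin-flip questions)
```

Nothing about the forecaster changed between the two calls; only the question selection did, which is the sense in which a calibration plot measures the question set as much as the future's predictability.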

Incidentally, this was my biggest gripe with Tetlock and Gardner's Superforecasting book. They spent a lot of time talking about how Superforecasters could predict the future, but almost no time talking about how the questions were selected, and how choosing different sets of counterfactual questions can yield totally different results (e.g., 'experts cannot predict the future' vs. 'random smart people can predict the future'). I don't really fault them for this, because it's a slippery, thorny issue to discuss. I hope I have given you some flavor of it here.

Answer by TedSanders, May 28, 2019

Rationalists should have mental models of the world that say if aliens/AI were out there, a few rare and poorly documented UFO encounters is not at all how we would find out. These stories are not worth the oxygen it takes to contemplate them.

In general, thinking more rationally can change confidence levels in only two directions: either toward more uncertainty or toward more certainty. Sometimes, rationalism says to open your mind, free yourself of prejudice, and overcome your bias. In these cases, you will be guided toward more uncertainty. Other times, rationalism says, c'mon, use your brain and think about the world in a way that's deeply self-consistent, and don't fall for surface-level explanations. In these cases, you will be guided toward more certainty.

In my opinion, this is a case where rationalism should make us more certain, not less. Like, if there were aliens, is this really how we would find out? Obviously no.


My hypothesis: They don't anticipate any benefit.

Personally, I prefer to chat with friends and high-status strangers over internet randos. And I prefer to chat in person, where I can control and anticipate the conversation, rather than asynchronously via text with a bunch of internet randos who can enter and exit the conversation whenever they feel like it.

For me, this is why I rarely post on LessWrong.

Seeding and cultivating a community of high-value conversations is difficult. I think the best way to attract high-quality contributors is to already have high-quality contributors (and perhaps to have mechanisms that disincentivize low-quality contributors). It's a bit of a bootstrapping problem. LessWrong is doing well, but no doubt it could do better.

That's my initial reaction, at least. Hope it doesn't offend or come off as too negative. Best wishes to you all.

Observation: I tried to take your survey, but discovered it's only for people who have attended meetups.

Recommendation: Edit your title to be 'If you've attended a LW/SSC meetup, please take the meetups survey!'

Anticipated result: This will save time for non-meetup people who click the survey, start to fill it out, and then realize it wasn't meant for them.

Re: your request for collaboration - I am skeptical of the ROI of research on AI X-risk, and I would be happy to help offer insight on that perspective, either as a source or as a giver of feedback. Feel free to email me at {last name}{first name}@gmail.com

I'm not an expert in AI, but I have a PhD in semiconductors (which gives me perspective on hardware) and currently work on machine learning at Netflix (which gives me perspective on software). I also was one of the winners of the SciCast prediction market a few years back, which is evidence that my judgment of near-term tech trends is decently calibrated.

Answer by TedSanders, Dec 12, 2018

I didn't perceive either of you as hostile.

I think you each used words differently.

For example, you interpret the post as saying, "metoo has never gone too far."

What the post actually said was, "I've heard people complain that it 'goes too far,' but in my experience the cases referred to that way tend to be cases where someone... didn't endure much in the way of additional consequences."

I read that sentence as much more limited in scope than your interpretation. (And because it says 'tend' and not 'never', supplying a couple of data points isn't enough information, by itself, to challenge the author's conclusion.)

In addition, you interpreted "metoo" as broadly meaning action against those accused of sexual misconduct.

However, the author interprets "metoo" more narrowly, as meaning action against those accused of sexual misconduct that would otherwise not have occurred in a counterfactual world without the #metoo movement that took off in 2017.

So in the end you didn't seem to disagree with the author's point, just their word usage.

I can empathize with why the author wasn't eager to sustain the interaction with you. You used words differently, and you asked a bunch of questions pressing the author to explain themselves. The author may have logically perceived the conversation as a cost, not a benefit.

This is my perception of your conversation. I hope it is helpful to you.

What does winning look like?

I think I might be a winner. In the past five years: I have won thousands of dollars across multiple prediction market contests. I earned a prestigious degree (PhD Applied Physics from Stanford) and have held a couple of prestigious high-paying jobs (first as a management consultant at BCG, and now an algorithms data scientist at Netflix). I have a fulfilling social life with friends who make me happy. I give tens of thousands to charity. I enjoy posting to Facebook and surfing the internet. I have the means and motivation to keep learning about areas outside my expertise. I floss and exercise and generally am satisfied with my health.

I think I could be considered both a rationalist and a winner.

But I post rarely to LessWrong because my rational perception is that it takes effort but does not provide return. Generally I think my shortcomings are shortcomings of execution rather than irrationality, and those are the areas I aim to improve upon. My arena for self-improvement is my workplace and my life, not a website. As a result, stories like mine might be underrepresented in your sampling.

If rationalists were winning, how would we know? What would winning look like?
