Great post.
I don't think communicating trades is the only issue. Even if we could communicate with ants, e.g. "Please clean this cafeteria floor and we'll give you 5 kg of sugar" "Sure thing, human", I think there are still barriers.
There's a lot to the task of cleaning ...
A spatial framing:
(1) All objects have positions in space
(2) The desire by people to consume and use objects is not uniform over space (cars are demanded in Los Angeles more than Antarctica)
(3) The productive capacity to create and improve objects is not uniform over space (it's easier to extract iron ore from an Australian mine, or to build a car at a Detroit factory)
(4) Efficiently satisfying the distribution of desires over space by the distribution of productive capacity over space necessarily involves linking separate points in space through transportation of g...
I spent years trading in prediction markets so I can offer some perspective.
If you step back and think about it, the question 'How well can the long-term future be forecasted?' doesn't really have an answer, because it depends entirely on the domain of the forecasts. Consider all facts about the universe. Some facts are very, very predictable: in 10 years, I predict the Sun will exist with 99.99%+ probability. Some facts are very, very unpredictable: in 10 years, I have no clue whether the coin you flip will come ...
Rationalists should have mental models of the world that say if aliens/AI were out there, a few rare and poorly documented UFO encounters are not at all how we would find out. These stories are not worth the oxygen it takes to contemplate them.
In general, thinking more rationally can change confidence levels in only two directions: either toward more uncertainty or toward more certainty. Sometimes, rationalism says to open your mind, free yourself of prejudice, and overcome your bias. In these cases, you will be guided toward more uncertainty. Other times, r...
My hypothesis: They don't anticipate any benefit.
Personally, I prefer to chat with friends and high-status strangers over internet randos. And I prefer to chat in person, where I can control and anticipate the conversation, rather than asynchronously via text with a bunch of internet randos who can enter and exit the conversation whenever they feel like it.
This is why I rarely post on LessWrong.
Seeding and cultivating a community of high-value conversations is difficult. I think the best way to attract high-quality contributors is to already h...
Observation: I tried to take your survey, but discovered it's only for people who have attended meetups.
Recommendation: Edit your title to be 'If you've attended a LW/SSC meetup, please take the meetups survey!'
Anticipated result: This will save time for non-meetup people who click the survey, start to fill it out, and then realize it wasn't meant for them.
Re: your request for collaboration - I am skeptical of the ROI of research on AI X-risk, and I would be happy to offer insight from that perspective, either as a source or as a giver of feedback. Feel free to email me at {last name}{first name}@gmail.com
I'm not an expert in AI, but I have a PhD in semiconductors (which gives me perspective on hardware) and currently work on machine learning at Netflix (which gives me perspective on software). I also was one of the winners of the SciCast prediction market a few years back, which is evidence that my judgment of near-term tech trends is decently calibrated.
I didn't perceive either of you as hostile.
I think you each used words differently.
For example, you interpret the post as saying, "metoo has never gone too far."
What the post actually said was, "I've heard people complain that it 'goes too far,' but in my experience the cases referred to that way tend to be cases where someone... didn't endure much in the way of additional consequences."
I read that sentence as much more limited in scope than your interpretation. (And because it says 'tend' and not...
If the housekeeper were to earn a wage of 3x rent, 15 other housemates would be required at those price points. That's a lot of cooking and cleaning.
What does winning look like?
I think I might be a winner. In the past five years: I have won thousands of dollars across multiple prediction market contests. I earned a prestigious degree (a PhD in Applied Physics from Stanford) and have held a couple of prestigious, high-paying jobs (first as a management consultant at BCG, and now as an algorithms data scientist at Netflix). I have a fulfilling social life with friends who make me happy. I give tens of thousands to charity. I enjoy posting to Facebook and surfing the internet. I have the means and motivation to ke...
I think this is why attending universities and otherwise surrounding yourself with smart people is crucial. Their game will elevate your game. I often find myself learning more after someone smart asks me questions about a topic I thought I already knew. And the more this happens, the more I am able to short-circuit the process and preemptively ask those questions of myself.
"Thus, if we had to give animals rights – this would result in us being their slaves."
If we give other citizens the right to not be murdered, does that make us their slaves? Obviously not.
If we give animals the right to not be murdered, does that make us their slaves? Again, obviously not.
I'm not sure how someone thinks that giving rights means slavery. Obviously obligations can fall into a spectrum of severity, but I don't think the entire spectrum is worth labeling "slavery."
This is excellent. Thank you for writing it!
Interesting. I was surprised at how predictable the studies were. It felt like results that aligned with my intuition were likely to be replicated, and results that didn't (e.g., priming affecting a pretty unrelated task) were unlikely to be replicated. Makes me wonder - what's the value of this science if a layperson like me can score 18/18 (with 3 I don't knows) by gut feel after reading only a paragraph or two? Hmm.
(Then again, I guess my attitude of finding predictable results low-value is what has incentivized so much bad science in the hunt for counterintuitive results with their higher rewards.)
Elephant in the Brain convinced me that many things humans say are not to convey information or achieve conscious goals; rather, we say things to signal status and establish social positioning. Here are three hypotheses for why the community focuses on AI that have nothing to do with the probability or impact of AI:
Generally yes, I think it's better when titles reveal the answer rather than the question alone. "Dangerous AI timing" sounds a bit awkward to my ear. Maybe a title like "Catastrophically dangerous AI is plausible before 2030" would work.
I think it's great that you and other people are investing time and thought into writing articles like these.
I also think it's great that you're soliciting early feedback to help improve the work.
I left some comments that I hope you find helpful.
Is this actually true? Do you have a source? I have tried Googling for it.
My understanding is that the sky's blue color is caused by Rayleigh scattering, which is stronger at shorter wavelengths. There's no broad peak in scattering associated with nitrogen absorption lines (which I imagine would be very narrowband, rather than broadband).
Wikipedia's article on Rayleigh scattering mentions oxygen twice but makes no reference to your theory.
Wavelengths of visible light are roughly 500 nm. Even infrared is on the order of micrometers. I don't think the spikes that we're imagining are micrometers apart.
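For what it's worth, the 1/λ⁴ dependence of Rayleigh scattering alone accounts for most of the sky's color, no absorption lines needed. A quick sketch (the specific wavelengths here are just representative picks for "blue" and "red," not exact values):

```python
# Rayleigh scattering intensity scales as 1 / wavelength^4,
# so shorter (bluer) wavelengths scatter much more strongly.
blue_nm = 450.0  # representative blue wavelength
red_nm = 650.0   # representative red wavelength

# Relative scattering strength of blue light vs. red light:
ratio = (red_nm / blue_nm) ** 4
print(f"Blue scatters ~{ratio:.1f}x more strongly than red")
```

So blue light scatters roughly four times as strongly as red, which is why scattered skylight looks blue, with no need for any broadband absorption feature.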
Very cool! Appreciate the time you took to share your findings. I learned something new.