Rationality lessons from Overwatch, a multiplayer first-person shooter:
1) Learning when you're wrong: The killcam, which shows how I died from the viewpoint of the person who killed me, often corrects my misconception of how I died. Real life needs a killcam that shows you the actual causes of your mistakes. Too bad that telling someone why they are wrong is usually considered impolite.
2) You get what you measure: Overwatch's post-game scoring gives medals for teamwork activities such as healing and shots blocked, and this contributes to players' willingness to help their teammates.
3) Living in someone else's shoes: The game has several different classes of characters that have different strengths and weaknesses. Even if you rarely play a certain class, you get a lot from occasionally playing it to gain insight into how to cooperate with and defeat members of this class.
Addressing 1) "Learning when you're wrong" (in a more general sense):
Absolutely a good thing to do, but the problem is that you're still losing time making the mistakes. We're rationalists; we can do better.
I can't remember what book I read it in, but I read about a practice used in projects called a "pre-mortem." In contrast to a post-mortem, in which the cause of death is found after the death, a pre-mortem assumes that the project/effort/whatever has already failed, and forces the people involved to think about why.
Taking it as a given that the project has failed forces people to be realistic about the possible causes of failures. I think.
In any case, this struck me as a really good idea.
Overwatch example: If you know the enemy team is running a McCree, stay away from him to begin with. That flashbang is dangerous.
Real life example: Assume that you haven't met your goal of writing x pages or amassing y wealth or reaching z people with your message. Why didn't you?
I am trying to outline the main trends in AI safety this year. May I ask for advice on what I should add to or remove from the following list?
1. Elon Musk became a major player in the AI field with his OpenAI program. But the idea of AI openness is now opposed by his mentor Nick Bostrom, who is writing an article questioning the safety of openness in the field of AI. http://www.nickbostrom.com/papers/openness.pdf Personally, I think that here we see an example of a billionaire's arrogance. He intuitively came to an idea which looks nice and appealing and may work...
The Einstein Toolkit Consortium is developing and supporting open software for relativistic astrophysics.
This is a core product that you can attach modules to for the specific models you want to run. It is able to handle GR on a cosmological scale!
Say you are a strong believer and advocate for the Silicon Valley startup tech culture, but you want to be able to pass an Ideological Turing Test to show that you are not irrational or biased. In other words, you need to write some essays along the lines of "Startups are Dumb" or "Why You Should Stay at Your Big Company Job". What kind of arguments would you use?
Being a believer in X inherently means, for a rationalist, that you think there are no good arguments against X.
Huh? You are proposing a very stark, black-and-white, all-or-nothing position. Recall that for a rationalist a belief has a probability associated with it. It doesn't have to be anywhere near 1. Moreover, a rationalist can "believe" (say, with probability > 90%) something against which good arguments exist. It just so happens that the arguments pro are better and more numerous than the arguments con. That does not mean that the arguments con are not good or do not exist.
And, of course, you should not think yourself omniscient. One of the benefits of steelmanning is that it acquaints you with the counterarguments. Would you know what they are if you didn't look?
I didn't realize that the biggest supporter of UBI in the US is the ex-leader of the Service Employees Union. Guess I will have to read that book next. I still have Agar's 'Humanity's End' to tackle after that..
http://www.alternet.org/economy/universal-basic-income-solves-robots-taking-jobs
and a write-up on why the elites don't get the Brexit drama right..
http://www.bloomberg.com/view/articles/2016-06-24/-citizens-of-the-world-nice-thought-but
Are the EU regulations on algorithmic decision-making and a “right to explanation” positive for our future? Do they make a world with UFAI less likely?
Room for improvement in Australia’s overseas development aid
...Poor countries typically receive aid from many donors. In Vietnam, Australia is one of 51 multilateral and bilateral donors (Vietnam Ministry of Planning 2010). Interactions between a large number of donors and a single recipient government can have a cumulative and damaging impact. For example, in 2005, the Tanzanian government produced about 2,400 reports for the more than 50 donors operating in the country (TASOET 2005: 1). In the Pacific Islands, some senior government officials are so busy
In my quest to optimize my sleep, I have found over the last few days that I relax a lot more than usual. I sleep on my side, but I put a cushion between my back and the wall so that part of my weight rests on my back and part rests on the mattress of the bed.
Are there any real reasons why standard beds are flat? Or is it just a cultural custom like our standard toilet design that exists for stupid reasons?
Not that I know of. Various suggestions of sleeping with a body pillow exist. Hammocks exist. Plenty of people take naps on couches or in reclining chairs.
I wonder if it has anything to do with ease of manufacture.
I am sure you have read this: www.lesswrong.com/r/discussion/lw/mvf/
(relevant side note) Traditional Japanese beds are harder and thinner than western beds.
Is post-rationalism dead? I'm following some trails and the most updated material is at least three years old.
If so, good riddance?
A Computer Vision and Machine Learning conference is on in Vegas. Some recommended reading is at the bottom:
https://sites.google.com/site/multiml2016cvpr/
and this is one guy blogging it, must be a lot of twittering too...
https://gab41.lab41.org/all-your-questions-answered-cvpr-day-1-40f488103076#.braqj1fdj
Quantified hedonism - Personal Key Performance Indicators
The phrase "burn the boats" comes from the Viking practice of burning boats on the shore before invading, so they had to win and settle. No retreat. It's an inspiring analogy, but I heard it in the context of another Real Social Dynamics video, so the implication is to approach sets as if there is no retreat? Bizarre, those guys... Anyway, that RSDPapa video suggested that personal KPIs are useful. What's measured gets improved, or so the saying goes. So which KPIs should you choose? After some thou...
Thoughts on the King, Warrior, Magician, Lover archetypes?
Having been in the self-dev, PUA, systems, psychology, LessWrong, Kegan, philosophy, and other such games for a very long time, my discerning eye suggests that some of the model is good and some is bad. My advice to anyone looking at that model is that it is equal parts shit and diamonds. If you haven't been reading in this area for 9 years, you can't see which is which. Don't hold anything too closely, but be a sponge and absorb it all. Throw out the shit when you come across it and keep the diamonds.
At the end, the 4 (KWML) pages suggest various intelligent and reasonable ways to develop oneself:
Estimation of the timing of AI risk
I want to once again try to assess the expected time until Strong AI. I will estimate a prior probability of AI, and then try to update it based on recent evidence.
First, I will try to argue for the following prior claim: "If AI is possible, it will most likely be built in the 21st century, or it will be proven that the task has some very tough hidden obstacles". Arguments for this prior:
1. The science-power argument. We know that humanity has been able to solve many very complex tasks in the past, and it typically took around 100 years: heavier-than-air flight, nuclear technology, space exploration. One hundred years is enough for several generations of scientists to concentrate on a complex task and extract everything about it that can be done without some extraordinary insight from outside our knowledge. We have already been working on AI for 65 years, by the way.
2. The Moore's law argument. Moore's law will run out of power in the 21st century, but this will not stop the growth of stronger and stronger computers for a couple of decades.
This growth will result from cheaper components, from large numbers of interconnected computers, from cumulative production of components, and from large investments. It means that even if Moore's law stops (that is, there is no more progress in microelectronic chip technology), the power of the most powerful computers in the world will continue to grow for 10-20 years after that day, at a slower and slower rate, and may grow 100-1000 times from the moment Moore's law ends.
But such computers will be very large, power-hungry, and expensive. They will cost hundreds of billions of dollars and consume gigawatts of energy. The biggest computer planned now is the 200-petaflops "Summit", and even if Moore's law ends with it, this means that 20-exaflops computers will eventually be built.
There are also several almost-unused options: quantum computers, superconducting computers, FPGAs, new ways of parallelization, graphene, memristors, optics, and the use of genetically modified biological neurons for computation.
All this means that: A) 10^20-flops computers will eventually be built (and this is comparable with some estimates of human brain capacity); B) they will be built in the 21st century; C) the 21st century will see the biggest advance in computer power compared with any other century, and almost everything that could be built will be built in the 21st century and not after.
So, the computer on which AI may run will be built in the 21st century.
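The arithmetic behind this Moore's law argument can be spot-checked with a quick sketch (all figures are the post's own assumptions, not independent estimates):

```python
# Spot-check of the post-Moore growth arithmetic (figures assumed from the post).
summit_flops = 200e15        # the 200-petaflops "Summit" machine cited above
post_moore_growth = 1000     # upper end of the assumed 100-1000x post-Moore growth

final_flops = summit_flops * post_moore_growth
print(f"largest machine: {final_flops:.1e} flops")  # 2.0e+20

# Comparable to the ~1e20 flops brain-capacity estimates mentioned above
print(final_flops >= 1e20)  # True
```

So even the upper bound of the claimed post-Moore growth only just clears the 10^20 flops figure, which is worth keeping in mind when weighing point A.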
"3" Uploading argument
The uploading even of a worm is lagging, but uploading provides upper limit on AI timing. There is no reason to believe that scanning human brain will take more than 100 years.
Conclusion from the prior: a flat probability distribution.
If we know for sure that AI will be built in the 21st century, we can give it a flat probability, that is, an equal probability of appearing in any year: around 1 per cent. (A constant per-year chance actually corresponds to an exponential distribution of arrival times, by the way, but we will not concentrate on that now.) We can use this probability as a prior for our future updates. Now we will consider arguments for updating this prior probability.
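A minimal sketch of what a flat 1-per-cent-per-year prior implies for cumulative arrival probability (assuming, as the post does, a constant per-year chance given no arrival yet):

```python
# Cumulative probability of AI arrival under a constant 1%/year chance,
# my reading of the post's "flat" prior; the 1% figure is the post's assumption.
def cumulative_by(years, hazard=0.01):
    """P(AI arrives within `years`) with a constant per-year arrival chance."""
    return 1 - (1 - hazard) ** years

print(f"{cumulative_by(10):.3f}")   # 0.096
print(f"{cumulative_by(50):.3f}")   # 0.395
print(f"{cumulative_by(100):.3f}")  # 0.634
```

Note that a strictly constant 1% per year never reaches certainty by 2100; making the century-deadline exact would require the per-year probability to rise toward the end.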
Updates to the prior probability.
Now we can use this prior probability of AI to estimate the timing of AI risks. Before, we discussed AI in general, but now we add the word “risk”.
Arguments for a rising AI-risk probability in the near future:
We don’t need a) self-improving, b) superhuman, c) universal, d) world-dominating AI for an extinction catastrophe. None of these conditions is necessary. Extinction is a simpler task than friendliness. Even a program which is local, non-self-improving, non-agent, and specialized, but which helps to build biological viruses, could create enormous harm by putting hundreds of designed pathogens in the hands of existential terrorists. Extinction-grade AI may be simple, and it could also come earlier than fully friendly AI. While UFAI may be the ultimate risk, we may not survive until then because of simpler forms of AI, almost on the level of computer viruses. In general, earlier risks overshadow later risks.
We should take lower estimates of the timing of AI arrival, based on the precautionary principle. Basically, this means we should treat a 10 per cent probability of its arrival as if it were 100 per cent.
We may use the events of the last several years to update our estimate of AI timing. In the last few years we have seen enormous progress in AI based on neural nets. The doubling time of AI performance on different benchmarks is around 1 year now, and AI has won at many games (Go, poker, and so on). Belief in the possibility of AI has risen in recent years, which has resulted in overhype and large growth in investment, as well as many new startups. Specialized hardware for neural nets has been built. If such growth continues for 10-20 years, it would mean a 1,000- to 1,000,000-fold growth in AI capabilities, which must include reaching human-level AI.
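The extrapolation above is just compound growth; a tiny sketch of the claimed range (the one-year doubling time is the post's assumption):

```python
# Compound growth under the post's assumed one-year capability doubling time.
for years in (10, 20):
    growth = 2 ** years
    print(f"{years} years of yearly doubling -> {growth:,}x")
# 10 years -> 1,024x; 20 years -> 1,048,576x, matching the quoted range
```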
AI is increasingly used to build new AIs. AI writes programs and helps to compute the connectome of the human brain.
All this means that we should expect human-level AI in 10-20 years, and superintelligence soon afterwards.
It also means that the probability of AI is distributed exponentially from now until its creation.
The biggest argument against this is also historical: we have seen a lot of AI hype before, and it failed to produce meaningful results. AI is always 10 years away, and researchers in AI tend to overestimate it. Humans tend to be overconfident about AI research.
We are also still far from understanding how the human brain works, and even the simplest questions about it may be puzzling. Another way to assess AI timing is the idea that AI is an unpredictable black-swan event, depending on only one idea appearing (it seems that Yudkowsky thinks so). If someone gets this idea, AI is here.
In this case we should multiply the number of independent AI researchers by the number of trials, that is, the number of new ideas they get. I suggest treating the latter rate as constant, in which case we should estimate the number of active and independent AI researchers. That number seems to be growing, fuelled by new funding and hype.
So my conclusion is that if we are going to be afraid of AI, we should estimate its arrival in 2025-2035 and have our preventive ideas ready and deployed by that time. If we hope to use AI to prevent other x-risks or for life extension, we should not expect it until the second half of the 21st century. We should use an earlier estimate for bad AI than for good AI.
We know that humanity has been able to solve many very complex tasks in the past, and it typically took around 100 years: heavier-than-air flight,
That seems to be false. Leonardo da Vinci had drafts of flying machines, and it took a lot longer than 100 years to get actual flight.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.