Prisoner's Dilemma Variant
There are a few tweaks to the Iterated Prisoner's Dilemma which can affect which strategies tend to be successful. A very common one is to randomize how long the round is, so predicting the end-game doesn't overwhelm all other strategy factors. A less common one is adding noise, so that what each program tries to do isn't necessarily what happens.
Does anyone know of any tourneys that have been run where, in addition to Cooperation or Defection, each program also has the choice to End The Game, simulating quitting a business relationship, moving away, shunning, or otherwise ceasing to interact with another program?
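I don't know of such a tourney, but the mechanic is easy to prototype. Here's a minimal sketch; the payoff matrix is the standard one, and the strategy names, the "E" move encoding, and the noise handling are all my own assumptions rather than rules from any actual tournament:

```python
import random

# Payoffs for (my_move, their_move) in a standard Prisoner's Dilemma.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strat_a, strat_b, max_rounds=200, noise=0.0):
    """Iterate until either player plays 'E' (End The Game) or max_rounds elapse."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(max_rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        # Optional noise: an intended C or D is flipped with some probability,
        # so what a program tries to do isn't necessarily what happens.
        if noise and random.random() < noise and move_a in "CD":
            move_a = "D" if move_a == "C" else "C"
        if "E" in (move_a, move_b):  # either side quits the relationship
            break
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def grim_quitter(mine, theirs):
    # Cooperate until the first defection, then walk away instead of retaliating.
    return "E" if "D" in theirs else "C"

def always_defect(mine, theirs):
    return "D"
```

Against an unconditional defector, the quitter eats one sucker's payoff and then ends the game, so the defector's exploitation is capped at a single round; two quitters just cooperate until the clock runs out.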
Reddit is giving away 10% of their ad revenue to 10 charities that receive the most votes from the community. You can vote for as many charities as you want, with any account that has been created before 10AM PST today.
You can vote for your favorite charities here. I've had problems with the search by name, so if you don't find something, try searching by EIN instead.
GiveWell, GiveDirectly, Evidence Action/Deworm the World. You can vote for multiple charities.
I've noticed that when I'm working on personal projects or (to a lesser extent) playing complex games, my motivation to continue working or playing drops off significantly as soon as I've figured out the solution to the project or the optimal strategies for the game. The sensation is a bit like, say, playing an RPG and doing some postgame quest for the Infinity+1 sword, and then not wanting to play anymore once you have it -- I worked hard to get a godweapon but don't have any urge to use it. This is particularly frustrating for programming projects, which tend to get dropped unfinished after the major problems have been solved.
I don't really have any point to make with that. It's just irritating.
You've dropped out of the lower end of Flow, the optimal level of challenge for a task.
You've solved the intellectually interesting nugget, or believe you have, and now all that's left are the mundane and often frustrating details of implementation. Naturally you'll lose some motivation.
So you have to embrace that mundanity, and/or start looking at the project differently.
Does anyone else find that the problem of qualia seems like more of a problem for some senses than others? For example, my sense of sight versus my sense of hearing. When I look at the color red, I perceive some fundamentally different sensation than when I look at blue. Though they are both caused by looking at different frequencies of light, there is something "over and above" the frequencies that is difficult to explain, and has caused so much ink to be spilled in the philosophy of consciousness.
However, when I hear something, I just hear frequencies. This is true whether I am listening to a symphony, a single sine wave, white noise, or the person currently shoveling snow outside. There isn't anything "over and above" the sounds; they are all obviously the same "kind" of thing to me. I can categorize the individual frequencies if it is a simple enough sound, and more complicated sounds, while I can't categorize them, don't feel like they are anything other than combinations of simple frequencies.
None of the sound frequencies are fundamentally different in the way that red and blue are. An oboe and a violin may have different profiles of overto...
I recently found out that Feynman only had an IQ of 125.
This is very surprising to me. How should I/you update?
Perhaps the IQ test was administered poorly.
I think that high g/IQ is still really important to success in various fields. (Stephen Hsu points out that more physicists have IQs of 150 than of 140, etc.; in other words, marginal IQ matters even past 140.)
- Feynman was younger than 15 when he took it. Very near this factoid in Gleick's bio, he recounts Feynman asking about very basic algebra (2^x=4) and wondering why anyone found it hard; the IQ is mentioned immediately before the section on 'grammar school', or middle school, implying that the 'school IQ test' was done well before he entered high school, putting him at much younger than 15. (15 is important because Feynman had mastered calculus by age 15, Gleick says, so he wouldn't be asking his father why algebra is useful at age >15.)
- Given that Feynman was born in 1918, this implies the IQ test was done around 1930 or earlier. Given that it was done by the New York City school district, this also implies it was one of the 'ratio'-based IQ tests, utterly outdated and incorrect by modern standards.
- Finally, it's well known that IQ tests are very unreliable in childhood; kids can easily bounce around compared to their stable adult scores.
So, it was a bad test, which even under ideal circumstances is unreliable & prone to error, and administered in a mass fashion and likely not by a genuine psychometrician.
-- gwern
Stephen Hsu estimates that we'll be able to have genetically enhanced children with IQs ~15 points higher in the next 10 years.
Carl Shulman and Nick Bostrom's paper on iterated embryo selection roughly agrees.
It seems almost too good to be true. The arguments/facts that lead us to believe that it will happen soon are:
I st...
The difference between having a child with an IQ-55 partner and with an IQ-145 partner already shifts your child's expected IQ by more than 15 points on average.
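For a rough sanity check of that figure, here's the standard midparent-regression (breeder's equation) estimate. The narrow-sense heritability value h² ≈ 0.6 is my own assumption for illustration, not a figure from this thread:

```python
def expected_child_iq(parent1, parent2, h2=0.6, mean=100):
    """Breeder's-equation estimate: the child's expected IQ regresses toward
    the population mean from the midparent value, scaled by heritability."""
    midparent = (parent1 + parent2) / 2
    return mean + h2 * (midparent - mean)

# Fix one parent at IQ 100 and compare partners of IQ 55 vs IQ 145:
low = expected_child_iq(100, 55)    # 100 + 0.6 * (77.5 - 100) = 86.5
high = expected_child_iq(100, 145)  # 100 + 0.6 * (122.5 - 100) = 113.5
gap = high - low                    # 27 points
```

Under these assumptions the gap is about 27 points, consistent with the "more than 15" claim; a lower heritability shrinks it proportionally.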
I want to spend a few weeks seriously looking into cryonics: how it works, the costs, the theory about revival, the changes in the technology in the past 60 years, the options that are available.
I want to become an expert in cryonics to the extent that I can answer, in depth, the questions that people typically have when they hear about this "crazy idea" for the first time. {Hmm...That sounds a little like bottom-line reasoning, trying to prepare for objections, instead of ferreting out the truth. I'll have to be careful of that. To be fair, I will need to overcome objections to get my family to sign up. Still, be careful of looking for data just to affirm my naive presumption.}
What should I read?
Perhaps individuals are more liable to believe claims that have a small number of strong arguments than claims with a large number of weak arguments, even if the total evidence for both claims is equally strong: since individuals often can't hear all the arguments for something, they only hear the n strongest ones.
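The "equally strong in total" condition can be made concrete by measuring evidence in log-odds, where independent arguments add. A toy illustration (the 16:1 and 2:1 likelihood ratios are made up for the example):

```python
import math

def bits(likelihood_ratio):
    """Evidence strength in bits: log2 of the likelihood ratio."""
    return math.log2(likelihood_ratio)

# One strong argument vs. four weak ones carrying the same total evidence:
strong = bits(16)                       # 4.0 bits from a single 16:1 argument
weak = sum(bits(2) for _ in range(4))   # 4 x 1.0 bits from four 2:1 arguments

# A listener who only hears the single strongest argument receives all 4 bits
# in the first case, but only 1 bit in the second -- so the filtered
# evidence differs even though the totals are identical.
```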
One of my favorite moments in Diaspora, or in any book ever, was when one character (Inoshiro?) was trying to convince another (Yatima?) to do something. Yatima runs a nonsentient simulation of verself, realizes that Inoshiro has something like a 90% chance of convincing ver, and decides to save ver the trouble and goes along with it.
Except... I was searching for that passage for the recent quotes repository in /r/discussion, and I can't find it. The closest I've found is before the two of them visit the bridgers, but that's not how that scene goes.
I'v...
I remembered it too. Found the quote you're referring to, I think:
"He ran a quick self-predictive model. There was a ninety-three per cent chance that he’d give in, after a kilotau spent agonising over the decision. It hardly seemed fair to keep Karpal waiting that long."
Egan, Greg (2010-12-30). Diaspora (Kindle Locations 3127-3129). Orion. Kindle Edition.
Scholarship hack: Get accepted to UMUC. All University of Maryland libraries are interconnected, and as an online college UMUC will ship books anywhere in the US (not sure about Hawaii / Alaska) free of charge + free return shipping. So for the price of the application fee you get access to 12 university libraries with delivery to your doorstep.
Disclaimer: I myself haven't done precisely as described above, since it's not necessary for me - my neighbor takes UMUC classes and lets me use his account.
Does anybody else have similar hacks? Any similar institutions that'll ship to your door?
Considering options for reducing the environmental impact of energy production, it seems it would be both more economical and more environmentally sound for a large group of people to get together and invest in a nuclear power plant than for each of them individually to install solar panels on their roofs. Taking the USA as an example, the typical home consumes about 15,000 kWh/year, and an average home solar installation providing this power would have a total cost of $30,000, or about $12,000 after city rebates and tax credits. It would provide power for about 20 years ...
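Using the post's own numbers, and ignoring maintenance, panel degradation, and discounting, the implied levelized cost of the rooftop option works out like this:

```python
annual_kwh = 15_000     # typical US home consumption per year
net_cost = 12_000       # installed cost after rebates and tax credits, in $
lifetime_years = 20     # assumed useful life of the installation

lifetime_kwh = annual_kwh * lifetime_years  # 300,000 kWh over the system's life
cost_per_kwh = net_cost / lifetime_kwh      # $0.04 per kWh
```

That $0.04/kWh is the figure to compare against a pooled investor's per-kWh share of a nuclear plant's construction and operating costs.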
I'm wondering if there are any fellow chess players here on Less Wrong. I'm a FIDE Master from Sweden, Elo rating ~2300.
I think it would be interesting to get some statistics on the strength of the chess players here and relate that to other LW community statistics, such as IQ for example.
I've had an experience a couple of times that feels like being stuck in a loop of circular preferences.
It goes like this. Say I have set myself the goal of doing some work before lunch. Noon arrives, and I haven't done any work--let's say I'm reading blogs instead. I start feeling hungry. I have an impulse to close the blogs and go get some lunch. Then I think I don't want to "concede defeat", and that I'd better do at least some work before lunch, to feel better about myself. I briefly open my work, and then… close it and reopen the blogs. The cycle re...
It's multiple agents with their own preferences fighting for the mic. One agent with a loop is not a good model here, imo.
"When visibility is poor, people have car accidents because they can't see what's ahead of them, right? Actually, the Mandelbaum Effect implies that sometimes people have accidents because they aren't even looking for what's ahead of them. [...] Although it's possible to test for it, no one quite knows how to compensate for the effect yet."
Re: http://lesswrong.com/lw/h3/superstimuli_and_the_collapse_of_western/ "The regulator's career incentive does not focus on products that combine low-grade consumer harm with addictive superstimuli; it focuses on products with failure modes spectacular enough to get into the newspaper."
The issue with these "standard libertarian" arguments is that their model of government is the Federal Government of the USA. For smaller, less diverse nations that operate less on clear incentives and more on a shared sense of culture or "shared co...
Reading the Main post on Sidekicks, I considered it worth noting in passing that I'm looking for a sidekick if someone feels that such would be an appropriate role for them.
This is me for those who don't know me: https://docs.google.com/document/d/14pvS8GxVlRALCV0xIlHhwV0g38_CTpuFyX52_RmpBVo/edit
And this is my flowchart/life;autobiography in the last few years: https://drive.google.com/file/d/0BxADVDGSaIVZVmdCSE1tSktneFU/view
Nice to meet you! :)
Yo, I know I'm pretty unpopular and all, but what's with my last 30 days karma fluctuating but my actual karma not changing at all?
I want to look at deep neural-net learning and hierarchical inference through some kind of information-theoretic lens and try to show why hierarchical learning is such a powerful general principle. Anyone have an idea whether mutual information or KL-divergence is the normal measure used for this kind of study, or where I might look for literature other than surveys of deep learning, or why I might use one rather than the other?
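For what it's worth, the two measures aren't really rivals: mutual information is itself a KL divergence, between the joint distribution and the product of its marginals, so the choice is partly one of framing. A minimal sketch for the discrete case:

```python
import math

def kl(p, q):
    """KL divergence D(p || q) in bits for discrete distributions
    given as parallel lists of probabilities."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mutual_information(joint):
    """I(X;Y) computed as D(joint || product of marginals),
    where joint is a 2-D list of cell probabilities."""
    rows = [sum(r) for r in joint]            # marginal of X
    cols = [sum(c) for c in zip(*joint)]      # marginal of Y
    flat_joint = [p for r in joint for p in r]
    flat_prod = [rx * cy for rx in rows for cy in cols]
    return kl(flat_joint, flat_prod)

# Two perfectly correlated fair bits share exactly 1 bit of information,
# while independent bits share none.
perfectly_correlated = [[0.5, 0.0], [0.0, 0.5]]
independent = [[0.25, 0.25], [0.25, 0.25]]
```

This identity is one reason both quantities show up in the information-theoretic deep-learning literature (e.g. information-bottleneck work): the bottleneck objective is stated in mutual information, but its terms unpack into KL divergences.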
It may be beneficial for people in some space colonies to be outlawed from going to some other inhabited astronomical objects, as this could potentially prevent harmful nanotechnology and superviruses from reaching them. My main concern with this idea is that colonists wouldn't want to leave their old homes forever, even though they wouldn't be able to visit their old homes very much anyways due to the transportation costs. Alternatively, perhaps individuals going to a different astronomical object could be thoroughly searched to make sure they aren't carrying any supervirus or nanotechnology, though I don't know if this is feasible.
What is the probability of an afterlife in a non-magical universe?
Aside from the simulation hypothesis (which is essentially another form of a magical universe), there is at least one possibility for an afterlife to exist: future humans travel back in time (or discover a way to get information from the past without passing anything back) to mind-upload everyone right before they die. There would be a strong incentive for them not to manifest themselves, as well as to tolerate all the preventable suffering around the world: if changing the past leads to killi...
Why I buy lottery tickets (re: http://lesswrong.com/lw/hl/lotteries_a_waste_of_hope/ )
I simply don't understand the logic of the post. Deciding what you need to do to improve your life is easy; the hard thing is to commit to it and do it. You still have plenty of daydreaming time left, especially before you fall asleep. Investing dreaming and fantasizing into those decisions does not make them any better: usually they are simple decisions, and the hard thing is just executing them. Spending thousands of hours mulling over why I need to work out more ...
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.