This happened in canon, and was done by Hermione herself, albeit, I think, in her sixth or seventh year.
Hermione didn't erase her own existence, but implanted false memories to get her parents to go to Australia. Distance did what magic couldn't.
I would like to declare the following: I have submitted a program that cooperates only if you would cooperate against CooperateBot. You can of course create a selective defector against it, but that would be a bit tricky, as I am not revealing the source code. Evaluate your submissions accordingly.
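A minimal sketch of the declared strategy, written in Python for illustration only (the tournament's actual language and calling convention differ, and the bot names here are my own invention). Bots are modeled as functions that receive the opponent as a callable and return "C" or "D":

```python
def cooperate_bot(opponent):
    """Always cooperates, regardless of the opponent."""
    return "C"

def defect_bot(opponent):
    """Always defects."""
    return "D"

def declared_bot(opponent):
    """Cooperate only if the opponent would cooperate against CooperateBot.

    This is the strategy described above; in the real tournament the
    simulation of the opponent would be resource-limited.
    """
    try:
        return "C" if opponent(cooperate_bot) == "C" else "D"
    except Exception:
        # Treat crashes or resource exhaustion during simulation as defection.
        return "D"

print(declared_bot(cooperate_bot))  # CooperateBot cooperates vs. CooperateBot
print(declared_bot(defect_bot))
```

Note that a selective defector would need to recognize whether it is being simulated against CooperateBot or playing the real match, which is what hiding the source code makes tricky.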
Does that include tail-calls in mutual recursion? Even if you were going to return whatever your opponent's code did, you probably couldn't count on them doing the same.
Yes: all tail-calls are guaranteed not to exhaust resources. The precise definition of what counts as a tail call is in the spec.
However, the approach to investing I will present in this article is endorsed by many economists, Warren Buffett, and Vanguard.
Of course Vanguard endorses that investment approach; it makes money selling those funds. Just because some people besides yourself endorse that approach doesn't mean it's a good one.
Over the last 5 years the S&P 500 produced a return of 3.5% per year. That's not a lot. Do you believe that it will again produce a higher return? If so, what's your reason for expecting again a higher return?
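For concreteness, here is what the figure quoted above compounds to (the 3.5% number is taken from the comment; this is just arithmetic, not an endorsement of either side):

```python
# Illustration: what 3.5% per year compounds to over 5 years.
annual_return = 0.035
years = 5
total_growth = (1 + annual_return) ** years - 1
print(f"Total growth over {years} years: {total_growth:.1%}")  # about 18.8%
```

So even the "not a lot" scenario is a roughly 19% cumulative gain, which is the number to compare against alternatives.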
I don't think that there is a good reason to focus all investment capital on the 500 biggest companies, which is what you do when you buy S&P 500 shares. Angel investments do provide good returns for the average angel investor: http://techcrunch.com/2012/10/13/angel-investors-make-2-5x-returns-overall/
A lot of the economy consists of small businesses. If you are a smart person you might want to find local investment opportunities that aren't on the radar of the big banks.
Small businesses are illiquid. They also have capital requirements that might not fit your investment needs. If you have enough money that these two problems aren't an issue, as well as the skill to find small businesses that won't fail and that are on the market, and the time to supervise and manage them, then they could be a good investment. But that isn't a lot of people.
Anyone have a good idea of where to park an "emergency fund" type account, and especially resources that talk about this? Most of my money is sitting in a checking account right now, which I have realized is not so good, but I want to keep most of it liquid (and the remainder might not be enough to start an index fund account with Vanguard).
Some banks offer money market accounts, or even a savings account might be a good idea.
Thanks for engaging.
Why would a good AI policy be one which takes as its model a universe where world-destroying weapons in the hands of incredibly unstable governments controlled by glorified tribal chieftains is not such a bad situation? Almost but not quite destroying ourselves does not reflect well on our abilities. The Cold War as a good example of averting bad outcomes? Eh.
The point is that I would have expected things to be worse, and that I imagine that a lot of others would have as well.
This is assuming that people understand what makes an AI so dangerous - calling an AI a global catastrophic risk isn't going to motivate anyone who thinks you can just unplug the thing (and even worse if it does motivate them, since then you have someone who is running around thinking the AI problem is trivial).
I think that people will understand what makes AI dangerous. The arguments aren't difficult to understand.
The fact that someone is powerful is evidence that they are good at gaining a reputation in their specific field, but I don't see how this is evidence for rationality as such (and if we are redefining it to include dictators and crony politicians, I don't know what to say).
Broadly, the most powerful countries are the ones with the most rational leadership (where here I mean "rational with respect to being able to run a country," which is relevant), and I expect this trend to continue.
Also, wealth is skewing toward more rational people over time, and wealthy people have political bargaining power.
Why would someone who has no experience with these kinds of issues suddenly grab it out of the space of all possible ideas he could be thinking about?
Political leaders have policy advisors, and policy advisors listen to scientists. I expect that AI safety issues will percolate through the scientific community before long.
It seems like you are claiming that AI safety does not require a substantial shift in perspective (I'm taking this as the reason why you are optimistic, since my cynicism tells me that expecting a drastic shift is a rather improbable event) - rather, we can just keep chugging along because nice things can be "expected to increase over time", and this somehow will result in the kind of society we need. [...]
I agree that AI safety requires a substantial shift in perspective — what I'm claiming is that this change in perspective will occur organically substantially before the creation of AI is imminent.
Also, I really don't know where you got that last idea - I can't imagine that most people would find AI safety more glamorous than, you know, actually building a robot.
You don't need "most people" to work on AI safety. It might suffice for 10% or fewer of the people who are working on AI to work on safety. There are lots of people who like to be big fish in a small pond, and this will motivate some AI researchers to work on safety even if safety isn't the most prestigious field.
If political leaders are sufficiently rational (as I expect them to be), they'll give research grants and prestige to people who work on AI safety.
Things were a lot worse than anyone knew: according to newly declassified NSA journals, Russia almost invaded Yugoslavia in the 1950s, which would have triggered a war. The Cuban Missile Crisis could easily have gone hot, and several times early-warning systems were triggered by accident. Of course, estimating what could have happened is quite hard.
I don't think so. If one person or group in a democracy decides to suspend elections, there are plenty of other groups (opposition parties, constitutional monarchs, the media, other politicians in the same party) who can object. By contrast, it's definitional of dictatorship that it comes down to one person's say-so.
If one person tries to rule a dictatorship without regard to the interests of any other person, he soon faces a coup d'état.
Also see Fareed Zakaria's
There is not even a system whereby a benevolent dictator, if you happened to install one, could ensure a succession of future benevolent dictators.
Of course there is. The benevolent dictator can groom a successor.
If they choose their successor by genetics, that's monarchy.
North Korea isn't a monarchy. Monarchy is about sovereignty claims in addition to being about succession.
Nerva, Trajan, Hadrian, Antoninus Pius, and Marcus Aurelius were all capable administrators, and their reigns were largely peaceful. But then they were followed by Commodus. Benevolent dictatorship with succession by training and adoption was tried, and as long as it worked, it worked. But the one failure was a dramatic one, considered by some to be the start of the fall of the Roman Empire.
The market as a whole also hedges against risk, and this affects asset prices. For example if there are two assets with equal expected returns but one is more correlated with the market return than the other, then the less correlated asset should have a higher price because it's more useful for hedging. (See capital asset pricing model for details.) The upshot is that you can't naively derive revealed beliefs without taking this into account. (And maybe introduce additional assets to your prediction market to figure out how correlated the participants believe the various bets are to the market return? There are probably papers about this but I'm too lazy to search for them.)
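For reference, the relationship being appealed to is the standard CAPM pricing equation (nothing here is specific to prediction markets):

```latex
E[R_i] = R_f + \beta_i \left( E[R_m] - R_f \right),
\qquad
\beta_i = \frac{\mathrm{Cov}(R_i, R_m)}{\mathrm{Var}(R_m)}
```

An asset whose returns are less correlated with the market return has a lower $\beta_i$, so it only needs to offer a lower expected return to be held; equivalently, it commands a higher price. That is the hedging effect that distorts naive readings of market odds.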
Intrade sells Fed FOMC rate binary options. I think if the market were more liquid, it would absolutely get used to hedge interest-rate risk, and as people tend to be long bonds and therefore short rates, there might be a bias towards predicting higher rates as people seek the protection. But the total holdings and the degree of hedging would be very opaque. Policy prediction markets could have similar problems.
Surely 7. should come before 6., and probably 4. and 5. as well.
No: IRAs avoid a tax which the other investments don't, so money in an IRA is worth more than money outside one. There is a cost, namely that you have restricted access to it.
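A hedged numeric sketch of that point, comparing a taxable account (gains taxed every year) with a tax-deferred IRA-style account (gains compound untaxed, taxed once at withdrawal). The return, horizon, and flat tax rate are illustrative assumptions, and the model is a deliberate simplification of actual IRA rules:

```python
principal = 10_000.0
annual_return = 0.07   # assumed nominal return
years = 30
tax_rate = 0.25        # assumed flat rate on investment gains

# Taxable account: gains are taxed annually, lowering the effective growth rate.
taxable = principal * (1 + annual_return * (1 - tax_rate)) ** years

# Tax-deferred account: gains compound untaxed; tax the growth once at the end
# (a simplification of a traditional IRA).
gains = principal * (1 + annual_return) ** years - principal
deferred = principal + gains * (1 - tax_rate)

print(f"Taxable account after {years} years:      {taxable:,.0f}")
print(f"Tax-deferred account after {years} years: {deferred:,.0f}")
```

Even with the same nominal tax rate, deferral wins, because compounding acts on the pre-tax return for the whole horizon.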
I would have expected a few more metaevaluator bots which clean up the environment to prevent detection. Of course this can be an expensive strategy, and it certainly is in programmer time. A metaevaluator bot would probably have broken several recursion-detection strategies, and perhaps even survived set! being defined out of the language.