Do we know whether quantum mechanics could rule out acausal trade between partners outside each other's light cones? Perhaps it is impossible to model someone so far away precisely enough to get a utility gain out of an acausal trade? I started thinking about this after reading the Wikipedia article on the 'Free will theorem': https://en.wikipedia.org/wiki/Free_will_theorem .
The whole point of acausal trading is that it doesn't require any causal link. I don't think there's any rule saying it's inherently hard to model people a long way away.
Imagine being an AI running on some high-quality silicon hardware that splits itself into two halves, and one half falls into a rotating black hole (but has engines that let it avoid the singularity, at least for a while). The two are now causally disconnected (well, the one outside can send messages to the one inside, but not vice versa) but still have very accurate models of each other.
I have the Gmail, Google Drive, Google Calendar, Facebook, and Facebook Messenger apps on my mobile (iPhone).
Can I streamline (reduce the number of) my apps without losing functionality?
This sounds like an XY problem - what are you trying to achieve by reducing the number of apps?
I recommend that before setting out to beat the market, you worry about whether you’ll be able to do as well as the market. The typical investor does worse than the market averages, usually due to buying more when the market is high than when it is low. Take a few minutes to imagine that you will be influenced by the mood of other investors to be pessimistic when the market has been doing poorly, and optimistic when the market has been doing well. Also imagine that you will have more money available to invest when the market is high than when it is low. If you’re confident that you can avoid these problems, please stop reading this post – you’re either good enough to not need my advice, or deluded enough that you ought to start somewhere else.
The efficient market hypothesis is an approximation that is good enough for many purposes, such as telling you that you shouldn't be confident that you can beat the market by much unless you've got a really good track record [1]. Many people infer from this that any effort to beat the market will be wasteful. Why do I disagree? The simplest answer is that if there were no inefficiencies in the market, the people who are making the market efficient wouldn't have incentives to continue doing so.
Diversifying across countries reduces some hard-to-measure risks. One of the most thoughtless mistakes investors make is to invest mostly in stocks of their own country, when it makes more sense to underweight the country whose economy their other income is most correlated with. Betting on one country might make some sense if you have good reason to think it will do better, but you’re more likely to do it for signaling purposes or due to availability bias.
Note that most fund managers are experts at something. But that something is typically some form of “doing what the customer asks”, not beating the market. Amateur investors who try to pay experts to beat the market usually fail by mistaking luck for skill.
I'm not convinced on the international diversification example, particularly if the best argument is "some hard-to-measure risks". Most of the time the things you want to buy are in your own country, so any diversification is taking on a large foreign exchange risk.
Rick and Morty season 2 is absolutely brilliant and hilarious. If you guys haven't watched it - you should, it's amazing.
Maybe be more specific/detailed?
Is there a difference between "x is y" and "assuming that x is y generates more accurate predictions than the alternatives"? What else would "is" mean?
<boggle> Are you saying the model with the currently-best predictive ability is reality??
Not quite - rather the everyday usage of "real" refers to the model with the currently-best predictive ability. http://lesswrong.com/lw/on/reductionism/ - we would all say "the aeroplane wings are real".
Have you ever heard of someone designing a nonagentive program that unexpectedly turned out to be agentive? Because to me that sounds like going into the workshop to build a skateboard and coming out with an F1 car.
I've known plenty of cases where people's programs were more agentive than they expected. And we don't have a good track record on predicting which parts of what people do are hard for computers - we thought chess would be harder than computer vision, but the opposite turned out to be true.
Notice the difference (emphasis mine):
A program designed to answer a question necessarily wants to answer that question
vs
...it becomes more predictive to think of it as wanting things
Is there a difference between "x is y" and "assuming that x is y generates more accurate predictions than the alternatives"? What else would "is" mean?
It's all standard software engineering.
I'm a professional software engineer, feel free to get technical.
Thanks to Turing completeness, there might be many possible worlds whose basic physics are much simpler than ours, but that can still support evolution and complex computations. Why aren't we in such a world? Some possible answers:
1) Luck
2) Our world's physics actually is simple, but we haven't figured it out yet
3) Anthropic probabilities aren't weighted by simplicity
4) Evolution requires complex physics
5) Conscious observers require complex physics
Anything else? Any guesses which one is right?
My guess is #2.