I've written an essay criticizing the claim that computational complexity means a Singularity is impossible because of bad asymptotics: http://www.gwern.net/Complexity%20vs%20AI
Still reading; minor nitpick: for point 2 you don't want to say NP (since P is in NP). It is the NP-hard problems that people would say can't be solved except on small instances (which, as you point out, is not a reasonable assumption).
It isn't just risk that explains why you might not be willing to pay more than $1 for a share that you expect to be worth $1.10 in a year's time.
First of all (rather trivially, and I am not suggesting you've overlooked it) there is inflation. That $1.10 next year is denominated in dollars that will be less valuable than today's dollar. (Assuming positive inflation rates, which is the usual situation.)
Second, there is opportunity cost. While your money is invested in the company you bought shares in, it isn't available for you to spend on other things. Hence, even after adjusting for inflation and even if there were no risk involved, if you buy an asset today and sell it in a year, you should expect to be compensated for that inconvenience by getting more for it than you pay. I think this is the main thing you've overlooked. Relevant finance term: "risk-free interest rate".
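The opportunity-cost point can be made concrete with a quick present-value calculation (the 4% risk-free rate and 2% inflation below are illustrative assumptions, not market data):

```python
def present_value(future_cash, rate, years=1):
    """Discount a future cash flow back to today at the given annual rate."""
    return future_cash / (1 + rate) ** years

risk_free = 0.04   # assumed nominal risk-free rate
inflation = 0.02   # assumed inflation rate

# Even with zero risk, $1.10 a year from now is worth less than $1.10 today:
nominal_pv = present_value(1.10, risk_free)
print(f"PV at risk-free rate: ${nominal_pv:.4f}")  # about $1.0577

# Adjusting for inflation as well gives the approximate real discount rate:
real_rate = (1 + risk_free) / (1 + inflation) - 1
print(f"real rate: {real_rate:.4%}")
```

So under these made-up rates, a risk-free claim on $1.10 next year is only worth about $1.06 today, before any risk adjustment at all.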
Third, there is growth. That company you're buying shares in presumably thinks it is actually adding value to the world through its work -- maybe they're inventing new things, or extracting resources from the ground that were previously embedded in deep rocks and no use to anyone, or trading between people with different utility functions so that everyone gains.
The second and third things there aren't additive. Growth is what makes it possible for the share to be worth more next year than this year; opportunity cost is what makes it necessary. If a business isn't able to produce value then no one will want to buy its shares.
Your first and second points make sense to me; together they make up the nominal interest rate. What I don't understand is your point about growth. The price of a stock should be determined by the adjusted future returns of the company, right? The growth you speak of should already be accounted for in our models of those future returns. So if the price is going up, that means the models are underestimating future returns, right?
People in finance tend to believe (reasonably, I think) that the stock market trends upward. I believe they mean it trends upward even after you account for the value of the risk you take on by buying stock in a company (i.e. being in the stock market is not just selling insurance). So how does this mesh with the general belief that the market is at least pretty efficient? Why would we be systematically underestimating the future returns of companies?
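The arithmetic behind the question can be sketched in a toy one-period model (the payoff distribution and the 4%/3% rate split are made-up numbers, just to show how expected payoffs, prices, and expected returns interact):

```python
# A toy one-period stock: next year it pays $120 or $90 with equal probability.
expected_payoff = 0.5 * 120 + 0.5 * 90  # = $105

risk_free = 0.04     # assumed risk-free rate
risk_premium = 0.03  # assumed compensation demanded for bearing the risk

# If the market discounts expected payoffs at risk_free + risk_premium,
# today's price sits below the risk-free discounted value:
price = expected_payoff / (1 + risk_free + risk_premium)

# Holding the stock then has an expected return above the risk-free rate,
# even though nobody is underestimating the payoff:
expected_return = expected_payoff / price - 1
print(f"price={price:.2f}, expected return={expected_return:.2%}")
```

In this sketch the upward drift equals the discount rate baked into the price, so a positive trend is consistent with the payoff models being exactly right.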
How nearsighted are you (in diopters)?
About 20/50; I don't know if that can be unambiguously converted to diopters. I measure my performance by sitting at a constant 20 feet away, and when I am over 80% correct I shrink the font on the chart a little. I can currently read a slightly smaller font than what corresponds to 20/50 on an eye chart.
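For tracking purposes, Snellen fractions like 20/50 do convert cleanly to logMAR (though, as noted, not to diopters, which depend on the eye's optics rather than acuity alone). A small sketch:

```python
import math

def snellen_to_logmar(distance, denominator):
    """Convert a Snellen fraction (e.g. 20/50) to logMAR.
    logMAR 0.0 corresponds to 20/20; larger values are worse."""
    return math.log10(denominator / distance)

print(snellen_to_logmar(20, 50))  # ~0.40
print(snellen_to_logmar(20, 20))  # 0.0
```

Since letter height on a chart scales linearly with the Snellen denominator, "a slightly smaller font than 20/50" corresponds to a slightly lower logMAR score, which makes it a convenient single number to track over time.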
Does anyone know of a good program for eye training? I would like to try to become a little less near-sighted by straining to make out things at the edge of my range of good vision. I know being near-sighted means my eyeball is misshapen (too long, so images focus in front of the retina), but I am hoping my brain can fix a bit of the distortion in software. Currently I am using randomly generated printed eye charts, and I have gotten a bit better over time, but printing out the charts is tedious.
This is a really fascinating idea, particularly the aspect that we can influence the likelihood we are in a simulation by making it more likely that simulations happen.
To boil it down to a simple thought experiment: suppose I am in a future where we have a ton of computing power, and I know something bad will happen tomorrow (say, I'll be fired) barring some 1/1000-likelihood quantum event. No problem: I'll just make millions of simulations of the world with me in my current state, in each of which the 1/1000 event happens tomorrow. Then I'm almost certainly in one of the simulations I'm about to make, so I'm saved!
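Under the simple self-sampling counting the thought experiment relies on, the arithmetic looks like this (treating each simulation as containing one indistinguishable copy of me, and ignoring all the anthropic subtleties):

```python
n_sims = 10**6      # simulations I create, each conditioned on the event occurring
p_event = 1 / 1000  # chance the event happens in the one real world

# If I can't tell the copies apart, I'm the real me with probability 1/(n_sims+1):
p_real = 1 / (n_sims + 1)
p_sim = n_sims / (n_sims + 1)

# In every simulation the event happens by construction; in the real world
# it happens only with probability p_event:
p_saved = p_sim * 1.0 + p_real * p_event
print(p_saved)  # ~0.999999
```

The entire force of the trick is in the first line of the calculation: the assumption that making copies shifts where I should expect to find myself.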
Random sequences aren't really interesting. Even the digits of pi are believed to contain every possible finite sequence of digits. The hard part is finding where each sequence is located: the index is typically about as long as the sequence itself!
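The "index is as long as the sequence" point is easy to check empirically on uniform random digits (a stand-in for the conjecturally normal digits of pi): the first occurrence of a k-digit target lands near position 10^k, which itself takes about k digits to write down.

```python
import random

def first_index(target, rng, limit=10**6):
    """Return the start index of the first occurrence of `target` in a
    stream of uniform random decimal digits."""
    window = ""
    for i in range(limit):
        window = (window + str(rng.randrange(10)))[-len(target):]
        if window == target:
            return i - len(target) + 1
    return None  # astronomically unlikely for short targets

rng = random.Random(0)  # fixed seed for reproducibility
indices = [first_index("2718", rng) for _ in range(200)]
mean_index = sum(indices) / len(indices)
print(mean_index)  # on the order of 10**4 for a 4-digit target
```

So locating a sequence costs roughly as much information as the sequence contains, which is why "pi contains everything" buys you nothing.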
And a sequence of digits isn't computation. A recording of your neural activity isn't conscious. It's just a static object.
If I chose ahead of time the procedure of how the thermal noise fluctuates and I seed in two instances of noise I think of as representing 2 and 3, and after a while it outputs a thermal noise I think of as 5 then I am ok calling that a computation.
But there is no computation happening there. It's just random noise. It's just as likely to output 5 as 6 or 3. There is no causal link between you inputting "2+3" and the output.
I agree with your sentiment. I am hoping, though, that one can define formally what a computation is, given a physical system. Perhaps you are on to something with the causal requirement, but I think this is hard to pin down precisely. The noise is still being caused by the previous state of the system, so how can we sensibly talk about causes in a physical system? It seems like we would be more interested in 'causes' associated with more agent-like objects, like an engine, than with formless things, like the previous state of a cloud of gas. Actually, I think Caspar's article was trying to formalize something like this, but I don't understand it that well: http://lesswrong.com/r/discussion/lw/msg/publication_on_formalizing_preference/
I'm confused about your "interpretation". Let's say I throw together a bunch of random transistors. They compute a totally random function. What "encoding" could you possibly use to interpret this as a conscious mind?
Let's just say we already know what consciousness is and what algorithm the human brain uses; maybe it's something like current neural networks. How would you find a computation of a neural network inside a random circuit?
I don't think you could. You'd need to find groups of logic gates which just happen to compute multiplication of two numbers, other groups which compute addition, and another group which saves the state. And all of these groups would have to be connected in just the right way.
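One way to quantify "very unlikely": count Boolean functions. A uniformly random function on n input bits with m output bits matches one particular target (say, a small adder) with probability 2^-(m·2^n), which collapses fast; the sketch below just evaluates that formula for illustrative circuit sizes.

```python
def p_random_function_matches(n_inputs, n_outputs):
    """Probability that a uniformly random truth table on n_inputs bits
    with n_outputs output bits equals one specific target function."""
    return 2.0 ** -(n_outputs * 2 ** n_inputs)

# A 2-bit adder: 4 input bits (two 2-bit numbers), 3 output bits (sums up to 6).
print(p_random_function_matches(4, 3))   # 2**-48, about 3.6e-15

# An 8-bit adder: 16 input bits, 9 output bits.
print(p_random_function_matches(16, 9))  # 2**-589824, underflows to 0.0 in floats
```

And an adder is one tiny component; the probability of a whole correctly wired network arising by chance is the product of many such terms.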
I think conscious minds are a very specific kind of computation. That's very unlikely to form by random chance.
Take the thermal noise generated in part of the circuit. By setting a threshold we can interpret it as a sequence 110101011, etc. Now if this sequence were enormous, we would eventually have a pixel-by-pixel description of any picture, a letter-by-letter description of every book, a state-after-state description of the tape of any Turing machine, and so on (basically a Library of Babel situation). Of course we would need a crazily long sequence for this, but there is similar noise associated with the motion of every atom in the circuit; likewise, the noise is far more complex if we don't truncate it to 0's and 1's; and finally, there are many, many encodings of our resulting strings (does 110 represent the letter A, 0101 a blue pixel, and so on).
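The thresholding step is easy to simulate, and it shows both halves of the situation: any short pattern really does appear in enough noise, but locating it requires an index that carries roughly the pattern's worth of information (the Gaussian noise model and 8-bit target below are arbitrary illustrative choices):

```python
import random

rng = random.Random(42)  # fixed seed for reproducibility

# Threshold simulated thermal noise into a bit string.
noise = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
bits = "".join("1" if x > 0.0 else "0" for x in noise)

target = "11010101"  # 8 bits: expect roughly 10_000/256 ≈ 39 occurrences
index = bits.find(target)
print(index)  # almost surely found well before the end

# But writing down *where* it is costs about as many bits as the pattern:
print(len(target), index.bit_length())
```

This is the same point as the pi example: the noise "contains" everything only relative to an external choice of threshold, encoding, and location, and those choices are where all the content lives.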
If I chose ahead of time the procedure of how the thermal noise fluctuates and I seed in two instances of noise I think of as representing 2 and 3, and after a while it outputs a thermal noise I think of as 5 then I am ok calling that a computation. But why should my naming of the noise and dictating how the system develops be required for computation to occur?
It is interesting to compare the Less Wrong and Wikipedia articles on recursive self-improvement: http://wiki.lesswrong.com/wiki/Recursive_self-improvement https://en.wikipedia.org/wiki/Recursive_self-improvement I still find the anti-foom arguments based on diminishing returns in the Wikipedia article compelling. Has there been any progress on modelling recursively self-improving systems beyond what we can find in the foom debate?
Advice solicited. Topics of interest I have lined up for upcoming posts include:
Any thoughts on which of these are of particular interest, or other ideas to delve into?
CellBioGuy, all your astrobiology posts are great; I'd be happy to read any of those. This may be off the astrobiology topic, but I would love to see a post with your opinion on the foom question. For example, do you agree with Gwern's post that there are no complexity limitations preventing runaway self-improving agents?