Comment author: SodaPopinski 03 September 2015 05:00:33PM 3 points [-]

Can we use the stock market itself as a useful prediction market in any way? For example can we get useful information about how long Moore's law type growth in microprocessors will likely continue based on how much the market values certain companies? Or are there too many auxiliary factors, so that reverse engineering anything interesting from price information is hopeless?

Comment author: SodaPopinski 11 June 2015 03:27:02AM 1 point [-]

What do we really understand about the perception of time speeding up as we get older? Every time I have seen it brought up, one of two explanations is given. The first is that time seems to speed up because we have fewer novel experiences, which in turn leads to fewer new memories being created. Supposedly, our feeling of time passing depends on how many new memories we form in a given time frame, so with fewer memories we feel time is speeding up.

The other explanation I have seen is that time speeds up because each new year is a smaller percentage of your life up to that point. For example, it is easier to distinguish a 2kg weight from a 4kg weight than a 50kg weight from a 52kg weight. The argument goes that something similar holds for our perception of time passing.
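To make the second argument concrete, here is one way it could be formalized (my own illustrative sketch, not something from a particular source): if each year's subjective length is proportional to the fraction it adds to your life so far, then subjective time grows only logarithmically with calendar age.

```latex
% Illustrative sketch only: assume year $n$ of life has subjective length
% proportional to $1/n$ (its share of life so far). Then the subjective time
% elapsed between ages $a$ and $b$ is roughly
\[
  \int_a^b \frac{dt}{t} \;=\; \ln b - \ln a ,
\]
% so, under this assumption, ages 10--20 and 20--40 would feel equally long,
% since $\ln(20/10) = \ln(40/20)$.
```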

These arguments both feel sketchy to me. Is there a more rigorous investigation into this question?

Comment author: SodaPopinski 28 February 2015 05:49:19PM 3 points [-]

The problem is the mental construct of "I". Yes, we can't help but believe that there is feeling, thinking, subjective experience, etc. The problem is that our brain seems to naturally construct a concept of "I" which is a sort of owner of these subjective experiences that persists over time. This construct, while deeply ingrained and probably useful, is not consistent with physical reality. This can be seen either with teleporter-type thought experiments or, to some extent, with real-life cases of brain trauma (for example, in Oliver Sacks's or Ramachandran's books). Our brains care about protecting certain potential future entities, which, barring crazy technology or anthropic scenarios, are easy to specify, but there is not going to be a coherent general principle for deciding when we should count potential future entities as being us.

Comment author: caffemacchiavelli 11 February 2015 02:30:37PM 3 points [-]

Even if you, personally, happen to die, you've still got a copy of yourself in backup that some future generation will hopefully be able to reconstruct.

Is there a consensus on the whole brain backup identity issue?

I can't say that trying to come up with intuition pumps about life extension has made me less confused about consciousness, but it does seem fairly obvious to me that if I'm backing up my brain, I'm just creating a second version who shares my values and capacities, not actually extending the life of version A. Being able to have both versions alive at the same time seems a clear indicator that they're not the same, and that when source A dies, copy B just goes on with their life and doesn't suddenly become A.

Unfortunately, I'm not sure the same argument doesn't apply to one brain at different points in time, too. If you atomize my brain now and put it back together later, am I still A or is A dead? What about coma, sleep, or any other interruption of consciousness?

It's all kind of a blur to me.

Comment author: SodaPopinski 11 February 2015 04:36:51PM 3 points [-]

The idea of a persistent personal identity has no physical basis. I am not questioning consciousness, only saying that the mental construct that there is an ownership to some particular sequence of conscious feelings over time is inconsistent with reality (as I would argue all the teleporter-type thought experiments show). So, in my view, all that matters is how much a certain entity X decides (or instinctually feels) it should care about some similar-seeming later entity Y.

Comment author: SodaPopinski 09 February 2015 11:03:20PM 12 points [-]

Are there things we should be doing now to take advantage of future technology? What I mean is something like banking umbilical cord fluid for potential future stem cell uses. Another example: if we had taken a lot of pictures of a historical building which is now gone, we could use modern-day photogrammetry to make a 3D model of it. A potential current example: suppose we recorded a ton of our day-to-day vocal communication. Then, some day in the future, a new machine learning algorithm could make use of the data. So what I am looking for is whether there are any potential 'missed opportunities' of this type we should be considering (I posted a similar question on the futurology subreddit).

Comment author: SodaPopinski 15 December 2014 04:50:10PM 2 points [-]

How do Bostrom-type simulation arguments normally handle nested simulations? If our world spins off simulations A and B, and B spins off C and D, then how do we assign the probabilities of finding ourselves in each of those? Also troubling to me is what happens if you have a world that simulates itself, or simulations A and B that simulate each other. Is there a good way to think about this?
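One naive way to make the question concrete (purely an illustration under an assumed uniform weighting, not a claim about how the simulation argument actually handles this): enumerate the tree of worlds and put a flat distribution over them.

```python
# Toy illustration: a uniform self-location prior over a tree of simulations.
# This is only one possible assumption; weighting by compute, or halving weight
# per nesting level, would give different answers, and the scheme breaks down
# entirely for cyclic cases (a world simulating itself, or A and B simulating
# each other).

simulates = {
    "base": ["A", "B"],   # our world spins off simulations A and B
    "A": [],
    "B": ["C", "D"],      # B spins off C and D
    "C": [],
    "D": [],
}

worlds = list(simulates)
prior = {w: 1 / len(worlds) for w in worlds}   # uniform over all worlds

print(prior)
# {'base': 0.2, 'A': 0.2, 'B': 0.2, 'C': 0.2, 'D': 0.2}
```

The point of the toy version is just to show that the answer depends entirely on which weighting assumption you pick, which is the ambiguity the question is pointing at.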

Comment author: Metus 15 December 2014 12:27:24AM 9 points [-]

I am looking to set a morning routine for myself and wanted to hear whether you have some unusual component in your morning routine that other people might benefit from.

Comment author: SodaPopinski 15 December 2014 03:30:01PM 2 points [-]

One part is writing down whatever dreams I can remember right upon waking. This has led to me occasionally experiencing lucid dreams without really trying.

Also, since I am writing down dreams anyway, it is easy to do the other writing I find beneficial: namely, the major plan for the day and gratitude stuff.

Comment author: timeholmes 05 December 2014 04:44:54PM 1 point [-]

Yes, continued development of AI seems unstoppable. But this brings up another very good point: if humanity cannot become a singleton in our search for good egalitarian shared values, what is the chance of creating FAI? After years of good work in that direction, and perhaps even success in determining a good approximation, what prevents some powerful secret entity like the CIA from hijacking it at the last minute and simply narrowing its objectives to something it determines is a "greater" good?

Our objectives are always better than the other guy's, and while violence is universally despicable, it is fast, cheap, and easy to program, and the other guy (including FAI developers) won't be expecting it. For the guy running the controls, that's friendly enough. :-)

Comment author: SodaPopinski 05 December 2014 07:18:00PM 1 point [-]

On one hand, I think the world is already somewhat close to a singleton with regard to AI (obviously it is nowhere near a singleton with regard to most other things). I mean, Google has a huge fraction of the AI talent, and the US government has a huge fraction of the mathematics talent. Then there are Microsoft, FB, Baidu, and a few other big tech companies. But every time an independent AI company gains some traction, it seems to be bought out by the big guys. I think this is a good thing, as I believe the big guys will act in their own best interest, including their interest in preserving their own lives (i.e., not ending the world). Of course, if it is easy to make an AGI, then there is no hope anyway. But if it requires companies of Google scale, then there is hope they will choose to avoid it.

Comment author: Alex123 02 December 2014 07:44:13AM 0 points [-]

Maybe people shouldn't make superintelligence at all? Narrow AIs are just fine if you consider the progress so far. Self-driving cars will be good, then applications using Big Data will find cures for most illnesses, then solve starvation and other problems by 3D printing food and everything else, including rockets to deflect asteroids. Just give it 10-20 more years. Why create dangerous SI?

Comment author: SodaPopinski 02 December 2014 01:31:49PM 0 points [-]

Totally agree, and I wish this opinion were voiced more on LW, rather than the emphasis on trying to make a friendly self-improving AI. For this to make sense, though, I think the human race needs to become a singleton, although perhaps that is what Google's acquisitions and massive government surveillance are already doing.

Comment author: SodaPopinski 02 December 2014 03:39:00AM *  4 points [-]

(Warning: brain dump, most of which is probably not new to the thinking on LW.) I think most people who take the Tegmark Level 4 universe seriously (or any of the preexisting similar ideas) get there by something like the following argument: suppose we had a complete mathematical description of the universe; then exactly what more could there be to make the thing real (Hawking's question of what breathes fire into the equations)?

Here is the line of thinking that got me to buy into it. If we ran a computer simulation, watched the results on a monitor, and saw a person behaving just like us, then it would be easy for me to interpret their world, their mind, etc. as real (even if I could never experience it viscerally, living outside the simulation). However, if we are willing to call one simulation real, then we get into a slippery slope problem, which I have no idea how to avoid, whereby any physical phenomenon implementing any program, from the perspective of some universal Turing machine, must really exist. So it seems to me that if we believe some simulation is real, there is no obvious barrier to believing every (computable) universe exists. As for whether we stop at computable universes or include more of mathematics, I am not sure anything we would call conscious could tell the difference, so perhaps it makes no difference.
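A toy version of that slippery-slope worry (my own sketch, not anything from the thread): given any sequence of distinct physical states and any target computation of the same length, you can always write down a lookup table that "interprets" the physics as running that computation, which is why "implements a program relative to some interpreter" threatens to be vacuous.

```python
# Toy sketch of the triviality worry: map an arbitrary physical state history
# onto the state history of some target computation via a lookup table.
# The "interpretation" does all the work; the physics contributes nothing.

physical_history = ["rock_t0", "rock_t1", "rock_t2", "rock_t3"]  # any distinct states

def target_computation(steps):
    """Some computation we care about; here, a simple counter."""
    state = 0
    history = [state]
    for _ in range(steps - 1):
        state += 1
        history.append(state)
    return history

computation_history = target_computation(len(physical_history))

# The trivial interpreter: pair the i-th physical state with the i-th computational state.
interpretation = dict(zip(physical_history, computation_history))

decoded = [interpretation[s] for s in physical_history]
assert decoded == computation_history
# Under this "interpreter" the rock implements the counter. The open question is
# what extra constraint (counterfactual structure, bounded interpreter complexity,
# etc.) would rule such mappings out.
```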

(Resulting beliefs + aside on decision theory) I believe in a Tegmark Level 4 universe with no reality-fluid measure (as I have yet to see a convincing argument for one), a la http://lesswrong.com/r/discussion/lw/jn2/preferences_without_existence/ . Moreover, I don't think there is any "correct" decision theory that captures what we should be doing. All we can do is pick the one that feels right with regard to our biological programming. Which future entities are us, how many copies of us there will be, who we should care about, etc. are all flaky concepts at best. Of course, my brain won't buy into the idea that I should jump off a bridge or touch a hot stove, but I think it is implausible that this will follow from any objective optimization principle. Nature didn't need a decision theory to decide whether it is a good idea to walk into a teleporter machine if two of us walk out the other side. We have our built-in shabby biological decision theory; we can innovate on it theoretically, but there is no objective sense in which some particular decision theory will be the right one for us.
