Comment author: Eitan_Zohar 12 July 2015 02:45:12PM *  2 points [-]

This is horrifying to me and I doubt that real posthumans exploring Fun Space will actually do this, just like they wouldn't eat every meal possible in order to explore Food Space. It's an unsophisticated human fetish.

Comment author: Jan_Rzymkowski 12 July 2015 10:10:44PM 0 points [-]

I don't think it is any more horrifying than being stuck in one reality, treasuring memories. It is certainly less horrifying than our current human existence, with its prospects of death, suffering, boredom, heartache, etc. Your fear seems to just be a reaction to something different from what you're used to.

Comment author: Douglas_Knight 14 June 2015 09:01:49PM 4 points [-]

Note that (2) and (3) are formal tasks where the optimizer has access to the full set of rules. My understanding is that a lot of chips designed by optimizing a simulator have been pretty lousy in the real world, either being complete failures that only worked in the simulator because of bugs, or being real solutions, but not being usefully robust.

Comment author: Jan_Rzymkowski 14 June 2015 10:24:37PM 3 points [-]

Actually, for (2) the optimizer didn't know the set of rules; it played the game as if it were a normal player, controlling only the keyboard. It in fact started exploiting "bugs" of which its creators were unaware. (E.g. in Super Mario, Mario can stomp enemies in mid-air, from below, as long as he is already falling at the moment of collision.)
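The dynamic described here, a blind optimizer stumbling onto an unintended collision rule, can be sketched with a toy game. Everything below (the game, the "stomp" bug, the random search) is invented purely for illustration; the actual system played real NES games through an emulator and searched over controller inputs.

```python
import random

# Toy side-scroller: the player moves right one tile per tick and may press
# "jump". An enemy sits at x == 5. The collision rule contains a "bug" the
# designer never intended: touching the enemy is fatal UNLESS the player is
# falling (negative vertical velocity), in which case the enemy is stomped.
def play(jump_ticks, length=10):
    y, vy, score = 0, 0, 0
    for x in range(length):
        if x in jump_ticks and y == 0:
            vy = 2                      # start a jump from the ground
        y += vy
        if y > 0:
            vy -= 1                     # gravity
        else:
            y, vy = 0, 0                # landed
        if x == 5:                      # enemy tile
            if vy < 0:                  # falling onto it: stomp bonus
                score += 100
            elif y == 0:
                return score            # walked into the enemy: run ends
        score += 1                      # survived one more tick
    return score

# Blind random search over button presses, knowing nothing about the rules.
def optimize(seed=0, trials=2000):
    rng = random.Random(seed)
    best, best_score = set(), play(set())
    for _ in range(trials):
        cand = {t for t in range(10) if rng.random() < 0.3}
        s = play(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score
```

The search never sees the rule "falling beats enemies"; it only sees scores, yet it reliably converges on jumping just before the enemy, exactly the shape of exploit described above.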

Comment author: MathiasZaman 14 June 2015 07:42:41PM *  10 points [-]

Slime mold can be used to map subway routes.

Edit: Markets can also be seen as a non-human optimizing actor, even if the smallest parts are human.

Comment author: Jan_Rzymkowski 14 June 2015 09:02:24PM *  6 points [-]

I am more interested in optimizations, where an agent finds a solution vastly different from what humans would come up with, somehow "cheating" or "hacking" the problem.

Slime mold and soap bubbles produce results quite similar to those of human planners. In any case, it would be hard to strongly outperform humans (that is, to find a surprising solution) at problems like finding minimal trees - our visual cortices are quite specialized in this kind of task.
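For concreteness, the minimal-tree problem meant here can be sketched with a minimum spanning tree (a true Steiner tree may add extra junction points and is NP-hard to compute exactly; the MST is just the simplest stand-in, and it is the kind of network that humans, slime mold, and soap films all approximate well). A minimal sketch using Prim's algorithm, on made-up station coordinates:

```python
from math import dist

# Minimum spanning tree over a set of "stations" via Prim's algorithm:
# repeatedly attach the out-of-tree point closest to any in-tree point.
def prim_mst(points):
    in_tree = {0}
    edges = []
    while len(in_tree) < len(points):
        u, v = min(((i, j) for i in in_tree
                    for j in range(len(points)) if j not in in_tree),
                   key=lambda e: dist(points[e[0]], points[e[1]]))
        edges.append((u, v))
        in_tree.add(v)
    return edges

# Hypothetical stations at the corners of a 4 x 3 rectangle.
stations = [(0, 0), (4, 0), (4, 3), (0, 3)]
mst = prim_mst(stations)
total = sum(dist(stations[u], stations[v]) for u, v in mst)
```

On these four corners the MST has length 10 (two short sides plus one long side); a Steiner solution with two added junctions would be slightly shorter, which is roughly what soap films find.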

Comment author: Jan_Rzymkowski 04 June 2015 11:56:44PM 4 points [-]

Let me add that most scientists treat conferences as a form of vacation funded by academia or grant money. So there is a strong bias toward finding reasons for their necessity and/or benefits.

Comment author: g_pepper 01 May 2015 01:31:47PM *  2 points [-]

Well, glitch or not, I'm glad to have it; I would not want to be an unconscious automaton! As Socrates said, "The life which is unexamined is not worth living."

However, it remains to be seen whether consciousness is an automatic by-product of general intelligence. It could be the case that consciousness is an evolved trait of organic creatures with an implicit, inexact utility function. Perhaps a creature with an evolved sense of self and a desire for that self to continue to exist is more likely to produce offspring than one with no such sense of self. If this is the reason that we are conscious, then there is no reason to believe that an AGI will be conscious.

Comment author: Jan_Rzymkowski 01 May 2015 07:44:40PM 2 points [-]

"I would not want to be an unconscious automaton!"

I strongly doubt that such a sentence bears any meaning.

Comment author: g_pepper 01 May 2015 04:57:33AM 7 points [-]

I agree; the OP is anthropomorphic; in fact, there is no reason to assume that an AGI paperclip maximizer would think like we do. In fact, in Superintelligence, Bostrom avoids any assumption that an AGI would have subjective conscious experiences. An unconscious AGI paperclip maximizer would presumably not be troubled by the fact that a paperclip is just an ill-defined configuration of matter, or by anything else, for that matter.

Comment author: Jan_Rzymkowski 01 May 2015 01:24:34PM 1 point [-]

Well, humans have existentialism despite its having no utility. It just seems like a glitch you end up with when your consciousness/intelligence reaches a certain level. (My reasoning is this: high intelligence needs to analyze many "points of view", many counterfactuals; technically, these end up internalized to some degree.) A human exercising his general intelligence - a process that lets him reproduce better - ends up wondering about the meaning of life. That could in turn drastically decrease his willingness to reproduce, but it is overridden by other imperatives. In the same way, I believe an AGI would have subjective conscious experiences - as a form of glitch of general intelligence.

Comment author: JoshuaZ 19 April 2015 10:54:49PM 3 points [-]

First everything in any practical simulation is always and everywhere an approximation. An exact method is an enormously stupid idea - a huge waste of resources.

We haven't seen anything like evidence that our laws of physics are only approximations at all. If we're in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails) or b) they are engaging in an extremely detailed simulation.

The optimal techniques only simulate down to the quantum level when a simulated scientist/observer actually does a quantum experiment. In an optimal simulated world, stuff literally only exists to the extent observers are observing or thinking about it.

And our simulating entities would be able to tell that someone was doing a deliberate experiment how?

The limits of optimal approximation appear to be linear in observer complexity - using output sensitive algorithms.

I'm not sure what you mean by this. Can you expand?

The upshot of these results is that one cannot make a detailed simulation of an object without using at least as many resources as the object itself.

Ultra-detailed accurate simulations are only high value for quantum level phenomena. Once you have a good model of the quantum scale, you can percolate those results up to improve your nano-scale models, and then your micro-scale models, and then your milli-meter scale models, and so on.

Only up to a point. It is going to be very difficult to percolate simulations up from the micro to the millimeter scale for many issues, for example, and the less detail in a simulation, the more likely it is that someone notices a statistical artifact in weakly simulated data.

We already can simulate entire planets using the tiny resources of today's machines. I myself have created several SOTA real-time planetary renderers back in the day.

Again, the statistical artifact problem comes up, especially when there are extremely subtle issues going on, such as the different (potential) behavior of neutrinos.

Your basic point, that I may be overestimating the difficulty of simulations, may be valid; but since simulations don't explain the Great Filter, for the other reasons I discussed, this causes an update in the direction of us being in a simulation without really helping to explain the Great Filter at all.

Comment author: Jan_Rzymkowski 21 April 2015 07:34:20PM 0 points [-]

If we're in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails)

It doesn't have to be a simulation of ancestors; we may be an example of any civilization, life, etc. While our laws of physics seem complex and weird (given the macroscopic effects they generate), they may actually be very primitive in comparison to the parent universe's physics. We cannot possibly estimate the computational power of the parent universe's computers.

Comment author: jacob_cannell 20 April 2015 04:45:12PM 0 points [-]

A huge swarm/sphere of solar collectors uses up precious materials (silicon, etc) that are far more valuable to use in ultimate compact reversible computers - which don't need much energy to sustain anyway.

Comment author: Jan_Rzymkowski 21 April 2015 07:27:18PM 0 points [-]

You seem to be bottom-lining. Earlier you gave cold reversible-computing civs reasonable probability (and doubt); now you seem to treat it as an almost sure scenario for civ development.

Comment author: Jan_Rzymkowski 18 April 2015 07:07:44PM 1 point [-]

Does anybody know if dark matter can be explained as artificial systems based on known matter? It fits the description of a stealth civilization well, if there is no way to nullify gravitational interaction (which seems plausible). It would also explain why there is so much dark matter - most of the universe's mass has already been used up by alien civs.

Comment author: Jan_Rzymkowski 13 April 2015 07:22:05PM 9 points [-]

Overscrupulous chemistry major here. Both Harry and Snape are wrong. By the Pauli exclusion principle, an orbital can host only two electrons. But at the same time, there is no outermost orbital - valence shells are only an oversimplified description of the atom. Actually, so oversimplified that no one should bother writing it down. Speaking of the HOMOs (highest [in energy] occupied molecular orbitals) of the carbon atom, each hosts only one electron.
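For concreteness, the counting implied by the Pauli exclusion principle can be spelled out: each orbital holds at most two electrons (of opposite spin), and shell n contains n² orbitals across its subshells, so it holds at most 2n² electrons. A quick sketch of that bookkeeping:

```python
# Orbital bookkeeping from the Pauli exclusion principle: shell n has
# subshells l = 0 .. n-1, each subshell has 2l + 1 orbitals, and each
# orbital holds at most two electrons (opposite spins).
def orbitals_in_shell(n):
    return sum(2 * l + 1 for l in range(n))   # equals n**2

def max_electrons(n):
    return 2 * orbitals_in_shell(n)           # equals 2 * n**2
```

This reproduces the familiar shell capacities 2, 8, 18, ... - but, as noted above, shells are a bookkeeping device, not the outermost physical reality of the atom.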
