- It must kill you (at least make you unconscious) on a timescale shorter than that on which you can become aware of the outcome of the quantum coin-toss
- It must be virtually certain to really kill you, not just injure you.
Both seem to be at odds with the Many Worlds Interpretation. In an infinite number of those worlds it will merely injure you, and/or you will become aware beforehand, due to the same malfunction.
Isn't this the formalization of Pascal's mugging? It also reminds me of the human sacrifice problem: if we don't sacrifice a person, the Sun won't come up the next day. We have no proof, but how can we check?
A good (not only Friendly, but useful to the full extent) AI would understand the intention, and hence answer that luminous aether is not a valid way of explaining the behavior of light.
After years of confusion and lengthy hours of figuring it out, in a brief moment I finally understood how it is possible for cryptography to work and how Alice and Bob can share secrets despite a middleman listening from the start of their conversation. And of course now I can't imagine not getting it earlier.
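For anyone who wants that "aha" in concrete form, here is a minimal toy sketch (in Python, with textbook-sized numbers, so not remotely secure) of Diffie-Hellman key exchange, one standard way Alice and Bob can agree on a secret even though the middleman reads every message they send:

```python
import secrets

# Toy Diffie-Hellman key exchange with textbook-sized numbers (NOT secure).
# Everything the middleman sees is marked "public"; the shared key itself
# never travels over the channel.

p, g = 23, 5                         # public: a small prime and a generator

a = secrets.randbelow(p - 2) + 1     # Alice's private exponent
b = secrets.randbelow(p - 2) + 1     # Bob's private exponent

A = pow(g, a, p)                     # public: Alice sends this to Bob
B = pow(g, b, p)                     # public: Bob sends this to Alice

key_alice = pow(B, a, p)             # Alice computes (g^b)^a mod p
key_bob = pow(A, b, p)               # Bob computes (g^a)^b mod p

assert key_alice == key_bob          # same secret on both ends, never transmitted
```

The eavesdropper sees p, g, A and B, but recovering a or b from them is the discrete logarithm problem, which is what makes the trick work for realistically large primes.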
Is there a foundation devoted to the promotion of cryonics? If not, it would probably be very desirable to create one. Popularizing cryonics could save an incredible amount of existences, so many people supporting cryonics would probably be willing to donate money towards more organized promotion. Not to mention the personal gains: the more popular cryonics becomes, the lower the costs and the better the logistics.
If you are, or know, someone who supports cryonics and has experience or knowledge of non-profit organisations or professional promotion, please consider this.
I'm sorry for the overly light-hearted presentation. It seemed suited to presenting what is, to simplify greatly, a form of fun.
The Waker's reality doesn't really rely on dreams, but on waking in new realities and on a form of paradoxical, equal commitment both to the reality she currently lives in and to a random reality she would wake up in.
Its rationale is purely a step in exploring new experiences, a form of meta-art. Once human and transhuman needs have been fulfilled, posthumans would (and here at least I expect future me to) search for entirely new ways of existing, new subjectiv...
Disclaimer: This comment may sound very crackpottish. I promise the ideas in it aren't as wonky as they seem, but it would be too hard to explain them properly in such a short space.
By living your life in this way, you'd be divorcing yourself from reality.
Here comes the notion that in posthumanism there is no definite reality. Reality is a product of experiences and of how your choices influence those experiences. In posthumanism, however, you can modify it freely. What we call reality is a very local phenomenon.
Anyhow, it's not the case that your computing inf...
Well, creating new realities at will and switching between them is an example of a Hub World. And I expect that would indeed be the first thing the new posthumans would go for. But this type of existence is stripped of many restrictions which, in a way, make life interesting and give it structure. So I expect some of the posthumans (amongst them, me in the future) to create curated copies of themselves which would gather entirely new experiences, like the Waker's subjectivity. (Its experiences would be reported to some top-level copy.)
You see, a Waker doesn't...
There are, of course, many variants possible. The one I focus on is largely solipsistic, where all the people are generated by an AI. Keep in mind that the AI needs to fully emulate only a handful of personas, and they're largely recycled in the transition to a new world. (Option 2, then.)
I can understand your moral reservations; we should however keep the distinction between a real instantiation and an AI's persona. Imagine the reality-generating AI as a skilful actor and writer. It generates a great number of personas with different stories, personalities and apparent i...
I don't think it is any more horrifying than being stuck in one reality, treasuring memories. It is certainly less horrifying than our current human existence with its prospects of death, suffering, boredom, heartache, etc. Your fear seems to be merely about something different from what you're used to.
Actually, for (2) the optimizer didn't know the set of rules; it played the game as if it were a normal player, controlling only the keyboard. It in fact started exploiting "bugs" of which its creators were unaware. (E.g. in Super Mario, Mario can stomp enemies in mid-air, from below, as long as at the moment of collision he is already falling.)
I am more interested in optimizations where an agent finds a solution vastly different from what humans would come up with, somehow "cheating" or "hacking" the problem.
Slime mold and soap bubbles produce results quite similar to those of human planners. Anyhow, it would be hard to strongly outperform humans (that is, to find a surprising solution) at problems of the minimal-tree type: our visual cortices are quite specialized in this kind of task.
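To make "problems of the minimal-tree type" concrete, here is a minimal sketch (in Python, with made-up distances) of Kruskal's algorithm for a minimum spanning tree, the kind of network problem slime mold and soap films are said to approximate:

```python
# A minimal sketch of Kruskal's minimum spanning tree algorithm.
# Nodes and distances below are made-up example data.

def mst_kruskal(n, edges):
    """n: number of nodes, edges: list of (weight, u, v). Returns the MST edges."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):           # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                        # skip edges that would close a cycle
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# four hypothetical "cities" with pairwise distances
print(mst_kruskal(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]))
```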
Let's add here that most scientists treat conferences as a form of vacation funded by academia or grant money. So there is a strong bias towards finding reasons for their necessity and/or benefits.
"I would not want to be an unconscious automaton!"
I strongly doubt that such a sentence bears any meaning.
Well, humans have existentialism despite it having no utility. It just seems like a glitch you end up with when your consciousness/intelligence reaches a certain level. (My reasoning is this: high intelligence requires analysing many "points of view", many counterfactuals; technically, they end up internalized to some degree.) A human trying to excel at his general intelligence, which is a process allowing him to reproduce better, ends up wondering about the meaning of life. That could in turn drastically decrease his willingness to reproduce, but it is overridden by imperatives. In the same way, I believe an AGI would have subjective conscious experiences, as a form of glitch of general intelligence.
If we're in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails)
It doesn't have to be a simulation of ancestors; we may be an example of any civilisation, life, etc. While our laws of physics seem complex and weird (for the macroscopic effects they generate), they may actually be very primitive in comparison to the parent universe's physics. We cannot possibly estimate the computational power of the parent universe's computers.
You seem to be bottom-lining. Earlier you gave cold reversible-computing civs a reasonable probability (and doubt); now you seem to treat it as an almost sure scenario for civ development.
Does anybody know if dark matter can be explained as artificial systems based on known matter? It fits the description of a stealth civilization well, if there is no way to nullify gravitational interaction (which seems plausible). It would also explain why there is so much dark matter: most of the universe's mass has already been used up by alien civs.
Overscrupulous chemistry major here. Both Harry and Snape are wrong. By the Pauli exclusion principle, an orbital can host only two electrons. But at the same time, there is no outermost orbital: valence shells are only an oversimplified description of the atom. Actually, so oversimplified that no one should bother writing it down. Speaking of the HOMOs of a carbon atom (the highest [in energy] occupied molecular orbitals), each holds only one electron.
The notion that (neutral) Carbon has 4 electrons to share and prefers to have 4 electrons shared with it is so oversimplified that no one should bother writing it down?
That is, umm, a surprising viewpoint to me.
My problem with such examples is that they seem more like Dark Arts emotional manipulation than an actual argument. What your mind hears is that if you don't believe in God, people will come to your house and kill your family, and if you believed in God they wouldn't do that, because they'd somehow fear God. I don't see how this is anything other than an emotional trick.
I understand that sometimes you need to cut out the nuance in morality thought experiments, like equating taxes to being threatened with kidnapping if you don't regularly pay a racket. ...
Can anybody point me to what the choice of interpretation changes? From what I understand it is an interpretation, so there is no difference in what Copenhagen and MWI predict, and falsification isn't possible. But for some reason MWI seems to be highly esteemed on LW - why?
A small observation of mine: while watching out for the sunk cost fallacy, it's easy to go too far and assume that making the same purchase again is the rational thing. Imagine you bought a TV and on the way home you dropped it, destroying it beyond repair. Should you just go buy the same TV, since the cost is sunk? Not necessarily: when you were buying the TV the first time, you were richer by the price of the TV. Since you are now poorer, spending this much money might not be optimal for you.
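A toy sketch of the point, with made-up numbers and a concave (logarithmic) utility of wealth, just to show that the same price can be worth paying at the higher wealth but not at the lower one:

```python
import math

def u(wealth):
    return math.log(wealth)        # any concave utility of wealth will do

wealth, price, tv_worth = 1000, 300, 0.4   # tv_worth in utility units (made up)

# First purchase: the TV is worth more than the utility the price costs you.
buy_first = tv_worth > u(wealth) - u(wealth - price)              # True  (0.4 > ~0.357)
# After losing the TV you are 300 poorer, so the same price costs more utility.
buy_again = tv_worth > u(wealth - price) - u(wealth - 2 * price)  # False (0.4 < ~0.560)

print(buy_first, buy_again)
```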
Big thanks for pointing me to Sleeping Beauty.
It is a solution to me: it doesn't feel like suffering, just as a few minutes of teasing before sex doesn't feel that way.
What I had in mind isn't a matter of manually changing your beliefs, but rather making an accurate prediction of whether or not you are in a simulated world (one which is about to become distinct from the "real" world), based on your knowledge about the existence of such simulations. It could just as well be that you asked your friend to simulate 1000 copies of you in that moment and to teleport you to Hawaii as 11 AM strikes.
By "me" I consder this particular instance of me, which is feeling that it sits in a room and which is making such promise - which might of course be a simulated mind.
Now that I think about it, it seems to be a problem with a cohesive definition of identity and notion of "now".
Anthropic measure (magic reality fluid) measures what the reality is - it's like how an outside observer would see things. Anthropic measure is more properly possessed by states of the universe than by individual instances of you.
It doesn't look like a helpful notion and seems very tautological. How do I observe this anthropic measure - how can I make any guesses about what the outside observer would see?
...Even though you can make yourself expect (probability) to see a beach soon, it doesn't change the fact that you actually still have to sit through th
What is R? LWers use it very often, but Google search doesn't provide any answers - which isn't surprising, it's only one letter.
Also: why is it considered so important?
I'd say the only requirement is spending some time living on Earth.
Thanks, I'll get to sketching drafts. But it'll take some time.
There's also an important difference in their environment. Underwater (oceans, seas, lagoons) seems much poorer. There are no trees underwater to climb, no branches or sticks to use for tools, you can't use gravity to devise traps, there's no fire, much simpler geology, little prospect for farming, etc.
I wonder - if an underwater civilisation were to arise, would they consider an open-air civilisation impossible?
"You're stuck crawling around in a mere two dimensions, unless you put a lot of evolutionary effort into wings, but then you have terrible weight limits on the size of the brain; you can't assign land to kelp farms and then live in the area above it, so total population is severely limited; and every couple of centuries or so a tsunami will come and wipe out anything built along the coast..."
Or, conversely, the Great Filter doesn't prevent civilizations from colonising galaxies, and we were colonised a long time ago. Hail Our Alien Overlords!
And I'm serious here. The Zoo hypothesis seems very conspiracy-theory-y, but generalised curiosity is one of the requirements for developing a civ capable of galaxy colonisation, a powerful enough civ can sacrifice a few star systems for research purposes, and it seems that the most efficient way of simulating biological evolution or civ development is actually to have a planet develop on its own.
It's not impossible that human values are themselves conflicted. The mere existence of an AGI would "rob" us of that, because even if the AGI refrained from doing all the work for humans, it would still be "cheating": the AGI could do all of that better, so human achievement is still pointless. And since we may not want to be fooled (to be made to think that this is not the case), it is possible that in this regard even the best optimisation must result in loss.
Anyway - I can think of at least two more ways. The first is creating games that closely simulate the "joy of work". The second, my favourite, is humans becoming part of the AGI - in other words, the AGI sharing parts of its superintelligence with humans.
PD is not a suitable model for MAD. It would be if a pre-emptive attack on an opponent guaranteed his utter destruction and eliminated the threat. But that's not the case: even with a carefully orchestrated attack, there is a great chance of retaliation. Since the military advantage of a pre-emptive attack is not preferred over the absence of war, this game doesn't necessarily point to the defect-defect scenario.
This could probably be better modeled with some form of iterated PD in which the number of iterations and the values of the outcomes depend on the decisions made along the game - which I guess would be non-linear.
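As a very rough illustration of that non-linearity, here is a toy sketch (Python, with numbers I made up) in which a pre-emptive strike only sometimes disarms the opponent, and a failed strike triggers retaliation that is worse for everyone than continued peace:

```python
import random

def expected_payoff(strike_first, p_disarm=0.2, rounds=20,
                    peace=3, victory=5, mutual_destruction=-100, trials=10000):
    """Average payoff of striking first vs. never striking, under toy assumptions."""
    total = 0.0
    for _ in range(trials):
        if not strike_first:
            total += peace * rounds            # stable mutual cooperation
        elif random.random() < p_disarm:
            total += victory * rounds          # strike succeeds, no retaliation
        else:
            total += mutual_destruction        # retaliation ruins both sides
    return total / trials

print(expected_payoff(False), expected_payoff(True))
# With these numbers, not striking dominates: 60 vs roughly -60 on average.
```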
It wasn't my intent to give a compelling definition. I meant to highlight which features of the internet I find important and novel as a concept.
Sounds very reasonable.
I'm not willing to engage in a discussion where I defend my guesses and attack your prediction. I don't have sufficient knowledge, nor the desire to do that. My purpose was to ask for any stable basis for AI development predictions and to point out one possible bias.
I'll use this post to address some of your claims, but don't treat it as an argument about when AI will be created:
How are Ray Kurzweil's extrapolations empirical data? If I'm not wrong, all he takes into account is computational power. Why would that be enough to allow for AI creation? By 1900 the world had e...
This whole debate makes me wonder if we can have any certainty at all in AI predictions. Almost everything is based on personal opinions, highly susceptible to biases. And even people with huge knowledge about these biases aren't safe. I don't think anyone can trace their prediction back to empirical data; it all comes from our minds' black boxes, to which biases have full access and which we can't examine with our consciousness.
While I find Mark's prediction far from accurate, I know it might be just because I wouldn't like it. I like to think that I would have some i...
Yeah. Though actually it's more of a simplified version of a more serious problem.
One day you may give an AI a precise set of instructions which you think would do good - like finding a way of curing diseases, but without harming patients, without harming people for the sake of research, and so on. And you may find that your AI is perfectly friendly, but that wouldn't yet mean it actually is. It may simply have learned human values as a means of securing its existence and gaining power.
EDIT: And after gaining enough power it may just as well help improve human health even more - or reprogram the human race to think, unconditionally, that diseases have been eradicated.
But Musk starts by mentioning "Terminator". There's plenty of SF literature showing the danger of AI much more accurately, though none of it as widely known as "Terminator".
"AI may have unexpected dangers" seems too vague for me to expect Musk to be thinking along the lines of LWers.
It's not only unlikely - what's much worse is that it points to the wrong reasons. It suggests that we should fear AI trying to take over the world or eliminating all people, as if AI would have an incentive to do that. It stems from nothing more than anthropomorphisation of AI, imagining it as some evil genius.
This is very bad, because smart people can see that these reasonings are flawed and get the impression that these are the only arguments against unbounded development of AGI. While reverse stupidity isn't smart, it's much harder to find good reasons why we s...
I think you are looking into it too deeply. Skynet as an example of AI risk is fine, if cartoonish.
Of course, we are very far away from strong AIs and therefore from existential AI risk.
Ummm... He points to the "Terminator" movie. Doesn't that mean he's just going along with the usual "AI will revolt and enslave the human race... because it's evil!" rather than actually realising what existential risk involving AI is?
I started to use it as a good rule of thumb. When somebody mentions Skynet, he's probably not worth listening to. Skynet really isn't a reasonable scenario for what may go wrong with AI.
I don't fault using incorrect analogies. It's often easier to direct people to an idea from inaccurate but known territory than along a consistently accurate path.
...JB: That's amazing. But you did just invest in a company called Vicarious Artificial Intelligence. What is this company?
MUSK: Right. I was also an investor in DeepMind before Google acquired it and Vicarious. Mostly I sort of – it's not from the standpoint of actually trying to make any investment return. It's really, I like to just keep an eye on what's going on with artificial intelligence.
While yoga seems like a salutary way of spending time, I wouldn't call it a sport. Clear win-states and competition seem crucial to sport.
And that's why a sport for rationalists is something so hard to come up with and so valuable - it needs to combine the happiness from the effort to be better than others with battling the sense of superiority which often comes with winning.
Sense of group superiority is to me the most revolting thing about most sports.
Now I think I shouldn't have mentioned hindsight bias; it doesn't really fit here. I'm just saying that some events are more likely to become famous, like: a) a layman posing an extraordinary claim and ending up being right, b) a group of experts being spectacularly wrong.
If some group of experts met in the 1960s and posed very cautious claims, chances are small that this would end up widely known, or end up in the above paper. Analysing famous predictions is bound to turn up many overconfident predictions; they're just more flashy. But that doesn't yet mean most predictions are overconfident.
Isn't this article highly susceptible to hindsight bias? For example, the reason the authors analyse Dreyfus's prediction is that he was somewhat right. If he weren't, the authors wouldn't include that data point. Therefore it skews the data, even if that is not their intention.
It's hard to take valuable assessments from the text when it is naturally prone to highlighting mistakes of the experts and correct predictions by laymen.
It reminds me greatly of my making of conlangs (artificial languages). While I find it creative, it takes vast amounts of time just to create a simple draft, and arduous work to produce satisfactory material. And all I'd get is just two or three people calling it cool and showing only a small interest. And I always know I'll get bored with that language in a few days and never get as far as translating even simple texts.
And yet every now and then I get an amazing idea and can't stop myself from "wasting" hours, planning and writing about some conlang...
Stuart, it's not about control groups, but that such a test would actually test negatively for blind people who are intelligent. A blind AI would also test negatively, so how is that useful?
Actually, the physics test is not about getting closer to humans, but about creating something useful. If we can teach a program to do physics, we can teach it to do other stuff. And we're getting somewhere between narrow and real AI.
Re 4: "Elite judges" is quite arbitrary. I'd rather iterate the test, each time keeping only those judges who recognized the program correctly, or some variant of that (e.g. the top 50% with the most correct guesses). This way we select those who go beyond simply carrying on a conversation and actually look for differences between program and human. (And as seen from the transcripts, most people just try to have a conversation rather than looking for flaws.) The drawback is that, if the program has a set personality, judges could just stick to identifying that personality rather t...
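A minimal sketch (with hypothetical scores) of that filtering step: after each round, keep only the judges who most often identified the program correctly, then rerun the test with the stricter panel:

```python
def filter_judges(judges, keep_fraction=0.5):
    """judges: list of (name, correct_identifications). Keep the most discriminating part."""
    ranked = sorted(judges, key=lambda j: j[1], reverse=True)
    cutoff = max(1, int(len(ranked) * keep_fraction))
    return ranked[:cutoff]

panel = [("judge_a", 9), ("judge_b", 4), ("judge_c", 7), ("judge_d", 2)]
print(filter_judges(panel))   # -> [('judge_a', 9), ('judge_c', 7)]
```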
You're right. I got way too far with claiming equivalence.
As for the non-identity problem - I have trouble answering it. I don't want to defend my idea, but I can think of an example where one brings up non-identity and comes to a wrong conclusion: drinking alcohol while pregnant can cause the fetus to develop brain damage. But such grave brain damage means this baby is not the same one that would be created if his mother didn't drink. So it is questionable whether the baby would benefit from its mother's abstinence.
Little correction:
Phosphorus is highly reactive; pure phosphorus glows in the dark and may spontaneously combust. Phosphorus is thus also well-suited to its role in adenosine triphosphate, ATP, your body's chief method of storing chemical energy.
Actually, the above isn't true. Reactivity is a property of a molecule, not of an element. Elemental phosphorus is prone to being oxidised by atmospheric oxygen, producing lots of energy. ATP is reactive because anhydride bonds are fairly unstable - but no change of oxidation state takes place. That it contains phos...
"if you failed hard enough to endorse coercive eugenics"
This might be found a bit too controversial, but I was tempted to come up with a not-so-revolting coercive eugenics system. Of course it's not needed if there is technology for correcting genes, but let's say we only have circa-1900 technology. It has nothing to do with the point of Eliezer's note; it's just my musing.
Coercive eugenics isn't strictly immoral in itself. It is a way of protecting people not yet born from genetic flaws - possible diseases, etc. But even giving them less than optimal...
Does anybody know of any mood-tracking app that asks you about your mood at a random time of the day? (A simple rating of the mood, and maybe a small question about whether something happened that day influencing your mood.) All I found required me to open the app, which meant I used to forget to rate my mood, or when I was down I just couldn't be bothered. So it would be perfect if it would just pop up a daily alert, make me choose something and then disappear.