All of Jan_Rzymkowski's Comments + Replies

Does anybody know of a mood-tracking app that asks you about your mood at a random time of day? (A simple rating of your mood, and maybe a small question about whether something happened that day to influence it.) All the ones I found required me to open the app myself, which meant I'd forget to rate my mood, or when I was down I just couldn't be bothered. It would be perfect if it just popped up a daily alert, made me choose something, and then disappeared.

2Elo
"how are you feeling" other than not activating after a phone restart will popup at fixed intervals. I solved this by having duplicate icons everywhere so I often remember to reopen it. I have it set at 2 hours
  1. It must kill you (at least make you unconscious) on a timescale shorter than that on which you can become aware of the outcome of the quantum coin-toss
  2. It must be virtually certain to really kill you, not just injure you.

Both seem to be at odds with the Many-Worlds Interpretation. In an infinite number of worlds it will merely injure you, and/or you will become aware of the outcome first, due to some malfunction.

0PeterCoin
I'm not sure what you're trying to draw from here, but I don't think MWI requires an infinite number of possibilities. What matters, in my interpretation of Tegmark's view, is that there are many, many more cases (by infinite or finite measure) where it works properly than cases where it doesn't. Example: 499,999,999,999,000 cases cause death without observer experience; 500,000,000,000,000 cases do nothing; 1,000 cases represent equipment failures. We should expect that the subject can predict for himself that the do-nothing case will occur with extremely high probability.
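(A toy calculation of the conditioning described here, using the illustrative branch counts from the comment - a sketch, not part of the original argument:)

```python
# Branch counts are the made-up illustrative numbers from the comment above.
death = 499_999_999_999_000      # device fires; no observer experience afterwards
nothing = 500_000_000_000_000    # device does nothing
failure = 1_000                  # equipment failures (observer survives, possibly injured)

# Condition on there being a surviving observer at all:
surviving = nothing + failure
p_nothing_given_observer = nothing / surviving
print(p_nothing_given_observer)  # ~0.999999999998: "do nothing" is almost certain for the subject
```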

Isn't this a formalization of Pascal's mugging? It also reminds me of the human-sacrifice problem: if we don't sacrifice a person, the Sun won't come up the next day. We have no proof, but how can we check?

A good AI (not only Friendly, but useful to the full extent) would understand the intention, and hence answer that luminous aether is not a valid way of explaining the behavior of light.

After years of confusion and long hours of trying to figure it out, in a brief moment I finally understood how it is possible for cryptography to work, and how Alice and Bob can share secrets despite a middleman listening from the start of their conversation. And of course now I can't imagine not having gotten it earlier.
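(For readers chasing the same realization: one standard construction that delivers exactly this is a Diffie-Hellman-style key exchange - the comment doesn't name it, so this is only a guess at what clicked. A toy sketch with tiny, insecure numbers:)

```python
# Toy Diffie-Hellman key exchange (illustrative only - numbers far too small to be secure).
# Everything actually transmitted (p, g, A, B) is visible to the eavesdropper,
# yet Alice and Bob still end up with the same shared secret.
p, g = 23, 5                  # public prime and generator, agreed in the open

a = 6                         # Alice's private number, never transmitted
b = 15                        # Bob's private number, never transmitted

A = pow(g, a, p)              # Alice sends A = g^a mod p
B = pow(g, b, p)              # Bob sends B = g^b mod p

secret_alice = pow(B, a, p)   # (g^b)^a mod p
secret_bob = pow(A, b, p)     # (g^a)^b mod p
assert secret_alice == secret_bob   # same secret on both ends, never sent over the wire
```

Recovering the secret from the public values alone requires solving a discrete logarithm, which is what keeps a passive listener out (an active man-in-the-middle additionally requires authentication to defeat).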

Is there a foundation devoted to the promotion of cryonics? If not, it would probably be very desirable to create one. Popularizing cryonics could save an incredible number of existences, so many people who support cryonics would probably be willing to donate money toward more organized promotion. Not to mention the personal gains: the more popular cryonics becomes, the lower the costs and the better the logistics.

If you are, or know, someone who supports cryonics and has experience or knowledge of non-profit organisations or professional promotion, please consider this.

3Andy_McKenzie
Yes. This is part of the mission of the Brain Preservation Foundation. The American Cryonics Society is also in this space, I believe.

I'm sorry for the overly light-hearted presentation. It seemed suited to presenting what is, to simplify greatly, a form of fun.

A Waker's reality doesn't really rely on dreams, but on waking in new realities and on a form of paradoxical commitment, in equal measure, to the reality she currently lives in and to the random reality she would wake up in.

Its rationale is purely a step in exploring new experiences, a form of meta-art. Once human and transhuman needs have been fulfilled, posthumans would (and here at least I expect my future self to) search for entirely new ways of existing, new subjectiv... (read more)

Disclaimer: This comment may sound very crackpottish. I promise the ideas in it aren't as wonky as they seem, but it would be too hard to explain them properly in such a short space.

By living your life in this way, you'd be divorcing yourself from reality.

Here comes the notion that in posthumanism there is no definite reality. Reality is a product of experiences and of how your choices influence those experiences. In posthumanism, however, you can modify it freely. What we call reality is a very local phenomenon.

Anyhow, it's not the case that your computing inf... (read more)

Well, creating new realities at will and switching between them is an example of a Hub World. And I expect that would indeed be the first thing the new posthumans would go for. But this type of existence is stripped of many restrictions that, in a way, make life interesting and give it structure. So I expect some posthumans (among them, a future me) to create curated copies of themselves, which would gather entirely new experiences, like the Waker's subjectivity. (Their experiences would be reported to some top-level copy.)

You see, a Waker doesn't... (read more)

There are, of course, many possible variants. The one I focus on is largely solipsistic, where all the people are generated by an AI. Keep in mind that the AI needs to fully emulate only a handful of personas, and they're largely recycled in the transition to a new world. (Option 2, then.)

I can understand your moral reservations; we should, however, keep the distinction between a real instantiation and an AI's persona. Imagine the reality-generating AI as a skilful actor and writer. It generates a great number of personas with different stories, personalities and apparent i... (read more)

0gjm
My reservations aren't only moral; they are also psychological: that is, I think it likely (whether or not I am "right" to have the moral reservations I do, whether or not that's even a meaningful question) that if there were a lot of Wakers, some of them would come to think that they were responsible for billions of deaths, or at least to worry that they might be. And I think that would be a horrific outcome.

When I read a good book, I am not interacting with its characters as I interact with other people in the world. I know how to program a computer to describe a person who doesn't actually exist in a way indistinguishable from a description of a real ordinary human being. (I.e., take a naturalistic description such as a novelist might write, and just type it into the computer and tell it to write it out again on demand.) The smartest AI researchers on earth are a long way from knowing how to program a computer to behave (in actual interactions) just like an ordinary human being. This is an important difference.

It is at least arguable that emulating someone with enough fidelity to stand up to the kind of inspection our hypothetical "Waker" would be able to give (let's say) at least dozens of people requires a degree of simulation that would necessarily make those emulated-someones persons. Again, it doesn't really matter that much whether I'm right, or even whether it's actually a meaningful question; if a Waker comes to think that it does, then they're going to be seeing themselves as a mass-murderer.

[EDITED to add: And if our hypothetical Waker doesn't come to think that, then they're likely to feel that their entire life involves no real human interaction, which is also very very bad.]

I don't think it is any more horrifying than being stuck in one reality, treasuring memories. It is certainly less horrifying than our current human existence, with its prospects of death, suffering, boredom, heartache, etc. Your fear seems to just be of something different from what you're used to.

3Baughn
But you're always stuck in one reality. Let's take a step back, and ask ourselves what's really going on here. It's an interesting idea, for which I thank you; I might use it in a story. But... By living your life in this way, you'd be divorcing yourself from reality. There is a real world, and if you're interacting solely with these artificial worlds you're not interacting with it. That's what sets off my "no way, no how" alert, in part because it seems remarkably dangerous; anything might happen, your computing infrastructure might get stolen from underneath you, and you wouldn't necessarily know.
0Tem42
I don't see how making our past less memorable is desirable -- you might choose to fade certain memories, but in general there's no obvious benefit to making all memories weaker. It seems that you would be destroying things (memories) that we (apparently) valued, and doing it for no particular reason. I can see that if you got really really bored you might like to cycle through variations on your favorite realities without losing novelty, but in that case it seems like you would want to try almost everything else first... you are basically giving up on personal progress in favor of hedonism. You might also question, once you've reached the point of being a preferential waker (that is, you aren't doing it as some sort of therapy, but because you honestly prefer it), if personal identity over 'wakes' is a real thing anymore.

Actually, for (2) the optimizer didn't know the set of rules; it played the game as if it were a normal player, controlling only the keyboard. It in fact started exploiting "bugs" of which its creators were unaware. (E.g. in Super Mario, Mario can stomp enemies in mid-air, from below, as long as at the moment of collision he is already falling.)

4Douglas_Knight
It knows the rules in the sense that the game is built into the optimizer. There's a reason "time travel" is in the title of the paper.

I am more interested in optimizations where an agent finds a solution vastly different from what humans would come up with, somehow "cheating" or "hacking" the problem.

Slime molds and soap bubbles produce results quite similar to those of human planners. Anyhow, it would be hard to strongly outperform humans (that is, to find a surprising solution) at minimal-tree-type problems - our visual cortices are quite specialized for this kind of task.

Let's add here that most scientists treat conferences as a form of vacation funded by academia or grant money, so there is a strong bias toward finding reasons for their necessity and/or benefits.

4IlyaShpitser
Hi, you have no possible way to know this.
4Richard_Kennaway
That has not been my experience. Visiting foreign parts is icing on the cake, but no more than that. There may be a few exceptions, such as a large conference that a colleague of mine once went to in Hawaii. I heard that the lecture theatres were thinly attended.

"I would not want to be an unconscious automaton!"

I strongly doubt that such a sentence bears any meaning.

2[anonymous]
.

Well, humans have existentialism despite it having no utility. It just seems like a glitch you end up with when your consciousness/intelligence reaches a certain level. (My reasoning is this: high intelligence requires analysing many "points of view", many counterfactuals; technically, these end up internalized to some extent.) A human trying to excel at his general intelligence, which is a process allowing him to reproduce better, ends up wondering about the meaning of life. That could in turn drastically decrease his willingness to reproduce, but it is overridden by imperatives. In the same way, I believe an AGI would have subjective conscious experiences - as a form of glitch of general intelligence.

3g_pepper
Well, glitch or not, I'm glad to have it; I would not want to be an unconscious automaton! As Socrates said, "The life which is unexamined is not worth living." However, it remains to be seen whether consciousness is an automatic by-product of general intelligence. It could be the case that consciousness is an evolved trait of organic creatures with an implicit, inexact utility function. Perhaps a creature with an evolved sense of self and a desire for that self to continue to exist is more likely to produce offspring than one with no such sense of self. If this is the reason that we are conscious, then there is no reason to believe that an AGI will be conscious.

If we're in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails)

It doesn't have to be a simulation of ancestors; we may be an example of any civilisation, life, etc. While our laws of physics seem complex and weird (for the macroscopic effects they generate), they may actually be very primitive in comparison with the parent universe's physics. We cannot possibly estimate the computational power of the parent universe's computers.

0JoshuaZ
Yes, but at that point this becomes a completely unfalsifiable or evaluatable claim and even less relevant to Filtration concerns.

You seem to be bottom-lining. Earlier you gave cold reversible-computing civs reasonable probability (and doubt); now you seem to treat it as an almost sure scenario for civ development.

0jacob_cannell
No, I don't see it as a sure scenario, just one that has much higher probability mass than Dyson spheres. Compact, cold structures are far more likely than large hot constructions - due to speed-of-light and thermodynamic considerations.

Does anybody know if dark matter can be explained as artificial systems based on known matter? It fits the description of a stealth civilization well, if there is no way to nullify gravitational interaction (which seems plausible). It would also explain why there is so much dark matter - most of the universe's mass has already been used up by alien civs.

5[anonymous]
You can't get rid of the waste heat without it being visible. You can't even sequester it - you always need to dump it to a location of lower temperature.
2RomeoStevens
I like this quote from Next Big Future: " looking on planets and around stars could be like primitives looking into the best caves and wondering where the advanced people are."
5James_Miller
But then why not all of it? Why leave anything for civs like ours?

Overscrupulous chemistry major here. Both Harry and Snape are wrong. By the Pauli exclusion principle, an orbital can host only two electrons. But at the same time, there is no outermost orbital - valence shells are only an oversimplified description of the atom. Actually, so oversimplified that no one should bother writing it down. Speaking of the HOMOs of the carbon atom (the highest [in energy] occupied molecular orbitals), each holds only one electron.

The notion that (neutral) Carbon has 4 electrons to share and prefers to have 4 electrons shared with it is so oversimplified that no one should bother writing it down?

That is, umm, a surprising viewpoint to me.

My problem with such examples is that they seem more like Dark Arts emotional manipulation than actual argument. What your mind hears is that, if you don't believe in God, people will come to your house and kill your family - and if you believed in God they wouldn't do that, because they'd somehow fear God. I don't see how this is anything but an emotional trick.

I understand that sometimes you need to cut out the nuance in morality thought experiments, like equating taxes with being threatened with kidnapping if you don't regularly pay a racket. ... (read more)

0Ben Pace
So, if we were to follow that line of argument, should we not allow philosophy on television? Is it too dangerous for the public to be exposed to? :)

Can anybody point me to what the choice of interpretation changes? From what I understand it is an interpretation, so there is no difference in what Copenhagen and MWI predict, and falsification isn't possible. But for some reason MWI seems to be highly esteemed on LW - why?

-2Shmi
Mostly because Eliezer wrote a number of highly emotional and convincing posts about it.
-1DanielLC
Because Copenhagen introduces additional rules that act in ways counter to everything we know about physics and gives no experimental evidence to justify them.

A small observation of mine: while watching out for the sunk cost fallacy, it's easy to go too far and assume that making the same purchase again is the rational thing. Imagine you bought a TV and on the way home you dropped it, destroying it beyond repair. Should you just go buy the same TV, since the cost is sunk? Not necessarily - when you were buying the TV the first time, you were richer by the price of the TV. Since you are now poorer, spending that much money might not be optimal for you.

2emr
In principle, absolutely. In practice, trying to fit many observed instances to a curved utility-of-money function will result in an implausibly sharp curve. So unless the TV purchase amounts to a large chunk of your income, this probably won't match the behavior. Rabin has a nice example of this for risk aversion, showing that someone who wasn't happy taking a -100:110 coin flip due to a utility-money curve would have an irrationally large risk aversion for larger amounts.
2gjm
If the price of the TV is a small enough fraction of your wealth and there isn't any special circumstance that makes your utility depend in a weird way on wealth (e.g., there's a competition this weekend that you want to enter, it's only open to people who can demonstrate that their net wealth is at least $1M, and your net wealth is very close to $1M), then your decision to buy the TV shouldn't be altered by having lost it. Some TVs are quite expensive and most people aren't very wealthy, so this particular case might well be one in which being one TV's cost poorer really should change your decision. [EDITED to fix a trivial typo.]
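(A toy numerical illustration of the wealth effect discussed in this thread - made-up numbers, with logarithmic utility purely for concreteness:)

```python
import math

# Concave (diminishing-marginal) utility of money plus a fixed bonus for owning the TV.
# All numbers are hypothetical and chosen so that the effect is visible; as noted above,
# the price has to be a sizeable chunk of your wealth for this to matter in practice.
def utility(wealth, has_tv, tv_bonus=0.4):
    return math.log(wealth) + (tv_bonus if has_tv else 0.0)

price = 800

def worth_buying(wealth):
    return utility(wealth - price, has_tv=True) > utility(wealth, has_tv=False)

print(worth_buying(3000))  # True: before the loss, buying the TV was the better deal
print(worth_buying(2200))  # False: one TV-price poorer, rebuying is no longer optimal
```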

Big thanks for pointing me to Sleeping Beauty.

It is a solution to me - it doesn't feel like suffering, just as a few minutes of teasing before sex doesn't feel that way.

2Shmi
Sure, if you feel that "cold winter + hope" × 1001 > "hawaiian beach" × 1000 + "cold winter with disappointment after 11:00", then it's a solution.

What I had in mind isn't a matter of manually changing your beliefs, but rather making an accurate prediction about whether or not you are in a simulated world (which is about to become distinct from the "real" world), based on your knowledge of the existence of such simulations. It could just as well be that you asked a friend to simulate 1000 copies of you at that moment, and to teleport you to Hawaii as 11 AM strikes.

2Chris_Leong
This problem is more interesting than I thought when I first read it (as Casebash). If you decide not to create the simulation, you are indifferent about having made the decision, as you know that you are the original and that you were always going to have this experience. However, if you take this decision, then you are thankful that you did, as otherwise there is a good chance that simulated you wouldn't exist and be about to experience a beach.
0casebash
Firstly, I'm not necessarily convinced that simulating a person necessarily results in consciousness, but that is largely irrelevant to this problem, as we can simply pretend that you are going to erase your memory 1000 times.

If you are going to simulate yourself 1000 times, then the chance, from your perspective, of being transported to Hawaii is 1000/1001. This calculation is correct, but it isn't a paradox. Deciding to simulate yourself doesn't change what will happen; there isn't an objective probability that jumps from near 0 to 1000/1001. The 0 was produced under a model where you had no tendency to simulate this moment, and the 1000/1001 was produced under a model where you are almost certain to simulate this moment. If an observer (with the same information you had at the start) could perfectly predict that you would make this decision to simulate, then they would report the 1000/1001 odds both before and after the decision. If they had 50% belief that you would make this decision before, then this would result in approx. 500/1001 odds before.

So, what is the paradox? If it is that you seem to be able to "warp" reality so that you are almost certainly about to teleport to Hawaii, my answer explains that: if you are about to teleport, then it was always going to happen anyway. The simulation was already set up.

Or are you trying to make an anthropic argument? That if you make such a decision and then don't appear in Hawaii, it is highly unlikely that you will be uploaded at some point? This is the Sleeping Beauty problem. I don't 100% understand this yet.

By "me" I consder this particular instance of me, which is feeling that it sits in a room and which is making such promise - which might of course be a simulated mind.

Now that I think about it, it seems to be a problem with a cohesive definition of identity and the notion of "now".

Anthropic measure (magic reality fluid) measures what the reality is - it's like how an outside observer would see things. Anthropic measure is more properly possessed by states of the universe than by individual instances of you.

It doesn't look like a helpful notion and seems very tautological. How do I observe this anthropic measure - how can I make any guesses about what the outside observer would see?

Even though you can make yourself expect (probability) to see a beach soon, it doesn't change the fact that you actually still have to sit through th

... (read more)
0Manfred
The same way you'd make such guesses normally - observe the world, build an implicit model, make interpretations, etc. "How" is not really an additional problem, so perhaps you'd like examples and motivation.

Suppose that I flip a quantum coin, and if it lands heads I give you cake and tails I don't - you expect to get cake with 50% probability. Similarly, if you start with 1 unit of anthropic measure, it gets split between cake and no-cake 0.5 to 0.5. Everything is ordinary. However, consider the case where you get no cake, but I run a perfect simulation of you in which you get cake in the near future. At some point after the simulation has started, your proper probability assignment is 50% that you'll get cake and 50% that you won't, just like in the quantum coin flip. But now, if you start with 1 unit of anthropic measure, your measure never changes - instead a simulation is started in the same universe that also gets 1 unit of measure!

If all we cared about in decision-making was probabilities, we'd treat these two cases the same (e.g. you'd pay the same amount to make either happen). But if we also care about anthropic measure, then we will probably prefer one over the other. It's also important to keep track of anthropic measure as an intermediate step to getting probabilities in nontrivial cases like the Sleeping Beauty problem. If you only track probabilities, you end up normalizing too soon and too often.

I mean something a bit more complicated - that probability is working fine and giving sensible answers, but that when probability measure and anthropic measure diverge, probabilities no longer fit into decision-making in a simple way, even though they still really do reflect your state of knowledge. There are many kinks in what a better system would actually be, and hopefully I'll eventually work out some kinks and write up a post.

What is R? LWers use it very often, but Google search doesn't provide any answers - which isn't surprising, it's only one letter.

Also: why is it considered so important?

4ChristianKl
Out there in the world a lot of people use software like Excel for doing their data processing. They want to have tables where they see their data. That has the advantage that you have a nice GUI that normal people can easily learn. However, some tasks take a lot of time with tables, and Excel automatically reformats your data when it thinks it knows better than you. Excel also doesn't handle having 500,000 rows in your data well. Excel doesn't make pretty customizable plots. Often the choice is between doing a task for 15 minutes of manual labor in Excel or writing 5 lines in R that take you 15 minutes of reading the documentation to find the right parameters.

As a result, in a lot of professional contexts where statistics are needed, people use specialised statistics software. That might be SPSS, Stata, SAS or R. SPSS, Stata and SAS all need a license, while R is free software. State-of-the-art statistics is often done in R, and if someone invents a new statistical method they often publish an R package along with their paper to allow other people to use their shiny new technique.

It's worth noting that statisticians aren't primarily programmers, and R is built for statisticians. It has a lot of powerful magic functions with 20 optional parameters. These days there are also libraries like Pandas for Python that allow you to do most of the things that R can do while at the same time having a beautiful language.
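(As a concrete taste of the "5 lines instead of 15 minutes of manual labor" workflow described above, here is a sketch using pandas, the Python library the comment mentions; the file and column names are made up:)

```python
import pandas as pd

# Load a dataset that would make Excel struggle, summarise it by group, and save the result.
df = pd.read_csv("measurements.csv")                       # hypothetical input file
summary = df.groupby("condition")["value"].agg(["mean", "std", "count"])
summary.to_csv("summary.csv")
print(summary)
```

The equivalent in R would be about as short; the point is that either beats redoing the same clicks in a spreadsheet every time the data changes.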
3Lumifer
It's a programming language and environment which is widely used in the statistical community, in part because it has a LOT of statistics-related libraries available for it. Historically, it's an open-source re-implementation of the programming language S developed at Bell Labs in mid-70s.
5cousin_it
R is a piece of software for running statistical analyses on data and getting nice graphs. It's free, has a lot of stuff built in and is quite pleasant to use.

I'd say the only requirement is spending some time living on Earth.

Thanks, I'll get to sketching drafts. But it'll take some time.

There's also an important difference in their environment. Underwater (oceans, seas, lagoons) seems much poorer. There are no trees underwater to climb, whose branches or sticks could be used for tools; you can't use gravity to devise traps; there's no fire; much simpler geology; little prospect for farming; etc.

CCC310

I wonder - if an underwater civilisation were to arise, would they consider an open-air civilisation impossible?

"You're stuck crawling around in a mere two dimensions, unless you put a lot of evolutionary effort into wings, but then you have terrible weight limits on the size of the brain; you can't assign land to kelp farms and then live in the area above it, so total population is severely limited; and every couple of centuries or so a tsunami will come and wipe out anything built along the coast..."

9Strange7
Kelp and fish can be farmed.

Or, conversely, the Great Filter doesn't prevent civilizations from colonising galaxies, and we were colonised a long time ago. Hail Our Alien Overlords!

And I'm serious here. The zoo hypothesis seems very conspiracy-theory-ish, but generalised curiosity is one of the requirements for developing a civ capable of galaxy colonisation, a powerful enough civ can sacrifice a few star systems for research purposes, and it seems the most efficient way of simulating biological evolution or civ development is actually having a planet develop on its own.

2A1987dM
See the last paragraph of this.

It's not impossible that human values are themselves conflicted. The mere existence of an AGI would "rob" us of that, because even if the AGI refrained from doing all the work for humans, it would still be "cheating" - the AGI could do all of it better, so human achievement is still pointless. And since we may not want to be fooled (to be made to think that this is not the case), it is possible that in this regard even the best optimisation must result in a loss.

Anyway - I can think of at least two more ways. The first is creating games that vastly simulate the "joy of work". The second, my favourite, is humans becoming part of the AGI - in other words, the AGI sharing parts of its superintelligence with humans.

The PD is not a suitable model for MAD. It would be if a pre-emptive attack on an opponent guaranteed his utter destruction and eliminated the threat. But that's not the case - even with a carefully orchestrated attack, there is a great chance of retaliation. Since the military advantage of a pre-emptive attack is not preferred over the absence of war, this game doesn't necessarily point to the defect-defect scenario.

This could probably be better modeled with some form of iterated PD in which the number of iterations and the values of the outcomes depend on decisions made along the way - which I guess would be non-linear.
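(A rough sketch of the contrast, with payoff numbers made up purely for concreteness - not a serious model of either game:)

```python
# Row player's payoffs in two symmetric 2x2 games.
# In the classic PD, "attack" (defect) strictly dominates "peace" (cooperate).
# In the MAD-style game, a first strike still draws retaliation, so it does not dominate.

def dominates(game, action, other):
    """True if `action` does at least as well as `other` against every opposing
    choice, and strictly better against at least one."""
    cols = game[action].keys()
    return (all(game[action][c] >= game[other][c] for c in cols)
            and any(game[action][c] > game[other][c] for c in cols))

pd_game = {                                   # classic Prisoner's Dilemma ordering
    "attack": {"attack": 1,   "peace": 5},
    "peace":  {"attack": 0,   "peace": 3},
}
mad_game = {                                  # retaliation makes striking first almost as bad as mutual war
    "attack": {"attack": -10, "peace": -8},
    "peace":  {"attack": -9,  "peace": 3},
}

print(dominates(pd_game, "attack", "peace"))   # True: defection dominates, hence defect-defect
print(dominates(mad_game, "attack", "peace"))  # False: no incentive to strike first; peace-peace is stable
```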

It wasn't my intent to give a compelling definition. I meant to highlight which features of the internet I find important and novel as a concept.

I'm not willing to engage in a discussion where I defend my guesses and attack your prediction. I have neither sufficient knowledge nor the desire to do that. My purpose was to ask for any stable basis for AI-development predictions and to point out one possible bias.

I'll use this post to address some of your claims, but don't treat it as an argument about when AI will be created:

How are Ray Kurzweil's extrapolations empirical data? If I'm not wrong, all he takes into account is computational power. Why would that be enough to allow for AI creation? By 1900 the world had e... (read more)

0FeepingCreature
Don't underestimate the rapid progress that can be achieved with very short feedback loops. (In this case, probably rapid progress into a wireheading attractor, but still.)
3Izeinwinter
No, but a sufficiently morally depraved research program can certainly do a hard take-off based on direct simulations and "Best guess butchery" alone. Once you have a brain running in code, you can do experimental neurosurgery with a reset button and without the constraints of physicality, biology or viability stopping you. A thousand simulated man-years of virtual people dying horrifying deaths later... This isn't a very desirable future, but it is a possible one.

This whole debate makes me wonder whether we can have any certainty in AI predictions. Almost all of it is based on personal opinions, highly susceptible to biases. And even people with huge knowledge of these biases aren't safe. I don't think anyone can trace their prediction back to empirical data; it all comes from our minds' black boxes, to which biases have full access and which we can't examine consciously.

While I find Mark's prediction far from accurate, I know that might be just because I wouldn't like it. I like to think that I would have some i... (read more)

-3[anonymous]
Honestly the best empirical data I know is Ray Kurzweil's extrapolations, which place 2045 generically as the date of the singularity, although he places human-level AI earlier, around 2029 (obviously he does not lend credence to a FOOM). You have to take some care in using these predictions, as individual technologies eventually hit hard limits and leave the exponential portion of the S-curve, but molecular and reversible computation shows that there is plenty of room at the bottom here.

2070 is a crazy late date. If you assume the worst case that we will be unable to build AGI any faster than direct neural simulation of the human brain, that becomes feasible in the 2030's on technological pathways that can be foreseen today. If you assume that our neural abstractions are all wrong and that we need to do a full simulation including the inner working details of neural cells and transport mechanisms, that's possible in the 2040's. Once you are able to simulate the brain of a computational neuroscientist and give it access to its own source code, that is certainly enough for a FOOM.

I'm not sure what you're saying here. That we can assume AI won't arrive next month because it didn't arrive last month, or the month before last, etc.? That seems like shaky logic. If you want to find out how long it will take to make a self-improving AGI, then (1) find or create a design for one, and (2) construct a project plan. Flesh that plan out in detail by researching and eliminating as much uncertainty as you are able to, and fully specify dependencies. Then find the critical path.

Edit: There's a larger issue which I forgot to mention: I find it a little strange to think of AGI arriving in 2070 vs the near future as comforting. If you assume the AI has evil intentions, then it needs to do a lot of computational legwork before it is able to carry out any of its plans. With today's technology it's not really possible to do that and remain hidden. It could take over a botnet, sure,

Yeah. Though actually it's more of a simplified version of a more serious problem.

One day you may give an AI a precise set of instructions which you think will do good - like finding a way to cure diseases, but without harming patients, without harming people for the sake of research, and so on. And you may find that your AI seems perfectly Friendly, but that wouldn't yet mean it actually is. It may simply have learned human values as a means of securing its existence and gaining power.

EDIT: And after gaining enough power it might just as well help improve human health even more - or reprogram the human race to believe unconditionally that diseases have been eradicated.

But Musk starts by mentioning "Terminator". There's plenty of SF literature showing the danger of AI much more accurately, though none of it is as widely known as "Terminator".

"AI may have unexpected dangers" seems too vague a statement for me to expect Musk to be thinking along the lines of LWers.

7Luke_A_Somers
Terminator is way more popular than the others. 2001? Not catastrophic enough. I, Robot (the movie)? Not nearly as popular or classic, and it features a comparatively easy solution. Terminator has '99% of humanity wiped out, let's really REALLY avoid this scenario' AND 'computers following directions exactly, not accomplishing what was intended'.

It's not only unlikely - what's much worse is that it points to the wrong reasons. It suggests that we should fear an AI trying to take over the world or eliminating all people, as if an AI would have an incentive to do that. It stems from nothing more than anthropomorphisation of AI, imagining it as some evil genius.

This is very bad, because smart people can see that this reasoning is flawed and get the impression that these are the only arguments against unbounded development of AGI. While reverse stupidity isn't smart, it's much harder to find good reasons why we s... (read more)

Punoxysm100

I think you are looking into it too deeply. Skynet as an example of AI risk is fine, if cartoonish.

Of course, we are very far away from strong AIs and therefore from existential AI risk.

Ummm... He points to the "Terminator" movie. Doesn't that mean he's just going along with the usual "AI will revolt and enslave the human race... because it's evil!" line, rather than actually realising what existential risk involving AI is?

I've started to use this as a rule of thumb: when somebody mentions Skynet, he's probably not worth listening to. Skynet really isn't a reasonable scenario for what may go wrong with AI.

6Kaj_Sotala
Correct me if I'm wrong, but weren't Skynet's "motives" always left pretty vague? IIRC we mostly only know that it was hooked up to a lot of military tech, then underwent a hard takeoff and started trying to eliminate humanity. And "if you have a powerful AI that's hooked up to enough power that it has a reasonable chance of eliminating humanity's position as a major player, then it may do that for the sake of the instrumental drives for self-preservation and resource acquisition" seems like a reasonable enough argument / scenario to me.
solipsist160

I don't fault using incorrect analogies. It's often easier to direct people to an idea from inaccurate but known territory than along a consistently accurate path.

JB: That's amazing. But you did just invest in a company called Vicarious Artificial Intelligence. What is this company?

MUSK: Right. I was also an investor in DeepMind before Google acquired it and Vicarious. Mostly I sort of – it's not from the standpoint of actually trying to make any investment return. It's really, I like to just keep an eye on what's going on with artificial intelligence.

... (read more)
8ShardPhoenix
While that particular scenario may not be likely, I'm increasingly inclined to think that people being scared by Terminator is a good thing from an existential risk perspective. After all, Musk's interest here could easily lead to him supporting MIRI or something else more productive.

While yoga seems like a salutary way of spending time, I wouldn't call it a sport. Clear win-states and competition seem crucial to sport.

And that's why a sport for rationalists is something so hard to come up with and so valuable - it needs to combine the happiness that comes from the effort to be better than others with battling the sense of superiority that often comes with winning.

The sense of group superiority is, to me, the most revolting thing about most sports.

Now I think I shouldn't have mentioned hindsight bias; it doesn't really fit here. I'm just saying that some events are more likely to become famous, like: a) a layman posing an extraordinary claim and ending up being right, or b) a group of experts being spectacularly wrong.

If some group of experts met in the 1960s and posed very cautious claims, the chances are small that it would end up widely known - or end up in the above paper. Analysing famous predictions is bound to turn up many overconfident predictions; they're just flashier. But that doesn't yet mean most predictions are overconfident.

4Stuart_Armstrong
Very valid point. But overconfidence is almost universal, and estimates where selection bias isn't an issue (such as polls at conferences) seem to show it as well.

Isn't this article highly susceptible to hindsight bias? For example, the reason the authors analyse Dreyfus's prediction is that he was somewhat right. If he weren't, the authors wouldn't include that data point. Therefore it skews the data, even if that is not their intention.

It's hard to draw valuable assessments from the text when it is naturally prone to highlighting the mistakes of experts and the correct predictions of laymen.

3Stuart_Armstrong
The Dartmouth conference was very wrong, and is also famous. Not sure hindsight points in a particular direction.

It reminds me greatly of my making conlangs (artificial languages). While I find it creative, it takes enormous amounts of time just to create a simple draft, and arduous work to make satisfactory material. And all I'd get is two or three people calling it cool and showing only a little interest. And I always know I'll get bored with the language in a few days and never get so far as to translate simple texts.

And yet every now and then I get an amazing idea and can't stop myself from "wasting" hours, planning and writing about some conlang... (read more)

Stuart, it's not about control groups, but that such a test would actually come out negative for blind people, who are intelligent. A blind AI would also test negative, so how is that useful?

Actually, the physics test is not about getting closer to humans, but about creating something useful. If we can teach a program to do physics, we can teach it to do other stuff. And then we're getting somewhere between narrow and real AI.

Ad 4: "Elite judges" is quite arbitrary. I'd rather iterate the test, each time keeping only those judges who recognized the program correctly, or some variant of that (e.g. the top 50% with the most correct guesses). This way we select those who go beyond simply conducting a conversation and actually look for differences between program and human. (And as seen from transcripts, most people just try to have a conversation rather than looking for flaws.) The drawback is that, if the program has a set personality, judges could just stick to identifying that personality rather t... (read more)
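(A minimal sketch of the judge-filtering step proposed above; judge names and verdicts are entirely hypothetical:)

```python
# `verdicts` maps each judge to whether they identified the program correctly
# in each conversation of a round (all data made up for illustration).
def keep_top_half(verdicts):
    """Keep the judges whose accuracy this round is in the top 50%; they judge the next round."""
    accuracy = {judge: sum(v) / len(v) for judge, v in verdicts.items()}
    cutoff = sorted(accuracy.values(), reverse=True)[len(accuracy) // 2 - 1]
    return {judge for judge, acc in accuracy.items() if acc >= cutoff}

round_1 = {
    "judge_a": [True, True, False, True],
    "judge_b": [False, True, False, False],
    "judge_c": [True, False, True, True],
    "judge_d": [False, False, True, False],
}
print(keep_top_half(round_1))  # judge_a and judge_c (75% each) go on to the next iteration
```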

2NancyLebovitz
Physics problems are an interesting test-- you could check for typical human mistakes. You could throw in an unsolvable problem and see whether you get plausibly human reactions.
3Stuart_Armstrong
So we wouldn't use blind people in the human control group. We'd want to get rid of any disabilities that the AI could use as an excuse (like the whole 13 year-old foreign boy). As for excluding AIs... the Turing test was conceived as a sufficient, but not necessary, measure of intelligence. If AI passes, then intelligent, not the converse (which is much harder).
0RobinZ
Speaking of the original Turing Test, the Wikipedia page has an interesting discussion of the tests proposed in Turing's original paper. One of the possible reads of that paper includes another possible variation on the test: play Turing's male-female imitation game, but with the female player replaced by a computer. (If this were the proposed test, I believe many human players would want a bit of advance notice to research makeup techniques, of course.) (Also, I'd want to have 'all' four conditions represented: male & female human players, male human & computer, computer & female human, and computer & computer.)

You're right. I went way too far in claiming equivalence.

As for the non-identity problem - I have trouble answering it. I don't want to defend my idea, but I can think of an example where one brings up non-identity and comes to the wrong conclusion: drinking alcohol while pregnant can cause a fetus to develop brain damage. But such grave brain damage means this baby is not the same one that would have been created if its mother hadn't drunk. So it is questionable whether the baby would benefit from its mother's abstinence.

Little correction:

Phosphorus is highly reactive; pure phosphorus glows in the dark and may spontaneously combust. Phosphorus is thus also well-suited to its role in adenosine triphosphate, ATP, your body's chief method of storing chemical energy.

Actually, the above isn't true. Reactivity is a property of a molecule, not of an element. Elemental phosphorus is prone to being oxidised by atmospheric oxygen, producing lots of energy. ATP is reactive because its anhydride bonds are fairly unstable - but no change of oxidation state takes place. That it contains phos... (read more)

"if you failed hard enough to endorse coercive eugenics"

This might be found a bit too controversial, but I was tempted to come up with a not-so-revolting coercive eugenics system. Of course it's not needed if there is technology for correcting genes, but let's say we only have circa-1900 technology. It has nothing to do with the point of Eliezer's note; it's just my musing.

Coercive eugenics isn't strictly immoral in itself. It is a way of protecting people not yet born from genetic flaws - possible diseases, etc. But even giving them less than optimal... (read more)

0PetjaY
"But can you have YOUR child, while eugenics prevent you from breeding? Not in genetic sense, but it seems deeply flawed to base parent-child relation simply on genetic code. It's upbringing that matters. Adopted child is in any meaningful way YOUR child." Treating people not genetically your children as if they were is a big minus in our evolutionary game these days. It also helps bad behaviour (making children and letting others raise them), so i´d say that it manages to be bad both for yourself and population, though the second part depends on why the child was given for adoption. In general improving gene pool would be a good idea, but finding collective solutions for it that don´t cause more bad than good seems hard. Also if our evolution gets rid of the heuristic that sex=children=good which isn´t working anymore and replaces it with something like "acts that lead to you children=good" we then get people spending their money smarter, which increases reproductive success of richer people who tend to be >average intelligent.
2Jiro
I don't believe that killing someone is equivalent to letting him die. Why should I believe that making someone stupid is equivalent to letting him be stupid? Also, cheating on someone to improve the health of the offspring results in a non-identity problem since the offspring is not the same one that would have been created without cheating, so whether the offspring is benefited is questionable.