All of provocateur's Comments + Replies

Brevity is the soul of wit. Why is LW so obviously biased towards long-windedness?

0Shmi
But an enemy of knowledge transfer.
8orthonormal
Have you ever tried to read a math textbook that cherishes being short and concise? They're nigh unreadable unless you already know everything in them. When you're discussing simple concepts that people have an intuitive grasp of, then brevity is better. When there's an inferential distance involved, not so much.
1Grognor
I don't know about others, but it helps me understand an idea when I read a lot of words about it. I think it causes my subconscious to say "this is an important idea!" better than reading a concise, densely-packed explanation of a thing, even if only once. This is a guess; I don't know the true cause of the effect, but I know the effect is there.
0thomblake
wit != rationality. Also, I'm pretty sure the bias, if it exists, runs in the opposite direction. We even like calling our summaries "tl;dr".

this secret area contains hundreds of times as much content as the actual game.

How can a part be bigger than the whole? You probably want to say "as the rest of the game" instead. It took me a bit of effort to understand what you were trying to say.

0John_Maxwell
Thanks for the feedback! I implemented your suggestion. No idea why you were voted down.

All these arguments for the danger of AGI are worthless if the team that creates it doesn't heed the warnings.

I had known about this site for years, but only recently noticed that it has a "discussion" section (this was before the front-page redesign), and that the dangers of AGI are even on-topic here.

Not that I'm about to create an AGI: the team that does will probably be even busier, and even less willing to be talked down to with "you need to learn to think", etc.

Just my 2e-2

I can confidently say that many of the ideas in this community have done much to better my life

Could you give some examples?

I'm an atheist, and believe that my mind can be seen as simply "software" running on my brain. However, that "software" also believes that "I" is not just the software, but the brain and perhaps even the rest of the body.

If someone cloned my body atom for atom, "I" feel like it wouldn't really be me, just an illusion fooling outside observers. Same for mind uploads.

Do any other atheists feel the same way?

As to cryonics, that's obviously not quite the same as a mind upload, but it feels like a greyish area, if the origina... (read more)

2A1987dM
And also, among other things, software outside your brain (including in other brains). My brain might be the main part of my identity, but not its only part: other parts of it are in the rest of my body, in other people's brains, in my wardrobe, on my laptop's hard disk, on Facebook's servers, in my university's database, in my wallet, etc. etc. etc. Many of those things cryonics couldn't preserve.
8[anonymous]
"Look at any photograph or work of art. If you could duplicate exactly the first tiny dot of color, and then the next and the next, you would end with a perfect copy of the whole, indistinguishable from the original in every way, including the so-called 'moral value' of the art itself. Nothing can transcend its smallest elements" - CEO Nwabudike Morgan, "The Ethics of Greed", Sid Meier's Alpha Centauri
7[anonymous]
I briefly thought that way, thought about it more, and realized that that view of identity was incoherent. There are lots of thought experiments you can pose in which that picture of identity produces ridiculous, unphysical, or contradictory results. I decided it was simpler to conclude that it is the information, and not the meat, that dictates who I am. So long as the computational function being enacted is equivalent, it's me. Period. No ifs, ands, or buts.

If someone cloned my body atom for atom, "I" feel like it wouldn't really be me

It would be you as much as the you of a second ago is you.

6lsparrish
Yes, many do. A part of me does. However, I'm pretty sure that part of me is wrong (i.e. falling for an intuitive trap), because it doesn't square with my other, more powerful intuitions about identity. For example, there is the way I anticipate my decisions today affecting my actions tomorrow. This feels identity-critical, yet the effect they have would be no different on a materially continuous future self than on a cloned or simulated future self.

The cells might be repaired instead of being destroyed and replaced. It depends on what is ultimately feasible / comes soonest in the tech tree. Many cryonicists have expressed a preference for this, some saying that uploading has equal value to death for them. Also, if we reach the point of perfect brain preservation in your lifetime, the brain could be implanted into a cloned body (perhaps a patchwork of printed organs) without requiring repairs. This would be the least death-like version of cryonics short of actually keeping the entire body from experiencing damage.

Note that some cell loss and replacement is going on already in the ordinary course of biology. Presumably one of the future enhancements available would be to make your brain more solid-state, so that you wouldn't be "dying and getting replaced" every few months.

I'm not sure I follow. If the world is a simulation, there are probably all kinds of copy-paste relationships between your past and future self-moments; this would just be one more to add to the pile.

However, it is a good point that if you believe your identity is conserved in the original, and you want to survive and don't value the clone's life above your own, you should precommit not to kill the original if you should ever happen to wake up as the clone (you should kill yourself as the clone instead, if it comes up as an either/or option). But at the same time as you are anticipating this decision, you would be rejecting the notion that the clone is going to be really you, and the

AGI will only be Friendly if its goals are the kinds of goals that we would want it to have

At the risk of losing my precious karma, I'll play the devil's advocate and say I disagree.

First, some definitions: a "Friendly" AI, according to Wikipedia, is one that is beneficial to humanity (not a human's buddy or pet). "General" in AGI means not problem-specific (as opposed to narrow AI).

My counterexample is an AI system that lacks any motivations, goals or actuators. Think of an AIXI system (or, realistically, a system that approximates it), and subtrac... (read more)

0Kaj_Sotala
See Dreams of Friendliness as well as this comment.
0John_Maxwell
I think you've got a good point, and folks have been voted up for saying the same thing in the past...
0JamesAndrix
That would make (human[s] + predictor) into an optimization process that is powerful beyond the human[s]'s ability to steer. You might see a nice-looking prediction, but you won't understand the value of the details, or the value of the means used to achieve it. (These would be called trade-offs in a goal-directed mind, but nothing weighs them here.) It also won't be reliable to look for models in which you are predicted not to hit the Emergency Regret Button, as that may just find models in which your regret evaluator is modified.

SPOILER ALERT don't read if you are yet to see Eagle Eye.

I doubt that Terminator introduced any new important ideas. Its notability is like that of David Chalmers' recent paper: it brought old ideas to the attention of the broader public.

In Eagle Eye, the AI was spoofing its own sensors at some point. Again, not a novel idea per se, but pretty great for a movie. In the beginning of the movie, IIRC, there was some Bayesian updating going on based on different sources of evidence.

0lukeprog
Yeah, so, those works aren't included because they didn't introduce any new important ideas I can think of.

Since you are including works of fiction, I think Terminator (1984) is worth mentioning. This is what most people think of when it comes to AI risk.

By the way, my personal favorite, when it comes to AI doing what it wasn't intended to, would have to be Eagle Eye (2008). It's got everything: hard take-off and wireheading of sorts, second-guessing humans, decent acting.

3lukeprog
Which new important ideas were contributed by Terminator or Eagle Eye that had not been contributed previously?

The embedded YouTube video seems to end rather abruptly. Did the iPhone battery run out?