
Comment author: Viliam_Bur 28 November 2014 10:48:28AM 2 points [-]

When debating with people, you should only make one inferential step per debate. Leave the next step for tomorrow, when the person has already accepted the former step (and probably believes it was actually their idea).

my opinions are uniquely valuable and important to share with others

They are valuable and important to you. Not to the others, yet.

I do think I'm smarter, more moderate, and more creative than most.

You may be right, but this is irrelevant here. People don't automatically accept smarter people's beliefs (and that's probably a good thing).

Comment author: buybuydandavis 28 November 2014 08:59:53AM 4 points [-]

As far as I'm concerned this defeats my purpose entirely.

Your purpose is what? Why is it so important to you to enter the IDF?

Misdiagnosed Asperger's syndrome is ruining my life.

Catastrophic self talk is a sign and a generator of depression.

There has to be something wrong with this, some way that I can appeal.

No there doesn't. Sometimes, you just lose. Sometimes, you don't get what you want. It doesn't have to make sense. It doesn't have to be fair. Shit happens.

But it doesn't have to mean that your life is ruined.

I just have no idea where to turn, no idea how to do anything, and have no allies whatsoever. I feel like my life is collapsing,

Feelings of helplessness and powerlessness. Feeling alone. Feeling like something horrible is coming, and you can't prevent it.

These are all the marks of depression.

Did you read HPMOR? Do you remember when Harry figured out that he was under the Dementor's influence?

It was too late for him, he’d already sunk too far, he’d never be able to cast the Patronus Charm now—

His life was ruined.

That may be the Dementation talking rather than an accurate estimate, observed the logical part of himself, habits that had been encoded into sheer reflex, requiring no energy to activate.

Think of the Dementors’ fear as a cognitive bias, and try to overcome it the way you would overcome any other cognitive bias. Your hopeless feelings may not indicate that the situation is actually hopeless. It may only indicate that you are in the presence of Dementors. All negative emotions and pessimistic estimates must now be considered suspect, fallacious until proven valid.

Comment author: Viliam_Bur 28 November 2014 10:40:14AM *  3 points [-]

I suspect that symptoms of depression may be rather frequent among rationalists.

Most people are more optimistic than would be epistemically rational; they systematically underestimate the risks and overestimate their abilities. However, this kind of bias may be instrumentally useful: it makes people do things, even if most of those things will not bring the outcome they imagine. Because of some quirks of the human brain, people who perceive reality better often have trouble motivating themselves. This hypothesis is called depressive realism.

But I believe that is just a part of the story, and maybe the less important part. It is the part of the story that fits into the just-world narrative. You get something (precision), you lose something (motivation), the harmony in the universe is restored.

The other part of the story is that better epistemic rationality can bring you some social problems. If your friends are not interested in being epistemically rational, you will feel alone with your thoughts. If you perceive how things can go wrong, and others deny it, of course you see a danger where they don't. The danger is real, the helplessness is real (if larger-scale cooperation is needed to prevent the danger), the feeling of being alone (in your mental landscape) is real.

I am not merely using different words here. Here is the anticipated experience: if we create a rationalist community in the real world, then under the hypothesis of depressive realism, nothing should change. Putting more depressive people together should probably just make things worse, as they would confirm each other's depressive thoughts. But under the hypothesis of "epistemically rational people are alone, there are real problems, and cooperation is needed to overcome them", rational people in a rationalist community would be happier, because they wouldn't be alone, could make other people see the same problems, and could cooperate in overcoming them.

(I also consider it likely that LW readers actually come from both groups. If we brought all of them to one village, some of them would focus on debating the end-of-the-world scenarios, and some of them would focus on becoming stronger and changing the world.)

Comment author: artemium 27 November 2014 05:49:39PM *  2 points [-]

Nice blog post about AI and existential risks by my friend and occasional LW poster. He was inspired by a disappointingly bad debate on Edge.org. Feel free to share if you like it. I think it is quite a good introduction to Bostrom's and MIRI's arguments.

"The problem is harder than it looks, we don’t know how to solve it, and if we don’t solve it we will go extinct."

http://nthlook.wordpress.com/2014/11/26/why-fear-ai/

Comment author: Viliam_Bur 28 November 2014 09:55:07AM *  1 point [-]

Seems very good, but this is coming from a person familiar with the topic. I wonder how good it would seem to someone who hasn't heard about the topic yet.

Comment author: Capla 26 November 2014 10:28:16PM 1 point [-]

I was just reading through the Eliezer article. I'm not sure I understand. Is he saying that my computer actually does have goals?

Isn't there a difference between simple cause and effect and an optimization process that aims at some specific state?

Comment author: Viliam_Bur 27 November 2014 10:21:41AM *  2 points [-]

Maybe it would help to "taboo" the word "goal".

A process can progress towards some end state even without having any representation of that state. Imagine a program that takes a positive number at the beginning, and at each step replaces the current number "x" with the value "x/2 + 1/x". Regardless of the original number, the values will gradually move towards a constant. Can we say that this process has a "goal" of achieving the given number? It feels wrong to use this word here, because the constant is nowhere in the process; it just happens.
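As a minimal sketch (in Python, with an arbitrarily chosen starting value): the iteration settles on a constant, the square root of 2, even though that constant appears nowhere in the program.

```python
# Iterate x -> x/2 + 1/x from an arbitrary positive start.
# Nothing in the update rule "represents" the end state, yet the values
# converge to sqrt(2), the fixed point of the map (x = x/2 + 1/x).
x = 17.0  # any positive starting number works
for step in range(10):
    x = x / 2 + 1 / x
    print(step, x)
# After a few steps, x is approximately 1.41421356..., i.e. sqrt(2).
```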

Typically, when we speak about having a "goal" X, we mean that somewhere (e.g. in a human brain, or in a company's mission statement) there is a representation of X, and then the reality is compared with X, various paths from here to X are evaluated, and one of those paths is followed.
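For contrast, here is a toy sketch of what "having a representation of X" could look like; the target value and the update rule are purely illustrative choices, not anyone's actual design:

```python
# A toy "goal-directed" process: the goal state is explicitly stored,
# the current state is repeatedly compared with it, and a step is chosen
# to reduce the remaining distance. The specific numbers are arbitrary.
target = 2 ** 0.5   # an explicit representation of the desired end state
state = 17.0        # arbitrary starting state
while abs(state - target) > 1e-9:    # compare reality with the goal
    state += 0.5 * (target - state)  # move partway toward the stored goal
print(state)
```

Both processes end up near the same number, but only the second one contains a representation of it.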

I am saying this to make it more obvious that there is a difference between "having a representation of X" and "progressing towards X". Humans typically create representations of their desired end states, and then try to find a way to achieve them. Your computer doesn't have this, and neither does a "Tool AI" at the beginning. Whether it can create representations later depends on technical details of how specifically such a "Tool AI" is programmed.

Maybe there is a way to allow superhuman thinking even without creating representations corresponding to things normally perceived in our world. (For example, AIXI.) But even in such a case, there is a risk of having a pseudo-goal of the "x/2 + 1/x" kind, where the process progresses towards an outcome even without having a representation of it. An AI can "escape from the box" even without having a representation of "box" and "escape", if there exists a way to escape from it.

Comment author: Slider 27 November 2014 03:18:46AM 0 points [-]

I had a similar prompt for knowledge seeking in wanting to figure out how the math supports or doesn't support "converging worlds" or "mangled worlds". The notion of a converging world is also probably a noteworthy intuitive reference point in thought-space. You could have a system that is in a quantum indeterministic state, with each state having a different interaction, so that the futures of the states are identical. At that point you can drop the distinguishing of the worlds and just say that two worlds have become one. Now there is a possibility that a state left alone first splits and then converges, or that it does both at the same time. There would be a middle part that could not be "classified", which in these theories would be represented by two worlds in different configurations (and by waves in more traditional models).

Sometimes I have stumbled upon an argument about whether, if many worlds creates extra worlds, that forms a kind of growing block ontology (such as the flat splitters in the sequence post). Well, if the worlds also converge, that could keep the amount of "ontology stuff" constant, or able to vary in both directions.

I stumbled upon the fact that |psi(x)|^2 was how you calculated the evolution of a quantum state, which was like taking a second power and then essentially taking a square root by only caring about the magnitude and not the phase of the complex value. For a double slit, with L being left and R being right, it resulted in P(L+R)^2 = <L|L>^2 + C<L|R><R|L> + <R|R>^2 (where C was either 1, 2 or sqrt(2); I don't remember and didn't understand which). The squarings in the sum, I found, were claimed to be the classical equivalent of the two options. The interference fringes would be great and appear where the middle term was strong. I also read that you could interpret <x|y> as something like "obtain x if the situation was/is y". Getting L when the particle went L is thus very ordinary and all. You can also note that the squarings have the same form as the evolution of a pure state.

However, I didn't find anything on whether the middle term was interpretable or not. If you try to put it into words, it looks a lot like "probability of getting L when the situation was R", and it seems very surprising that it could be anything other than zero. But then again, I don't know what imaginary protoprobabilities are. Because it's a multiplication of two "chains of events", it's clear you can't single out the "responsible party"; it can be a contribution from both. I somehow suspect that this correlates with the fact that if your "base" is |L>, then the |R>|L> base doesn't apply, i.e. you can't know the path taken and still get interference. I get that many worlds posits the R world and the L world, but it seems there is a bizarre combination world also involved. One way I, in my brute naivety, think it might work is that the particle started in the L world but then "crossed over" to the R world. If worlds in contact can exchange particles, it might seem as if particles "mysteriously jumped", while the jumping would be loosely related to where the particle was. They would have continuous trajectories when tracked within the multiverse, but they get confused for each other in the single worlds.
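For what it's worth, here is a small numerical sketch (Python/NumPy, with made-up slit amplitudes) of the standard identity behind that middle term: |psi_L + psi_R|^2 equals the two "classical" squared terms plus a cross term 2*Re(conj(psi_L)*psi_R), and that cross term is what produces the fringes. The specific Gaussian-times-phase amplitudes below are purely illustrative assumptions:

```python
import numpy as np

# Toy double-slit amplitudes on a screen coordinate x (arbitrary units).
# The Gaussian-times-phase form is just an illustration, not a derivation
# from any particular experiment.
x = np.linspace(-10, 10, 1001)
psi_L = np.exp(-(x + 2) ** 2) * np.exp(1j * 3 * x)   # amplitude via left slit
psi_R = np.exp(-(x - 2) ** 2) * np.exp(-1j * 3 * x)  # amplitude via right slit

classical = np.abs(psi_L) ** 2 + np.abs(psi_R) ** 2  # sum of the two "squarings"
cross = 2 * np.real(np.conj(psi_L) * psi_R)          # the interference ("middle") term
quantum = np.abs(psi_L + psi_R) ** 2                 # full probability pattern

# The full pattern is exactly the classical sum plus the cross term.
assert np.allclose(quantum, classical + cross)
```

The cross term is only large where both amplitudes are non-negligible, which is one way of seeing why knowing the path taken (forcing one amplitude to zero) removes the interference.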

However, I was unable to grasp the intuition of how bras and kets work or what they mean. I pushed the strangeness onto wavefunctions but was unable to reach my goal.

It still seems mysterious to me how the single photon state turns into two distinct L and R. I could imagine the starting state to "do a full loop", to be a kind of spiral where the direction the photon is travelling is a superposition of it travelling in each particular direction, with each direction differing from its neighbour by the phase of the protoprobability, and with their magnitudes summing to 1. That way, if the photon has probability one at L, it can't have probability 1 at R, as the real part of the protoprobability at R can't be 1, since it is known to differ in phase. I know these intuitions are not well founded; I know the construction of them is known to be unsafe. However, intuitive pictures are easier for me to work with, even if it means needing to reimagine them rather than just have them in the right configuration (if somebody knows a more representative way to think about it, please tip me off about it).

I am also using a kind of guess that you can take a protoprobability, strip it of its imaginary parts, and get a "single world view", and I am using a view of having 2 time dimensions: a second, additional clock makes the phases of the complex values sweep forward (or sweep equal surface areas) even if the "ordinary clock time" stays still. The indeterminacy under this time would be that a being unable to measure the meta-time would be ignorant of what part of the cycle the world is in. Thus you would be ignorant of the phases, but the phases would "resonate". I am assuming one could turn this into an equivalent view where the imaginary component would just select a spatial world in a 1-time multiverse (in otherwise totally real-part-only worlds).

I don't have a known better understanding, but I have a bunch of different understandings of unknown fitness.

Comment author: Viliam_Bur 27 November 2014 09:51:30AM *  0 points [-]

I don't quite understand this topic, but maybe this could be useful:

The problem with "converging / mangled worlds" is statistical. To make two worlds interact (and become the same world, or erase each other, depending on the mutual orientation of their amplitudes), those worlds must have all their particles in the same positions. In usual circumstances, this seems unlikely. Imagine the experiment with the cat, where in one world the cat is dead, and in the other world the cat happily walks away. How likely is it that at some moment in the future, both universes will have all particles in the same positions?

So, in usual circumstances two worlds interact only if a moment ago they were the same world, and the only difference was one particle going two different paths. (Yes, there are also all the other particles in the universe, also splitting all the time. But this happens the same way in both branches, so it cancels out.)

It still seems mysterious to me how the single photon state turns into two distinct L and R.

My intuition is that this "single state" was never literally one point, but always a small interval (wave? hump?). An interval can break into two parts, and those can travel in different directions. There is no such thing as a single point in quantum physics.

(Disclaimer: I don't really understand quantum physics; I am just interpreting the impression I got from looking at Eliezer's drawings. If you have better knowledge, feel free to ignore this.)

Comment author: Brillyant 26 November 2014 09:32:38PM 0 points [-]

I see.

by threatening or hypnotising a human

This is the gist of the AI Box experiment, no?

Comment author: Viliam_Bur 27 November 2014 09:20:51AM *  0 points [-]

The important aspect is that there are many different things the AI could try. (Maybe including those that can't be "ELI5". It is supposed to have superhuman intelligence.) Focusing on specific things is missing the point.

As a metaphor, imagine that a group of retarded people is trying to imprison MacGyver in a garden shed. Later MacGyver creates an explosive from his chewing gum, destroys a wall, and leaves. The moral of this story is not: "To imprison MacGyver reliably, you must take all the chewing gum from him." The moral is: "If you are retarded, and your enemy is MacGyver, you almost certainly cannot imprison him in the garden shed."

If you get this concept, then similar debates will feel like: "Let's suppose we make really really sure he has no chewing gum. We will even check his shoes, although, realistically, no one keeps chewing gum in their shoes. But we will be extra careful, and will check his shoes anyway. What could possibly go wrong?"

Comment author: Azathoth123 27 November 2014 03:38:06AM *  0 points [-]

Are you familiar with the various online automatic rant generators?

Comment author: Viliam_Bur 27 November 2014 09:07:55AM 0 points [-]

I have seen various random text generators on their own web pages, but never actively participating in a forum.

Comment author: Lumifer 26 November 2014 05:23:46PM 1 point [-]

I don't particularly agree with this quote, but the link it comes out of is excellent.

Comment author: Viliam_Bur 26 November 2014 10:16:46PM 4 points [-]

It's amazing!

We have to assess claims about oppression based on more than just what people say about themselves. If I took the idea of the infallibility of the oppressed seriously, I would have to trust that dragons exist. That is why it’s such an unreliable guide. (I half-expect the response, “Check your human privilege!”)

Comment author: Brillyant 26 November 2014 05:17:16PM 0 points [-]

Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.

ELI5...

  • Why can't we program hard stops into AI, where it is required to pause and ask for further instruction?

  • Why is "spontaneous emergence of consciousness and evil intent" not a risk?

Comment author: Viliam_Bur 26 November 2014 09:21:14PM 4 points [-]

Why can't we program hard stops into AI, where it is required to pause and ask for further instruction?

If the AI is aware of the pauses, it can try to eliminate them (if the pauses are triggered by a circumstance X, it can find a clever way to technically avoid X), or to make itself receive the "instruction" it wants to receive (e.g. by threatening or hypnotising a human, or by doing something that technically counts as human input).

Comment author: NancyLebovitz 26 November 2014 02:48:16PM -1 points [-]

Yeah, I'd say motivated thinking.

Not all forms of threatening are equal, but "I'm having extremely violent fantasies about you and I know where you (and your children) live" isn't a tiny thing, and it goes rather beyond "I hope you die". (Is there a name for the rhetorical trick of choosing, not just a non-central example, but a minimized non-central example?)

Part of the point is that women are sometimes the target of harassment campaigns online. Some of the attackers may have an interest in the ostensible issue, some may be pure trolls. It seems as though a lot of the attackers are male.

I doubt that there are a number of women who left their homes because of nothing in particular.

When I mentioned above that people underestimate the effect of the worst people on their own side, I meant that just as I tend to underestimate the way feminism can add up, I think you're underestimating the number and forcefulness of the vicious people on your side.

I'm still incredibly angry at the way Kathy Sierra was driven out of public life.

Comment author: Viliam_Bur 26 November 2014 05:13:59PM *  3 points [-]

Would this qualify as a sufficiently scary threat? Both men and women receive various kinds of abuse online. I would guess that most of the aggressors are men, but the victims are of both genders. Being a victim of online harassment is not a uniquely female experience, although some specific forms of harassment may be, mostly of a sexual kind. I would also guess that victims of "swatting" are typically men, but I have no data about it.

Now I feel it would be good to split the debate into two completely separate topics: feminism and GamerGate. Debating them as if they were the same thing would make this all extremely confusing. Framing GamerGate as "angry white men against feminists" is merely one side's propaganda; in reality, both sides include angry white men, and both sides include feminists.

1) I believe I have read a few stories about violent behavior of feminists, but I usually don't keep records of things I read online. If my memory is reliable, the complaints about abuse from feminists usually came from LGBT people, although officially the feminists are supposed to be on their side. Googling for "violent feminists" mostly brings false positives, but also this.

I admit I am confused about the phenomenon of online SJWs. Are they supposed to be a part of feminism, or is that a separate thing? Because their opinions seem similar to some extreme feminist opinions. It seems to me these people do a lot of online harassment, although on the internet it is difficult to prove something isn't merely trolling. And generally, even if someone is a feminist, that doesn't mean that everything they do is done in the name of feminism.

2) Here is a collection of abuse towards pro-Gamergate people. Again, it's difficult to prove who did that. We would have to debate each piece of evidence individually, but I'd rather avoid that.
