Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: gwern 17 August 2017 03:03:15PM 0 points [-]

(/r/scholar)

Comment author: cludos 17 August 2017 02:26:05PM *  0 points [-]

Thanks so much for replying!

I'm still reading Dan Dennett's The Intentional Stance, so I won't address that right now, but in terms of /not/ applying the intentional stance, I think we can be considered different from the "blue minimizer", since the blue minimizer is assumed to have no access to its source code, while we do have access to ours and so can see what laws govern us. Since we "want" to do things, we should be able to figure out why we "want" anything, or really, why we "do" anything. To be clear, are you saying that instead of the equations being X="good points" and Y="good points" with the law "maximize good points", the law might just be DO X AND Y? If so, I still don't think things like "survival" and "friendship" are terminal values or laws of the form "SURVIVE" and "MAKE FRIENDS". When these two are in conflict, we are still able to choose a course of action, so there must be some lower-level law that determines the thing we "want" to do (or, more accurately, just do, if you don't want to assign intention to people).

I also want to address your point that there are answers and boundaries good enough for what people usually mean: I think what we should really be going for is "answers and boundaries good enough to get what we really /want/." A common model of humans in this community is as somewhat effective optimizers over a set of terminal values. If that's really true, then in order to optimize our terminal value(s) we should be trying to know what they are, and, as I said, I think the current idea that we can have multiple changeable terminal values contradicts the definition of a terminal value.

Comment author: pepe_prime 17 August 2017 12:51:54PM 0 points [-]

Minor nit: The emotional intelligence example seems to have no effect.

Comment author: Manfred 17 August 2017 11:46:08AM 0 points [-]

I think writing something like this is a bit like a rite of passage. So, welcome to LW :P

When we talk about someone's values, we're using something like Dan Dennett's intentional stance. You might also enjoy this LW post about not applying the intentional stance.

Long story short, there is no "truly true" answer to what people want, and no "true boundary" between person and environment, but there are answers and boundaries that are good enough for what people usually mean.

Comment author: cousin_it 17 August 2017 11:38:57AM *  0 points [-]

I know several reasonable algorithms for O(n log n) stable sorting in O(sqrt(n)) extra space, like Mrrl's SqrtSort. That's okay for all practical purposes, because anyone who wants to sort a billion elements can afford an extra array of length 30000. And all known algorithms using less extra space essentially emulate O(sqrt(n)) bits by doing swaps inside the array, which is clearly a hack.

Radix sort has its own rabbit hole. If you're sorting strings that often have common prefixes, comparison sorting isn't the best way, because it needs to look at the initial characters over and over. There are many faster algorithms for string sorting based on ideas from radix sort: American flag sort, three-way radix quicksort, etc. The Haskell package Data.Discrimination generalizes the idea from strings to arbitrary algebraic datatypes, allowing you to sort them in almost linear time (in terms of total size, not number of elements).
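For readers who want to see the core idea, here is a minimal Python sketch of three-way radix quicksort (Bentley-Sedgewick multikey quicksort) over strings. It is illustrative only: it allocates new lists rather than partitioning in place the way American flag sort does, but it shows how shared prefixes get examined once per position per group rather than once per comparison.

```python
def three_way_radix_quicksort(strs, d=0):
    """Sort strings by character position d onward.

    Partitions on the d-th character only, so a common prefix is
    examined once per position per group, not once per comparison.
    """
    if len(strs) <= 1:
        return list(strs)

    def char_at(s):
        # -1 sorts strings that have already ended before longer ones.
        return ord(s[d]) if d < len(s) else -1

    pivot = char_at(strs[len(strs) // 2])
    less = [s for s in strs if char_at(s) < pivot]
    equal = [s for s in strs if char_at(s) == pivot]
    greater = [s for s in strs if char_at(s) > pivot]
    # Strings tied at position d are ordered by the next position,
    # unless the tie is "past end of string" (pivot == -1).
    if pivot >= 0:
        equal = three_way_radix_quicksort(equal, d + 1)
    return (three_way_radix_quicksort(less, d)
            + equal
            + three_way_radix_quicksort(greater, d))
```

On inputs with many shared prefixes ("band", "bandana", ...) this touches each prefix character far fewer times than repeated full-string comparisons would.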

Comment author: MrMind 17 August 2017 10:28:51AM 0 points [-]

"Once" does exactly what you have described.

Comment author: TheAncientGeek 17 August 2017 09:59:43AM 0 points [-]

Why would one care about correspondence to other maps?

Comment author: ChristianKl 17 August 2017 05:25:32AM 0 points [-]

I think it's highly likely that an app like Tinder doesn't do its matching completely randomly but optimizes for some factor.

Your analysis ignores the fact that Tinder's principle is that women only get messages from guys on whom they previously swiped right, thus signaling that they want to receive messages from the guy. That ritual has psychological value.

If you do want a more explicit recommendation system, sites like eharmony can provide for that need.

Comment author: ChristianKl 17 August 2017 04:16:31AM 0 points [-]

In Science and Sanity, "the map is not the territory" doesn't mean that the map can never be correct and is always wrong; it means that it's not meaningful to call a map right or wrong at all. Instead of being right or wrong, different maps have more or less correspondence to reality and to other maps.

Comment author: ChristianKl 17 August 2017 04:00:55AM 1 point [-]

I don't think MWI is an exception to Eliezer's other stated views about epistemology. He isn't naive about epistemology; he thinks that the fact that MWI is coherent in some sense is reason to believe in it even when there's no experiment that could be run to prove it.

Comment author: entirelyuseless 17 August 2017 01:28:15AM 0 points [-]

The observation that you yourself are part of reality is trivial. Of course anything trivial can be spun as important.

It may be trivial, but it is also important, and in practice it is a triviality that people very often ignore, to the detriment of their understanding. Let me give some examples:

  1. The case we are talking about. People often make plans which take no account of the fact that they have feelings and desires and beliefs that may entirely prevent those plans from taking place. So they are acting as if those things are not part of reality.

  2. The reason people object to the idea that they do not have a soul which is completely separate from their body. It is easy to see that there is nothing specifically horrible about bodily parts which could prevent them from taking care of the functions of a soul. Suppose there was a spiritual part that had those functions: any objection that you could make to the bodily parts doing that function, could be made to the spiritual part doing that function. So the real reason is wanting to think that you are not a part of reality.

  3. In the discussion of the Smoking Lesion, the reason people think it is important to "change the probability" that they are going to get cancer is that they think that they themselves and their decision are not part of reality, but something coming in from outside and changing it. In reality they are just a part of what is there, so there is no need to change anything, but it is fine not to have cancer, by choosing not to smoke.

  4. Yudkowsky used to talk about imposing his goals on a supposedly indifferent universe. This of course is impossible: he himself is a PART of the universe, and any goal that he seeks, the universe is seeking, just in that part which happens to be him. It also follows that the universe is not indifferent, since its parts are not indifferent.

  5. In a similar way, people on LW talked about "resisting entropy" and supposedly resisting the "goals" of the universe. But again, since people are part of the universe, they can in no way resist it: whatever they do, the universe does. And since they are physical parts of it, resisting entropy is impossible, since they will follow the second law of thermodynamics just like everything else. It is true that people occasionally lower the entropy of some things, but only by increasing the entropy of the whole system by even more. In other words, far from resisting the universe's supposed goal of entropy, they themselves promote it by everything they do, since they are parts of the universe.

Evidence?

Principle of indifference. I see it very often in both and nothing has convinced me it is more prevalent in one place than in the other.

Comment author: Lumifer 16 August 2017 08:22:16PM 1 point [-]

Eh. "All models are wrong but some are useful".

Do you happen to have alternatives?

Comment author: Lumifer 16 August 2017 08:21:07PM *  1 point [-]

Yeah... that might be interesting for Tinder. "Users who fucked this one also fucked this, this, and that one" X-D

Comment author: Viliam 16 August 2017 07:16:52PM 1 point [-]

Or maybe some kind of recommendation system: "Users who dated this person also dated these: ..."
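As a toy illustration of that kind of item-to-item recommendation (the names and data here are made up, and real systems use far more than raw co-occurrence counts), a minimal sketch:

```python
from collections import Counter

def also_dated(history, person, top_n=3):
    """Toy 'users who dated this person also dated...' recommender.

    history maps each user to the set of people they dated; we count
    how often other people co-occur with `person` across all users.
    """
    counts = Counter()
    for dated in history.values():
        if person in dated:
            counts.update(dated - {person})
    return [p for p, _ in counts.most_common(top_n)]
```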

Comment author: TheAncientGeek 16 August 2017 06:46:40PM 0 points [-]

It's worse than that, and they're not widely enough known.

Comment author: Thomas 16 August 2017 05:51:44PM 0 points [-]

Yes. And any sufficiently advanced technology is already assumed to be (an impossible) magic.

I am quite sure that a human with an IQ of 1000 or above would appear very much like the wizard Merlin or something like that: an impossible magic to the majority of people I know.

Comment author: tadasdatys 16 August 2017 05:10:40PM 0 points [-]

We can't compare experiences qua experiences using a physicalist model, because we don't have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.

We can derive that model by looking at brain states and asking the brains which states are similar to which.

Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain.

They only need to know about robot pain if "robot pain" is a phrase that describes something. They could also care a lot about the bitterness of colors, but that doesn't make it a real thing or an interesting philosophical question.

It's interesting that you didn't reply directly about morality. I was already mentally prepared to drop the whole consciousness topic and switch to objective morality, which has many of the same problems as consciousness, and is even less defensible.

Comment author: tadasdatys 16 August 2017 05:10:28PM 0 points [-]

But you could not have used it to make a point about links between meaning, detectability, and falsehood.

No, but I can use it to make a point about how low your bar for meaningfulness is. Does that not count for some reason? I asked you before to propose a meaningless statement of your own. Do none exist? Are none of them grammatically correct?

???

Now you imply that they possibly could be detected, in which case I withdraw my original claim

Yes, the unicorns don't have to be undetectable by definition. They're just undetectable by all methods that I'm aware of. If "invisible unicorns" has too much undetectability in the name, we can call them "ghost unicorns". But, of course, if you do detect some unicorns, I'll say that they aren't the unicorns I'm talking about and that you're just redefining this profound problem to suit you. Obviously this isn't a perfect analogue for your "robot pain", but I think it's alright.

So, what you're saying, is that you don't know if "ghost unicorns" exist? Why would Occam's razor not apply here? How would you evaluate the likelihood that they exist?

Comment author: gbear605 16 August 2017 05:03:45PM 1 point [-]

I suggest, as a corollary to Clarke's third law: any sufficiently advanced technology will be assumed to be magic.

Comment author: Lumifer 16 August 2017 04:56:06PM 2 points [-]

Right.

There are caveats because, e.g., you can never prove that a map is (entirely) correct, you can only prove that one is wrong -- but these caveats are not new and are well-known.

Comment author: TheAncientGeek 16 August 2017 04:15:30PM *  0 points [-]

Usually, predictive accuracy is used as a proxy for correspondence to reality, because one cannot check map-territory correspondence by standing outside the map-territory relationship and observing (in)congruence directly.

Comment author: Lumifer 16 August 2017 04:13:26PM 2 points [-]

One tests. Operationally.

Think science/engineering/Popper/the usual stuff.

Comment author: Lumifer 16 August 2017 04:10:10PM 1 point [-]

an important fact about reality

The observation that you yourself are part of reality is trivial. Of course anything trivial can be spun as important.

It might be equally prevalent.

Evidence?

Comment author: tadasdatys 16 August 2017 03:50:15PM 0 points [-]

I doubt that's a good thing. It hasn't been very productive so far.

Comment author: Thomas 16 August 2017 03:33:37PM *  0 points [-]

It's an old problem; cousin_it has posted:

Here's another problem that might be easier. Make an O(n log n) sorting algorithm that's simple, stable, and in place.

Radix. Except that it's not in place.
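To make that caveat concrete, here is a minimal LSD radix sort sketch: it is stable and does linear work per digit pass, but each pass rebuilds the list through fresh buckets, which is exactly the O(n) extra space that keeps it from satisfying the "in place" requirement. (A hypothetical illustration; `base` must be a power of two for the mask trick used here.)

```python
def lsd_radix_sort(nums, base=256):
    """Stable least-significant-digit radix sort for non-negative ints.

    O(n) work per digit pass, but the buckets use O(n + base) extra
    space on every pass, so it is not in place.
    """
    if not nums:
        return []
    max_val, shift = max(nums), 0
    while max_val >> shift:
        buckets = [[] for _ in range(base)]
        for x in nums:
            # Appending preserves relative order: this is what makes it stable.
            buckets[(x >> shift) & (base - 1)].append(x)
        nums = [x for bucket in buckets for x in bucket]  # the O(n) scratch copy
        shift += base.bit_length() - 1  # bits per pass, e.g. 8 for base 256
    return nums
```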

Comment author: TheAncientGeek 16 August 2017 03:20:21PM 1 point [-]

If "like" refers to similarity of some experiences, a physicalist model is fine for explaining that

We can't compare experiences qua experiences using a physicalist model, because we don't have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.

If it refers to something else, then I'll need you to paraphrase.

If you want to know what "pain" means, sit on a thumbtack.

You can say "torture is wrong", but that has no implications about the physical world

That is completely irrelevant. Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain. Justifying morality from the ground up is not relevant.

Comment author: Lumifer 16 August 2017 03:18:52PM *  2 points [-]

some of us don't know a single Young Earth Creationist or Trump fan (or hardly do)

We might not know any personally, but the elven magic of the intertubes makes it easy to read what they say, both as thoughtful essays and as casual chat.

let's not forget that the people we don't hang out with still exist

Sure, but so what?

they're the cool ones that wield all the power

No, I don't think so.

An old EY post was recently mentioned here and there is a relevant quote in it:

One of the major surprises I received when I moved out of childhood into the real world, was the degree to which the world is stratified by genuine competence.

People who wield all the power can usually speak the social language, but those who merely speak that language do not wield the power. It's those who both can speak the language AND deal with reality that do.

Humanity's current expenditure of resources is pathetically misaligned with its goals, and fixing that means power.

I am not quite sure I would trust the rationalists to define the goals of humanity. I am quite sure I would not trust them with enough power to upend society to end this "pathetic misalignment".

Comment author: TheAncientGeek 16 August 2017 03:08:05PM 0 points [-]

That's just another word for the same thing? What does one do operationally?

Comment author: TheAncientGeek 16 August 2017 03:06:47PM *  1 point [-]

I can also use "ftoy ljhbxd drgfjh"

But you could not have used it to make a point about links between meaning, detectability, and falsehood.

If you have no arguments, then don't respond.

The implicit argument is that meaning/communication is not restricted to literal truth.

Let me answer that differently. You said invisible unicorns don't exist. What happens if an invisible unicorn detector is invented tomorrow?

What would happen is that you are changing the hypothesis. Originally, you stipulated an invisible unicorn as undetectable in any possible way, in relation to which I agreed that one could use an armchair argument like Occam's razor against their existence. Now you imply that they possibly could be detected, in which case I withdraw my original claim, because if something could be detected, then armchair arguments are not appropriate.

Comment author: entirelyuseless 16 August 2017 03:02:54PM 0 points [-]

Don't be silly.

It's not silly, but an important fact about reality. The mood you are in influences your behavior and therefore the consequences of your behavior, and thus the future state of the world and all the facts about it. This is directly related to the other point I made about ignoring moods.

I hear what you are saying, but that's more prevalent among normies, if anything.

It might be equally prevalent.

I suspect the nerds are more likely to fight the influence of mood on facts (and usually lose)

Yes, but quite often with a good deal of ignorance about the mood, and this contributes to the losing.

But "facts" and "beliefs" are very different things.

Sure.

Comment author: Lumifer 16 August 2017 03:00:21PM 1 point [-]

many environments where being the top monkey gives you nothing except more mediocrity

First, what's the alternative? Getting more mediocrity compared to getting nothing doesn't sound too horrible.

Second, your biological hardwiring will automatically provide some hedons just for being the top monkey, even if your minions aren't all that impressive.

Comment author: Lumifer 16 August 2017 02:56:48PM 2 points [-]

In the usual way: by testing the congruence with reality.

Comment author: entirelyuseless 16 August 2017 02:55:26PM 0 points [-]

"Seriously, if you have no arguments, then don't respond."

People who live in glass houses shouldn't throw stones.

Comment author: Lumifer 16 August 2017 02:54:47PM 1 point [-]

if you are in a sad mood, it is a fact that you are in a sad mood ... So moods influence facts

Don't be silly.

The result is not that the moods do not influence their beliefs and actions, but that they do not notice the influence of their moods on their beliefs and actions.

I hear what you are saying, but that's more prevalent among normies, if anything. I suspect the nerds are more likely to fight the influence of mood on facts (and usually lose), while the normies just wholeheartedly embrace it.

other elements of your utility function in fact influence your beliefs

Yes, of course they do. But "facts" and "beliefs" are very different things.

Comment author: cousin_it 16 August 2017 01:55:52PM *  1 point [-]

300th comment! My post only had 40 before you showed up. LW has been having some persistent people lately, but you (and the people replying to you) take the cake.

Comment author: Bound_up 16 August 2017 11:52:26AM 0 points [-]

That's my greatest fear about this.

We're all in our social bubbles, such that some of us don't know a single Young Earth Creationist or Trump fan (or hardly do), so we reject out of hand the idea that so many humans might work this way, simply because the ones we hang out with don't.

I could find a better social circle; it sounds like you have, and I don't doubt it's more enjoyable for you. But, either way, let's not forget that the people we don't hang out with still exist, and there's a reason we don't enjoy hanging out with them as much as we do people like us. There's a reason they enjoy politics and partisanship and don't want to hear about your market functions unless what you really mean is how cool some group is.

And there's a reason they're the cool ones that wield all the power.

Humanity's current expenditure of resources is pathetically misaligned with its goals, and fixing that means power.

Comment author: TheAncientGeek 16 August 2017 10:25:54AM 0 points [-]

My take is that the LP is the official doctrine, and the MWI is an unwitting exception.

Comment author: Bound_up 16 August 2017 10:09:49AM 0 points [-]

I've classically been a literalist super-honest guy, and now intend to be super-honest about what I make the other person hear.

I think them knowing I'm being honest about what they hear is sufficient to grant me all the benefits I've enjoyed in the past, while avoiding some of the disadvantages.

Comment author: Bound_up 16 August 2017 10:03:16AM 0 points [-]

Try it for a while. You might be surprised how easy the game becomes once you explicitly understand the rules.

Comment author: LawrenceC 16 August 2017 09:47:30AM 0 points [-]

Why do you think this doesn't exist?

Comment author: LawrenceC 16 August 2017 09:42:16AM *  0 points [-]

For what it's worth, though, as far as I can tell we don't have the ability to create an AI that will reliably maximize the number of paperclips in the real world, even with infinite computing power. As Manfred said, model-based goals seem to be a promising research direction for getting AIs to care about the real world, but we don't currently have the ability to get such an AI to reliably actually "value paperclips". There are a lot of problems with model-based goals that occur even in the POMDP setting, let alone when the agent's model of the world or observation space can change. So I wouldn't expect anyone to be able to propose a fully coherent complete answer to your question in the near term.

It might be useful to think about how humans "solve" this problem, and whether or not you can port this behavior over to an AI.

If you're interested in this topic, I would recommend MIRI's paper on value learning as well as the relevant Arbital Technical Tutorial.

Comment author: bogus 16 August 2017 07:49:10AM 2 points [-]

Another 'old classic' by esr, that's quite related to Eliezer's point.

Comment author: Manfred 16 August 2017 07:35:45AM 1 point [-]

Well, if the acronym "POMDP" didn't make any sense, I think we should start with a simpler example, like a chessboard.

Suppose we want to write a chess-playing AI that gets its input from a camera looking at the chessboard. And for some reason, we give it a button that replaces the video feed with a picture of the board in a winning position.

Inside the program, the AI knows about the rules of chess, and has some heuristics for how it expects the opponent to play. Then it represents the external chessboard with some data array. Finally, it has some rules about how the image in the camera is generated from the true chessboard and whether or not it's pressing the button.

If we just try to get the AI to make the video feed be of a winning position, then it will press the button. But if we try to get the AI to get its internal representation of the data array to be in a winning position, and we update the internal representation to try to track the true chessboard, then it won't press the button. This is actually quite easy to do - for example, if the AI is a jumble of neural networks, and we have a long training phase in which it's rewarded for actually winning games, not just seeing winning board states, then it will learn to take into account the state of the button when looking at the image.
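The button example can be caricatured in a few lines of Python (all names here are hypothetical; a real agent would score whole strategies, not two labelled actions). The only difference between the two scorers is whether the reward is computed from the raw observation or from a model that knows the button corrupts the camera:

```python
TRUE_BOARD = "losing"  # the actual state of the external chessboard

def observe(board, button_pressed):
    # The button replaces the video feed with a winning position.
    return "winning" if button_pressed else board

def model_estimate(board, button_pressed):
    # The world model tracks the true board: it knows a pressed button
    # makes the camera uninformative, so it refuses to count that as a win.
    return "uninformative" if button_pressed else observe(board, button_pressed)

def naive_score(action):
    """Reward computed from the video feed: pressing the button 'wins'."""
    return 1 if observe(TRUE_BOARD, action == "press") == "winning" else 0

def model_based_score(action):
    """Reward computed from the model's estimate of the true board."""
    return 1 if model_estimate(TRUE_BOARD, action == "press") == "winning" else 0
```

The naive scorer prefers "press"; the model-based one sees no value in it, which is the sense in which rewarding the internal representation (trained to track the real board) removes the incentive to press the button.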

Comment author: JEB_4_PREZ_2016 16 August 2017 07:35:29AM 0 points [-]

Well that was one hell of a read. For some reason it reminded me of this old classic by Eliezer. Thanks for sharing.

Comment author: bogus 16 August 2017 07:02:50AM *  0 points [-]

I suspect that the real skill is knowing when the mating^H^H^H^H^H^H signaling dance is worth the effort. Among your fellow geniuses at the IAS? That's a clear 'yes' even if emulating monkey-level neural circuitry does require some effort. In politically-relevant settings as mentioned in the OP? That's another 'yes'. However there are many, many environments where being the top monkey gives you nothing except more mediocrity!

Comment author: TheAncientGeek 16 August 2017 07:01:12AM 0 points [-]

Everyone builds their own maps, and yes, they can be usefully ranked by how well they match the territory.

How do you detect that?

Comment author: bogus 16 August 2017 06:41:06AM 0 points [-]

I agree. I also like your username!

Comment author: bogus 16 August 2017 06:38:56AM 0 points [-]

I for one didn't find issue with the "mood" of OP's post. Of course, the content is not exactly news either - we all are familiar with the 'X is not about Y' pattern! But it's nice to see a well-written reminder of this every once in a while.

Comment author: lmn 16 August 2017 03:28:21AM 1 point [-]

Clearly Trump tells lies that lead to people believing simple factual falsehoods.

I don't think this is clear at all. At least the statements of his that people object to the loudest aren't lies.

Comment author: RomeoStevens 16 August 2017 03:12:09AM 0 points [-]

This is my favorite comment in a long while.

Comment author: Elo 16 August 2017 01:57:37AM 0 points [-]

You need to notice them to then decide what to do about them.

Comment author: moridinamael 16 August 2017 01:40:03AM 1 point [-]

Possibly they mean more than one thing, but the primary concept that jumps to mind is known as the "dark night". The aim of many meditation practices is to become aware of the contents of consciousness to the extent that those contents lose any emotional valence and become meaningless objects. In the long term this makes the meditator extremely equanimous and calm and detached, in a good way. In the medium term, before the changes have properly sunk in, it can result in a semi-detachment from reality where everything seems meaningless but in a very bad way.

I think I may have touched the edges of such phenomena. It is indeed unpleasant, and probably contributed to my cutting down my meditation by a lot.

Comment author: entirelyuseless 16 August 2017 01:34:43AM 0 points [-]

moods do not influence facts

First of all, if you are in a sad mood, it is a fact that you are in a sad mood, and if you are in a happy mood, it is a fact that you are in a happy mood. So moods influence facts, and these cannot be separate magisteria, but overlapping.

Second, many nerds do ignore their moods, even though this is a bad idea. The result is not that the moods do not influence their beliefs and actions, but that they do not notice the influence of their moods on their beliefs and actions.

In a similar way, if you think that only facts influence your beliefs, and not other elements of your utility function, it will not mean that your beliefs are not influenced by other elements of your utility function. It will just mean that you will not notice that influence. You are better off admitting the truth, namely that other elements of your utility function in fact influence your beliefs.

Comment author: WalterL 16 August 2017 01:33:37AM 1 point [-]

I think I speak for all of us when I say that I did not know about this.

Comment author: entirelyuseless 16 August 2017 01:30:51AM 0 points [-]

"according to basically any model of the world, fooling yourself does not result in more paperclips."

Paul Almond at one time proposed that every interpretation of a real thing is a real thing. According to that theory, fooling yourself that there are more paperclips does result in more paperclips (although not fooling yourself also has that result.)

Comment author: entirelyuseless 16 August 2017 01:27:43AM 0 points [-]

I have parts of my brain that are like that, and I suspect that most people do. But if "meditation only causes you to pay attention to those thoughts" then you are probably doing it wrong.

Comment author: cludos 16 August 2017 01:10:52AM *  0 points [-]

But what does the code for that look like? It looks like maximize(# of paperclips in world), but how does it determine (# of paperclips in world)? You just said it has a model. But how can it distinguish between real input that leads to the perception of paperclips and fake input that leads to the perception of paperclips?

Comment author: MattG2 16 August 2017 01:00:33AM 1 point [-]

Agree, this was my thought as well.

Comment author: Elo 15 August 2017 10:28:37PM 0 points [-]

What if you discovered that a part of your brain doesn't like when your friends are happier than you?

What if you discovered a part of your brain just wants to wirehead itself?

What if you discovered a part of your brain that likes to come up with ideas about how horrible you are and then meditation only causes you to pay attention to those thoughts?

Comment author: Manfred 15 August 2017 10:16:34PM 0 points [-]

To our best current understanding, it has to have a model of the world (e.g. as a POMDP) that contains a count of the number of paperclips, and that it can use to predict what effect its actions will have on the number of paperclips. Then it chooses a strategy that will, according to the model, lead to lots of paperclips.

This won't want to fool itself because, according to basically any model of the world, fooling yourself does not result in more paperclips.
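A toy sketch of that evaluation loop (hypothetical names and numbers): the agent asks its world model what each action does to the actual paperclip count, not to its sensor reading, so self-deception scores zero:

```python
# Each action maps to the model's prediction of
# (true paperclips produced, paperclip count the sensors will report).
WORLD_MODEL = {
    "build_factory": (100, 100),
    "hack_own_camera": (0, 10**9),  # sensors fooled, no real paperclips
    "do_nothing": (0, 0),
}

def choose_action(model):
    # Rank actions by modeled true paperclips, ignoring sensor predictions.
    return max(model, key=lambda action: model[action][0])
```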

Comment author: ChristianKl 15 August 2017 08:50:41PM 0 points [-]

That seems to be a reasonable reading and I think we are in agreement.

Comment author: jimmy 15 August 2017 08:40:12PM 0 points [-]

I'm not sure I follow all the details of what you're saying, but it seems like your main point is along the lines of "That's not the kind of lie that someone who simply wants to do persuasion tells", and with that I completely agree.

Comment author: ChristianKl 15 August 2017 08:10:09PM 0 points [-]

I think what you meant is probably that Trump says things that lead people to be misled on the things that actually matter (as judged by you), and that he’s not actually a great example of saying the “truest” things, in this strange but important sense.

I don't think the issue of whether or not Trump was invited by Last Week Tonight is an issue that "actually matters".

But let's go to an issue that matters: "Do vaccines cause autism?" It's factually wrong, but I also think that a majority of Trump's followers don't believe it. The demographics of vaccine denialism are not equivalent to Trump's supporters.

If you take a Trump belief like "exercise is bad for your health" it's even more clear. That's not the kind of lie that someone who simply wants to do persuasion tells. It's also a very strange lie to tell for a person who learned their persuasion skills from Tony Robbins.

Comment author: SquirrelInHell 15 August 2017 07:04:29PM 0 points [-]

I think it's pretty smart to NOT follow LW discussion right now... you could suggest the rss feeds (by deluks or me) instead.

Comment author: ChristianKl 15 August 2017 06:31:34PM 1 point [-]

an LW person

Not just "a person": the author is jimmy, to whom I replied above.

For myself, reading literature and hearing audio books didn't give me any skills in the subject. I learned the largest chunk of my skills from Chris Mulzer. I also went to other people and read afterwards about the subject, but I'm not an autodidact in it. jimmy, on the other hand, is an autodidact. In http://lesswrong.com/lw/pbt/social_insight_when_a_lie_is_not_a_lie_when_a/dw9g?context=3 , both I and jimmy consider the strategy of getting Reality Is Plastic: The Art of Impromptu Hypnosis and doing the exercises in it with a willing subject to be a good starting point for developing actual skill.

At the moment there's an idea in my head that it would be possible to create a better course for learning hypnosis from the beginning than what's out there. If you find someone who wants to practice hypnosis with you in person, I would be willing to give more specific guidance about what to do. Maybe jimmy also wants to pitch in and we can create a kind of course together.

I don't think reading blog posts or forum posts is enough to develop actual skill, but if your goal is just information, there's the forum http://www.uncommonforum.com/viewforum.php?f=16 where jimmy, myself and a bunch of other people had a few long discussions about hypnosis in the past.

Comment author: jimmy 15 August 2017 06:26:31PM *  0 points [-]

Ah, I didn't realize you were focused on large-scale politics; I figured you were using it as merely one example.

I'm not sure I agree with that completely. Certainly it's more in that direction, and you aren't going to be able to explain complex models to large electorates, and I don't have time to coherently express my reasoning here, but it certainly appears to me that teaching is possible on the margin and that this strategy still works on larger scales, with more of those inherent limitations.

I agree that "you're the best" isn't dishonest so long as the person knows what you mean. My point wasn't about honesty so much as whether you want to dilute your message. I should be clear that it doesn't always apply here and I don't claim to have the full answer about exactly how to do it, but I have found value in avoiding certain types of these "honest literal-untruths" or whatever you'd like to call them. In cases where one might want to say "you got this!" as normal encouragement, abstaining from normal encouragement makes it easier to convey real confidence in the person when you know for a fact that they can do it. Both have value, but I do feel like the latter is often undervalued while the former is overvalued.

Comment author: jimmy 15 August 2017 06:11:02PM 0 points [-]

After this post of yours I think you might be really interesting to talk to on the subject. Let me know if you want to chat sometime (I'm that LW person mentioned).

Comment author: jimmy 15 August 2017 06:09:12PM 0 points [-]

I totally agree that he doesn't look like "trained hypnotist that thinks things through and has a nuanced plan for what he's trying to communicate". Looking at Trump and concluding "don't worry guys, get him in a private room and he'll drop the act and explain exactly how this all makes sense" would be a mistake.

At the same time, what he's doing is effective, and largely for similar reasons. The important difference is that you can't really trust him to be doing anything other than emotional gradient following, and he's a reason to get serious and step up your game to make sure that important things aren't underrepresented, rather than to sit back and trust that things are in the hands of an expert.

Comment author: tadasdatys 15 August 2017 05:59:40PM 0 points [-]

Well, you used it.

I can also use "ftoy ljhbxd drgfjh". Is that not meaningless either? Seriously, if you have no arguments, then don't respond.

What happens if a robot pain detector is invented tomorrow?

Let me answer that differently. You said invisible unicorns don't exist. What happens if an invisible unicorn detector is invented tomorrow? To make a detector for a thing, that thing has to have known properties. If they did invent a robot pain detector tomorrow, how would you check that it really detects robot pain? You're supposed to be able to check that somehow.

Comment author: tadasdatys 15 August 2017 05:59:37PM 0 points [-]

You keep saying it's a broken concept.

Yes. I consider that "talking about consciousness". What else is there to say about it?

That anything should feel like anything,

If "like" refers to similarity of some experiences, a physicalist model is fine for explaining that. If it refers to something else, then I'll need you to paraphrase.

Circular as in

"Everything is made of matter. Matter is what everything is made of." ?

Yes, if I had actually said that. By the way, matter exists in your universe too.

Yes: it's relevant because "torturing robots is wrong" is a test case of whether your definitions are solving the problem or changing the subject.

Well, if we must. It should be obvious that my problem with morality is going to be pretty much the same as with consciousness. You can say "torture is wrong", but that has no implications about the physical world. What happens if I torture someone?

Comment author: jimmy 15 August 2017 05:58:11PM 0 points [-]

You don’t think it’s the same thing as what Trump is doing, or the same thing that Scott Adams is referring to when he says Trump is doing it?

There are a bunch of things getting mixed up here. Clearly Trump tells lies that lead to people believing simple factual falsehoods. That much doesn’t even contradict the main thesis here, and it also applies to anyone who believed Bernie when he said that America is the richest country on earth.

I think what you meant is probably that Trump says things that lead people to be misled on the things that actually matter (as judged by you), and that he’s not actually a great example of saying the “truest” things, in this strange but important sense. I actually agree with you there too, though I think I blame Trump less for this than you do, because I think he’s legitimately bad at figuring out what is true. So when he says something about vaccines causing autism, for example, it’s more about him being genuinely wrong than knowing the right answer and maliciously lying about it. Hanlon’s razor, basically.

Additionally, I think you’d argue that Trump doesn’t seem to care enough about the truth and is reckless in that way, and I’d probably agree with you there too. None of this challenges Adams’s main point here, though, which is that Trump’s messages, despite being easily fact-checked as false, contain (other) things which Trump does not actively disbelieve and which his followers evaluate as both important and true - even if Christian (or Jimmy, or anyone else) thinks that those things are false as well.

It’s important to look at how people respond to proof that his statements don’t pass the fact checks. If they feel betrayed by Trump, or if there’s cognitive dissonance induced, then your criticism is valid and it’s simple lying and pandering to wishful thinking. If, on the other hand, you get “lol, don’t care”, then you’re missing the point and aren’t actually addressing what they think is important and true. I see both in Trump’s followers, but the interesting part is that I see far more of the latter than I have with any other politician. In other words, I think Adams has a point.

Comment author: kvas 15 August 2017 04:11:15PM 0 points [-]

Thank you, this is awesome! I've just convinced my wife to pay more attention to LW discussion forum.

Comment author: MaryCh 15 August 2017 04:02:54PM 0 points [-]

When I first began working as a bookseller, I had to run to the stores thinking "[Name of the publishing house] - [school subject] - [year] - [kind of workbook, part] - [to what textbook] - [edition] - [amount]". Nine months later, I run to the stores thinking "[this sequence of turns (as a kind of wriggly line)] - [subject] - [year] - [not that one! The other one!] - [unspecified; grab both] - [more]". Must be professional growth...

Comment author: cludos 15 August 2017 03:46:36PM 0 points [-]

As I said, though, how do you program it to want REAL paperclips, as opposed to just perceiving that it is getting paperclips?

Comment author: Lumifer 15 August 2017 03:41:51PM 2 points [-]

The situation changes if you regularly travel, though.

Comment author: drethelin 15 August 2017 03:39:43PM 3 points [-]

There aren't that many people, so the benefits would be minor. Once you've swiped a couple thousand times you're probably through most of the tinder users within your demographic preferences.

Comment author: drethelin 15 August 2017 03:38:00PM *  0 points [-]

Why would it hack itself to think it's getting paperclips if it's originally programmed to want real paperclips? It would not be incentivized to make that hack because that hack would make it NOT get paperclips.
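The incentive argument here can be made concrete with a toy sketch (my own illustration, not from the thread; all names are hypothetical). It contrasts a utility function evaluated on the actual world state with one evaluated on the agent's own sensor output: hacking the sensor raises the second score but not the first, so a world-state maximizer gains nothing from the hack.

```python
def real_paperclips(world):
    """Utility evaluated on the world itself."""
    return world["paperclips"]

def perceived_paperclips(world, sensor):
    """Utility evaluated only on what the sensor reports."""
    return sensor(world)

world = {"paperclips": 10}

honest_sensor = lambda w: w["paperclips"]
hacked_sensor = lambda w: 10**6  # sensor rewired to always report a huge count

# The perception-maximizer's score goes up if it hacks its sensor...
assert perceived_paperclips(world, hacked_sensor) > perceived_paperclips(world, honest_sensor)

# ...but the world-state maximizer's score is unchanged by the hack,
# so it has no incentive to make it.
assert real_paperclips(world) == 10
```

Of course, this toy model just restates the point: the hard part cludos is asking about is how to specify `real_paperclips` at all, rather than something that bottoms out in the agent's perceptions.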

Comment author: Lumifer 15 August 2017 03:30:42PM 2 points [-]

Anyone who is 'us' enough to understand us isn't 'them' enough to do the signalling dance well.

Counterpoint from esr:

Once you realize this it’s easy to understand why the incidence of socially-inept nerdiness doesn’t peak at the extreme high end of the IQ bell curve, but rather in the gifted-to-low-end-genius region closer to the median. I had my nose memorably rubbed in this one time when I was a guest speaker at the Institute for Advanced Study. Afternoon tea was not a nerdfest; it was a roomful of people who are good at the social game because they are good at just about anything they choose to pay attention to and the monkey status grind just isn’t very difficult. Not compared to, say, solving tensor equations.

I recommend the whole post.

Comment author: Lumifer 15 August 2017 03:24:53PM 1 point [-]

We nerds have trained ourselves to disregard moods in favor of facts

Nope. These are just separate magisteria: we understand that moods do not influence facts, but it does not follow that moods are not important and can freely be ignored.

View more: Next