Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: ignoranceprior 23 July 2016 06:38:46PM 0 points [-]

It might be that downvote troll everyone keeps talking about. Eugine?

Comment author: Stuart_Armstrong 23 July 2016 04:56:32PM 0 points [-]

Take a game with a mixed strategy Nash equilibrium. If you and the other player follow it, using sources of randomness that remain unpredictable to the other player, then it is never to your advantage to deviate. You play this game again and again, against another player or against the environment.

Consider an environment in which the opponent's strategies are in an evolutionary arms race, trying to best beat you; this is an environmental model. Under this, you'd tend to follow the Nash equilibrium on average, but, at (almost) any given turn, there's a deterministic choice that's a bit better than being stochastic, and it's determined by the current equilibrium of strategies of the opponent/environment.

However, if you're facing another player, and you make deterministic choices, you're vulnerable if ever they figure out your choice. This is because they can peer into your algorithm, not just track your previous actions. To avoid this, you have to be stochastic.
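Stuart's exploitability point can be sketched with matching pennies, where the mixed-strategy equilibrium is a 50/50 coin flip. The adaptive opponent below is a hypothetical frequency-counting predictor invented for illustration; nothing in it comes from the thread:

```python
from collections import Counter
import random

def matching_pennies(strategy_a, rounds=5000, seed=0):
    """Player A scores +1 on a match, -1 on a mismatch. The opponent adapts:
    it predicts A's most common move so far and plays the opposite."""
    rng = random.Random(seed)
    counts = Counter()
    total = 0
    for _ in range(rounds):
        a = strategy_a(rng)
        guess = "H" if counts["H"] >= counts["T"] else "T"
        b = "T" if guess == "H" else "H"  # the opponent tries to mismatch
        total += 1 if a == b else -1
        counts[a] += 1
    return total / rounds

always_heads = lambda rng: "H"               # deterministic: fully exploitable
equilibrium = lambda rng: rng.choice("HT")   # the 50/50 mixed equilibrium

print(matching_pennies(always_heads))  # -1.0: the opponent wins every round
print(matching_pennies(equilibrium))   # close to 0.0, the equilibrium payoff
```

The deterministic player loses every round once the opponent locks onto the pattern, while the randomizing player is unexploitable, matching the claim that against an adapting player you have to be stochastic.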

This seems like a potentially relevant distinction.

Comment author: Stuart_Armstrong 23 July 2016 04:43:16PM 0 points [-]

At least as an informal definition, it seems pretty good.

Comment author: Algon 23 July 2016 03:18:29PM 1 point [-]

Why the down votes? Decent article by the way.

Comment author: Crux 23 July 2016 02:35:36PM 0 points [-]

What hysterics are you thinking of, specifically?

I've noticed that Sam Harris has been rather vocal about his deep concern about a possible Trump presidency, saying that it would be extremely dangerous and so on. Who else relevant to the rationality movement has been overdoing the hysterics? Or were you referring to the mainstream media?

Comment author: Crux 23 July 2016 02:22:32PM *  0 points [-]

For a stable society to exist, at some level everyone has to agree upon some central authority with final say over disputes and superlative enforcement ability. Do you agree with this or not?

I'm not completely sure what you mean, but my guess is that I don't agree with you.

In any given situation, it's useful to have an authority available who has final say over disputes. But it's not necessary for every process in society to depend on the same authority.

Comment author: The_Jaded_One 23 July 2016 12:58:00PM 0 points [-]

Just commenting to point out that I'm having a fabulous day, and have a very painless, enjoyable life. I struggle to even understand what suffering is, to be honest, so make a note of that, any negative utilitarians who may be listening!

Comment author: kilobug 23 July 2016 07:19:52AM 0 points [-]

Well, I would consider it worrying if a major public advocate of antideathism were also publicly advocating a sexuality that is considered disgusting by most people - like, say, pedophilia or zoophilia.

It is an unfortunate state of the world, because someone's sexual (or political) preferences shouldn't have any significant impact on how you evaluate their position on unrelated topics, but that's how the world works.

Consider someone who never really thought about antideathism, who opens the newspaper in the morning, reads about a person who publicly advocates disgusting political/sexual/whatever opinions, and then learns in that article that this person also "considers death to be a curable disease". What will happen? He will bundle "death is a curable disease" with the kind of opinions disgusting persons have, and reject it. That's what I'm worried about - it's bad in terms of PR when the spokesperson for something unusual you support also happens to be considered "disgusting" by many.

The same happens, for example, when Dawkins takes positions that many people find disgusting about what he calls "mild pedophilia" - regardless of whether Dawkins is right or wrong about it, it reflects badly on atheism that a major public advocate of atheism also happens to be a public advocate of something considered "disgusting" by many. Except that it's even worse in the Thiel case, because atheism is relatively mainstream, so it's unlikely people will learn about atheism and about Dawkins defending "mild pedophilia" on the same day.

And by the way, I'm not saying I have a solution to that problem - forbidding Peter Thiel from expressing his political views (however much I dislike them) is neither possible nor even desirable - but it's still worrying for the cause of antideathism.

Comment author: Pablo_Stafforini 23 July 2016 07:14:31AM *  0 points [-]

McGuire, W. J. (1969), The nature of attitudes and attitude change, in Elliot Aronson & Gardner Lindzey (eds.), The Handbook of Social Psychology, 2nd ed., Massachusetts: Addison-Wesley, vol. 3, pp. 136-314

Comment author: Crux 23 July 2016 03:29:03AM 0 points [-]

What's the difference? I don't see a distinction between the phrases "faulty reasoning" and "mistake in thinking".

Comment author: entirelyuseless 23 July 2016 03:11:02AM 1 point [-]

Yes, I noticed he overlooked the distinction between "I know I am conscious because it's my direct experience" and "I know I am conscious because I say 'I know I am conscious because it's my direct experience.'" And those are two entirely different things.

Comment author: Tem42 23 July 2016 02:41:59AM 0 points [-]

A simple justification of a slightly less extreme position is easy enough: there were many sane people who did not predict the value of the internet, indicating that being sane and smart are not sufficient to predict such things.

There are plenty of quotes from people who were supposed to be experts (or at least well-educated) saying that heavier-than-air flight was impossible, computers would always be room-sized monstrosities of limited use, etc. I assume that this quote is pretty much the same idea (that future technology is unpredictable), but using a technology that is 1. more recent, and thus more relatable, and 2. not simply a matter of technology, but of adapted use; that is, most smart people might have guessed that the early internet could be made faster, webpages better, and the network more comprehensive. They simply didn't see the value that this would produce, and so assumed that technology would not move in that direction.

Comment author: John_Maxwell_IV 23 July 2016 02:30:27AM 0 points [-]

Indeed.com has an interesting salary tool that lets you do things like figure out what topics to study as a software developer.

Comment author: buybuydandavis 22 July 2016 10:56:22PM 1 point [-]

I always had the informal impression that the optimal policies were deterministic

So an impression that optimal memoryless policies were deterministic?

That seems even less likely to me. If the environment has state and you're not allowed any, you're playing at a disadvantage. Randomness is one way to counter state when you don't have state.

But it really does seem that there is a difference between facing an environment and another player - the other player adapts to your strategy in a way the environment doesn't. The environment only adapts to your actions.

I still don't see a difference. Both another player and the environment can only learn your strategy from your actions, so they're in the same boat.

Labeling something the environment or a player seems arbitrary and irrelevant. What capabilities are we talking about? Are these terms of art for which some standard specifying capability exists?

What formal distinctions have been made between players and environments?

Comment author: Lumifer 22 July 2016 09:03:31PM 1 point [-]

So in which way are you different from someone who, say, thinks that Peter Thiel has disgusting (to him and a lot of other people) tastes in sex and so will end up associating antideathism with being a moral degenerate?

Comment author: kilobug 22 July 2016 08:52:06PM 0 points [-]

"Infinite" is only well-defined as the precise limit of a finite process. When you say "infinite" in absolute, it's a vague notion that is very hard to manipulate without making mistakes. One of my university-level maths teacher kept saying that speaking of "infinite" without having precise limit of something finite is equivalent to dividing by zero.

Comment author: kilobug 22 July 2016 08:49:13PM 0 points [-]

I am, and not just for MIRI/AI safety, but also for other topics like anti-deathism. Just today I read in a major French newspaper an article explaining how Peter Thiel is the only one from Silicon Valley to support the "populist demagogue Trump", and, in the same article, that he also has this weird idea that death might ultimately be a curable disease...

I know that reverse stupidity isn't intelligence, and about the halo effect, and that Peter Thiel having disgusting (to me, and to most French citizens) political tastes has no bearing on whether he is right or wrong about death, but many people will end up associating antideathism with being a Trump-supporting lunatic :/

Comment author: RomeoStevens 22 July 2016 08:34:33PM 1 point [-]

If we get only one thing right, I think a plausible candidate is the right to exit. (If you have limited optimization power, narrow the scope of your ambition, blah blah.)

Comment author: UmamiSalami 22 July 2016 07:47:28PM *  1 point [-]

You should take a look at the last comment he made in reply to me, where he explicitly ascribed to me and then attacked (at length) a claim which I clearly stated that I didn't hold in the parent comment. It's amazing how difficult it is for the naive-eliminativist crowd to express cogent arguments or understand the positions which they attack, and a common pattern I've noticed across this forum as well as others.

Comment author: Lumifer 22 July 2016 07:44:20PM 1 point [-]

The environment only adapts to your actions.

Is this how you define environment?

Comment author: Stuart_Armstrong 22 July 2016 06:51:28PM 1 point [-]

ABABABABABAB...

It's deterministic, but not memoryless.

But it really does seem that there is a difference between facing an environment and another player - the other player adapts to your strategy in a way the environment doesn't. The environment only adapts to your actions.

I think for unbounded agents facing the environment, a deterministic policy is always optimal, but this might not be the case for bounded agents.
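Stuart's last point - that bounded (here, memoryless) agents may need stochastic policies even against a fixed environment - has a standard toy illustration from the POMDP literature. The world below is my own made-up example with aliased observations; the agent cannot tell the two states apart:

```python
import random

def run(policy, steps=10000, seed=1):
    """Two states the agent cannot distinguish. Action 'a' pays 1 in s0 and
    moves to s1; action 'b' pays 1 in s1 and moves to s0. The wrong action
    pays 0 and leaves the state unchanged."""
    rng = random.Random(seed)
    state, total = 0, 0
    for _ in range(steps):
        action = policy(rng)
        if (state == 0 and action == "a") or (state == 1 and action == "b"):
            total += 1
            state = 1 - state
    return total / steps

always_a = lambda rng: "a"                # deterministic memoryless policy
coin_flip = lambda rng: rng.choice("ab")  # stochastic memoryless policy

print(run(always_a))   # 0.0001: one reward, then stuck in s1 forever
print(run(coin_flip))  # close to 0.5: reward on about half the steps
```

Every deterministic memoryless policy gets trapped in the state where its action pays nothing, while the coin flip averages ~0.5 reward per step. An agent with memory could play "abab..." and collect reward every step, which is the ABAB point above.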

Comment author: pcm 22 July 2016 06:18:58PM 3 points [-]

No, mainly because Elon Musk's concern about AI risk added more prestige than Thiel had.

Comment author: Gunnar_Zarncke 22 July 2016 05:31:30PM 0 points [-]

Granted.

In response to comment by dxu on Zombies Redacted
Comment author: UmamiSalami 22 July 2016 05:21:13PM *  0 points [-]

Not too long ago, it would also have been quite easy to conceive of a world in which heat and motion were two separate things. Today, this is no longer conceivable.

But it is conceivable for thermodynamics to be caused by molecular motion. No part of that is (or ever was, really) inconceivable. It is inconceivable for the sense qualia of heat to be reducible to motion, but that's just another reason to believe that physicalism is wrong. The blog post you linked doesn't actually address the idea of inconceivability.

If something seems conceivable to you now, that might just be because you don't yet understand how it's actually impossible.

No, it's because there is no possible physical explanation for consciousness (whereas there are possible kinetic explanations for heat, as well as possible sonic explanations for heat, and possible magnetic explanations for heat, and so on. All these nonexistent explanations are conceivable in ways that a physical description of sense data is not).

By stipulation, you would have typed the above sentence regardless of whether or not you were actually conscious, and hence your statement does not provide evidence either for or against the existence of consciousness.

And I do not claim that my statement is evidence that I have qualia.

This exact statement could have been emitted by a p-zombie.

See above. No one is claiming that claims of qualia prove the existence of qualia. People are claiming that the experience of qualia proves the existence of qualia.

In particular, for a piece of knowledge to have epistemic value to me (or anyone else, for that matter), I need to have some way of acquiring that knowledge.

We're not talking about whether a statement has "epistemic value to [you]" or not. We're talking about whether it's epistemically justified or not - whether it's true or not.

There exists a mysterious substance called "consciousness" that does not causally interact with anything in the physical universe.

Neither I nor Chalmers describe consciousness as a substance.

Since this substance does not causally interact with anything in the physical universe, and you are part of the physical universe, said substance does not causally interact with you.

Only if you mean "you" in the reductive physicalist sense, which I don't.

This means, among other things, that when you use your physical fingers to type on your physical keyboard the words, "we are conscious, and know this fact through direct experience of consciousness", the cause of that series of physical actions cannot be the mysterious substance called "consciousness", since (again) that substance is causally inactive. Instead, some other mysterious process in your physical brain is occurring and causing you to type those words, operating completely independently of this mysterious substance.

Of course, although physicalists believe that the exact same "some other mysterious process in your physical brain" causes us to type, they just happen to make the assertion that consciousness is identical to that other process.

Nevertheless, for some reason you appear to expect me to treat the words you type as evidence of this mysterious, causally inactive substance's existence.

As I have stated repeatedly, I don't, and if you'd taken the time to read Chalmers you'd have known this instead of writing an entirely impotent attack on his ideas. Or you could have even read what I wrote. I literally said in the parent comment,

The confusion in your post is grounded in the idea that Chalmers or I would claim that the proof for consciousness is people's claims that they are conscious. We don't (although it could be evidence for it, if we had prior expectations against p-zombie universes which talked about consciousness). The claim is that we know consciousness is real due to our experience of it.

Honestly, how deliberately obtuse do you have to be to write an entire attack on an idea which I explicitly rejected in the comment to which you replied? Do not waste my time like this in the future.

In response to comment by dxu on Zombies Redacted
Comment author: UmamiSalami 22 July 2016 05:07:20PM *  1 point [-]

I claim that it is "conceivable" for there to be a universe whose psychophysical laws are such that only the collection of physical states comprising my brainstates are conscious, and the rest of you are all p-zombies.

Yes. I agree that it is conceivable.

Now then: I claim that by sheer miraculous coincidence, this universe that we are living in possesses the exact psychophysical laws described above (even though there is no way for my body typing this right now to know that), and hence I am the only one in the universe who actually experiences qualia. Also, I would say this even if we didn't live in such a universe.

Sure, and I claim that there is a teapot orbiting the sun. You're just being silly.

Comment author: pepe_prime 22 July 2016 04:48:42PM 0 points [-]

This is a really great link!

Comment author: Brillyant 22 July 2016 04:16:11PM 0 points [-]

Most people don't use probability for their beliefs. They use mental processes such as the availability heuristic, which doesn't correspond directly to probabilities.

I meant "personal probability" as the confidence at which people intuit a belief as actually anticipatory (vs. a belief they merely assent to as an association.) This level of confidence is on a sliding scale (vs. all or nothing).

Comment author: Lumifer 22 July 2016 04:00:46PM 1 point [-]

Not me.

In general, I think hysterics over Trump are much overdone.

Comment author: Wei_Dai 22 July 2016 03:54:22PM 1 point [-]

Anyone else worried about Peter Thiel's support for Donald Trump discrediting Thiel in a lot of people's eyes, and MIRI and AI safety/risk research in general by association?

Comment author: Wei_Dai 22 July 2016 03:49:22PM 5 points [-]

That's funny. :) But these people actually sound remarkably sane. See here and here for example.

Comment author: Lumifer 22 July 2016 03:03:49PM 0 points [-]

Generally speaking, for this you need a meta-model, that is, a model of how your model will change (e.g. become outdated) with the arrival of new information. Plus, if you want to compare costs, you need a loss function which will tell you how costly the errors of your model are.
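The meta-model-plus-loss-function recipe can be sketched in a few lines. Here the "meta-model" is an assumed random-walk drift in the estimated parameter, so posterior variance grows linearly while you wait, and the loss is squared error (proportional to that variance); all the constants are invented for illustration:

```python
def steps_until_update(prior_var, drift_var, data_cost, loss_per_unit_var):
    """Belief about a parameter that drifts as a random walk: posterior
    variance grows by drift_var each step without new data. Expected
    squared-error loss is proportional to the variance. Return how many
    steps pass before the accumulated extra loss from staleness exceeds
    the cost of buying a new observation."""
    steps, extra_loss, var = 0, 0.0, prior_var
    while extra_loss < data_cost:
        var += drift_var                                   # model goes stale
        extra_loss += loss_per_unit_var * (var - prior_var)
        steps += 1
    return steps

# variance grows 0.1/step, each unit of extra variance costs 2, data costs 5:
print(steps_until_update(prior_var=1.0, drift_var=0.1, data_cost=5.0,
                         loss_per_unit_var=2.0))  # 7 steps before data pays off
```

The design choice is exactly the one named above: the drift model tells you how fast your model becomes outdated, and the loss function converts that into a cost you can compare against the price of fresh data.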

Comment author: Mac 22 July 2016 01:41:50PM *  2 points [-]

Foundational Research Institute promotes compromise with other value systems. See their work here, here, here, and quoted section in the OP.

Rest easy, negative utilitarians aren't coming for you.

Comment author: MrMind 22 July 2016 01:14:04PM 0 points [-]

Unfortunately, to pull this off you need to look closely at both your model and the model of the error; there's no general method AFAIK.

Comment author: SherkanerUnderhill 22 July 2016 10:02:42AM *  0 points [-]

I have also started a pursuit of learning useful concepts/models explicitly.

Some useful resources:

Comment author: ChristianKl 22 July 2016 09:56:14AM 2 points [-]

It seems like your list is missing that moving usually destroys a lot of established habits and makes room for new ones.

Comment author: ChristianKl 22 July 2016 09:32:56AM 0 points [-]

Answering the question that's asked instead of giving the answer that someone seeks can increase the clarity about the nature of the question that's asked.

Comment author: Kaj_Sotala 22 July 2016 07:24:13AM 0 points [-]

The final output of this project will be a long article, either on FRI's website or a peer-reviewed publication or both; we haven't decided on that yet.

In response to comment by dxu on Zombies Redacted
Comment author: entirelyuseless 22 July 2016 03:04:38AM 1 point [-]

Those are not the only possibilities (that either zombies are impossible or that qualia are the result of magic), but even if they were, your reasons for disbelieving in magic are inductive.

In response to Crazy Ideas Thread
Comment author: bwasti 21 July 2016 11:17:24PM *  0 points [-]

I define intelligence as the ability to make optimal decisions to achieve some goal. The goal, clearly, is left undefined. This extends beyond the typical application of the word to humans, although I believe it fits nicely. A conventionally labeled intelligent person is capable of achieving conventionally defined "smart" goals such as performing well on tests and solving problems. However, things that are not seen as conventionally intelligent, such as the ability to distinguish between colors, would also fall under this definition.

One implication of this thought is that most people (may) have roughly the same amount of intelligence. Our brains and their biological neural networks can be trained in various ways to do certain tasks "better" and it is a matter of luck if those tasks align with conventional views of intelligence.

This isn't too revolutionary - perhaps the rough-equality argument is somewhat controversial - but it got me thinking about how this definition extends to more complex entities. Specifically, I've been thinking about groups of people. In light of the recent British exit from the EU, many people argued that democracy had failed. A quote attributed to Winston Churchill was tossed around frequently: "The best argument against democracy is a five-minute conversation with the average voter." Democracy attempts to find the average decision of the voters, so obviously it wouldn't be more intelligent than the average voter. In terms of raw intelligence, therefore, I would argue that a single voter picked at random is just as effective for decision making. I am of course ignoring the goal of democracy, which is fairness.

Another entity worth exploring with this definition is the economy. My thought is that the economy is very intelligent. I haven't been able to boil down exactly why, but the premise I've considered is that it works evolutionarily: the fittest companies survive. This is much unlike a democracy because each mind participating in the economy is effectively competing with every other mind. Each decision that is ultimately made is collaborative in nature, and I would argue that in the economy we don't see an average intelligence but rather a summation (to be vague with the mathematical model) of all the intelligences interacting with it.

I haven't explored any of the immediate parallels too much. An example would be neurons in neural networks functioning similarly (competitively) in their contribution to a full decision. It seems consistent with how most neural nets are set up: a correct decision backpropagates to increase that neuron's weight for future decisions.
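The competitive-contribution idea can be loosely sketched as a multiplicative-weights update over "experts". This is a hedge-style update substituted for illustration; real backpropagation is gradient-based, and all the votes below are invented:

```python
def weighted_vote(weights, votes):
    """Aggregate +1/-1 votes by weight; the group answers with the sign."""
    score = sum(w * v for w, v in zip(weights, votes))
    return 1 if score >= 0 else -1

def update(weights, votes, truth, lr=0.5):
    # Experts that voted with the truth gain weight; the rest lose weight.
    return [w * (1 + lr) if v == truth else w * (1 - lr)
            for w, v in zip(weights, votes)]

weights = [1.0, 1.0, 1.0]
rounds = [([1, -1, -1], 1), ([1, 1, -1], 1), ([1, -1, 1], 1)]
for votes, truth in rounds:
    weights = update(weights, votes, truth)

print(weights)                              # [3.375, 0.375, 0.375]
print(weighted_vote(weights, [1, -1, -1]))  # 1: the reliable expert dominates
```

Unlike simple averaging (the democracy case), the aggregate here ends up tracking its most reliable contributor, which is one way to cash out "summation rather than average".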

Comment author: Manfred 21 July 2016 09:48:54PM 6 points [-]

Oh my gosh, the negative utilitarians are getting into AI safety. Everyone play it cool and try not to look like you're suffering.

Comment author: Clarity 21 July 2016 09:34:41PM 0 points [-]

When the man doesn't fit the narrative change the narrative to fit the man

Our Brand is Crisis - a movie about political campaign management

Comment author: Clarity 21 July 2016 09:34:36PM -1 points [-]

Please stop throwing rocks because you have already broken my windshield

Our Brand is Crisis - a movie about political campaign management

Comment author: Gunnar_Zarncke 21 July 2016 09:02:18PM 0 points [-]

Well, I kind of assume that the set of answers he intended with his question didn't contain your answer either ;-)

Comment author: Gunnar_Zarncke 21 July 2016 09:00:38PM 0 points [-]

Yeah, both.

Comment author: Riothamus 21 July 2016 08:44:23PM 1 point [-]

Is there a procedure in Bayesian inference to determine how much new information in the future invalidates your model?

Say I have some kind of time-series data, and I make an inference from it up to the current time. If the data is costly to get in the future, would I have a way of determining when cost of increasing error exceeds the cost of getting the new data and updating my inference?

Comment author: ChristianKl 21 July 2016 08:30:25PM 0 points [-]

He didn't ask for it being bijective.

Comment author: Alexander230 21 July 2016 08:27:55PM 0 points [-]

They are fallacy cards. Fallacy can be explained as "faulty reasoning" or "bad argument", and cognitive bias is "mistake in thinking". They have many similarities and intersections, though.

Comment author: Gunnar_Zarncke 21 July 2016 08:17:18PM 0 points [-]

Great bias cards!

Comment author: Gunnar_Zarncke 21 July 2016 08:16:49PM *  0 points [-]

But that isn't bijective. You can't recover the original structure.

Comment author: Romashka 21 July 2016 06:27:56PM 0 points [-]

How likely is it that polls on happiness, subjective well-being, self-worth, subjective productivity etc. are influenced by the position of the date of the poll relative to school year? (School has dictated my plans for sixteen years, and with my kid enrolled in the kindergarten we enter the same pattern. That's half my life, optimistically speaking.)

I suppose in people whose work is built upon different seasonalities (like seashore resort employees, or long-distance delivery workers, probably?), ratings should differ from the rest of the population.

Comment author: Soothsilver 21 July 2016 06:04:24PM 0 points [-]

Thank you.

Comment author: Algernoq 21 July 2016 05:56:10PM 1 point [-]

Thank you for the kind words.

Comment author: pepe_prime 21 July 2016 05:30:07PM 0 points [-]

If you dig down 3 links you find the Commuter's Paradox. I found this paper to use very reasonable controls and explain itself well. Sadly, it doesn't address your question about different modes of transportation.

Comment author: qmotus 21 July 2016 05:14:13PM 0 points [-]

Will your results ultimately take the form of blog posts such as those, or peer-reviewed publications, or something else?

I think FRI's research agenda is interesting and that they may very well work on important questions that hardly anyone else does, but I haven't yet supported them as I'm not certain about their ability to deliver actual results or the impact of their research, and find it a tad bit odd that it's supported by effective altruism organizations, since I don't see any demonstration of effectiveness so far. (No offence though, it looks promising.)

Comment author: dxu 21 July 2016 04:18:54PM *  1 point [-]

...Your comment, paraphrased:

"You think I'm wrong, but actually you're the one who's wrong. I'm not going to give any reasons you're wrong, because this margin is too narrow to contain those reasons, but rest assured I know for a fact that I'm right and you're wrong."

This is, frankly, ridiculous and a load of drivel. Sorry, but I have no intention of continuing to argue with someone who doesn't even bother to present their side of the argument and insults my intelligence on top of that. Tapping out.

Comment author: dxu 21 July 2016 04:13:58PM *  0 points [-]

No, I don't believe zombies are impossible because of some nebulously defined "inductive argument". I believe zombies are impossible because I am experiencing qualia, and I don't believe those qualia are the result of some magical consciousness substance that can be added or subtracted from a universe at will.

Comment author: Dagon 21 July 2016 02:52:40PM 0 points [-]

Or on healthcare or architecture or garbage collection or any of the billion things humans do for each other.

Some thought to far-mode issues is worthwhile, and you might be able to contribute a bit as a funder or hobbyist, but for most people, including most rationalists, it shouldn't be your primary drive.

Comment author: ChristianKl 21 July 2016 02:36:57PM 0 points [-]

There are guys who primarily care about having sex with hot women, and there are women who primarily care about having sex with hot men.

In both cases, that's not the whole population.

Furthermore, for many women, having sex with a man with whom they are in a loving relationship is better than having sex with a man with whom they aren't.

Comment author: buybuydandavis 21 July 2016 12:21:25PM 1 point [-]

I always had the informal impression that the optimal policies were deterministic

Really? I wouldn't have ever thought that at all. Why do you think you thought that?

when facing the environment rather than other players. But stochastic policies can also be needed if the environment is partially observable

Isn't that kind of what a player is? Part of the environment with a strategy and only partially observable states?

Although for this player, don't you have an optimal strategy, except for the first move? The Markov "Player" seems to like change.

Isn't this strategy basically optimal? ABABABABABAB... Deterministic, just not the same every round. Am I missing something?

Comment author: John_Maxwell_IV 21 July 2016 11:13:53AM 0 points [-]

Do you get the impression that Japan has numerous benevolent and talented researchers who could and would contribute meaningfully to AI safety work? If so, it seems possible to me that your comparative advantage is in evangelism rather than research (subject to the constraint that you're staying in Japan indefinitely). If you're able to send multiple qualified Japanese researchers west, that's potentially more than you'd be able to do as an individual.

You'd still want to have thorough knowledge of the issues yourself, if only to convince Japanese researchers that the problems were interesting.

Comment author: hg00 21 July 2016 10:55:34AM *  3 points [-]

Thanks for your work.

I wouldn't be so sure that no one is reading what you write. Powerful people have little incentive to let it be known that they read odd websites like Less Wrong, but I assume they sometimes waste time browsing the internet like the rest of us. And insofar as high IQ and rationality are related to business success, it makes sense that wealthy people would disproportionately have LWish cognitive profiles and be interested in reading things LWers are interested in. There are a number of wealthy software entrepreneurs who have given large amounts to MIRI, for instance (Thiel, Tallin, McCaleb).

Comment author: hg00 21 July 2016 10:36:14AM *  0 points [-]

assuming infidelity is legal

http://lesswrong.com/lw/l0/adaptationexecuters_not_fitnessmaximizers/

Anyway, it sounds like you've gone through a lot. I'm sorry to hear of your suffering. I hope that someday you will have joyful experiences that help you put your current suffering in perspective.

In response to comment by dxu on Zombies Redacted
Comment author: entirelyuseless 21 July 2016 10:36:12AM -2 points [-]

Also, regarding the personal things here, I am not surprised that you find it hard to understand me, for two reasons. First, as I have said, I haven't been trying to lay out an entire position anyway, because it is not something that would fit into a few comments on Less Wrong. Second, you are deeply confused about a large number of things.

Of course, you suppose that I am the one who is confused. This is normal for disagreements. But I have good evidence that it is you who are confused, rather than me. You admit that you do not understand what I am saying, calling it "vague hand-waving." In contrast, I understand both what I am saying, and what you are saying. I understand your position quite well, and all of its reasons, along with the ways that you are mistaken. This is a difference that gives me a reason to think that you are the one who is confused, not me.

I agree that it would not be productive to continue a discussion along those lines, of course.

In response to comment by dxu on Zombies Redacted
Comment author: entirelyuseless 21 July 2016 10:28:47AM 1 point [-]

"I do not believe etc."

That is my point. It is a question of your beliefs, not of proofs. In essence, in your earlier comment, you asserted that you do not depend on an inductive argument to tell you that other people are conscious, because zombies are impossible. But my point is that without the inductive argument, you would have no reason to believe that zombies are impossible.

Comment author: ChristianKl 21 July 2016 09:35:20AM 1 point [-]

You could also simply transform everything into "mu".

Comment author: ChristianKl 21 July 2016 09:34:19AM 0 points [-]

More crucially it would require permission.

Comment author: Cariyaga 21 July 2016 09:29:31AM 0 points [-]

Well, I'd try to find a more accurate estimate of mortality and hospitalization for your age group; if you're younger than 30, I'd be very surprised to find the mortality rate that high. You could also take Acetaminophen instead, as it is a pain reliever which is NOT an NSAID, and does not seem to cause any stomach bleeding, which should cut it down to the 4x margin. IANAD, though, so take that with a grain of salt, and speak to your GP if you have any particular questions about interactions.

I can follow your maths, but I'm also not a stat major or anything.

Comment author: ChristianKl 21 July 2016 08:50:46AM 0 points [-]

Cal Newport in Deep Work (his own word for flow work)

I'm not sure that's an accurate description for Cal Newport's Deep Work. High intensity deliberate practice that you can only do for short amounts of time per session is Deep Work in Newport's model.

Comment author: Viliam 21 July 2016 08:14:46AM 0 points [-]

Well, there already exists a version of this -- a few orders of magnitude slower -- where some groups of people decide to reproduce faster than other groups.

In real life the limiting factor is often that human children require resources, so too much reproduction may actually harm them by not leaving enough resources per child, so they are later unable to compete with those who received more resources. Another limiting factor is war, often over the resources. And yet another approach is to just let it happen and hope the problem will somehow solve itself, which actually sometimes happens (for example as groups of people get richer, they start valuing comfortable life more, which gets in the way of maximizing the number of their children). Also, sometimes the group actually wins, and then in the future its various subgroups have to compete against each other.

Also, there is the fact of human sexual reproduction, which means that the winning group does not have to exterminate the losing groups; it can also assimilate them. Here is probably the greatest difference compared with the Em scenario, which is like a return to asexual reproduction.

Comment author: Viliam 21 July 2016 07:55:50AM 1 point [-]

Yeah, this is what I suspect, too. "Hyperbolic" is just a metaphor, a simple example of a curve that has the required properties. There is probably no hyperbolometer in the emotional center of the human brain.
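For what it's worth, the curve in question is easy to sketch. The usual hyperbolic form is V = A / (1 + kD) for amount A and delay D, versus the exponential V = A·e^(−rD); the parameter values below are arbitrary illustrative choices, not anything measured in brains:

```python
import math

def hyperbolic(amount, delay, k=0.1):
    """Hyperbolic discount: value falls as 1 / (1 + k * delay)."""
    return amount / (1 + k * delay)

def exponential(amount, delay, r=0.1):
    """Exponential discount: value falls as e^(-r * delay)."""
    return amount * math.exp(-r * delay)

# For matched rates, e^(kD) >= 1 + kD, so the hyperbolic curve always
# retains at least as much value -- its "fat tail" at long delays is
# what produces the preference reversals the model is invoked for.
for delay in (1, 10, 100):
    print(delay, hyperbolic(100, delay), exponential(100, delay))
```

The point of the comment stands: any curve whose effective discount rate shrinks with delay would fit the behavioral data; "hyperbolic" is just the simplest named member of that family.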

Comment author: Viliam 21 July 2016 07:52:39AM 0 points [-]

I suspect it could be cheaper if someone would print multiple copies and then sell them... but of course, that requires a volunteer, and an investment.

Comment author: dxu 21 July 2016 06:06:37AM 0 points [-]

just because a zombie world is impossible, does not mean that we have a syllogistic proof from first principles that it is impossible. We do not.

True.

And so if someone thinks it is possible, you can never refute that.

False.

You can only give reasons, that is, non-conclusive reasons, for thinking that it is probably impossible. And the reasons for thinking that are very similar to the reason I gave for thinking that other people are conscious. Your comment confuses two different ideas, namely whether zombies are possible, and what we know about zombies and how we know it, which are two different things.

This is not a matter of knowledge, but of expectation. Basically, the question boils down to whether I, personally, believe that consciousness will eventually be explained in reductionistic, lower level terms, just as heat was explained in reductionistic, lower level terms, even if such an explanation is currently unavailable. And the answer to that question is yes. Yes, I do.

I do not believe that consciousness is magic, and I do not believe that it will remain forever inexplicable. I believe that although we do not currently have an explanation for qualia, we will eventually discover such an explanation, just as I believe there exists a googol-th digit of pi, even if we have not yet calculated that digit. And finally, I expect that once such an explanation is discovered, it will make the entire concept of "p-zombies" seem exactly as possible as "heat" somehow being different from "motion", or biology being powered by something other than chemistry, or the third digit of pi being anything other than 4.

This is, it seems to me, the only reasonable position to take; anything else would, in my opinion, require a massive helping of faith. I have attempted to lay out my arguments for why this is so on multiple occasions, and (if you'll forgive my immodesty) I think I've done a decent job of it. I have also asked you several questions in order to help clarify your objections so that I might be able to better address said objections; so far, these questions of mine have gone unanswered, and I have instead been presented with (what appears to me to be) little more than vague hand-waving in response to my carefully worded arguments.

As this conversation has progressed, all of these things have served to foster a feeling of increasing frustration on my part. I say this, not to start an argument, but to express my feelings regarding this discussion directly in the spirit of Tell Culture. Forgive me if my tone in this comment seems a bit short, but there is only so much dancing around the point I am willing to tolerate before I deem the conversation a frustrating and fruitless pursuit. I don't mean to sound like I'm giving an ultimatum here, but to put it bluntly: unless I encounter a point I feel is worth addressing in detail, this will likely be my last reply to you on this topic. I've laid out my case; I leave the task of refuting it to others.

Comment author: Kaj_Sotala 21 July 2016 05:58:13AM 1 point [-]

I don't have a PhD either, and I know of at least one other person who'd been discussing working for them who was also very far from having that level of experience.

Comment author: dxu 21 July 2016 05:37:02AM 0 points [-]

And yet it seems really quite easy to conceive of a p zombie. Merely claiming that consciousness is emergent doesn't change our ability to imagine the presence or absence of the phenomenon.

Not too long ago, it would also have been quite easy to conceive of a world in which heat and motion were two separate things. Today, this is no longer conceivable. If something seems conceivable to you now, that might just be because you don't yet understand how it's actually impossible. To make the jump from "conceivability" (a fact about your bounded mind) to "logically possible" (a fact about reality) is a misstep, and a rather enormous one at that.

But clearly we do have such a reason: that we are conscious, and know this fact through direct experience of consciousness.

By stipulation, you would have typed the above sentence regardless of whether or not you were actually conscious, and hence your statement does not provide evidence either for or against the existence of consciousness. If we accept the Zombie World as a logical possibility, our priors remain unaltered by the quoted sentence, and continue to be heavily weighted toward the Zombie World. (Again, we can easily get out of this conundrum by refusing to accept the logical possibility of the Zombie World, but this seems to be something you refuse to do.)
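The Bayesian shape of this argument can be made explicit: if, by stipulation, the sentence would be typed with equal probability whether or not its author is conscious, the likelihood ratio is 1 and the posterior equals the prior. The numbers below are hypothetical placeholders, only the structure matters:

```python
# Bayes update when the evidence is equally likely under both hypotheses.
prior_conscious = 0.5  # hypothetical prior

# By stipulation, a p-zombie emits "I am conscious" exactly as readily
# as a conscious being, so the two likelihoods are identical.
p_statement_given_conscious = 1.0
p_statement_given_zombie = 1.0

evidence = (prior_conscious * p_statement_given_conscious
            + (1 - prior_conscious) * p_statement_given_zombie)
posterior_conscious = prior_conscious * p_statement_given_conscious / evidence

# The posterior equals the prior: the statement moved nothing.
print(posterior_conscious)
```

Whatever prior one plugs in, identical likelihoods leave it untouched, which is the sense in which the quoted sentence "does not provide evidence either for or against."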

The claim is that we know consciousness is real due to our experience of it.

This exact statement could have been emitted by a p-zombie. Without direct access to your qualia, I have no way of distinguishing the difference based on anything you say or do, and as such this sentence provides just as much evidence that you are conscious as the earlier quoted statement does--that is to say, no evidence at all.

The fact that this knowledge is causally inefficacious does not change its epistemic value.

Oh, but it does. In particular, for a piece of knowledge to have epistemic value to me (or anyone else, for that matter), I need to have some way of acquiring that knowledge. For me to acquire that knowledge, I must causally interact with it in some manner. If that knowledge is "causally inefficacious", as you put it, by definition I have no way of knowing about it, and it can hardly be called "knowledge" at all, much less have any epistemic value.

Allow me to spell things out for you. Your claims, interpreted literally, would imply the following statements:

  1. There exists a mysterious substance called "consciousness" that does not causally interact with anything in the physical universe.
  2. Since this substance does not causally interact with anything in the physical universe, and you are part of the physical universe, said substance does not causally interact with you.
  3. This means, among other things, that when you use your physical fingers to type on your physical keyboard the words, "we are conscious, and know this fact through direct experience of consciousness", the cause of that series of physical actions cannot be the mysterious substance called "consciousness", since (again) that substance is causally inactive. Instead, some other mysterious process in your physical brain is occurring and causing you to type those words, operating completely independently of this mysterious substance. Moreover, this physical process would occur and cause you to type those same words regardless of whether the mysterious epiphenomenal substance called "consciousness" was actually present.
  4. Nevertheless, for some reason you appear to expect me to treat the words you type as evidence of this mysterious, causally inactive substance's existence. This, despite the fact that those words and that substance are, by stipulation, completely uncorrelated.

...Yeah, no. Not buying it, sorry. If you can't see the massive improbabilities you're incurring here, there's really not much left for me to say.

Comment author: Lumifer 21 July 2016 05:18:33AM 1 point [-]

I am pretty cynical already and I don't see the point of this quote. I am not saying you should be a loyal friend to the whole world.

You, I presume, have been recently burned and so your sense of risk-reward is skewed at the moment. Yes, you can arrange your life to be almost entirely safe from emotional harm, but I suspect it will be a barren and highly unsatisfying life.

Comment author: rmoehn 21 July 2016 04:53:52AM 0 points [-]

So it would be better to work on computer security? Or on education, so that we raise fewer unfriendly natural intelligences?

Also, AI safety research benefits AI research in general and AI research in general benefits humanity. Again only marginal contributions?

Comment author: Algernoq 21 July 2016 04:52:43AM 1 point [-]

"What good is life experience to someone who plays Quidditch?" said Professor Quirrell, and shrugged. "I think you will change your mind in time, after every trust you place has failed you, and you have become cynical."

"You have to get seriously burnt by friends/employers/family members (ideally all three) over women/money/jobs (again ideally all three) before you realise that you create more hassle for yourself and crush opportunities if people perceive you to be smart/rich/well connected. Most people simply are not worth knowing and are too insecure to be good friends with."

Comment author: rmoehn 21 July 2016 04:44:31AM 0 points [-]

I thought online marketing businesses were powerful enough…

Comment author: Lumifer 21 July 2016 04:41:48AM 0 points [-]

It's safest to assume that any woman will dump/manipulate/cheat me the second it's in her best interest to do so.

Ask and ye shall receive.

You're setting yourself up for an unhappy life.

Comment author: Lumifer 21 July 2016 04:40:47AM 0 points [-]

I would say that, given your current needs, uncertainty, opportunity cost, and changes in yourself, the answer is debatable; that is, I can see it coming down to individual preferences.

But I still don't see practical applications. For actual calculations you need some reasonable numbers and I don't see how you are going to come up with them.
