
Comment author: Manfred 05 November 2017 07:46:32PM 0 points [-]

Hm, the format is interesting. The end product is, ideally, a tree of arguments, with each argument having an attached relevance rating from the audience. I like that they didn't try to use the pro and con arguments to influence the rating of the parent argument, because that would be too reflective of audience composition.
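For concreteness, here is a minimal sketch of that structure (the field names are my own, not the site's actual schema): each argument node carries its own audience relevance rating, and child ratings deliberately do not feed into the parent's.

```python
# Minimal sketch of an argument tree with per-node relevance ratings.
# Field names are hypothetical, not the site's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    text: str
    relevance_votes: List[int] = field(default_factory=list)  # audience ratings for this node only
    pro: List["Argument"] = field(default_factory=list)       # supporting sub-arguments
    con: List["Argument"] = field(default_factory=list)       # opposing sub-arguments

    def relevance(self) -> float:
        """Average audience rating; pro/con children are intentionally ignored."""
        if not self.relevance_votes:
            return 0.0
        return sum(self.relevance_votes) / len(self.relevance_votes)
```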

Comment author: curi 01 November 2017 06:55:29PM 2 points [-]

Well, no - it's a set of explanations. A very large set, consisting of every explanation other than ‘the sun is powered by nuclear fusion’, but smaller than T | ~T, and therefore somewhat useful, however slightly.

Infinity minus one isn't smaller than infinity. That's not useful in that way.

It may be useful in some way. But just ruling a single thing out, when dealing with infinity, isn't a road to progress.

indeed the idea that quantum theory and relativity are both true is nonsense

He's saying we use them both, and that has value, even though we know there must be some mistake somewhere. Saying "or" misrepresents the current situation. Both of them seem to be partly right. The situation (our current understanding, which has value) looks nothing like one in which we'll end up keeping one theory and rejecting the other.

Comment author: Manfred 02 November 2017 09:07:28PM 0 points [-]

Infinity minus one isn't smaller than infinity. That's not useful in that way.

The thing being added or subtracted is not the mere number of hypotheses, but a measure of the likelihood of those hypotheses. We might suppose an infinitude of mutually exclusive theories of the world, but most of them are extremely unlikely - for any degree of unlikeliness, there are an infinity of theories less likely than that! A randomly chosen theory is so unlikely to be true that, even if you add up the likelihoods of every single theory, the total comes to a number less than infinity.

This is why it is informative to divide our hypotheses between something likely and everything else. "Everything else" contains infinite possibilities, but only finite likelihood.
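A toy numerical illustration of this (the geometric prior below is an assumed example, not a claim about the true distribution over theories):

```python
# Toy illustration: infinitely many hypotheses can still carry finite total probability.
# Assumed example prior P(h_n) = 2**-(n+1), which sums to 1 over all n.

def prior(n: int) -> float:
    return 2.0 ** -(n + 1)

# The total mass approaches 1 even though there are infinitely many hypotheses:
print(sum(prior(n) for n in range(50)))   # ~1.0

# Ruling out a high-prior hypothesis removes a lot of probability mass...
print(1 - prior(0))    # 0.5 remains
# ...while ruling out one hypothesis from the far tail removes almost nothing:
print(1 - prior(40))   # ~1.0 remains
```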

Comment author: Dagon 01 November 2017 09:54:23PM 0 points [-]

(1) the objective of science is, or should be, to increase our ‘credence’ for true theories

Well, no. Theories are maps, and are by necessity simpler than the territory (the universe is its own best model). There is no such thing as a "true" theory. There are only theories that predict a larger or smaller subset of future states better or worse than others do.

Comment author: Manfred 02 November 2017 08:55:20PM 0 points [-]

I think this neglects the idea of "physical law," which says that theories can be good when they capture the dynamics and building-blocks of the world simply, even if they are quite ignorant about the complex initial conditions of the world.

Comment author: Stuart_Armstrong 25 October 2017 04:07:17PM 1 point [-]

I suggest you check with Nate what exactly he thinks, but my opinion is:

If two decision algorithms are functionally equivalent, but algorithmically dissimilar, you'd want a decision theory that recognises this.

I think Nate agrees with this, and any lack of functional equivalence is due to not being able to fully specify that yet.

f and f' are functionally correlated, but not functionally equivalent. FDT does not recognise this.

Can't this be modelled as uncertainty over functional equivalence? (or over input-output maps)?

Comment author: Manfred 25 October 2017 09:41:14PM *  0 points [-]

Can't this be modelled as uncertainty over functional equivalence? (or over input-output maps)?

Hm, that's an interesting point. Is what we care about just the brute input-output map? If we're faced with a black-box predictor, then yes, all that matters is the correlation even if we don't know the method. But I don't think any sort of representation of computations as input-output maps actually helps account for how we should learn about or predict this correlation - we learn and predict the predictor in a way that seems like updating a distribution over computations. Nor does it seem to help in the case of trying to understand to what extent two agents are logically dependent on one another. So I think the computational representation is going to be more fruitful.
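To make the distinction concrete (a purely illustrative sketch, not part of FDT's formalism): two procedures can share the same input-output map while being algorithmically dissimilar, which is exactly the information an input-output representation throws away.

```python
# Purely illustrative: two algorithmically dissimilar procedures with the
# same input-output map. An input-output representation cannot tell them apart.

def sum_iterative(n: int) -> int:
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n: int) -> int:
    return n * (n + 1) // 2

# Functionally equivalent on every input we check, despite different internals:
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(1000))
```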

Comment author: Manfred 18 October 2017 10:59:35PM 1 point [-]

Interesting that ResNets still seem to be state of the art. I was expecting them to have been replaced by something more heterogeneous by now. But I might be overrating the usefulness of discrete composition because it's easy to understand.
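For reference, a minimal sketch of the residual composition being referred to (schematic only; real ResNets use convolutions, batch normalization, and so on):

```python
# Schematic residual block: the block learns a correction f(x) and adds it
# back to its input, so output = x + f(x). This is only a sketch, not a
# faithful ResNet layer.
import numpy as np

def relu(x):
    return np.maximum(0, x)

def residual_block(x, w1, w2):
    f_x = relu(x @ w1) @ w2   # the learned residual
    return relu(x + f_x)      # identity path plus residual

# Stacking blocks is the "discrete composition" mentioned above:
x = np.random.randn(4, 8)
w1, w2 = np.random.randn(8, 8), np.random.randn(8, 8)
h = residual_block(residual_block(x, w1, w2), w1, w2)
```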

Comment author: root 17 October 2017 03:28:53PM 2 points [-]

Is LW 1.0 dead?

Comment author: Manfred 17 October 2017 04:45:14PM 3 points [-]

Plausibly? LW2 seems to be doing okay, which is gonna siphon off posts and comments.

Comment author: MrMind 06 October 2017 10:11:48AM 0 points [-]

That's interesting... is the dust size still consistent with artificial objects?

Comment author: Manfred 06 October 2017 08:04:35PM *  1 point [-]

The dust probably is just dust - the preferential scattering of blue light over red is the same reason the sky is blue and the sun looks red at sunset (Rayleigh scattering / Mie scattering). It comes from scattering off particles smaller than a few times the wavelength of the light - so if visible light is being scattered less than UV, we know that many of the particles are smaller than ~2 um. This is about the size of a small bacterium, so dust with interesting structure isn't totally out of the question, but still... it's probably just dust.
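As a rough illustration of the wavelength dependence (Rayleigh scattering intensity scales roughly as 1/wavelength^4; the example wavelengths below are mine):

```python
# Rayleigh scattering intensity scales roughly as 1/wavelength**4, so shorter
# wavelengths (UV, blue) scatter more strongly than longer ones (red).

def relative_scattering(wavelength_nm: float) -> float:
    return wavelength_nm ** -4

blue, red, uv = 450.0, 650.0, 300.0
print(relative_scattering(blue) / relative_scattering(red))  # ~4.3x: why the sky is blue
print(relative_scattering(uv) / relative_scattering(blue))   # ~5.1x: why UV scatters even more
```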

Comment author: Erfeyah 05 October 2017 07:09:39PM *  0 points [-]

Hmm.. I do not think that is what I mean, no. I lean towards agreeing with Searle's conclusion but I am examining my thought process for errors.

Searle's argument is not that consciousness is not created in the brain. It is that consciousness is not based on syntactic symbol manipulation the way a computer is, and that for this reason it is not going to be simulated by a computer with our current architecture (binary, logic gates, etc.), as the AI community thought (and thinks). He does not deny that we might discover the architecture of the brain in the future. All he does is demonstrate through analogy how syntactic operations work.

In the Chinese gym rebuttal the issue is not really addressed. Searle does not deny that the brain is a system with subcomponents, through whose structure consciousness emerges. That is a different discussion. He is arguing that the system must be doing something different from, or in addition to, syntactic symbol manipulation.

Since the neuroscience does not support the digital information-processing view, where is the certainty of the community coming from? Am I missing something fundamental here?

Comment author: Manfred 05 October 2017 09:58:26PM *  1 point [-]

I think people get too hung up on computers as being mechanistic. People usually think of symbol manipulation in terms of easy-to-imagine language-like models, but then try to generalize their intuitions to computation in general, which can be unimaginably complicated. It's perfectly possible to simulate a human on an ordinary classical computer (to arbitrary precision). Would that simulation of a human be conscious, if it matched the behavior of a flesh-and-blood human almost perfectly and could communicate with you via a text channel, outputting things like "well, I sure feel conscious"?

The reason LWers are so confident that this simulation is conscious is that we think of concepts like "consciousness," to the extent that they exist, as having something to do with the cause of us talking and thinking about consciousness. It's just like how the concept of "apples" exists because apples exist, and when I correctly think I see an apple, it's because there's an apple. Talking about "consciousness" is presumed to be a consequence of our experience with consciousness. And the things we have experience with that we can label "consciousness" are introspective phenomena, physically realized as patterns of neurons firing, that have exact analogues in the simulation. Demanding that one has to be made of flesh to be conscious is not merely chauvinism; it's a misunderstanding of what we have access to when we encounter consciousness.

Comment author: Manfred 04 October 2017 07:54:06PM *  0 points [-]

Neat paper about the difficulties of specifying satisfactory values for a strong AI. h/t Kaj Sotala.

The design of social choice AI faces three sets of decisions: standing, concerning whose ethics views are included; measurement, concerning how their views are identified; and aggregation, concerning how individual views are combined to a single view that will guide AI behavior. [...] Each set of decisions poses difficult ethical dilemmas with major consequences for AI behavior, with some decision options yielding pathological or even catastrophic results.

I think it's slightly lacking in sophistication about aggregation of numerical preferences, and in how revealed preferences indicate that we don't actually have incommensurable or infinitely-strong preferences, but is overall pretty great.

On the subject of the problem, I don't think we should program in values that are ad hoc on the object level (what values to use - trying to program this by hand is destined for failure), or even on the meta level (whose values to use). But I do think it's okay to use an ad hoc process to try to learn the answers to the meta-level questions. After all, what's the worst that could happen? (irony). Of course, the ability to do this assumes the solution of other, probably more difficult philosophical/AI problems, like how to refer to people's values in the first place.
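As a toy illustration of why the aggregation step matters (the options and ratings are invented), the same individual ratings can pick different winners under different aggregation rules:

```python
# Toy illustration: the same individual ratings yield different "social" choices
# under different aggregation rules. Options and numbers are invented.
import statistics

ratings = {                       # each person's numerical rating of each option
    "option_A": [9, 9, 1, 1, 1],  # loved by a few, disliked by most
    "option_B": [5, 5, 5, 5, 5],  # lukewarm for everyone
}

by_mean = max(ratings, key=lambda o: statistics.mean(ratings[o]))  # option_B (5 > 4.2)
by_max  = max(ratings, key=lambda o: max(ratings[o]))              # option_A (a 9 beats a 5)
print(by_mean, by_max)
```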

Comment author: entirelyuseless 04 October 2017 02:28:48PM 1 point [-]

"our neurons don't indiviually understand English, and my behavior is just the product of a bunch of neurons following the simple laws of chemistry"

The question is what the word "just" means in that sentence. Ordinarily it means to limit yourself to what is said there. The implication is that your behavior is explained by those simple laws, and not by anything else. But as I pointed out recently, having one explanation does not exclude others. So your behavior can be explained by those simple laws, and at the same time by the fact that you were seeking certain goals, or in other ways. In other words, the argument is false because the word "just" here implies something false.

Comment author: Manfred 04 October 2017 06:32:45PM 0 points [-]

Yeah, whenever you see a modifier like "just" or "merely" in a philosophical argument, that word is probably doing a lot of undeserved work.
