Comment author: ahbwramc 21 May 2015 02:43:53PM 2 points [-]

I feel like there are interesting applications here for programmers, but I'm not exactly sure what. Maybe you could link up a particular programming language's syntax to our sense of grammar, so programs that wouldn't compile would seem as wrong to you as the sentence "I seen her". Experienced programmers probably already have something like this I suppose, but it could make learning a new programming language easier.
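As a toy illustration of the analogy (invented for this note, not taken from the comment), here is the programming-language equivalent of "I seen her": a snippet with a grammar-level error that the language rejects before it ever runs, much as a fluent speaker flinches before parsing the meaning.

    # Python rejects a syntax ("grammar") error at compile time, before any code runs.
    # The snippet and its missing colon are deliberately broken, purely for illustration.
    bad_code = "def greet(name)\n    print('Hello', name)\n"  # missing colon after the signature

    try:
        compile(bad_code, "<example>", "exec")
    except SyntaxError as err:
        print("Won't compile:", err.msg)  # e.g. "expected ':'" or "invalid syntax"

An experienced Python programmer spots the missing colon the way a native speaker spots "I seen her"; the speculation above is about training that reflex deliberately when picking up a new language.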

Comment author: JonahSinick 21 May 2015 04:25:26AM 1 point [-]

You are making the assumption that one's self-worth needs to be tied to one's status, that status is a part of what you are. This is not correct. You can keep your ego separate from it. Status can be a tool; it is what you have, not what you are.

No, I wasn't making such an assumption; I was trying to guess what was going on in your mind: a lot of people do attach their self-worth to their social status. I'm trying to get calibrated.

At first, I thought "LWers will be like me and not care about their relative status on an emotional level," then I thought "LWers care a huge amount about their relative status; that's why they all got angry when I wrote a strong criticism of Eliezer and SIAI in 2010," and then I thought "maybe LWers don't care that much about their status after all."

If LWers weren't emotionally invested in relative status, we wouldn't be having this conversation :-). There's clearly some sort of issue of self-worth being tied to status. I just don't know how large the effect size is, and in what contexts I should and shouldn't expect it to show up. Can you help me understand?

The initial clash on LW wasn't really even directly about status. It was about rudeness. Regardless of whether one wants to play status games or not, there are social norms of politeness and etiquette.

I'm aware of this; I was intentionally departing from these norms, in an attempt to support Less Wrong's stated purpose as "a community blog devoted to refining the art of rationality."

Up until recently, my attitude had been "these people are all hypocrites who don't actually care about rationality." I now know that I had been overly cynical. But taken seriously, the view "when Jonah writes things on Less Wrong, he should be careful to refrain from saying true things when they might offend other participants" corresponds to "Less Wrong is not a community for someone like Jonah, whose focus is on refining the art of rationality."

Note that I do adhere to standards of polite discourse except to the extent that I express my views when I think that they're important.

No, you are mistaken about that. You would become very useful and possibly well-compensated, but just by itself the possession of valuable information will not grant you much status. It just doesn't work this way.

I meant in expectation, not necessarily.

And untangle your own ego from your ability to freely say "I'm smarter than all y'all, peasants!"

You're doing it again :D. You seem to think that I'm coming across as arrogant because I'm egotistical. This isn't at all the case – it would be a relief for me if someone else were writing about the things that I want to communicate. I've found myself in the difficult position of having important information to communicate that other people aren't communicating.

Ok, here's the situation. I believe that I know how people in our broad reference class can systematically increase their productivity by 10x-100x. I've arrived at this by using what I learned in data science to aggregate the common wisdom of great historical figures, the best mathematicians in the world, the most knowledgeable LWers and the most knowledgeable people in the EA movement. Just saying "you can make yourselves ~10x more productive" pattern matches very heavily with being a crackpot.

I have a cold start problem: in order for people to understand the importance of the information that I have to convey, they need to spend a fair amount of time thinking about it, but without having seen the importance of the information, they're not able to distinguish me from being a crackpot.

That's why I've been pushing the importance of putting a lot of time into understanding substantive things: because I've had the perception that people have dug themselves into a sort of epistemic rabbit hole where it's in principle impossible for me to signal that I'm right, independently of whether or not I am.

What I want to convey is really hard (and perhaps impossible) to convey succinctly: that's why nobody's been able to do it successfully before! There are tens or hundreds of thousands of people who have known it. Bill Gates knows it, Warren Buffett knows it, Bill Clinton knows it, Freeman Dyson knows it. But it comes close to being impossible to externalize: historically, people have learned how to do it by carefully observing others who can do it, generally as mediated through in-person interactions, and failing that, by very careful reading of historical documents by great thinkers from the past.

Certainly the odds are against me being able to communicate it, when nobody else has been able to :D. But I still think that there's some hope. I'm at something of a loss as to how to proceed.

Comment author: ahbwramc 21 May 2015 01:44:37PM *  1 point [-]

I have a cold start problem: in order for people to understand the importance of the information that I have to convey, they need to spend a fair amount of time thinking about it, but without having seen the importance of the information, they're not able to distinguish me from being a crackpot.

For what it's worth, these recent comments of yours have been working on me, at least sort of. I used to think you were just naively arrogant, but now it's seeming more plausible that you're actually justifiably arrogant. I don't know if I buy everything you're saying, but I'll be paying more attention to you in the future anyway.

I've tried to convey certain hard-to-explain LessWrong concepts to people before and failed miserably. I'm recognizing the same frustration in you that I felt in those situations. And I really don't want to be on the wrong side of another LW-sized epistemic gap.

Comment author: JonahSinick 21 May 2015 03:21:48AM 1 point [-]

I have no particular suggestions for you, but it's clear that it's at least possible to convey valuable information to LW without giving off a status-grabbing impression, because plenty of people have done it (e.g. lukeprog, Yvain, etc.).

Certainly, they've done a very good job, and I commend them for it. But people with their talent for communicating are rare.

Comment author: ahbwramc 21 May 2015 03:53:13AM 2 points [-]

Fair.

So, random anecdote time: I remember when I was younger my sister would often say things that would upset my parents; usually this ended up causing some kind of confrontation/fight. And whenever she would say these upsetting things, the second the words left her mouth I would cringe, because it was extremely obvious to me that what she had said was very much the wrong thing to say - I could tell it would only make my parents madder. And I was never quite sure (and am still not sure) whether she also recognized that what she was saying would only worsen the situation (but she still couldn't resist saying it because she was angry or whatever) or whether she was just blind to how her words would make my parents feel. So my question to you would be: can you predict when your LW comments will get a negative reaction? Do you think "yeah, this will probably get negative karma but I'm going to say it anyway"? Or are you surprised when you get downvoted?

(Not to say that it's irrational to post something you expect to be downvoted, of course, whereas it would be sort of irrational for my sister to say something in a fit of anger that she knew would only make things worse. I'm just trying to get a sense of how you're modelling LWers.)

Comment author: JonahSinick 21 May 2015 02:37:50AM 1 point [-]

LOL. The style of my writing is not actually a direct function of my emotional agitation. If anything, the more fun I see in a situation, the more rant-y my writing gets. About things of deep emotional concern to me I would probably just shut up.

Ok, thanks for clarifying, this is helpful.

Yes, it's possible, but are you actually saying that I should become like you in the sense of not caring about the status? That seems a fairly radical thing to demand.

Where I went wrong is in having the model "most people aren't like me, but a few are. The people who aren't like me might not be able to turn off their concern for relative status, but the people who are like me can."

I didn't have social difficulties with the people who I saw as different from me. I had social difficulty with the people who I saw as similar to me, because my implicit premise was in the direction of "they can easily turn off their concern for relative status," which was almost never true. So the set of people who I saw as "like me" became smaller and smaller, and I became more and more isolated, until ~6 months ago, when I finally started to figure out what had happened.

Ok, so when it comes to you: Where I was coming from was "doesn't everyone want to be free of feelings of jealousy and resentment?" It didn't occur to me that it's something that you might not want. Is it something that you like having even though it sometimes hurts you?

Wouldn't it be much simpler and... more robust to not send out the problematic signal in the first place?

For the sake of argument, suppose that I know things that would greatly improve LWers' lives if they knew them, that they can't learn anywhere else. In this hypothetical, if the situation became widely known, it would result in me being very high status, because lots of people would pay attention to what I said, and lots of people would want to be around me. In this hypothetical, I don't see how I could communicate the important information without signaling very high status.

Of course you and everyone else might have good reason to doubt whether the information that I want to share would in fact greatly improve LWers' lives.

But my focus here is on the meta-level: I perceive a non-contingency about the situation, where even if I did have extremely valuable information to share that I couldn't share without signaling high status, people would still react negatively to me trying to share it. My subjective sense is that to the extent that people doubt the value of what I have to share, this comes primarily from a predetermined bottom line of the type "if what he's saying were true, then he would get really high status: it's so arrogant of him to say things that would make him high status if true, so what he's saying must not be true."

Do you have suggestions for how I could go about things differently in a way that would be less triggering, while remaining in sync with my goal of communicating valuable information? A key point that might be relevant is that I don't actually care about getting credit – for example, I would be completely fine with Scott Alexander blogging about what I want to write about, people learning that way, and people associating it with him rather than me.

Comment author: ahbwramc 21 May 2015 03:08:14AM 3 points [-]

But my focus here is on the meta-level: I perceive a non-contingency about the situation, where even if I did have extremely valuable information to share that I couldn't share without signaling high status, people would still react negatively to me trying to share it. My subjective sense is that to the extent that people doubt the value of what I have to share, this comes primarily from a predetermined bottom line of the type "if what he's saying were true, then he would get really high status: it's so arrogant of him to say things that would make him high status if true, so what he's saying must not be true."

I have no particular suggestions for you, but it's clear that it's at least possible to convey valuable information to LW without giving off a status-grabbing impression, because plenty of people have done it (e.g. lukeprog, Yvain, etc.).

Comment author: ahbwramc 05 May 2015 03:27:29PM *  10 points [-]

I've been trying to be more "agenty" and less NPC-ish lately, and having some reasonable success. In the past month I've:

-Gone to a SlateStarCodex meetup

This involved taking a greyhound bus, crossing the border into a different country, and navigating my way around an unfamiliar city - all things that would have stopped me from even considering going a few years ago. But I realized that none of those things were actually that big of a deal, that what was really stopping me was that it just wasn't something I would normally do. And since there was no real reason I couldn't go, and because I knew I really wanted to go, I just up and did it.

(had a great time btw, no regrets)

-Purchased a used (piano) keyboard

I used to just kind of vaguely wish that I had a keyboard, because it seemed like it would be a fun thing to learn. I would think this resignedly, as if it were an immutable fact of the universe that I couldn't have a keyboard - for some reason going out and buying one didn't really occur to me. Now that I have one I'm enjoying it, although I'm mostly just messing around and it's clear that I'll need more structure if I'm really going to make progress.

-Signed up for an interview for the MIRI Summer Fellows program

Working at MIRI would be amazing, a dream come true. But I always just sort of assumed I wasn't cut out for it. And that may well be true, but here's a practically zero-cost chance to find out. Why not take it? (Of course, there's always the possibility that I'm just wasting Anna Salamon's time, which I wouldn't want to do. But I don't think I'm so obviously underqualified that that would be the case.) Again, I don't think this is something I would have done even a year ago.

I've also been having much more success consistently writing for my blog, which is something I always enjoyed but rarely used to do.

Basically I've gotten a ton of mileage out of just having the concept of agency installed in my brain. Knowing that I can just do the things I want, even if they're weird or I haven't done them before, is pretty freeing and pretty cool. The whole "Roles" arc of HPMOR really drove this idea home for me I think.

Comment author: Vaniver 04 May 2015 09:25:44PM 2 points [-]

the weirdness of Euler's identity isn't exactly subtle, after all.

But... it's just rotation! I think the thing that's weird about Euler's identity is that the symbology looks odd (especially if you're more used to degrees than radians), not that the underlying reality is odd. (Maybe I've just dealt with exponentials of complex numbers for so long that I can't be surprised by them anymore, but I don't remember being surprised by it before.)
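(Spelling out the rotation reading: Euler's formula gives e^(i*theta) = cos(theta) + i*sin(theta), so e^(i*pi) = cos(pi) + i*sin(pi) = -1. Multiplying by e^(i*theta) rotates a point in the complex plane by the angle theta, so e^(i*pi) is just a half-turn that carries 1 to -1; the oddness lives in the notation rather than in the underlying geometry.)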

Comment author: ahbwramc 04 May 2015 09:37:21PM 0 points [-]

Sure, I understand the identity now of course (or at least I have more of an understanding of it). All I meant was that if you're introduced to Euler's identity at a time when exponentiation just means "multiply this number by itself some number of times", then it's probably going to seem really odd to you. How exactly does one multiply 2.718 by itself sqrt(-1)*3.14 times?

Comment author: ahbwramc 04 May 2015 07:19:48PM 2 points [-]

I remember my mom, who was a math teacher, telling me for the first time that e^(i*pi) = -1. My immediate reaction was incredulity - I literally said "What??!" and grabbed a piece of paper to try to work out how that could be true. Of course I had none of the required tools to grapple with that kind of thing, so I got precisely nowhere with it. But that's the closest I've come to having a reaction like you describe with Scott and quintics. I consider the quintic thing far more impressive of course - the weirdness of Euler's identity isn't exactly subtle, after all.

So do you think you could predict mathematical ability by simply giving students a list of "deep" mathematical facts and seeing which ones (if any) they're surprised by or curious about?

Comment author: ahbwramc 04 May 2015 05:02:38PM 7 points [-]

Since much of this sequence has focused on case studies (Grothendieck, Scott Alexander), I'd be curious as to what you think of Douglas Hofstadter. How does he fit into this whole picture? He's obviously a man of incredible talent in something - I don't know whether to call it math or philosophy (or both). Either way it's clear that he has the aesthetic sense you're talking about here in spades. But I distinctly remember him writing something along the lines of how, upon reaching graduate mathematics, he hit a "wall of abstraction" and couldn't progress any further. Does your picture of mathematical ability leave room for something like that to happen? I mean, this is Douglas freakin' Hofstadter we're talking about - it's hard to picture someone being more of a mathematical aesthete than he is. And even he ran into a wall!

Comment author: brainmaps 03 May 2015 11:50:06AM *  0 points [-]

No, but as others pointed out, an animated GIF is not a simulation of the thing it represents.

The animated GIF, as I originally described it, is an "imitation of the operation of a real-world process or system over time", which is the verbatim definition (from Wikipedia) of a simulation. Counterfactual dependencies are not needed for imitation.

Just to be clear, when we are talking of simulations of a computational system, we mean something that computes the same input to output mapping of the system that is simulated, the same mathematical function

Ok, let's go with this definition. As I understand it then, machine functionalism is not about simulation (as imitation) per se but rather about recreating the mathematical function that the human brain is computing. Is this correct?

An animated GIF doesn't respond to inputs, therefore it doesn't compute the same function that the brain computes.

A brain doesn't necessarily respond to inputs, but sure, we can require that the simulation responds to inputs, though I find this requirement a bit strange.

"Being a video game" is a property of certain patterns of input-output mappings, and this property is invariant (up to a performance overhead) under simulation, it is independent on the physical substrate.

It sounds like a beautiful idea, being invariant under a simulation that is independent of substrate.

There are claims now and then that some chatbot passed the Turing test, but if you look past the hype, all these claims are fundamentally false.

I agree.

About updating posterior beliefs, I would have to know the basis for consciousness, which I acknowledge uncertainty over.

I'm asking how you understand the term at operational level right now.

In short, it's a combination of a Turing test and the possession of a functioning human brain-like structure. If an entity exhibits awake human-like behavior (i.e., by passing the Turing test or a suitable approximation) and possesses a living human brain (inferred from visual inspection of their biological form) or a human brain-like equivalent (which I've yet to see, except possibly in some non-human primates), then I generally conclude it has human or human-like consciousness.

When I consider your comment here together with your previous comment above that "definitions of consciousness which are not invariant under simulation have little epistemic usefulness", I think I understand your argument better. However, the epistemic argument you're advancing is a fallacy because you're demonstrating what you assume: if I run an accurate simulation of a human brain on a computer and ask it whether it has human consciousness, of course it will say 'yes', and it will even pass the Turing test, because we're assuming it's an accurate simulation of a human brain. The reasoning is circular and does not actually inform us whether the simulation is conscious. So your "epistemic usefulness" appears irrelevant to the question of whether machine functionalism is correct. Or am I missing something?

My general question to the machine functionalists here is, why are you assuming it is sufficient to merely simulate the human brain to recreate its conscious experience? The human brain is a chemico-physical system and such systems are generally explained in terms of causal structures involving physical or chemical entities, though such explanations (including simulations) are never mistaken for the thing itself. So why should human consciousness, which is a part of the natural world and whose basis we know first-hand involves the human brain, be any different?

If the question here is, is consciousness a substrate-independent function that the brain computes or is it associated with a unique type of physico-chemical causal (space-time) structure, then I would say the latter is more likely due to the past successes in physics and chemistry in explaining natural phenomena. In any event, our knowledge of the basis of consciousness is still highly speculative. I can attempt further reductio ad absurdums with machine functionalism involving ever more ridiculous scenarios but will probably not convince anyone who has taken the requisite leap of faith.

Comment author: ahbwramc 03 May 2015 08:03:08PM 1 point [-]

You seem to be discussing in good faith here, and I think it's worth continuing so we can both get a better idea of what the other is saying. I think differing non-verbal intuitions drive a lot of these debates, and so to avoid talking past one another it's best to try to zoom in on intuitions and verbalize them as much as possible. To that end (keeping in mind that I'm still very confused about consciousness in general): I think a large part of what makes me a machine functionalist is an intuition that neurons...aren't that special. Like, you view the China Brain argument as a reductio because it seems so absurd. And I guess I actually kind of agree with that, it does seem absurd that a bunch of people talking to one another via walkie-talkie could generate consciousness. But it seems no more absurd to me than consciousness being generated by a bunch of cells sending action potentials to one another. Why should we have expected either of those processes to generate consciousness? In both cases you just have non-mental, syntactical operations taking place. If you hadn't heard of neurons, wouldn't they also seem like a reductio to you?

What it comes down to is that consciousness seems mysterious to me. And (on an intuitive level) it kind of feels like I need to throw something "special" at consciousness to explain it. What kind of special something? Well, you could say that the brain has the special something, by virtue of the fact that it's made of neurons. But that doesn't seem like the right kind of specialness to me, somehow. Yes, neurons are special in that they have a "unique" physico-chemical causal structure, but why single that out? To me that seems as arbitrary as singling out only specific types of atoms as being able to instantiate consciousness (which some people seem to do, and which I don't think you're doing, correct?). It just seems too contingent, too earth-specific an explanation. What if you came across aliens that acted conscious but didn't have any neurons or a close equivalent? I think you'd have to concede that they were conscious, wouldn't you? Of course, such aliens may not exist, so I can't really make an argument based on that. But still - really, the answer to the mystery of consciousness is going to come down to the fact that particular kinds of cells evolved in earth animals? Not special enough! (or so say my intuitions, anyway)

So I'm led in a different direction. When I look at the brain and try to see what could be generating consciousness, what pops out to me is that the brain does computations. It has a particular pattern, a particular high-level causal structure that seems to lie at the heart of its ability to perform the amazing mental feats it does. The computations it performs are implemented on neurons, of course, but that doesn't seem central to me - if they were implemented on some other substrate, the amazing feats would still get done (Shakespeare would still get written, Fermat's Last Theorem would still get proved). What does seem central, then? Well, the way the neurons are wired up. My understanding (correct me if I'm wrong) is that in a neural network such as the brain, any given neuron fires iff the sum of the inhibitory and excitatory inputs feeding into it exceeds some threshold. So roughly speaking, any given brain can be characterized by which neurons are connected to which other neurons, and what the weights of those connections are, yes? In that case (forgetting consciousness for a moment), what really matters in terms of creating a brain that can perform impressive mental feats is setting up those connections in the right way. But that just amounts to defining a specific high-level causal structure - and yes, that will require you to define a set of counterfactual dependencies (if neurons A and B had fired, then neuron C wouldn't have fired, etc). I was kind of surprised that you were surprised that we brought up counterfactual dependence earlier in the discussion. For one thing, I think it's a standard-ish way of defining causality in philosophy (it's at least the first section in the Wikipedia article, anyway, and it's the definition that makes the most sense to me). But even beyond that, it seems intuitively obvious to me that your brain's counterfactual dependencies are what make your brain, your brain. If you had a different set of dependencies, you would have to have different neuronal wirings and therefore a different brain.
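A minimal sketch of that threshold picture (the neuron names, weights, and threshold below are invented for illustration, not drawn from the comment):

    # Toy threshold neuron: it fires iff the weighted sum of its inputs exceeds a threshold.
    # The wiring (the weights) is what fixes the counterfactual dependencies.

    def fires(inputs, weights, threshold):
        return sum(w * x for w, x in zip(weights, inputs)) > threshold

    weights_C = [1.0, -1.5]   # neuron A excites C, neuron B inhibits C
    threshold_C = 0.5

    print(fires([1, 0], weights_C, threshold_C))  # A fired, B didn't -> True  (C fires)
    print(fires([1, 1], weights_C, threshold_C))  # had B also fired  -> False (C stays silent)

The point of the toy example is that the input-output behaviour, and hence the counterfactuals ("had B also fired, C wouldn't have"), is fixed by the connection weights rather than by what the units are physically made of.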

Anyway, this whole business of computation and higher-level causal structure and counterfactual dependencies: that does seem to have the right kind of specialness to me to generate consciousness. It's hard for me to break the intuition down further than that, beyond saying that it's the if-then pattern that seems like the really important thing here. I just can't see what else it could be. And this view does have some nice features - if you wind up meeting apparently-conscious aliens, you don't have to look to see if they have neurons. You can just look to see if they have the right if-then pattern in their mind.

To answer your question about simulations not being the thing that they're simulating: I think the view of consciousness as a particular causal pattern kind of dissolves that question. If you think the only thing that matters in terms of creating consciousness is that there be a particular if-then causal structure (as I do), then in what sense are you "simulating" the causal structure when you implement it on a computer? It's still the same structure, still has the same dependencies. That seems just as real to me as what the brain does - you could just as easily say that neurons are "simulating" consciousness. Essentially machine functionalists think that causal structure is all there is in terms of consciousness, and under that view the line between something being a "simulation" versus being "real" kind of disappears.

Does that help you understand where I'm coming from? I'd be interested to hear where in that line of arguments/intuitions I lost you.

Comment author: brainmaps 01 May 2015 02:39:43PM *  3 points [-]

Thanks for the replies. I will try to answer and expand on the points raised. There are a number of reductio ad absurdums that dissuade me from machine functionalism, including Ned Block's China brain and also the idea that a Turing machine running a human brain simulation would possess human consciousness. Let me try to take the absurdity to the next level with the following example:

Does an animated GIF possess human consciousness?

Imagine we record the activity of every neuron in a human brain at every millisecond; at each millisecond, we record whether each of the 100 billion neurons in the human brain is firing an action potential or not. We record all of this for a 1-second duration. Now, for each of the 1000 milliseconds, we represent the neural firing state of all neurons as a binary GIF image of about 333,000 pixels in height and width (this probably exceeds GIF format specifications, but who cares), where each pixel represents the firing state of a specific neuron. We make 1000 of these GIFs, one for each millisecond over the 1-second duration. We then concatenate these 1000 GIFs to form an animated GIF and play the animated GIF on an endless loop. Since we are now "simulating" the neural activities of all the neurons in the human brain, we might expect that the animated GIF possesses human consciousness... But this view is absurd and this exercise suggests there is more to consciousness than reproducing neural activities in different substrates.
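(For scale: with roughly 10^11 neurons and one pixel per neuron, a square frame needs about sqrt(10^11), i.e. a bit over 316,000 pixels on a side, which is the order of magnitude quoted above.)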

To V_V, I don't think it has human consciousness. If I answer otherwise, I'm pressed to acknowledge that well-coded chatbots have human consciousness, which is absurd. With regard to what "conscious" means in epistemic terms, I don't know, but I do know that the Turing test is insufficient because it only deals with appearances and it's easy to be duped. About updating posterior beliefs, I would have to know the basis for consciousness, which I acknowledge uncertainty over.

To Kyre, you hit the crux in your second example. The absurdity of China brain and the Turing machine with human consciousness stems from the fact that the causal structures (i.e., space-time diagrams) in these physical systems are completely different from the causal structure of the human brain. As you describe, in a typical computer there are honest-to-god physical cause and effect in the voltage levels in the memory gates, but the causal structure is completely different from wetware, and this is where the absurdity of attributing consciousness to computations (or simulation) comes from, at least for me. Consciousness is not just computational. Otherwise you have absurdities like China brain and animated GIFs with human consciousness. It seems more likely to be physico-computational, as reflected in the causal structure of interactions of the physical system which underlies the computations and simulations.

There may be a computer architecture that reproduces the correct causal structure, but Von Neumann and related architectures do not. And to your last question, yes! A simulation is just an image. If you think it is the real thing, then you must accept that an animated GIF can possess human consciousness. Personally, this conclusion is too absurd for me to accept.

To jacob_cannell, thanks for the congrats. Sure, consciousness has baggage but using self-awareness instead already commits one to consciousness as a special type of computation, which the reductio ad absurdums above try to disprove. I agree it's likely that "Self-awareness is just a computational capability", depending on what you mean by 'Self' and 'awareness'. You state that "The 'causal structure' is just the key algorithmic computations" but this is not quite right. The algorithmic computations can be instantiated in many different causal structures but only some will resemble those of the human brain and presumably possess human consciousness.

TLDR: The basis of consciousness is very speculative and there is good reason to believe it goes beyond computation to the physico-computational and causal (space-time) structure.

Comment author: ahbwramc 01 May 2015 03:37:31PM 3 points [-]

I think we might be working with different definitions of the term "causal structure"? The way I see it, what matters for whether or not two things have the same causal structure is counterfactual dependency - if neuron A hadn't fired, then neuron B would have fired. And we all agree that in a perfect simulation this kind of dependency is preserved. So yes, neurons and transistors have different lower-level causal behaviour, but I wouldn't call that a different causal structure as long as they both implement a system that behaves the same under different counterfactuals. That's what I think is wrong with your GIF example, btw - there's no counterfactual dependency whatsoever. If I deleted a particular pixel from one frame of the animation, the next frame wouldn't change at all. Of course there was the proper dependency when the GIF was originally computed, and I would certainly say that that computation, however it was implemented, was conscious. But not the GIF itself, no.
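A minimal sketch of the distinction being drawn (the four-unit "brain" and its update rule are invented for illustration):

    # Toy contrast between running a computation and replaying a recording of it.

    def step(state):
        # Invented update rule: each unit fires iff its left neighbour fired last step,
        # so later frames genuinely depend on earlier ones.
        return [state[i - 1] for i in range(len(state))]

    def run(initial, n_steps):
        frames = [initial]
        for _ in range(n_steps):
            frames.append(step(frames[-1]))
        return frames

    live = run([1, 0, 0, 1], 3)

    # "Animated GIF": the same frames, stored and replayed. Editing one stored frame
    # has no effect on any later stored frame; there is no counterfactual dependency.
    recording = [list(f) for f in live]
    recording[1][0] = 1 - recording[1][0]
    print(recording[2] == live[2])         # True: the rest of the recording is untouched

    # The computation itself: the same edit to the state propagates forward.
    edited_state = list(live[1])
    edited_state[0] = 1 - edited_state[0]
    print(step(edited_state) == live[2])   # False: the next computed frame changes

In the live computation, perturbing a unit changes every subsequent frame; in the recording, editing a frame changes nothing downstream. That missing dependency is exactly what the GIF objection above turns on.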

Anyway, beyond that, we're obviously working from very different intuitions, because I don't see the China Brain or Turing machine examples as reductios at all - I'm perfectly willing to accept that those entities would be conscious.
