Interesting thought experiment. Do we know an AI would enter a different mental state though?
I am finding it difficult to imagine the difference between software "knowing all about" and "seeing red"
Arguably it could simulate itself seeing red and replace itself with the simulation.
I think the distinction between 'knowing all about' and 'seeing' red is captured in my box analogy. The brain state is a box. There is another box inside it; call this 'understanding'. We call anything inside the first box 'experienced'. So the paradox here is that the two distinct states [experiencing (red)] and [experiencing ( [understanding (red)] )] are both brought under the header [knowing (red)], and this is really confusing.
I'd highly recommend reading the [original paper](http://home.sandiego.edu/~baber/analytic/Jackson.pdf).
I am not following the box analogy. What kinds of knowledge do the boxes represent?
The big box is all knowledge, including the vague 'knowledge of experience' that people talk about in this thread. The box-inside-the-box is verbal/declarative/metaphoric/propositional/philosophical knowledge, that is anything that is fodder for communication in any way.
The metaphor is intended to highlight that people seem to conflate the small box with the big box, leading to confusion about the situation. Inside the metaphor, perhaps this would be people saying "well, maybe there are objects inside the box which aren't inside the box at all". Which makes little sense if you assume 'inside the box' has a single referent, which it does not.
Edit: I read your link, thanks for that. I can't say I got much of anything out of it, though. I haven't changed my mind, and my epistemic status regarding my own arguments hasn't changed; which is to say there is likely something subtle I'm not getting about your position, and I don't know what it is.
I don't understand what the point of that point is.
Do you think you are arguing against the intended conclusion of the Knowledge Argument in some way? If so, you are not...the loophole you have found is quite irrelevant.
I have no idea what your position even is, and you are making no effort to elucidate it. I had hoped this line

> I don't understand what disagreement is occurring here, hopefully I've given someone enough ammunition to explain.

was enough to clue you in to the point of my post.
I'd highly recommend this sequence to anyone reading this: http://lesswrong.com/lw/5n9/seeing_red_dissolving_marys_room_and_qualia/
The thrust of the argument, applied to this situation, is simply that 'knowledge' is used to mean two completely different things here. On one hand, we have knowledge as verbal facts and metaphoric understanding. On the other, we have knowledge in general: the superset containing both verbal and non-verbal knowledge.
To put it as plainly as possible: imagine you have a box. Inside this box there is another, smaller box. We can put a toy inside the smaller box. We can alternatively put a toy inside the larger box but outside the smaller box. These situations are not equivalent. What a paradox!
The only insight needed here is simply noting that something can be 'inside the box' without being inside the box inside the box. Since both are referred to as 'inside the box', the confusion is not surprising.
It seems like a significant number of conventional aporia can be understood as confusions of levels.
Mary's Room is about what it says it's about: the existence of non-physical facts. Finding a loophole where Mary can instantiate the brain state without having the perceptual stimulus doesn't address that...indeed, it assumes that an instantiation of the red-seeing is necessary, which is tantamount to conceding that something subjective is going on, which is tantamount to conceding the point.
I think the argument is asserting that Mary post-brain-surgery is identical to Mary post-seeing-red. There is no difference; the two Marys would both attest to having access to some ineffable quality of red-ness.

To put it bluntly, both Marys say the same things, think the same things, and are generally indistinguishable. I don't understand what disagreement is occurring here; hopefully I've given someone enough ammunition to explain.
Trying to think of what's not on this list:
- The EA forum sometimes has insightful posts, mostly EA news
- GiveWell and Open Philanthropy Project blogs
- /r/slatestarcodex, /r/LessWrong, /r/HPMoR, /r/smartgiving, /r/effectivealtruism, /r/rational
- Thing of Things (Ozy's blog)
- Topher's blog URL is now http://topherhallquist.wordpress.com
- http://thefutureprimaeval.net is a group blog by a few ex-LWers, I believe
You could probably dig up more by looking through the blogrolls of the blogs you've already identified. For example, Scott Aaronson considers himself part of the rationalist blogosphere and is listed on the SlateStarCodex sidebar.
Andrew Critch's blog is great
Of course there are lots of Facebook groups (especially EA-related Facebook groups) and Facebook personalities, notably Eliezer
Somewhere I got the impression that HBD Chick and Sarah Perry of Ribbonfarm were LWers at some point. There's also the "post-rationalist" community which includes sites like Melting Asphalt.
Many of these update infrequently, making it bothersome to check all of them. I'll bet it wouldn't be very hard to create a single site that lets you see what's new across the entire diaspora (including LW) by combining all these RSS feeds into something like http://lesswrong.com/r/all/recentposts/ Could be a fun webdev project. Register a domain for it and put it on your resume.
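The core of such an aggregator is just merging every feed's entries into one newest-first list. Here's a minimal stdlib-only sketch of that merge step; the feed names and entries are made-up placeholders, and a real version would fetch and parse each blog's actual RSS feed instead:

```python
from email.utils import parsedate_to_datetime

# Toy stand-in for fetched feed data. In a real aggregator, each
# dict would come from downloading and parsing one blog's RSS feed;
# the sources and posts below are purely illustrative.
feeds = {
    "Thing of Things": [
        {"title": "Post A", "date": "Mon, 02 Jan 2017 10:00:00 +0000"},
    ],
    "Andrew Critch": [
        {"title": "Post B", "date": "Tue, 03 Jan 2017 09:00:00 +0000"},
    ],
}

def merged_recent_posts(feeds):
    """Flatten all feeds' entries into one list, newest first."""
    entries = [
        # RSS pubDate uses RFC 2822 format, which email.utils parses.
        (parsedate_to_datetime(e["date"]), source, e["title"])
        for source, posts in feeds.items()
        for e in posts
    ]
    return [(source, title) for _, source, title in sorted(entries, reverse=True)]

print(merged_recent_posts(feeds))
# [('Andrew Critch', 'Post B'), ('Thing of Things', 'Post A')]
```

From there it's mostly presentation: render the merged list as a page (or re-serve it as one combined feed) and re-fetch the sources on a timer.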
> Somewhere I got the impression that ... Sarah Perry of Ribbonfarm were LWers at some point.
She was/is. Her (now dead) blog, The View From Hell, is on the LessWrong wiki's list of blogs. She has another blog at https://theviewfromhellyes.wordpress.com which still updates, albeit at a glacial pace.
Your scheme may well be more powerful than a Turing machine (i.e., if there were something in the world that behaves according to your model, then it could do computations impossible for a mere Turing machine), but much of what you write seems to indicate that you think you have implemented your scheme. In Python. On an actual computer in our universe.
Obviously that is impossible (unless Python running on an actual computer in our universe can do things beyond the capabilities of Turing machines, which it can't).
Could you clarify explicitly whether you think what you have implemented is "more powerful than every supercomputer in the world" in any useful sense? What do you expect to happen if you feed your code a problem that has no Turing-computable solution? (What I expect to happen: either it turns out that you have a bug and your code emits a wrong answer, or your code runs for ever without producing the required output.)
I'm sorry that I overestimated my achievements. Thank you for being civil.
> What do you expect to happen if you feed your code a problem that has no Turing-computable solution?
I'm actually quite interested in this. For something like the busy beaver function, it just runs forever, with the output starting out fuzzy and getting progressively less fuzzy, but never becoming certain.
Although I wonder about something like supertasks somehow being described for my model. You can definitely get input from arbitrarily far in the future, but you can do even crazier things if you can achieve a transfinite number of branches.
If you're still interested in this (I doubt you are; there are more important things you can do with your time, but still), you can glance at this reply I gave to taryneast describing how it checks whether a Turing machine halts. (I do have an ulterior motive in pointing you there, seeing as I want to find that one flaw I'm certain is lurking in my model somewhere.)
I'm willing to suspend judgement pending actual results. Demonstrate it does what you claim and I'll be very interested.
Note: you probably already know this, but in case you don't: AFAIK the Halting problem has a mathematical proof... you will require the same to prove that your system solves it. I.e., just showing that it halts on many programs won't be enough (Turing machines do this too). You'll have to mathematically prove that it halts for all possible problems.
For some strange reason, your post wasn't picked up by my RSS feed and the little mail icon wasn't orange. Sorry to keep you waiting for a reply for so long.
The Halting proof is for Turing machines. My model isn't a Turing machine; it's supposed to be more powerful.
> You'll have to mathematically prove that it halts for all possible problems.
Not to sound condescending, but this is why I'm posting it on a random internet forum and not sending it to a math professor or something.
I don't think this is revolutionary, and I think there is a very good possibility there is something wrong with my model.
I'll tell you what convinced me that this is a hypercomputer, though. And I'll go ahead and say I'm not overly familiar with the Halting problem, inasmuch as I don't understand its inner workings as well as I can parrot facts about it. I'll let more experienced people tell me if this breaks some sort of conditional.
What my model essentially does is graft a time-travel formalism onto something Turing-complete. Since the Turing-complete model of your choice is a special case of the model we just constructed, the new model is already Turing-complete. And since the formalism itself already specifies that information can travel backwards through time, what has to be shown is that an algorithm can be constructed that solves the halting problem.
With all of that, we can construct an algorithm based off of the following assumptions about time travel:

- Inconsistent timelines "don't exist"*
- A timeline is inconsistent if it sends back different information than it receives
- If more than one timeline is consistent, then all are equally realized
I have no idea if you read through the ramblings I linked, but the gist was that to simulate the model, at any given timestep the model receives all possible input from the future, organized into different branches. 'Possible' is an important qualifier, because the difference between the model being exponential in the size of the memory and exponential in an arbitrary quantity constrained to be smaller than the size of the memory is whether you can tell, by looking only at the current state, whether a given bit of memory depends on the future.
Next, I'll point out that because the model allows computation to be carried out between receiving and sending messages, you can use the structure of the model to do computation. An illustration:
Suppose X is a Turing machine, and you are interested in whether or not it halts.

1. Receive extra input from the future (in the form "X will halt after n timesteps").
2. Check that it is properly formatted.
3. Simulate X for exactly n timesteps.
4. If X halts before then, output "X will halt after m timesteps", where m is the number of cycles before it halted. Halt.
5. If X hasn't halted after n timesteps, output "X will halt after n+1 timesteps". Halt.
I'll note this algorithm only appeared to me after writing my posts.
Here's how it works.
We can number each timeline branch based on what it outputs to the past: if it outputs "X will halt after y timesteps", then it is machine y.
If X doesn't halt, machine y will simulate X for y timesteps, and output that it will halt after y+1 timesteps.
The above point should be emphasized. Recall from above that a timeline is inconsistent, and therefore "non-existent", if its input doesn't match its output. Thus, for a non-halting X, every machine y will be inconsistent, and the hypercomputer will halt immediately (things are a bit fuzzy here; I am 80% confident that if there is a problem with my model, it lies in this part).
If X halts at t=z, then y=z is the only consistent timeline. Timelines with y>z output z, and timelines with y<z output y+1, making y=z a kind of attractor.
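The consistency condition above can be checked directly in an ordinary (non-hyper) simulation. The sketch below is my own illustration, not the poster's implementation: `halts_at` is a toy stand-in for actually running X (which is exactly the part a real computer can't decide in general), and the search over guesses y is bounded, so this only demonstrates the fixed-point structure, not hypercomputation:

```python
def consistent_timelines(halts_at, max_y):
    """Return the list of consistent timeline guesses y in 1..max_y.

    halts_at: the step at which the simulated machine X halts, or
              None if X never halts (a toy oracle standing in for
              actually simulating X).
    """
    consistent = []
    for y in range(1, max_y + 1):
        # Machine y receives "X will halt after y timesteps" from the
        # future, then simulates X for exactly y timesteps.
        if halts_at is not None and halts_at <= y:
            output = halts_at   # X halted in time: report the true step
        else:
            output = y + 1      # X still running: report y+1 instead
        # A timeline "exists" only if its output matches its input.
        if output == y:
            consistent.append(y)
    return consistent

print(consistent_timelines(halts_at=5, max_y=10))    # [5]
print(consistent_timelines(halts_at=None, max_y=10)) # []
```

This reproduces the claimed behavior: when X halts at z, exactly one timeline (y=z) survives; when X never halts, no timeline is consistent, which is the fuzzy "every timeline inconsistent" case flagged above.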
My problems with this construction are as follows:

- I haven't been able to formulate the algorithm without either (a) every timeline being inconsistent when the machine halts, or (b) the actual output being uncertain (if it says it halts at z, you know for a fact it does, but if it says it doesn't halt, then you can't be sure).
- "Non-existent" timelines have causal weight.
I'm talking about putting certain people diagnosed with antisocial personality disorder down before they do anything.
Our medical/legal establishment routinely tortures people so that everyone else can feel slightly better about certain aspects of mortality.
It's probably stupid to reply to a comment from more than three years ago, but antisocial personality disorder does not imply violence. There are examples of psychopaths who were raised in good homes and grew up to become successful assholes.
I'm disagreeing that you have a valid refutation of the KA. However, I don't know if you even think you have, since you haven't responded to my hints that you should clarify.
"I think you're wrong" is not a position.
The way you're saying this, it makes it seem like we're both in the same boat: I have no idea what position you're holding, either.
I feel like I'm doing the same thing over and over and nothing different is happening, but I'll quote what I said in another place in this thread and hope I was a tiny bit clearer.
http://lesswrong.com/lw/nnc/the_ai_in_marys_room/day2