No; an argument from authority can be a useful heuristic in certain cases, but at the very least you'd want to take away the one or two arguments you found most compelling and check them out later. In that sense, this is borderline.
Usually, however, this tactic is employed by people who are just looking for an excuse to flee into the warm embrace of an unassailable authority, often after scores of arguments they made were easily refuted. It is a mistake to give a low value to p(my position is mistaken | 10 arguments I have made have been refuted to my satisfaction in short order).
I've had forms of this said to me; it basically means "I'm losing the debate because you personally are smart, not because I'm wrong. Whichever authority I listen to in order to reinforce my existing beliefs would surely crush all your arguments. So stop assailing me with logic..."
It's Dark Side because it surrenders personal understanding to authority, and treats that surrender as a default epistemological position.
Wouldn't this only be correct if similar hardware ran the software the same way? Human thinking is highly associative and variable, and since language is shared among many humans, it doesn't, as such, have a single fixed formal representation.
I agree on the basic point, but my deeper point was that somewhere down the line you'll find the intelligence(s) that created a high-fidelity converter for an arbitrary amount of information from one format to another. Searle is free to claim that the system does not understand Chinese, but its very function could only have been imparted by parties who collectively speak Chinese very well, making the room at the very least a medium of communication utilizing this understanding.
And this is before we mention the entirely plausible claim that the room-person ...
Wouldn't such a GLUT by necessity require someone possessing immensely fine understanding of Chinese and English both, though? You could then say that the person+GLUT system as a whole understands Chinese, as it combines both the person's symbol-manipulation capabilities and the actual understanding represented by the GLUT.
You might still not possess understanding of Chinese, but that does not mean a meaningful conversation has not taken place.
Interestingly, my first reaction to this post was that a great deal of it reminds me of myself, especially near that age. I wonder if this is the result of ingrained bias? If I'm not mistaken, when you give people a horoscope or other personality description, about 90% of them will agree that it appears to refer to them, compared to the 8.33% we'd expect it to actually apply to. Then there's selection bias inherent to people writing on LW (wannabe philosophers and formal logic enthusiasts posting here? A shocker!). And yet...
I'm interested to know, did you...
I suspect that with memory on the order of 10^70 bytes, that might involve additional complications; but you're correct, normally this cancels out the complexity problem.
I didn't consider using 3 bits for pawns! Thanks for that :) I did account for such variables as castling rights and whose turn it is.
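For concreteness, here's a hedged back-of-envelope for a naive fixed-width encoding of one chess position; all the field widths below are my own illustrative assumptions, and a variable-width scheme (like spending only 3 bits on the plentiful pawns) would shave the board term down further.

```python
import math

# Naive fixed-width scheme: every square gets the same number of bits.
states_per_square = 13  # 6 piece types x 2 colors + empty (assumption)
bits_per_square = math.ceil(math.log2(states_per_square))  # -> 4 bits
board_bits = 64 * bits_per_square                          # -> 256 bits

side_to_move_bits = 1   # whose turn it is
castling_bits = 4       # "may castle" rights: king/queen side for each color
en_passant_bits = 4     # file of a possible en passant capture, if any

total_bits = board_bits + side_to_move_bits + castling_bits + en_passant_bits
print(total_bits, "bits, i.e. about", math.ceil(total_bits / 8), "bytes per position")
```

So a position fits in a few dozen bytes even before any clever compression.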
This is more or less what computers do today to win chess matches, but the space of possibilities explodes too fast; even the strongest computers can't really keep track of more than, I think, 13 or 14 moves ahead, even given a long time to think.
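The explosion is easy to sketch. Taking ~35 legal moves per position as an average branching factor (a commonly cited figure, but an assumption here), and taking the 13-14 depth above at face value:

```python
# Rough size of the chess game tree at a given search depth.
branching = 35  # assumed average number of legal moves per position

for depth in (13, 14):
    positions = branching ** depth
    print(f"depth {depth}: ~{positions:.1e} positions")
```

Even with aggressive pruning, each extra level multiplies the work by roughly that branching factor, which is why raw lookahead hits a wall so quickly.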
Merely storing all the positions that are unwinnable - regardless of why they are so - would require more matter than we have in the solar system. Not to mention the efficiency of running a DB search on that...
The two are not in conflict.
À la Levinthal's paradox, I can say that throwing a marble down a conical hollow at different angles and forces can have literally trillions of possible trajectories; à la Anfinsen's dogma, that should not stop me from predicting that it will end up at the bottom of the cone. But I'd need to know the shape of the cone (or, more specifically, its point's location) to determine exactly where that is - so being able to make the prediction once I know this is of no assistance for predicting the end position with a different, unknown ...
When I was studying under Amotz Zahavi (originator of the handicap principle theory, which is what you're actually discussing), he used to make the exact same points. In fact, he used to say that "no communication is reliable unless it has a cost".
Having this outlook on life over the past 5 years has made a lot of things seem very different - small questions like why some people don't use seatbelts and brag about it, or why men on dates leave big tips; but also bigger questions like advertising, how hierarchical relationships really work, etc.
Also expl...
These questions seem decidedly UNfair to me.
No, they don't depend on the agent's decision-making algorithm; they depend only on another agent's specific decision-making algorithm skewing results against any agent with an identical algorithm, letting all others reap the benefits of an otherwise non-advantageous situation.
So, a couple of things:
While I have not mathematically formulated this, I suspect that absolutely any decision theory can have a similar scenario constructed for it, using another agent / simulation with that specific decision theory as the basis f
For a while now, I've been meaning to check out the code for this and heavily revise it to include things like data storage space, physical manufacturing capabilities, non-immediately-lethal discovery by humans (so you detected my base in another dimension? Why should I care, again?), and additional modes of winning. All of which I will get around to soon enough.
But, I'll tell you this. Now when I revise it, I am going to add a game mode where your score is in direct proportion to the amount of office equipment in the universe, with the smallest allowed being a functional paperclip. I am dead serious about this.
I have likewise adjusted down my confidence that this would be as easy or as inevitable as I previously anticipated. Thus I would no longer say I am "vastly confident" in it, either.
Still good to have this buffer between making an AI and total global catastrophe, though!
The way I see it, there's no evidence that these problems require additional experimentation to resolve, rather than merely finding some obscure piece of experimentation that has already taken place and whose relevance may not be immediately obvious.
Sure, it's probable that more experimentation is needed; but it's by no means certain.
My point was that the AI is likely to start performing social experiments well before it is capable of even that conversation you depicted. It wouldn't know how much it doesn't know about humans.
I don't see how that would be relevant to the issue at hand, and thus why they "need to assume [this] possibility". Whether or not they assume the people they talk to can be more intelligent than they are is simply irrelevant, so long as they engage them on even intellectual ground (e.g. trading civil letters of argumentation).
What I was expressing skepticism about was that a system with even approximately human-level intelligence necessarily admits a stack trace that supports the kind of analysis you envision performing in the first place, without reference to intentional countermeasures.
Ah, that does clarify it. I agree, analyzing the AI's thought process would likely be difficult, maybe impossible! I guess I was being a bit hyperbolic in my earlier "crack it open" remarks (though depending on how seriously you take it, such analysis might still take place, hard...
Actually, I don't know that this means it has to perform physical experiments in order to develop nanotechnology. It is quite conceivable that all the necessary information is already out there, but we haven't been able to connect all the dots just yet.
At some point the AI hits a wall in the knowledge it can gain without physical experiments, but there's no good way to know how far ahead that wall is.
That's why the rule says challengeable inductive inference. If in the context of the discussion this is not obvious, then maybe yes; but in almost every other instance it's fine to make these shortcuts, so long as you're understood.