No, an argument from authority can be a useful heuristic in certain cases, but at least you'd want to take away the one or two arguments you found most compelling and check them out later. In that sense, this is borderline.
Usually, however, this tactic is employed by people who are just looking for an excuse to flee into the warm embrace of an unassailable authority, often after scores of arguments they made were easily refuted. It is a mistake to give a low value to p(my position is mistaken | 10 arguments I have made have been refuted to my satisfaction in short order).
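A quick sanity check with made-up numbers (the 2:1 likelihood ratio and treating the refutations as independent are illustrative assumptions on my part): if each refutation is even twice as likely to happen when your position is mistaken as when it is correct, then in odds form

$$\frac{P(\text{mistaken}\mid 10\text{ refutations})}{P(\text{correct}\mid 10\text{ refutations})} = \frac{P(\text{mistaken})}{P(\text{correct})} \times 2^{10} \approx \frac{P(\text{mistaken})}{P(\text{correct})} \times 1000,$$

so even a prior of 100:1 against being mistaken ends up at roughly 10:1 in favor of it.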
I've had forms of this said to me; it basically means "I'm losing the debate because you personally are smart, not because I'm wrong. Whichever authority I listen to in order to reinforce my existing beliefs would surely crush all your arguments. So stop assailing me with logic..."
It's Dark Side because it surrenders personal understanding to authority, and treats it as a default epistemological position.
Dark Side or not, it is quite often valid. People who do not trust their ability to filter bullshit from knowledge should not defer to whatever powerful debater attempts to influence them.
It is no error to assign a low value to p(the conclusion expressed is valid | I find the argument convincing).
Wouldn't this only be correct if similar hardware ran the software the same way? Human thinking is highly associative and variable, and as language is shared amongst many humans, it means that it doesn't, as such, have a fixed formal representation.
I agree on the basic point, but then my deeper point was that somewhere down the line you'll find the intelligence(s) that created a high-fidelity converter for an arbitrary amount of information from one format to another. Searle is free to claim that the system does not understand Chinese, but its very function could only have been imparted by parties who collectively speak Chinese very well, making the room at the very least a medium of communication utilizing this understanding.
And this is before we mention the entirely plausible claim that the room-person ...
Wouldn't such a GLUT by necessity require someone possessing immensely fine understanding of Chinese and English both, though? You could then say that the person+GLUT system as a whole understands Chinese, as it combines both the person's symbol-manipulation capabilities and the actual understanding represented by the GLUT.
You might still not possess understanding of Chinese, but that does not mean a meaningful conversation has not taken place.
Interestingly, my first reaction to this post was that a great deal of it reminds me of myself, especially near that age. I wonder if this is the result of ingrained bias? If I'm not mistaken, when you give people a horoscope or other personality description, about 90% of them will agree that it appears to refer to them, compared to the 8.33% we'd expect it to actually apply to. Then there's selection bias inherent to people writing on LW (wannabe philosophers and formal logic enthusiasts posting here? A shocker!). And yet...
I'm interested to know, did you...
I suspect that with memory on the order of 10^70 bytes, that might involve additional complications; but you're correct, normally this cancels out the complexity problem.
I didn't consider using 3 bits for pawns! Thanks for that :) I did account for such variables as castling rights and whose turn it is.
This is more or less what computers do today to win chess matches, but the space of possibilities explodes too fast; even the strongest computers can't really keep track of more than I think 13 or 14 moves ahead, even given a long time to think.
Merely storing all the positions that are unwinnable - regardless of why they are so - would require more matter than we have in the solar system. Not to mention the efficiency of running a DB search on that...
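A back-of-the-envelope sketch of that storage claim, in Python (the encoding is a deliberately naive upper bound that ignores legality, and the atoms-in-the-solar-system figure is a rough order-of-magnitude assumption):

```python
import math

# Naive upper bound: each of the 64 squares holds one of 13 states
# (6 white piece types, 6 black piece types, or empty).  This wildly
# overcounts because it ignores legality, but it shows the scale.
SQUARES = 64
STATES_PER_SQUARE = 13
positions = STATES_PER_SQUARE ** SQUARES

ATOMS_IN_SOLAR_SYSTEM = 10 ** 57   # rough figure, dominated by the Sun

print(f"naive upper bound on positions: ~10^{math.log10(positions):.0f}")
print(f"at one position per atom, we are still short by a factor of "
      f"~10^{math.log10(positions / ATOMS_IN_SOLAR_SYSTEM):.0f}")
```

This naive count lands in the same ballpark as the 10^70-byte figure mentioned above.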
The two are not in conflict.
A la Levinthal's paradox, I can say that throwing a marble down a conical hollow at different angles and with different force can produce literally trillions of possible trajectories; a la Anfinsen's dogma, that should not stop me from predicting that it will end up at the bottom of the cone. But I'd need to know the shape of the cone (or, more specifically, the location of its point) to determine exactly where that is - so being able to make the prediction once I know this is of no assistance for predicting the end position with a different, unknown ...
When I was studying under Amotz Zahavi (originator of the handicap principle theory, which is what you're actually discussing), he used to make the exact same points. In fact, he used to say that "no communication is reliable unless it has a cost".
Having had this outlook on life for the past 5 years has made a lot of things seem very different - small questions like why some people don't use seatbelts and brag about it, or why men on dates leave big tips; but also bigger questions like advertising, how hierarchical relationships really work, etc.
Also expl...
These questions seem decidedly UNfair to me.
No, they don't depend on the agent's decision-making algorithm; just on another agent's specific decision-making algorithm skewing results against an agent with an identical algorithm and letting all others reap the benefits of an otherwise non-advantageous situation.
So, a couple of things:
While I have not mathematically formulated this, I suspect that absolutely any decision theory can have a similar scenario constructed for it, using another agent / simulation with that specific decision theory as the basis f
For a while now, I've been meaning to check out the code for this and heavily revise it to include things like data storage space, physical manufacturing capabilities, non-immediately-lethal discovery by humans (so you detected my base in another dimension? Why should I care, again?), and additional modes of winning. All of which I will get around to soon enough.
But, I'll tell you this. Now when I revise it, I am going to add a game mode where your score is in direct proportion to the amount of office equipment in the universe, with the smallest allowed being a functional paperclip. I am dead serious about this.
I have likewise adjusted down my confidence that this would be as easy or as inevitable as I previously anticipated. Thus I would no longer say I am "vastly confident" in it, either.
Still good to have this buffer between making an AI and total global catastrophe, though!
The way I see it, there's no evidence that these problems require additional experimentation to resolve, rather than finding some obscure piece of experimentation that has already taken place and whose relevance may not be immediately obvious.
Sure, it's probable that more experimentation is needed; but it's by no means certain.
My point was that the AI is likely to start performing social experiments well before it is capable of even that conversation you depicted. It wouldn't know how much it doesn't know about humans.
I don't see how that would be relevant to the issue at hand, and thus, why they "need to assume [this] possibility". Whether they assume the people they talk to can be more intelligent than them or not, so long as they engage them on an even intellectual ground (e.g. trading civil letters of argumentation), is simply irrelevant.
What I was expressing skepticism about was that a system with even approximately human-level intelligence necessarily produces a stack trace that supports the kind of analysis you envision performing in the first place, without reference to intentional countermeasures.
Ah, that does clarify it. I agree, analyzing the AI's thought process would likely be difficult, maybe impossible! I guess I was being a bit hyperbolic in my earlier "crack it open" remarks (though depending on how seriously you take it, such analysis might still take place, hard...
Actually, I don't know that this means it has to perform physical experiments in order to develop nanotechnology. It is quite conceivable that all the necessary information is already out there, but we haven't been able to connect all the dots just yet.
At some point the AI hits a wall in the knowledge it can gain without physical experiments, but there's no good way to know how far ahead that wall is.
I think the weakest link here is human response to the AI revealing it can be deceptive. There is absolutely no guarantee that people would act correctly under these circumstances. Human negligence for a long enough time would eventually give the AI a consistent ability to manipulate humans.
I also agree that simulating relationships makes sense as it can happen in "AI time" without having to wait for human response.
The other reservations seem less of an issue to me...
That game theory knowledge coupled with the most basic knowledge about humans is...
It's not. Apparently I somehow replied to the wrong post... It's actually aimed at sufferer's comment, the one you were replying to.
I don't suppose there's a convenient way to move it? I don't think retracting and re-posting would clean it up sufficiently, in fact that seems messier.
Presumably, you build a tool-AI (or three) that will help you solve the Friendliness problem.
This may not be entirely safe either, but given the parameters of the question, it beats the alternative by a mile.
That is indeed relevant, in that it describes some perverse incentives and weird behaviors of nonprofits, with an interesting example. But knowing this context without having to click the link would have been useful. It is customary to explain what a link is about rather than just drop it.
(Or at least it should be)
I really don't see why the drive can't be to issue the predictions most likely to be correct as of the moment of the question (and only for the last question it was asked), calculating outcomes under the assumption that the Oracle immediately spits out blank paper as its answer.
Yes, in a certain subset of cases this can result in inaccurate predictions. If you want to have fun with it, have it also calculate the future including its involvement, but rather than reply what it is, just add "This prediction may be inaccurate due to your possible reaction to t...
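A minimal toy sketch of what I have in mind, with all names made up purely for illustration (this is about pinning down the idea, not a real design):

```python
# Toy Oracle that answers only the most recent question, and whose forecast is
# conditioned on the counterfactual that its own visible output is blank paper.
# `StubWorldModel` is a made-up stand-in, not any real system.

class StubWorldModel:
    def predict(self, question: str, assumed_oracle_output: str) -> str:
        # A real model would forecast the world given the question while
        # assuming the Oracle's output is `assumed_oracle_output`; here we
        # just echo the assumption back.
        return (f"forecast for {question!r}, computed as if the Oracle "
                f"printed {assumed_oracle_output!r}")

def oracle_answer(model: StubWorldModel, question: str) -> str:
    # Only the latest question matters, and the prediction deliberately
    # ignores any reaction the answer itself might provoke.
    return model.predict(question, assumed_oracle_output="<blank paper>")

print(oracle_answer(StubWorldModel(), "What will the market do tomorrow?"))
```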
after all, if "even a chance" is good enough, then all the other criticisms melt away
Not to the degree that SI could be increasing the existential risk, a point Holden also makes. "Even a chance" swings both ways.
That subset of humanity holds considerably less power, influence and visibility than its counterpart; resources that could be directed to AI research and for the most part aren't. Or in three words: Other people matter. Assuming otherwise would be a huge mistake.
I took Wei_Dai's remarks to mean that Luke's response is public, and so can reach the broader public sooner or later; and when examined in a broader context, that it gives off the wrong signal. My response was that this was largely irrelevant, not because other people don't matter, but because of other factors outweighing this.
It's a fine line though, isn't it? Saying "huh, looks like we have much to learn, here's what we're already doing about it" is honest and constructive, but sends a signal of weakness and defensiveness to people not bent on a zealous quest for truth and self-improvement. Saying "meh, that guy doesn't know what he's talking about" would send the stronger social signal, but would not be constructive to the community actually improving as a result of the criticism.
Personally I prefer plunging ahead with the first approach. Both in the abstr...
I see no reason for it to do that before simple input-output experiments, but let's suppose I grant you this approach. The AI simulates an entire community of mini-AI and is now a master of game theory.
It still doesn't know the first thing about humans. Even if it now understands the concept that hiding information gives an advantage for achieving goals - this is too abstract. It wouldn't know what sort of information it should hide from us. It wouldn't know to what degree we analyze interactions rationally, and to what degree our behavior is random. It wo...
I'm afraid not.
Actually, as someone with a background in biology, I can tell you that this is not a problem you want to approach atoms-up. It's been tried, and our computational capabilities fell woefully short of succeeding.
I should explain what "woefully short" means, so that the answer won't be "but can't the AI apply more computational power than us?". Yes, presumably it can. But the scales are immense. To explain it, I will need an analogy.
Not that long ago, I had the notion that chess could be fully solved; that is, that you could si...
It is very possible that the necessary information already exists, imperfect and incomplete though it may be, and that enough processing of it would yield the correct answer. We can't know otherwise, because we don't spend thousands of years analyzing our current level of information before beginning experimentation; but given the shift between AI-time and human-time, the AI could agonize over that problem with a good deal more cleverness and ingenuity than we've been able to apply to it so far.
That isn't to say, that this is likely; but it doesn't seem far-fetched to me. If you gave an AI the nuclear physics information we had in 1950, would it be able to spit out schematics for an H-bomb, without further experimentation? Maybe. Who knows?
I would not consider a child AI that tries a bungling lie on me to see what I do "so safe". I would immediately shut it down and debug it, at best, or write a paper on why the approach I used should never ever be used to build an AI.
And it WILL tell a bungling lie at first. It can't learn the need to be subtle without witnessing the repercussions of not being subtle. Nor would it have a reason to consider doing social experiments in chat rooms when it doesn't understand chat rooms and has an engineer willing to talk to it right there. That is, assum...
An experimenting AI that tries to achieve goals and has interactions with humans whose effects it can observe, will want to be able to better predict their behavior in response to its actions, and therefore will try to assemble some theory of mind. At some point that would lead to it using deception as a tool to achieve its goals.
However, following such a path to a theory of mind means the AI would be exposed as unreliable LONG before it's even subtle, not to mention possessing superhuman manipulation abilities. There is simply no reason for an AI to first...
While the example given is not the main point of the article, I'd still like to share a bit of actual data. Especially since I'm kind of annoyed at having spouted this rule as gospel without having a source, before.
A study done at IBM shows that a defect caught during the coding stage costs about $25 to fix (basically in engineer hours used to find and fix it).
This cost quadruples to $100 during the build phase; presumably because this can bottleneck a lot of other people trying to submit their code, if you happen to break the build.
The cost quadruples again for...
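To make the pattern concrete (only the $25 and $100 figures come from the study as I quoted it; the later phase names and the assumption that the 4x factor keeps holding are mine, purely for illustration):

```python
# Illustrative sketch of the quadrupling pattern described above.  Only the
# coding ($25) and build ($100) figures are from the cited study; the later
# phase names and the continued 4x factor are assumptions.
BASE_COST = 25   # USD to fix a defect caught during coding
FACTOR = 4       # each later phase roughly quadruples the cost

phases = ["coding", "build", "testing", "production"]  # later names assumed
for i, phase in enumerate(phases):
    print(f"{phase:<11} ~${BASE_COST * FACTOR ** i}")
# coding ~$25, build ~$100, then ~$400 and ~$1600 if the pattern holds
```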
Now, I have seen some interesting papers that construct expanded probability theories which include 0 and 1 as logical falsehood and truth, respectively. But those still do not include a special value for contradictions.
Except, contradictions really are the only way you can get to logical truth or falsehood; anything other than that necessarily relies on inductive reasoning at some point. So any probability theory employing those must use contradictions as a means for arriving at these values in the first place.
I do think that there's not much room for contrad...
Or, you can treat "heapness" as a boolean and still completely clobber this paradox just by being specific about what it actually means for us to call something a heap.
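A toy version of what I mean (the 100-grain threshold is an arbitrary choice; the point is only that once the predicate is pinned down, the paradox has nothing to grab onto):

```python
# Treat "heapness" as a boolean by committing to an explicit rule.
# The threshold of 100 grains is arbitrary; any precise rule would do.
HEAP_THRESHOLD = 100  # grains

def is_heap(grains: int) -> bool:
    return grains >= HEAP_THRESHOLD

print(is_heap(99), is_heap(100))  # False True -- no sorites-style slide
```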
I'd like to mention that I had an entire family branch hacked off in the Holocaust, in fact have a great uncle still walking around with a number tattooed on his forearm, and have heard dozens of eye witness accounts of horrors I could scarce imagine. And I'm still not okay with Holocaust Denial laws, which do exist where I live.
In part, this is just my aversion to abandoning the Schelling point you mention; but lately, this is becoming more of an actual concern: My country is starting to legislate some more prohibitions on free speech, all of them targeti...
I don't understand why you think the graphs are not measuring a quantifiable metric, nor why it would not be falsifiable. Especially if the ratios are as dramatic as often depicted, I can think of a lot of things that would falsify it.
I also don't find it difficult to say what they measure: The cost of fixing a bug depending on which stage it was introduced in (one graph) or which stage it was fixed in (other graph). Both things seem pretty straightforward to me, even if "stages" of development can sometimes be a little fuzzy.
I agree with your po...
You are attributing to me things I did not say.
I don't think "truths" discovered under false assumptions are likely to be, in fact, true. I am not worried about them acquiring dangerous truths; rather, I am worried about people acquiring (and possibly acting on) false beliefs. I remind you that false beliefs may persist as cached thoughts even once the assumption is no longer believed in.
Nor do I want my political opponents to not search for truth; but I would prefer that they (and I) try to contend with each others' fundamental differences before focusing on how to fully realize their (or my) current position.
A costly, but simple way would be to gather groups of SW engineers and have them work on projects where you intentionally introduce defects at various stages, and measure the costs of fixing them. To be statistically meaningful, this probably means thousands of engineer hours just to that effect.
A cheap (but not simple) way would be to go around as many companies as possible and carry out the relevant measurements on actual products. This entails a lot of variables, however - engineer groups tend to work in many different ways. This might cause the data to be l...
It's possible that I misconstrued the meaning of your words; not being a native English speaker myself, this happens on occasion. I was going off of the word "vibrant", which I understand to mean among other things "vital" and "energetic". The opposite of that is to make something sickly and weak.
But regardless of any misunderstanding, I would like to see some reference to the main point I was making: Do you want people to think on how best to do the opposite of what you are striving for (making the country less vibrant and diverse, whatever that means), or do you prefer to determine which of you is pursuing a non-productive avenue of investigation?
This strikes me as particularly galling because I have in fact repeated this claim to someone new to the field. I think I prefaced it with "studies have conclusively shown...". Of course, it was unreasonable of me to think that what is being touted by so many as well-researched was not, in fact, so.
Mind, it seems to me that defects do follow both patterns: introducing defects earlier and/or fixing them later should come at a higher dollar cost; that just makes sense. However, it could be the same type of "makes sense" that made Aristotl...
Downvoted for conflating "not wanting to make your country more vibrant and diverse", and "wanting to destroy the country".
One that's already related to LW - commonsenseatheism.com; however that reinforces the thought that any LW regular who also frequents other places could discuss or link to it there.
I've up-voted several lists containing statements with which I disagree (some vehemently so), but which were thought provoking or otherwise helpful. So, even if this is just anecdotal evidence, the process you described seems to be happening.
An interesting thought, but as a practical matter it's a bad idea.
A lot of the problems with how people debate stem from their underlying assumptions being different while this goes unnoticed. So two people can argue about whether it's right or wrong to fight in Iraq when their actual disagreement is over whether Arabs count as people, and could actually argue for hours before realizing this disagreement exists (Note: This is not a hypothetical example). Failing to target the differences in fundamental assumptions leads to much of the miscommunication we so often see.
By hav...
I came to this thread by way of someone discussing a specific comment in an outside forum.
I'm finding it difficult to think of an admission criterion to the conspiracy that would not ultimately result in even larger damage than discussing matters openly in the first place.
To clarify: It's only a matter of time before the conspiracy leaks, and when it does, the public would take its secrecy as further damning evidence.
Perhaps the one thing you could do is keep the two completely separate on paper (and both public). Guilt by association would still be easy to invoke once the overlapping of forum participants is discovered, but that is much weaker than actually keeping a secret society discussing such issues.
I'm fairly convinced (65%) that Lalonde appearified the Sassacre book in such a way that it killed Jaspers, which is why she had to leave so abruptly.
I somehow never thought to combine Homestuck wild mass guessing with prediction markets. And didn't really expect this on LW, for some reason. Holy cow.
Hm, let's try my two favorite pet theories...
In a truly magnificent Moebius double reacharound, the troll universe will turn out to have been created by the kids' session (either pre- or post-scratch): 40% (used to be higher, but now we have some asymmetries between the sessions, like The Tumor, so.)
In an even more bizarre mindscrew that echoes paradox cloning, the various kids and guardians will turn
That's why the rule says challengeable inductive inference. If, in the context of the discussion, this is not obvious, then maybe yes; but in almost every other instance it's fine to make these shortcuts, so long as you're understood.