All of siodine's Comments + Replies

siodine20

In light of the following comment by jim, I think we do disagree:

Please be careful about exposing programmers to ideology; it frequently turns into politics and kills their minds. This piece in particular is a well-known mindkiller, and I have personally witnessed great minds acting very stupid because of it. The functional/imperative distinction is not a real one, and even if it were, it's less important to provability than languages' complexity, the quality of their type systems, and the amount of stupid lurking in their dark corners.

And while I would nor... (read more)

2loup-vaillant
Let's see: Oops. If I got it right, you are saying that he perceived the surface features (dealing with collections), but not their deeper cause (avoiding mutable state). Sounds about right. Re-oops, I guess. Now it occurs to me that I may have forced you to. Sorry.
siodine10

Functional programming isn't an idiomatic approach to container manipulation; it's a paradigm that avoids mutable state and data. Write a GUI in Haskell using pure functions to see how different the functional approach is and what it is at its core. Or just compare a typical textbook on imperative algorithms with one on functional algorithms. Container manipulation with functions is just an idiom.

And sure you can write functional code in C++, for example (which by the way has map, filter, fold, and so on), but you can also write OO code in C. But few peop... (read more)
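To make the mutable-state point concrete, here is a minimal Haskell sketch (my own example, not from the original comment) of the kind of computation an imperative loop would do with a mutable accumulator, written instead as pure function composition:

```haskell
-- Sum of the squares of the even numbers, with no variable ever updated:
-- each stage produces a new value from the previous one.
sumOfEvenSquares :: [Int] -> Int
sumOfEvenSquares = foldr (+) 0 . map (^ 2) . filter even

main :: IO ()
main = print (sumOfEvenSquares [1 .. 10])  -- 220
```

Here map, filter, and fold are the visible idiom; the paradigm-level commitment is that there is no state to mutate in the first place.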

0loup-vaillant
I'm not sure you and jimrandomh actually disagree on that point. I mean, avoiding mutable state is bound to change your approach to container manipulation. You described the root cause; he described the surface differences. Also, jimrandomh knows that: I personally tried functional style in C++, and did feel the pain ( is unusable before C++11). There are ways, however, to limit the damage. And languages such as Python and Javascript do not discourage the functional style so much. Their libraries do. Now sure, languages do favour one style over the other, like Ocaml vs Lua. It does let us classify them meaningfully. On the other hand, it is quite easy to go against the default. I'm pretty sure both you and jimrandomh agree with that as well. From the look of it, you just talked past each other, while there was no real disagreement to begin with…
siodine00

-

[This comment is no longer endorsed by its author]
siodine00

-

[This comment is no longer endorsed by its author]
siodine10

Laziness can muddy the waters, but it's also optional in functional programming. People using Haskell in a practical setting usually avoid it and are coming up with new language extensions to make strict evaluation the default (in record fields, for example).

What you're really saying is that the causal link between assembly and the language is less obvious, which is certainly true, as it is a very high-level language. However, if we're talking about the causality of the language itself, then functional languages enforce a more transparent causal structure of the ... (read more)
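For what it's worth, here is a minimal sketch (my own example, not the commenter's) of what opting out of laziness looks like at the record level, using the strictness annotations standard Haskell already allows on data fields:

```haskell
-- The (!) annotations force each field to be evaluated when a Point is
-- constructed, instead of accumulating unevaluated thunks.
data Point = Point
  { px :: !Double
  , py :: !Double
  } deriving Show

centroid :: [Point] -> Point
centroid ps = Point (avg (map px ps)) (avg (map py ps))
  where avg xs = sum xs / fromIntegral (length xs)

main :: IO ()
main = print (centroid [Point 0 0, Point 2 4])  -- Point {px = 1.0, py = 2.0}
```

The language extensions the comment alludes to push in the same direction, making this kind of strictness the default rather than something annotated field by field.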

siodine20

Why do you use Diigo and Pocket? They do the same thing. Also, with Evernote's Clearly you can highlight articles.

You weren't asking me, but I use Diigo to manage links to online textbooks and tutorials, shopping items, book recommendations (through Amazon), and my list of less important online articles to read. Evernote for saving all of my important read content (and I tag everything). Amazon's Send to Kindle extension to read longer articles (every once in a while I'll save all my clippings from my Kindle to Evernote). And then I maintain a personal wiki ... (read more)

1mapnoterritory
I used Diigo for annotation before Clearly had highlighting. Now, just as you do, I use Diigo for link storage and Evernote for content storage. Diigo annotation still has the advantage that it excerpts the text you highlight. With Clearly, if I want to have the highlighted parts I have to find and manually select them again... Also, tagging from Clearly requires 5 or so clicks, which is ridiculous... But I hope it will get fixed. I plan to use Pocket once I get a tablet... it is pretty and convenient, but the most likely to get cut out of the workflow. Thanks for the Evernote import function - I'll look into it; maybe it could make the Evernote - org-mode integration tighter. Even then, having 3 separate systems is not quite optimal...
siodine20

He's stating that it will invoke arguments and distract from the thrust of the point - and guess what, he's right. Look at what you're doing, right here.

No. "It" didn't invoke this thread, jimrandomh's fatuous comment combined with it being at the top of the comment section did (I don't care that it was a criticism of functional programming). You keep failing to understand the situation and what I'm saying, and because of this I've concluded that you're a waste of my time and so I won't be responding to you further.

2[anonymous]
It's really a pity that everyone (= the three or four people who downvoted everything you wrote in the thread) seems to have missed your point. I largely agree with your take on the situation, for what it's worth.
siodine-20

There are two major branches of programming: Functional and Imperative. Unfortunately, most programmers only learn imperative programming languages (like C++ or Python). I say unfortunately, because these languages achieve all their power through what programmers call "side effects". The major downside for us is that this means they can't be efficiently machine checked for safety or correctness. The first self-modifying AIs will hopefully be written in functional programming languages, so learn something useful like Haskell or Scheme.

Comes fro... (read more)

-3OrphanWilde
Yes, it's his comment about imperative languages, in the main post. He's stating that it will invoke arguments and distract from the thrust of the point - and guess what, he's right. Look at what you're doing, right here. You're not merely involved in the holy war, you're effectively arguing, here, that the holy war is more important than the point Louie was -actually- trying to make in his post, which he distracted some users from with an entirely unnecessary-to-his-post attack on imperative programming languages.
siodine-30

No. Jimrandomh just says functional programming, imperative programming, etc. are "ideologies" (importing the negative connotation). Just says it kills minds. Just says it's a well-known mindkiller. Just says it's not a real distinction. Just puts it in a dichotomy between being more or less important than "languages' complexity, the quality of their type systems and the amount of stupid lurking in their dark corners." What Louie says is more reasonable given that it's a fairly standard position within academia and because it's a small part of a larger post. (I'd rather Louie had sourced what he said, though.)

-1OrphanWilde
No, what he's saying is that Louie's -comments- about imperative programming amount to ideology. Being a standard ideology doesn't make it less of an ideology.
-1jimrandomh
What criticism of functional programming? The grandparent comment (and my other comment in the thread) said nothing whatsoever about whether functional programming is good or bad. I only said that it was bad to present it as ideology - as opposed to, say, teaching in SML and leaving the whole functional/imperative thing unremarked.
-5OrphanWilde
siodine00

the entire point of functional programming is to hide the causality of the program from the human

Why? I would say it's the opposite (and really, the causality being clear and obvious is just a corollary of referential transparency). The difficulty of reasoning about concurrent/parallel code in an imperative language, for example, is one of the largest selling points of functional programming languages like Erlang and Haskell.
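As a rough sketch of the referential-transparency point (a hypothetical example, not from the thread): a pure expression can be replaced by its value anywhere without changing behaviour, so the only "causality" left in the program is the visible data dependency between expressions.

```haskell
-- Because price has no side effects, orderA and orderB are guaranteed to be
-- interchangeable; nothing hidden can make them diverge.
price :: Int -> Int
price qty = qty * 3

orderA :: Int
orderA = price 7 + price 7

orderB :: Int
orderB = let p = price 7 in p + p

main :: IO ()
main = print (orderA == orderB)  -- True
```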

4IlyaShpitser
The causality in a functional language is far from obvious. Consider Haskell, a language that is both purely functional and lazy, and is considered somewhat of a beautiful poster child of the functional approach. Say you write a program and it has a bug -- it's not doing what it's supposed to. How would you debug it? Some alternatives: (a) Use a debugger to step through a program until it does something it's not supposed to (this entails dealing with a causal order of evaluation of statements -- something Haskell as a lazy and functional language is explicitly hiding from you until you start a debugger). (b) Use good ol' print statements. These will appear in a very strange order because of lazy evaluation. Again, Haskell hides the true order -- the true order has nothing to do with the way the code appears on the page. This makes it difficult to build a causal model of what's going on in your program. A causal model is what you need if you want to use print statements to debug. (c) Intervene in a program by changing some intermediate runtime value to see what would happen to the output. As a functional language, Haskell does not allow you to change state (ignoring monads, which are a very complicated beast, and at any rate would not support a straightforward value change while debugging anyway). ---------------------------------------- My claim is that causality is so central to how human beings think about complex computer programs that it is not possible to write and debug large programs written in functional style without either building a causal model of your program (something most functional languages will fight with you about to the extent that they are functional), or mostly sticking to an imperative "causal" style, and only using simple functional idioms that you know work and that do not require further thinking (like map and reduce, and simple closure use). Note that even Haskell, a language committed to "wearing the hair shirt" (http://research.micros
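A tiny sketch of alternative (b) above (my own example, not the commenter's): with GHC's Debug.Trace, messages appear in demand order rather than source order, which is exactly the mismatch between the page and the evaluation order being described.

```haskell
import Debug.Trace (trace)

-- Each message fires only when (and if) its value is demanded, so the output
-- order follows demand, not the order the definitions appear on the page.
x :: Int
x = trace "evaluating x" (2 + 2)

y :: Int
y = trace "evaluating y" (3 * 3)

main :: IO ()
main = do
  print y        -- "evaluating y" prints here, before anything about x
  print (x + y)  -- x is forced only now; y was already evaluated and is reused
```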
siodine40

I don't think you understand functional programming. What background are you coming from?

0loup-vaillant
Where did that come from? I didn't spot anything wrong in his comment, and I'm pretty knowledgeable myself (I'm no authority, but I believe my grasp of FP is quite comprehensive). (Edit: Retracted: I now see it did come from somewhere: several comments, including the top one)
siodine60

From SEP:

Judith Thomson provided one of the most striking and effective thought experiments in the moral realm (see Thomson, 1971). Her example is aimed at a popular anti-abortion argument that goes something like this: The foetus is an innocent person with a right to life. Abortion results in the death of a foetus. Therefore, abortion is morally wrong. In her thought experiment we are asked to imagine a famous violinist falling into a coma. The society of music lovers determines from medical records that you and you alone can save the violinist's life b

... (read more)

The thought experiment functions as an informal reductio ad absurdum of the argument 'Fetuses are people. Therefore abortion is immoral.' or 'Fetuses are conscious. Therefore abortion is immoral.' That's all it's doing. If you didn't find the arguments compelling in the first place, then the reductio won't be relevant to you. Likewise, if you think the whole moral framework underlying these anti-abortion arguments is suspect, then you may want to fight things out at the fundaments rather than getting into nitty-gritty details like this. The significance of... (read more)

siodine00

I prefer to think of 'abstract' as 'not spatially extended or localized.'

I prefer to think of it as anything existing at least partly in mind, and then we can say we have an abstraction of an abstraction or that something is more abstract (something from category theory being a pure abstraction, while something like the category "dog" is less abstract because it connects with a pattern of atoms in reality). By their nature, abstractions are also universals, but things that actually exist like the bee hive in front of me aren't par... (read more)

2Rob Bensinger
That's problematic, first, because it leaves mind itself in a strange position. And second because, if mathematical platonism (for example) were true, then there would exist abstract objects that are mind-independent. You seem to be assuming the pattern-matching of this sort is a vice. If it's useful to mark the pattern in question, and we recognize that we're doing so for utilitarian reasons and not because there's a transcendent Essence of Scienceyness, then the pattern-matching is benign. It's how humans think, and we can't become completely inhuman if our goal is to take the rest of mankind with us into the future. Not yet, anyway. Religions are also feedback loops. The more I believe, the more my belief gets confirmed. Remarkable! The primary problem with this ultra-attenuated notion of what we want is that all the work is being done by the black-box normative terms like 'improvement' and 'better' and 'optimal.' Everything we're actually trying to concretely teach is hidden behind those words. We also need more content than 'working with a feedback loop from reality'; that kind of metaphorical talk might fly on LessWrong, but it's really a summary of some implicit intuitions we already share, not instruction we could in those words convey to someone who doesn't already see what we're getting at. After all, everything exists in a back-and-forth with reality, and everything is for that matter part of reality. Perhaps my formulations of what we want are too concrete; but yours are certainly too abstract and underdetermined.
siodine00

For you, if I'm understanding you right, they're professions, institutions, social groups, population-wide behaviors. Sociology is generally considered more abstract or high-level than psychology.

You're kind of understanding me. Abstractly, bee hives produce honey. Concretely, this bee hive in front of me is producing honey. Abstractly, science is the product of professions, institutions, etc. Concretely, science is the product of people on our planet doing stuff.

I'm literally trying to not talk about abstractions or concepts but science as it actually... (read more)

0Rob Bensinger
It sounds like you're conflating abstract/concrete with general/particular. But a universal generalization might just be the conjunction of a lot of particulars. I prefer to think of 'abstract' as 'not spatially extended or localized.' Societies are generally considered more abstract than mental states because mental states are intuitively treated as more localized. But 'lots of mental states' is not more abstract than 'just one mental state,' in the same way that thousands of bees (or 'all the bees,' in your example) can be just as concrete as a single bee. We're back at square one. I still don't see why reasoning is more abstract than professions, institutions, etc. We agree that it all reduces to human behaviors on some level. But the 'abstract vs. concrete' discussion is a complete tangent. What's relevant is whether it's useful to have separate concepts of 'the practice of science' vs. 'professional science,' the former being something even laypeople can participate in by adopting certain methodological standards. I think both concepts are useful. You seem to think that only 'professional science' is a useful concept, at least in most cases. Is that a fair summary? Counterfactuals don't make sense if you think of things as they are? I don't think that's true in any nontrivial sense.... 'Scientific' is not any more guilty of essentializing than are any of our other fuzzy, ordinary-language terms. There are salient properties associated with being a scientist; I'm suggesting that many of those clustered properties, in particular many of the ones we most care about when we promote and praise things like 'science' and 'naturalism,' can occur in isolated individuals. If you don't like calling what I'm talking about 'scientific,' then coin a different word for it; but we need some word. We need to be able to denote our exemplary decision procedures, just to win the war of ideas. 'Professional science' is not an exemplary decision procedure, any more than 'the bui
siodine00

I thought you were saying that the distinctions have become less blurred?

Yup, my bad. You caught me before my edit.

Do you think these would be useful fast-and-ready definitions for everyday promotion of scientific, philosophical, and mathematical literacy? Would you modify any of them?

I think you're reifying abstraction and doing so will introduce pitfalls when discussing them. Math, science, and philosophy are the abstracted output of their respective professions. If you take away science's competitive incentive structure or change its mechanism of... (read more)

0Rob Bensinger
I think your definitions are more abstract than mine. For me, mathematics, philosophy, and science are embodied brain behaviors — modes of reasoning. For you, if I'm understanding you right, they're professions, institutions, social groups, population-wide behaviors. Sociology is generally considered more abstract or high-level than psychology. (Of course, I don't reject your definitions on that account; denying the existence of philosophizing or of professional philosophy because one or the other is 'abstract' would be as silly as denying the existence of abstractions like debt, difficulty, truth, or natural selection. I just think your abstraction is of somewhat more limited utility than mine, when our goal is to spread good philosophizing, science, and mathematics rather than to treat the good qualities of those disciplines as the special property of a prestigious intellectual elite belonging to a specific network of organizations.) Feedback cycles are great, but we don't need to build them into our definition of 'science' in order to praise science for happening to possess them; if we put each scientist on a separate island, their work might suffer as a result, but it's not clear to me that they would lose all ability to do anything scientific, or that we should fail to clearly distinguish the scientifically-minded desert-islander for his unusual behaviors. Also, it's not clear in what sense mathematics has a self-improving recursive feedback cycle with reality. Actually, mathematics and philosophy seem to function very analogously in terms of their relationship to reality and to science. I'm not sure that's the best approach. Telling people to find a recursively self-improving method is not likely to be as effective as giving them concrete reasoning skills (like how to perform thought experiments, or how to devise empirical hypotheses, or how to multiply quantities) and then letting intelligent society-wide behaviors emerge via the marketplace of ideas (or
siodine10

I agree, but the problems remain and the arguments flourish.

siodine00

I didn't say they don't overlap. I said the distinctions have become less blurred (I think because of the need for increased specialization in all intellectual endeavours as we accumulate more knowledge). I define philosophy, math, and science by their professions. That is, their university departments, their journals, their majors, their textbooks, and so on.

Hence, I think the best way to ask if "philosophy" is a worthwhile endeavour is to ask "why should we fund philosophy departments?" A better way to ask that question is "why... (read more)

0Rob Bensinger
I thought you were saying that the distinctions have become less blurred? Now I'm confused. That's fine for some everyday purposes. But if we want to distinguish the useful behaviors in each profession from the useless ones, and promote the best behaviors both among laypeople and among professionals, we need more fine-grained categories than just 'everything that people who publish in journals seen as philosophy journals do.' I think it would be useful to distinguish Professional Philosophy, Professional Science, and Professional Mathematics from the basic human practices of philosophizing, doing science, or reasoning mathematically. Something in the neighborhood of these ideas would be quite useful: mathematics: carefully and systematically reasoning about quantity, or (more loosely) about the quantitative properties and relationships of things. philosophy: carefully reasoning about generalizations, via 'internal' reflection (phenomenology, thought experiments, conceptual analysis, etc.), in a moderately (more than shamanic storytelling, less than math or logic) systematic way. science: carefully collecting empirical data, and carefully reasoning about its predictive and transparently ontological significance. Do you think these would be useful fast-and-ready definitions for everyday promotion of scientific, philosophical, and mathematical literacy? Would you modify any of them?
siodine00

Even though the Wikipedia page for "meaning of life" is enormous, it all boils down to the very simple either/or statement I gave.

How do we know if something is answerable? Did a chicken just materialize 10 billion light years from Earth? We can't answer that. Is the color blue the best color? We can't answer that. We can answer questions that contact reality such that we can observe them directly or indirectly. Did a chicken just materialize in front me? No. Is the color blue the most preferred color? I don't know, but it can be well answered t... (read more)

0Peterdjones
Providing you ignore the enormous amount of substructure hanging off each option. We generally perform some sort of armchair conceptual analysis. Why not? Doesn't it need to decide which questions it can answer? First I've heard of it. Who did that? Where was it published? Or impossible, or the brain isn't solely responsible, or something else. It would have helped to have argued for your preferred option. Give it another century or so. We've barely explored the brain. Philosophy generally can't solve scientific problems, and science generally can't solve philosophical ones. And what about my interactions with others? Am I entitled to snatch an orange from a starving man because I need a few extra milligrams of vitamin C?
siodine30

My first thought was "every philosophical thought experiment ever" and to my surprise Wikipedia says there aren't that many thought experiments in philosophy (although they are huge topics of discussion). I think the violinist experiment is uniquely bad. The floating man experiment is another good example, but very old.

3Rob Bensinger
What's your objection to the violinist thought experiment? If you're a utilitarian, perhaps you don't think the waters here are very deep. It's certainly a useful way of deflating and short-circuiting certain other intuitions that block scientific and medicinal progress in much of the developed world, though.
siodine-20

I'm a bit worried that your conception of philosophy is riding on the coattails of long-past philosophy, where the distinctions between philosophy, math, and science were much more blurred than they are now. Being generous, do you have any examples from the last few decades (that I can read about)?

I'll agree with you that having some philosophical training is better than none in that it can be useful in getting a solid footing in basic critical thinking skills, but if that's a philosophy department's purpose, then it doesn't need to be funded beyond that.

0Rob Bensinger
Could you taboo/define 'philosophy,' 'math,' and 'science' for me in a way that clarifies exactly how they don't overlap? It'd be very helpful. Is there any principled reason, for example, that theoretical physics cannot be philosophy? Or is some theoretical physics philosophy, and some not? Is there a sharp line, or a continuum between the two kinds of theoretical physics? If that's a philosophy department's purpose, and nothing else can fulfill the same purpose, then philosophy departments are vastly underfunded as it stands. (Though I agree the current funding could be better managed.) But the real flaw is that we think of philosophy as a college thing. Philosophical training should be fully integrated into quite early-age education in logical, scientific, mathematical, moral, and other forms of reasoning.
siodine00
  1. I'm not even remotely autistic.
  2. How is philosophy going to get us the correct conception of reality? How will we know it when it happens? (I think science will progress us to the point where philosophy can answer the question, but by then anyone could)
siodine60

You're both arguing over your impressions of philosophy. I'm more inclined to agree with Lukeprog's impression unless you have some way of showing that your impression is more accurate. Like, for example, show me three papers in meta-ethics from the last year that you think highlight what is representative of that area of philosophy.

From my reading of philosophy, the most well-known philosophers (who I'd assume are representative of the top 10% of the field) do keep intuitions and conceptual analysis in their toolbox. But when they bring it out of the ... (read more)

-7myron_tho
siodine10

So? I can quote scientists saying all manner of stupid, bizarre, unintuitive things...but my selection of course sets up the terms of the discussion. If I choose a sampling that only confirms my existing bias against scientists, then my "quotes" are going to lead to the foregone conclusion. I don't see why "quoting" a few names is considered evidence of anything besides a pre-existing bias against philosophy.

Improving upon this: why care about what the worst of a field has to say? It's the 10% (Sturgeon's law) that aren't crap that w... (read more)

1Rob Bensinger
The best ethical philosophers give us the foundations of utility calculation, clarify when we can (and can't) derive facts and values from each other, generate heuristics and frameworks within which to do politics and resolve disputes over goals and priorities. The best metaphysicians give us scientific reasoning, novel interpretations of quantum mechanics, warnings of scientists becoming overreliant on some component of common sense, and new empirical research programs (Einstein's most important work consisted of metaphysical thought experiments). The best logicians and linguistic philosophers give us the propositional calculus, knowledge of valid and invalid forms, etc., etc. Even if you think the modalists and dialetheists are crazy, you can be very thankful to them for developing modal and paraconsistent logics that have valuable applications outside of traditional philosophical disputes. And, of course, philosophy in general is useful for testing the tools of our trade. We can be more confident of and skilled in our reasoning in specific domains, like physics and electrical engineering and differential calculus, when those tools have been put to the test in foundational disputes. A bad Philosophy 101 class can lead to hyperskepticism or metaphysical dogmatism, but a good Philosophy 101 class can lead to a healthy skepticism mixed with intellectual curiosity and dynamism. Ultimately, the reason to fund 'philosophy' departments is that there is no such thing as 'philosophy;' what the departments in question are really teaching is how to think carefully about the most difficult questions. The actual questions have nothing especially in common, beyond their difficulty, their intractability before our ordinary methods.
siodine00

Hopefully then someone will do a supplementary calibration test for prediction book users in the comments here or in a new post on the discussion board. (Apologies for not doing it myself)

2gwern
http://predictionbook.com/predictions displays an overall graph.
siodine60

How well calibrated were the prediction book users?

4ChristianKl
Unfortunately we lacked a question to track prediction book users.
siodine00

I've substituted problems that philosophy is actually working on (metaethics and consciousness) for one that analytic philosophy isn't (meaning of life). Meaning comes from mind. Either we create our own meaning (absurdism, existentialism, etc.) or we get meaning from a greater mind that designed us with a purpose (religion). Very simple. How could computer science or science dissolve this problem? (1) By not working on it because it's unanswerable by the only methods by which we can be said to have answered something, or (2) making the problem answerable by opera... (read more)

2Peterdjones
Unless it is. Maybe the MoL breaks down into many of the other topics studied by philosophers. Maybe philosophy is in the process of reducing it. No, not simple. You say it is "unanswerable" timelessly. How do you know that? It's unanswered up to the present. As are a number of scientific questions. Maybe. But checking that you have correctly identified the intent, and not changed the subject, is just the sort of armchair conceptual analysis philosophers do. You say that timelessly, but at the time of writing we have done so where we have and we don't where we haven't. But unless science can relate that back to the initial question, there is no need to consider it answered. That's necessary, sure. But if it were sufficient, would we have a Hard Problem of Consciousness? But I am not suggesting that science be shut down, and the funds transferred to philosophy. It seems actual to me. We don't have such an understanding at present. I don't know what that means for the future, and I don't know how you are computing your confident statement of unlikelihood. One doesn't even have to believe in some kind of non-physicalism to think that we might never. The philosopher Colin McGinn argues that we have good reason to believe both that consciousness is physical, and that we will never understand it. We can't understand qualia through science now. How long does that have to continue before you give up? What's the harm in allowing philosophy to continue when it is so cheap compared to science? PS. I would be interested in hearing of a scientific theory of ethics that doesn't just ignore the is-ought problem.
siodine00

I absolutely loathe the way you phrased that question for a variety of reasons (and I suspect analytic philosophers would as well), so I'm going to replace "meaning of life" with something more sensible like "solve metaethics" or "solve the hard problem of consciousness." In which case, yes. I think computer science is more likely to solve metaethics and other philosophical problems because the field of philosophy isn't founded on a program and incentive structure of continual improvement through feedback from reality. Oh, and computer science works on those kinds of problems (so do other areas of science, though).

-3Peterdjones
I don't think you have phrased "the question" differently and better; I think you have substituted two different questions. Well, maybe you think the MoL is a ragbag of different questions, not one big one. Maybe it is. Maybe it isn't. That would be a philosophical question. I don't see how empiricism could help. Speaking of which... What instruments do you use to get feedback from reality vis-a-vis phenomenal consciousness and ethical values? I didn't notice any qualiometers or agathometers last time I was in a lab.
siodine40

To say the problem is "rampant" is to admit to a limited knowledge of the field and the debates within it.

Well, Lukeprog certainly doesn't have a limited knowledge of philosophy. Maybe you can somehow show that the problem isn't rampant.

-6myron_tho
siodine-30

Defund philosophy departments to the benefit of computer science departments?

-5Peterdjones
siodine00

Show me three of your favorite papers from the last year in ethics or meta-ethics that highlight the kind of philosophy you think is representative of the field. (And if you've been following Lukeprog's posts for any length of time, you'd see that he's probably read more philosophy than most philosophers. His gestalt impression of the field is probably accurate.)

siodine10

Surely territory is just another name for reality?

I think you misinterpreted me. Territory is just another name for reality, but reality is just a name and so is territory. By nature of names coming from mind, they are maps because they can't perfectly represent whatever actually is (or more accurately, we can't confirm our representations as perfectly representational and we possibly can't form perfect representations). Also, by saying "actually is," I'm creating a map, too -- but I hope you infer what I mean. The methods by which we as human... (read more)

siodine00

The laws are in the map, of course (if it came from mind, it is necessarily of a map). And what we call the 'territory' is a map itself. The map/territory distinction is just a useful analogy for explaining that our models of reality aren't necessarily reality (whatever that actually is). Also, keep in mind that there are many incompatible meanings for 'reductionism'. A lot of LWers (like anonymous1) use it in a way that's not in line with EY, and EY uses it in a way that's not in line with philosophy (which is where I suspect most LWers get their definit... (read more)

1[anonymous]
I read this the other day...very thought-provoking.
0Shmi
I think a realist would take issue with this statement... Surely territory is just another name for reality? Indeed, what? Is there an underlying computing substrate, which is more "real" than the territory?
siodine00

Input -> Black box -> Desired output. "Black box" could be replaced with "magic." How would your black box work in practice?

siodine20

For example, abstract objects could be considered to exist in the minds of people imagining them, and consequently in some neuronal pattern, which may or may not match between different individuals, but considered to not exist as something independent of the conscious minds imagining them. While this is a version of nominalism, it is not nearly as clear-cut as "abstract objects do not exist".

That would be conceptualism and is a moderate anti-realist position about universals (if you're a physicalist). Nominalism and Platonism are two poles of a continuum about realism of universals. So, you probably lean towards nominalism if you're a physicalist and conceptualist.

-3Shmi
I only used the word "exist" in a sentence because TheOtherDave and I agree on the meaning of it, which I doubt that any of the -ists you mention (probably including you) would agree with.
siodine20

It won't prevent trolling, but it will minimize its effects. As it stands, you can input numbers like 1e+19, which will seriously throw off the mean. If trolls can only give the highest or lowest reasonable bound, then they're not going to have much of an effect individually, and that makes going through the effort to troll less worthwhile.
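To put rough numbers on the 1e+19 point, here is a toy sketch (hypothetical values, not from the survey): clamping each answer to a stated range bounds how far a single troll entry can move the mean.

```haskell
-- One absurd answer dominates an unbounded mean; a clamped answer cannot.
clampTo :: Double -> Double -> Double -> Double
clampTo lo hi = max lo . min hi

mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)

main :: IO ()
main = do
  let honest = [20, 25, 30, 35, 40]            -- hypothetical answers
      troll  = 1e19
  print (mean (troll : honest))                -- ~1.7e18: the mean is all troll
  print (mean (clampTo 0 120 troll : honest))  -- 45.0: damage bounded by the range
```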

siodine-20

I completely agree with you; there shouldn't be any problems discussing political examples where you're only restating a campaign's talking points rather than supporting one side or the other.

siodine50

The problem is that we don't know how influential the blind spot is. It could just fade away after a couple minutes and a "hey, wait a minute..." But assuming it sticks:

If I were a car salesman, I would have potential customers tell me their ideal car and then I would tell them what I want their ideal car to be as though I were simply restating what they just said.

If I were a politician, I would target identities (e.g., Latino, pro-life, low taxes, etc.) rather than individuals because identities are made of choices and they're easier to target t... (read more)

siodine100

Specifying a lower and upper bound on the input should be required.

1ema
That doesn't really prevent trolling, so I'm not sure that it would be helpful.
4SilasBarta
You didn't build that. *ducks*
siodine40

Or, more meta-ly, you're not going to be very persuasive if you ignore pathos and ethos. I think this might be a common failure mode of aspiring rationalists because we feel we shouldn't have to worry about such things, but then we're living in the should-world rather than the real-world.

siodine20

Yeah, I can pretty much recall 10k LOC from experience. But it's not just about having written something before; it's about a truly fundamental understanding of what is best in some area of expertise, which comes with having written something before (like a GUI framework, for example) and improved upon it for years. After doing that, you just know what the architecture should look like, and you just know how to solve all the hard problems already, and you know what to avoid doing, and so really all you're doing is filling in the scaffolding with your hard-won experience.

2datadataeverywhere
Not too long ago, I lost a week of work and was able to recompose it in the space of an afternoon. It wasn't the same line-for-line, but it was the same design and probably even used the same names for most things, and was roughly 10k LOC. So if I had recent or substantial experience, I can see expecting a 10x speedup in execution. That's pretty specific though; I don't think I have ever had the need to write something that was substantially similar to anything else I'd ever written. Domain experience is vital, of course. If you have to spend all your time wading through header files to find out what the API is or discover the subtle bugs in your use of it, writing just a small thing will take painfully long. But even where I never have to think about these things I still pause a lot. One thing that is different is that I make mistakes often enough that I wait for them; working with one of these people, I noticed that he practiced "optimistic coding"; he would compile and test his code, but by feeding it into a background queue. In that particular project, a build took ~10 minutes, and our test suite took another ~10 minutes. He would launch a build / test every couple of minutes, and had a dbus notification if one failed; once, it did, and he had to go back several (less than 10, I think) commits to fix the problem. He remembered exactly where he was, rebased, and moved on. I couldn't even keep up with him finding the bug, much less fixing it. The people around here who have a million lines of code in production seem to have that skill, of working without the assistance of a compiler or test harness; their code works the first time. Hell, Rob Pike uses ed. He doesn't even need to refer to his code often enough to make it worthwhile to easily see what he's already written (or go back and change things)---for him, that counts as an abnormal occurrence.
siodine50

I've done it, and it's not as impressive as it sounds. It's mostly just reciting from experience and not some savant-like act of intelligence or skill. Take those same masters into an area where they don't have experience and they won't be nearly as fast.

Actually, I think the sequences were largely a recital of experience (a post a day for a year).

0datadataeverywhere
I don't know about you, but I can't recall 10k LOC from experience even if I had previously written something before; seeing someone produce that much in the space of three hours is phenomenal, especially when I realize that I probably would have required two or three times as much code to do the same thing on my first attempt. If by "reciting from experience" you mean that they have practiced using the kinds of abstractions they employ many times before, then I agree that they're skilled because of that practice; I still don't think it's a level of mastery that I will ever attain.
siodine30

Reading the comments in here, I think I understand Will Newsome's actions a lot better.

siodine10

I am not supporting any of the assertions.

I don't think everyone wants to be more autonomous, either (subs in BDSM communities, for example).

1A1987dM
That's what happens when I comment at 4 a.m. Better go to bed, now.
siodine90

What if the problem is "I want to oppress you, but I know individually being nicer would get me more of what I want, so instead I'm going to recruit allies that will help me oppress you because I think that will get me even more of what I want."

siodine20

So, assuming you're right, I think your conclusion then is that it's more productive to work towards uncovering what would be reflective extrapolated values than it is to bargain, but that's non-obvious given how political even LWers are. But OTOH I don't think we have anything to explicitly bargain with.
