Functional programming isn't an idiomatic approach to container manipulation; it's a paradigm that avoids mutable state and data. Write a GUI in Haskell using pure functions to see how different the functional approach is and what it is at its core. Or just compare a typical textbook on imperative algorithms with one on functional algorithms. Container manipulation with functions is just an idiom.
And sure you can write functional code in C++, for example (which by the way has map, filter, fold, and so on), but you can also write OO code in C. But few peop...
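To make the "just an idiom" point concrete: the container operations named above look roughly the same in any language that supports them. A quick Python sketch (the names and data here are mine, purely for illustration):

```python
from functools import reduce

xs = [1, 2, 3, 4, 5]

# map: apply a function to every element
doubled = list(map(lambda x: 2 * x, xs))

# filter: keep only elements satisfying a predicate
evens = list(filter(lambda x: x % 2 == 0, xs))

# fold (reduce): combine all elements into a single value
total = reduce(lambda acc, x: acc + x, xs, 0)
```

None of this makes a program "functional" in the paradigm sense; it's the absence of mutable state throughout, not the presence of these helpers, that defines the approach.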
Laziness can muddy the waters, but it's also optional in functional programming. People using Haskell in a practical setting usually avoid it and are coming up with new language extensions to make strict evaluation the default (strictness in record fields, for example).
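For readers outside Haskell, a rough analogy for the lazy/strict distinction (this is Python, and only an analogy; Haskell's actual mechanism and its StrictData-style extensions are more involved):

```python
# Strict evaluation: the whole sequence is computed immediately.
strict = [x * x for x in range(5)]

# Lazy evaluation (roughly what Haskell does by default):
# nothing is computed until a value is actually demanded.
lazy = (x * x for x in range(5))

first = next(lazy)  # only now is the first square computed
```

The point in the comment above is that the paradigm doesn't require the lazy version; strictness can be the default without giving up purity.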
What you're really saying is the causal link between assembly and the language is less obvious, which is certainly true as it is a very high level language. However, if we're talking about the causality of the language itself, then functional languages enforce a more transparent causal structure of the ...
Why do you use Diigo and Pocket? They do the same thing. Also, with Evernote's Clearly you can highlight articles.
You weren't asking me, but I use Diigo to manage links to online textbooks and tutorials, shopping items, book recommendations (through Amazon), and my less important online articles-to-read list. Evernote for saving all of my important read content (and I tag everything). Amazon's Send to Kindle extension to read longer articles (every once in a while I'll save all my clippings from my Kindle to Evernote). And then I maintain a personal wiki ...
He's stating that it will invoke arguments and distract from the thrust of the point - and guess what, he's right. Look at what you're doing, right here.
No. "It" didn't invoke this thread, jimrandomh's fatuous comment combined with it being at the top of the comment section did (I don't care that it was a criticism of functional programming). You keep failing to understand the situation and what I'm saying, and because of this I've concluded that you're a waste of my time and so I won't be responding to you further.
There are two major branches of programming: functional and imperative. Unfortunately, most programmers only learn imperative programming languages (like C++ or Python). I say unfortunately, because these languages achieve all their power through what programmers call "side effects". The major downside for us is that this means they can't be efficiently machine checked for safety or correctness. The first self-modifying AIs will hopefully be written in functional programming languages, so learn something useful like Haskell or Scheme.
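A minimal sketch of what "side effects" means here (Python, with hypothetical names; the contrast, not the snippet, is the point):

```python
# A pure function: its result depends only on its arguments,
# so it can be reasoned about (or machine-checked) in isolation.
def add(a, b):
    return a + b

# An impure function: it reads and mutates external state
# (a "side effect"), so its result depends on hidden history.
total = 0
def add_to_total(x):
    global total
    total += x
    return total

first = add_to_total(2)   # returns 2
second = add_to_total(2)  # same input, different result: 4
```

A checker can verify `add` from its definition alone; verifying `add_to_total` requires tracking every call site that touches `total`.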
Comes fro...
No. Jimrandomh just says functional programming, imperative programming, etc. are "ideologies" (importing the negative connotation). Just says it kills minds. Just says it's a well-known mindkiller. Just says it's not a real distinction. Just puts it in a dichotomy between being more or less important than "languages' complexity, the quality of their type systems and the amount of stupid lurking in their dark corners." What Louie says is more reasonable given that it's a fairly standard position within academia and because it's a small part of a larger post. (I'd rather Louie had sourced what he said, though.)
the entire point of functional programming is to hide the causality of the program from the human
Why? I would say it's the opposite (and really the causality being clear and obvious is just a corollary of referential transparency). The difficulty of reasoning about concurrent/parallel code in an imperative language, for example, is one of the largest selling points of functional programming languages like Erlang and Haskell.
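A small illustration of why referential transparency helps with parallelism (Python rather than Erlang/Haskell, and a deliberately trivial function, just to show the shape of the argument):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # Pure: no shared mutable state, so calls can run in any
    # order, or in parallel, without changing the result.
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(8)))

# Because square is referentially transparent, the parallel
# result must equal the sequential one.
sequential = [square(x) for x in range(8)]
```

If `square` mutated shared state, the interleaving of threads would become part of the program's meaning, and that's exactly the causal opacity the comment above is pointing at.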
I don't think you understand functional programming. What background are you coming from?
From SEP:
...Judith Thomson provided one of the most striking and effective thought experiments in the moral realm (see Thomson, 1971). Her example is aimed at a popular anti-abortion argument that goes something like this: The foetus is an innocent person with a right to life. Abortion results in the death of a foetus. Therefore, abortion is morally wrong. In her thought experiment we are asked to imagine a famous violinist falling into a coma. The society of music lovers determines from medical records that you and you alone can save the violinist's life b
The thought experiment functions as an informal reductio ad absurdum of the argument 'Fetuses are people. Therefore abortion is immoral.' or 'Fetuses are conscious. Therefore abortion is immoral.' That's all it's doing. If you didn't find the arguments compelling in the first place, then the reductio won't be relevant to you. Likewise, if you think the whole moral framework underlying these anti-abortion arguments is suspect, then you may want to fight things out at the fundaments rather than getting into nitty-gritty details like this. The significance of...
I prefer to think of 'abstract' as 'not spatially extended or localized.'
I prefer to think of it as anything existing at least partly in mind, and then we can say we have an abstraction of an abstraction or that something is more abstract (something from category theory being a pure abstraction, while something like the category "dog" being less abstract because it connects with a pattern of atoms in reality). By their nature, abstractions are also universals, but things that actually exist like the bee hive in front of me aren't par...
For you, if I'm understanding you right, they're professions, institutions, social groups, population-wide behaviors. Sociology is generally considered more abstract or high-level than psychology.
You're kind of understanding me. Abstractly, bee hives produce honey. Concretely, this bee hive in front of me is producing honey. Abstractly, science is the product of professions, institutions, etc. Concretely, science is the product of people on our planet doing stuff.
I'm literally trying to not talk about abstractions or concepts but science as it actually...
I thought you were saying that the distinctions have become less blurred?
Yup, my bad. You caught me before my edit.
Do you think these would be useful fast-and-ready definitions for everyday promotion of scientific, philosophical, and mathematical literacy? Would you modify any of them?
I think you're reifying abstraction and doing so will introduce pitfalls when discussing them. Math, science, and philosophy are the abstracted output of their respective professions. If you take away science's competitive incentive structure or change its mechanism of...
I agree, but the problems remain and the arguments flourish.
I didn't say they don't overlap. I said the distinctions have become less blurred (I think because of the need for increased specialization in all intellectual endeavours as we accumulate more knowledge). I define philosophy, math, and science by their professions. That is, their university departments, their journals, their majors, their textbooks, and so on.
Hence, I think the best way to ask if "philosophy" is a worthwhile endeavour is to ask "why should we fund philosophy departments?" A better way to ask that question is "why...
Even though the wikipedia page for "meaning of life" is enormous, it boils all down to the very simple either/or statement I gave.
How do we know if something is answerable? Did a chicken just materialize 10 billion light years from Earth? We can't answer that. Is the color blue the best color? We can't answer that. We can answer questions that contact reality such that we can observe them directly or indirectly. Did a chicken just materialize in front me? No. Is the color blue the most preferred color? I don't know, but it can be well answered t...
My first thought was "every philosophical thought experiment ever" and to my surprise wikipedia says there aren't that many thought experiments in philosophy (although, they are huge topics of discussion). I think the violinist experiment is uniquely bad. The floating man experiment is another good example, but very old.
I'm a bit worried that your conception of philosophy is riding on the coat tails of long-past-philosophy where the distinction between philosophy, math, and science were much more blurred than they are now. Being generous, do you have any examples from the last few decades (that I can read about)?
I'll agree with you that having some philosophical training is better than none in that it can be useful in getting a solid footing in basic critical thinking skills, but then if that's a philosophy department's purpose then it doesn't need to be funded beyond that.
You're both arguing over your impressions of philosophy. I'm more inclined to agree with Lukeprog's impression unless you have some way of showing that your impression is more accurate. Like, for example, show me three papers in meta-ethics from the last year that you think highlight what is representative of that area of philosophy.
From my reading of philosophy, the most well known philosophers (whom I'd assume are representative of the top 10% of the field) do keep intuitions and conceptual analysis in their toolbox. But when they bring it out of the ...
So? I can quote scientists saying all manner of stupid, bizarre, unintuitive things...but my selection of course sets up the terms of the discussion. If I choose a sampling that only confirms my existing bias against scientists, then my "quotes" are going to lead to the foregone conclusion. I don't see why "quoting" a few names is considered evidence of anything besides a pre-existing bias against philosophy.
Improving upon this: why care about what the worst of a field has to say? It's the 10% (Sturgeon's law) that aren't crap that w...
Hopefully then someone will do a supplementary calibration test for prediction book users in the comments here or in a new post on the discussion board. (Apologies for not doing it myself)
How well calibrated were the prediction book users?
I've substituted problems that philosophy is actually working on (metaethics and consciousness) with one that analytic philosophy isn't (meaning of life). Meaning comes from mind. Either we create our own meaning (absurdism, existentialism, etc.) or we get meaning from a greater mind that designed us with a purpose (religion). Very simple. How could computer science or science dissolve this problem? (1) By not working on it because it's unanswerable by the only methods by which we can be said to have answered anything, or (2) making the problem answerable by opera...
I absolutely loathe the way you phrased that question for a variety of reasons (and I suspect analytic philosophers would as well), so I'm going to replace "meaning of life" with something more sensible like "solve metaethics" or "solve the hard problem of consciousness." In which case, yes. I think computer science is more likely to solve metaethics and other philosophical problems because the field of philosophy isn't founded on a program and incentive structure of continual improvement through feedback from reality. Oh, and computer science works on those kinds of problems (so do other areas of science, though).
To say the problem is "rampant" is to admit to a limited knowledge of the field and the debates within it.
Well, Lukeprog certainly doesn't have a limited knowledge of philosophy. Maybe you can somehow show that the problem isn't rampant.
Defund philosophy departments to the benefit of computer science departments?
Show me three of your favorite papers from the last year in ethics or meta-ethics that highlight the kind of philosophy you think is representative of the field. (And if you've been following Lukeprog's posts for any length of time, you'd see that he's probably read more philosophy than most philosophers. His gestalt impression of the field is probably accurate.)
Surely territory is just another name for reality?
I think you misinterpreted me. Territory is just another name for reality, but reality is just a name and so is territory. By nature of names coming from mind, they are maps because they can't perfectly represent whatever actually is (or more accurately, we can't confirm our representations as perfectly representational and we possibly can't form perfect representations). Also, by saying "actually is," I'm creating a map, too -- but I hope you infer what I mean. The methods by which we as human...
The laws are in the map, of course (if it came from mind, it is necessarily of a map). And what we call the 'territory' is a map itself. The map/territory distinction is just a useful analogy for explaining that our models of reality aren't necessarily reality (whatever that actually is). Also, keep in mind that there are many incompatible meanings for 'reductionism'. A lot of LWers (like anonymous1) use it in a way that's not in line with EY, and EY uses it in a way that's not in line with philosophy (which is where I suspect most LWers get their definit...
Input -> Black box -> Desired output. "Black box" could be replaced with "magic." How would your black box work in practice?
And what meaning is that?
For example, abstract objects could be considered to exist in the minds of people imagining them, and consequently in some neuronal pattern, which may or may not match between different individuals, but considered to not exist as something independent of the conscious minds imagining them. While this is a version of nominalism, it is not nearly as clear-cut as "abstract objects do not exist".
That would be conceptualism and is a moderate anti-realist position about universals (if you're a physicalist). Nominalism and Platonism are two poles of a continuum about realism of universals. So, you probably lean towards nominalism if you're a physicalist and conceptualist.
It won't prevent trolling but it will minimize its effects. As it stands, you can input numbers like 1e+19 which will seriously throw off the mean. If trolls can only give the highest or lowest reasonable bound then they're not going to have much of an effect individually and that makes going through the effort to troll less worthwhile.
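A sketch of the fix being proposed, clamping inputs to a stated range so one extreme entry can't dominate the mean (the bounds and the vote values here are hypothetical, just to show the effect):

```python
LOWER, UPPER = 0.0, 100.0  # hypothetical required bounds

def clamp(x, lo=LOWER, hi=UPPER):
    """Restrict an input to the allowed range."""
    return max(lo, min(hi, x))

votes = [40.0, 55.0, 1e19]              # one troll entry
raw_mean = sum(votes) / len(votes)       # dominated by the troll

clamped = [clamp(v) for v in votes]      # [40.0, 55.0, 100.0]
clamped_mean = sum(clamped) / len(clamped)
```

With clamping, the worst a single troll can do is pull the mean toward the bound, rather than anywhere they like.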
I completely agree with you; there shouldn't be any problems discussing political examples where you're only restating a campaign's talking points rather than supporting one side or the other.
The problem is that we don't know how influential the blind spot is. It could just fade away after a couple minutes and a "hey, wait a minute..." But assuming it sticks:
If I were a car salesman, I would have potential customers tell me their ideal car and then I would tell them what I want their ideal car to be as though I were simply restating what they just said.
If I were a politician, I would target identities (e.g., Latino, pro-life, low taxes, etc.) rather than individuals because identities are made of choices and they're easier to target t...
Specifying a lower and upper bound on the input should be required.
"I built you."
Or, more meta-ly, you're not going to be very persuasive if you ignore pathos and ethos. I think this might be a common failure mode of aspiring rationalists because we feel we shouldn't have to worry about such things, but then we're living in the should-world rather than the real-world.
Yeah, I can pretty much recall 10k LOC from experience. But it's not just about having written something before, it's about a truly fundamental understanding of what is best in some area of expertise which comes with having written something before (like a GUI framework for example) and improved upon it for years. After doing that, you just know what the architecture should look like, and you just know how to solve all the hard problems already, and you know what to avoid doing, and so really all you're doing is filling in the scaffolding with your hard won experience.
I've done it, and it's not as impressive as it sounds. It's mostly just reciting from experience and not some savant-like act of intelligence or skill. Take those same masters into an area where they don't have experience and they won't be nearly as fast.
Actually, I think the sequences were largely a recital of experience (a post a day for a year).
Reading the comments in here, I think I understand Will Newsome's actions a lot better.
I am not supporting any of the assertions.
I don't think everyone wants to be more autonomous, either (subs in BDSM communities, for example).
What if the problem is "I want to oppress you, but I know individually being nicer would get me more of what I want, so instead I'm going to recruit allies that will help me oppress you because I think that will get me even more of what I want."
So, assuming you're right, I think your conclusion then is that it's more productive to work towards uncovering what would be reflective extrapolated values than it is to bargain, but that's non-obvious given how political even LWers are. But OTOH I don't think we have anything to explicitly bargain with.
In light of the following comment by jim, I think we do disagree:
And while I would nor...