
Comment author: Lumifer 23 April 2014 08:30:33PM 2 points [-]

What I wanted to communicate with those terms was communicated by the analogies to the dice cup and to the scientific theory: it's perfectly possible for two hypotheses to have the same present probability but different expectations of future change to that probability.

I think you are talking about what in local parlance is called a "weak prior" vs. a "strong prior". Bayesian updating involves assigning relative importance to the prior and to the evidence. A weak prior is easily shifted even by fairly unremarkable evidence. On the other hand, it takes a lot of solid evidence to move a strong prior.

In this terminology, your pre-roll estimation of the probability of double sixes is a weak prior -- the evidence of an actual roll will totally overwhelm it. But your estimation of the correctness of the modern evolutionary theory is a strong prior -- it will take much convincing evidence to persuade you that the theory is not correct after all.
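As a rough numerical sketch of the weak/strong distinction (the coin-flip setup and the numbers are invented for illustration, not taken from the discussion above): two Beta priors with the same mean but different strength react very differently to the same evidence.

```python
def posterior_mean(prior_heads, prior_tails, heads, tails):
    # Beta-Binomial updating: the posterior mean of the coin's bias is the
    # pseudo-count of heads divided by the total pseudo-count after updating.
    return (prior_heads + heads) / (prior_heads + prior_tails + heads + tails)

# Same evidence for both priors: 8 heads in 10 flips.
print(posterior_mean(1, 1, 8, 2))      # weak prior Beta(1,1):      0.75  -- moved a lot
print(posterior_mean(100, 100, 8, 2))  # strong prior Beta(100,100): ~0.51 -- barely moved
```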

Of course, the posterior of a previous update becomes the prior of the next update.

Using this language, then, you are saying that prima facie evidence of someone's stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being.

And I don't see why this should be so.

Comment author: V_V 23 April 2014 08:47:45PM 0 points [-]

Using this language, then, you are saying that prima facie evidence of someone's stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being. And I don't see why this should be so.

People tend to update too much in these circumstances: Fundamental attribution error

Comment author: private_messaging 22 April 2014 07:33:04AM *  0 points [-]

General math ability is learned, though. The capacity to learn it varies, yes, and could in principle be signalled, but on its own it is of no value unless actualized.

I'm not sure what a philosophy degree is supposed to be signalling, and to whom. In what profession, besides philosophy, does it make you more likely to be hired, compared to a degree relevant to that profession?

Comment author: V_V 22 April 2014 09:07:20PM *  0 points [-]

I'm not sure what a philosophy degree is supposed to be signalling, and to whom. In what profession, besides philosophy, does it make you more likely to be hired, compared to a degree relevant to that profession?

I don't know about fields besides philosophy, but signalling certainly makes up a significant part of career advancement in philosophy.
Doing real innovation in philosophy (that is, coming up with interesting new philosophical problems, new "solutions" to old problems, or at least novel insight into them) is really hard, partly because the discipline is very old and the low-hanging fruit has long been picked, and partly because there are no clear standards for settling questions. Signalling of general scholarship and affiliation with particular trends therefore plays a significant role in the profession.

Comment author: private_messaging 21 April 2014 06:21:56AM *  3 points [-]

The thing about people who speak in terms of "signalling" is that whatever they say cannot be taken at face value.

In particular, if the general gist of signalling theory applies at least to the folks who believe in it, then the sole reason they would say that they didn't learn anything at school and only got the diploma is that they believe this is an opinion someone extremely smart might have about such a common thing as education - an opinion that's useful to imitate for signalling purposes.

Comment author: V_V 21 April 2014 07:55:47AM *  1 point [-]

I think that the knowledge/signalling/networking balance depends on the profession. Hard sciences and engineering jobs require you to apply actual knowledge that you learned in your education. There is also a signalling aspect, mainly in the form of signalling intelligence and general math ability. The networking aspect is probably less important compared to other jobs, at least at entry level.
Soft "sciences", particularly of the "liberal arts" kind, and theology, are probably at the opposite end of the spectrum, with career entry and advancement being based on political affiliation signalling and networking.
Philosophy and economics are somewhere in between.

Comment author: Mark_Neznansky 20 April 2014 09:10:29PM 0 points [-]

Being new to this whole area, I can't say I have a preference for anything, and I cannot imagine how any programming paradigm relates to its capabilities and potential. Where I stand, I'd rather be given a (paradigmatic, if you will) direction than be recommended a specific programming language given a programming paradigm of choice. But as I understand it, what you're saying is that if one opts for Haskell, he'd be better off going for F# instead?

Comment author: V_V 21 April 2014 12:30:24AM 1 point [-]

Think of programming paradigms as construction techniques and programming languages as tools. No technique or tool is ideal in all situations.
If you want a broad education, you might want to study one representative language for each of the main paradigms, for instance C (imperative, statically typed), C++/Java/C# (imperative-object oriented, largely statically typed), one of the Lisp family, such as Scheme (multi-paradigm, mostly imperative and functional, metaprogramming, dynamically typed), and one of the ML family, such as F# (functional and imperative, statically typed).
Python is very popular and very useful, and its basic syntax is easy to learn, but given that it is rather multi-paradigm and very high level (hiding lots of the underlying complexity), it is perhaps not the ideal place to start if you want to really understand what programming is about. At least, learn it alongside something else. Similar considerations apply to "Python-like" languages such as Javascript, Ruby, Lua, etc.
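As a small illustrative sketch of what a paradigm difference looks like in practice (Python is used here only because it is already mentioned above, and the task is invented): the same computation written in an imperative style and in a functional style.

```python
# Hypothetical task: sum the squares of the even numbers in a list.

def sum_even_squares_imperative(numbers):
    # Imperative style: mutate an accumulator step by step.
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

def sum_even_squares_functional(numbers):
    # Functional style: describe the result as a composition of
    # transformations, with no mutable state.
    return sum(n * n for n in numbers if n % 2 == 0)

print(sum_even_squares_imperative([1, 2, 3, 4]))  # 20
print(sum_even_squares_functional([1, 2, 3, 4]))  # 20
```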

But as I understand it, what you're saying is that if one opts for Haskell, he'd be better off going for F# instead?

Generally yes.

Comment author: Error 19 April 2014 04:18:29PM 7 points [-]

I was going to post this story in the open thread, but it seems relevant here:

So my partner and I went to see the new Captain America movie, and at one point there is a scene involving an AI/mind upload, along with a mention of an Operation Paperclip. And my first thought was "Is that a real thing, or is someone on the writing staff a Less Wronger doing a shoutout? Because that would be awesome."

Turns out it was a real thing. :-( Oh well.

Something more interesting happened afterward. I mentioned the connection to my partner, said paperclips were an inside joke here. She asked me to explain, so I gave her a (very) brief rundown of some LW thought on AI to provide context for the concept of a paperclipper. Part of the conversation went like this:

"So, next bit of context, just because an AI isn't actively evil doesn't mean it won't try to kill us."

To which she responded:

"Well, of course not. I mean, maybe it decides killing us will solve some other problem it has."

And I thought: That click Eliezer was talking about in the Sequences? This seems like a case of it. What makes it interesting is that my partner doesn't have a Mensa-class intellect or any significant exposure to the Less Wrong memeplex. Which suggests that clicking on the dangers of...call it non-ethical AI, as opposed to un-ethical, unless there's already a more standard term for the class of AIs that contains paperclippers but not Skynet...isn't limited to the high-IQ bubble.

That may not be news to MIRI, but it seemed worth commenting about here. Because we are a high IQ bubble. And that's part of why I like coming here. But I'm sure MIRI would be pleased to reach outside the bubble.

(of interest: Obviously the first connection she drew from dangerous AI was Skynet...but once I described the idea of an AI that was neutral-but-still-dangerous, the second connection she made was to Kyubey. And that felt sort-of-right to me. I told her that was the right idea but didn't go far enough.)

Comment author: V_V 20 April 2014 09:04:19AM 1 point [-]

How do you know that Skynet is not a paperclipper?

Comment author: Mark_Neznansky 19 April 2014 11:00:10PM 0 points [-]

Hey,

Sounds very cool, promising and enticing. I do have a technical question for you (or anybody else, naturally).

I was wondering how "intentional" the choice of Haskell was. Was it chosen mainly because it seemed the best-fitting programming language out of all familiar ones, or because of existing knowledge of and proficiency with it at the time the bot-world idea was formulated? How did cost/utility come into play here?

My inquiry is for purely practical, not theoretical, purposes: I'm looking for advice. In the summer two years ago I was reading as much as I could about topics related to evolutionary psychology and behavioral ecology. During the same period, I was also working with my physics professor, modeling particle systems using Wolfram Mathematica. I think it was this concurrence that engendered in me the idea of programming a "game of life" similar to yours, yet different.

Back then, programming things in AutoHotkey and in Mathematica was as far as my programming went. Later that year I took a terribly basic Python course (concerned mainly with natural language processing), and that was about it. However, in the last couple of weeks I returned to Python, this time taking the study of it seriously. It brought back my idea for the life game, but this time I feel I can acquire the skills to execute the plan. I'm currently experiencing a sort of honeymoon period of excitement with programming, and I expect the following few months, at least, to be rather obligation-free for me and an opportune time to learn new programming languages.

I've read the above post only briefly (mainly due to time constraints; I plan to read it and related posts soon), but it seems to me that our motivations and intentions with our respective games (mine being the currently non-existent one) are different, though there are similarities as well. I'm mainly interested in the (partially random) evolution/emergence of signaling/meaning/language/cooperation between agents. I've envisioned a grid-like game with agents that are "containers" of properties. That is, unlike Conway's game, where the progression of the game is determined purely by the on-the-grid mechanics, mine is like yours (as I understand it) in that an individual agent is linked to an "instruction sheet" that lies outside the grid. I think what differentiates my game from yours (and excuse me for any misunderstandings) is the "place" where the Cartesian barrier is drawn. [1] While in yours there is a completely outside "god" (a point that I had missed is whether the "player" writes a meta-language at t=0 that dictates how the command-issuing robot-brain is modified, after which the game is left to propagate itself, or whether the player has finer turn-by-turn control), in mine the god simply creates the primordial soup and then stands watching. Mine is more like a toy, perhaps, as there is no goal whatsoever (the existential version?). To go with the Cartesian analogy, it's as if every agent in my game contains an array of pineal glands of different indices, each one mapped to a certain behavior (of the agent) and to certain rules regarding how the gland interacts with other glands in the same agent. One of the "core" rules of the game is the way these glands are inherited by future agents from past agents.

What I had foreseen two years ago as the main obstacle to programming it remains my concern today, now that I have acquired some familiarity with Python. I want the behavior-building-blocks (to which the "glands" of the agent are mapped) to be as conceptually "reduced" as possible, so that the complex behavior of the agents is a phenomenon emerging from the complexity of interaction between the simple behaviors/commands, and to be as mutable as possible. As far as I can tell, Python is not the best language for that.
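A minimal sketch of the kind of structure described above, just to make the architecture concrete (all names are invented, Python is used only because it is the language already mentioned, and this is not meant as an argument about language choice): agents as containers of "glands", each gland mapped to a simple behavior, with the gland list itself being mutable, inheritable data.

```python
import random

def move_randomly(agent, world):
    # Simple behavior: take one random step on the grid.
    agent["x"] += random.choice([-1, 0, 1])
    agent["y"] += random.choice([-1, 0, 1])

def emit_signal(agent, world):
    # Simple behavior: leave a signal at the agent's current position.
    world.setdefault("signals", []).append((agent["x"], agent["y"]))

BEHAVIOR_LIBRARY = {"move": move_randomly, "signal": emit_signal}

def make_agent(gland_names):
    return {"x": 0, "y": 0, "glands": list(gland_names)}

def step(agent, world):
    # Each gland triggers its mapped behavior; complex behavior is meant to
    # emerge from the interaction of these simple building blocks.
    for name in agent["glands"]:
        BEHAVIOR_LIBRARY[name](agent, world)

def inherit(parent, mutation_rate=0.1):
    # Offspring copy the parent's glands, occasionally gaining an extra one.
    glands = list(parent["glands"])
    if random.random() < mutation_rate:
        glands.append(random.choice(list(BEHAVIOR_LIBRARY)))
    return make_agent(glands)

world = {}
parent = make_agent(["move", "signal"])
step(parent, world)
child = inherit(parent)
```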

While browsing for languages on Wikipedia, I came across Lisp, which appealed to me since it (quoth Wikipedia) "treats everything as data" - functions and statements are cut from the same cloth - and it is further suggested there that it is well suited for metaprogramming. What do you (or anybody else here) think? Also, quite apart from this pursuit, I intend to at least begin learning R. I suspect it won't have much relevance for the construction of this game (though perhaps for the analysis of an actual instance of game play), but if it somehow factors into the consideration of the main language of choice: well, here you go.

Thank you very much for your time,

[1] My point here is mainly to underscore what seem to be possible differences between your game and mine so that you could – if you will – advise me better about the programming language of choice.

Comment author: V_V 20 April 2014 12:09:57AM 1 point [-]

Haskell forces you to program in the pure functional programming paradigm. This, and other related idiosyncrasies of the language (such as default lazy evaluation), require you to use specific design patterns which take time to learn and even when mastered are of questionable convenience. At best, they don't seem to provide any advantage, and at worst they actively harm expressivity and efficiency.
Haskell seems to be used mainly by enthusiasts for hobby purposes; there seems to be very little free software written in Haskell besides tools for Haskell itself. Some companies claim to use it for commercial software development and/or prototyping, but it appears to be a small market.
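For readers unfamiliar with the terms above, a very rough analogy in Python (used only because it is the language already under discussion; Haskell's own syntax and semantics differ): a "pure" function computes its result from its arguments alone, and lazy evaluation means values are only computed when actually demanded, which Python can only approximate with generators.

```python
import itertools

def squares():
    # Lazy-style stream: conceptually infinite, nothing is computed until a
    # caller actually asks for values (a rough analogy to Haskell's default
    # laziness, not an equivalent).
    n = 0
    while True:
        yield n * n
        n += 1

print(list(itertools.islice(squares(), 5)))  # [0, 1, 4, 9, 16]

def add(x, y):
    # "Pure" function: the result depends only on the arguments; there is no
    # mutation of outside state and no side effect.
    return x + y
```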

If you like the statically typed functional approach, but you don't want to struggle with the pure functional paradigm, you may want to take a look at the ML family: F# is the biggest, Microsoft-backed member of the family; it runs on .NET, but it has an open source compiler and runs on Mono. OCaml is its non-.NET ancestor, which still has a significant user base.
If you prefer dynamic typing, then try Scheme (Racket).

Comment author: Eliezer_Yudkowsky 18 April 2014 05:14:41PM 10 points [-]

Reminder! Although I haven't yet written about the general principle, the original Drake's Equation was bullshit. Things like this are even more bullshit, since they exploit the human bias of assigning significant probabilities to everything elicited, creating an unpacking bias where unpacked items are assigned much larger summed probabilities than the corresponding packed categories, meaning that the apparent probability of a conjunction goes down as you helpfully break it into more and more parts. By these means I could equally make the Moon landing appear impossible, just as I could make cryonics appear more and more likely by considering more and more disjunctive pathways to success. It also fails as probability theory because of conditional dependency.

Again, general reminder: Across all cases not backed up by actual sampling, someone who offers to helpfully "elicit" a set of "conjunctive" probabilities and multiplies them together to get some low number, without considering any disjunctions, assuming conditional independence, and with no warnings about unpacking bias, is using a Fully General Counterargument that will underestimate the probability of anything. I have yet to see a good Breaking X Down for any X, unless X is a whole population (not a significant subsector of it) and the breakdown is just the actual data about X.
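A minimal numerical sketch of the effect being described (the probabilities are made up purely for illustration): multiplying many elicited step-probabilities drives the product down mechanically, while multiplying up disjunctive pathways does the opposite.

```python
# Hypothetical illustration: elicit a "reasonable-sounding" probability
# (say 0.8) for each of N conjunctive steps, then multiply them together.
# The product shrinks as the claim is unpacked into more steps.
p_per_step = 0.8
for n_steps in (1, 5, 10, 20):
    print(n_steps, round(p_per_step ** n_steps, 3))
# Prints (one pair per line): 1 0.8, 5 0.328, 10 0.107, 20 0.012

# The same trick run on disjunctions goes the other way: with many
# independent-looking "pathways to success" at 0.2 each, the apparent
# probability of at least one succeeding climbs toward 1.
p_per_path = 0.2
for n_paths in (1, 5, 10, 20):
    print(n_paths, round(1 - (1 - p_per_path) ** n_paths, 3))
# Prints (one pair per line): 1 0.2, 5 0.672, 10 0.893, 20 0.988
```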

Comment author: V_V 18 April 2014 07:01:38PM 13 points [-]

By these means I could equally make the Moon landing appear impossible

Viewed from which historical time?

just as I could make cryonics appear more and more likely by considering more and more disjunctive pathways to success.

This is not the first time you have claimed that, but AFAIK you never actually did it. I'm skeptical that this is possible.

It also fails as probability theory because of conditional dependency.

Unless you propose a plausible mechanism for two variables to be correlated, it is reasonable to assume that they are approximately independent (Occam's razor, principle of maximum entropy, etc.). Also, correlations can be positive or negative.

Comment author: VipulNaik 17 April 2014 03:46:29PM *  0 points [-]

Sorry for my lack of clarity.

I was making the point that if the run time can be broken down into a form where it's expressed as a sum of products, where the summands are the times taken by some sub-algorithms, then we can attempt to tweak the sub-algorithms to reduce the time taken for the individual tasks.

The time taken for the sub-algorithm may or may not be polynomial in the input dimensions.

Comment author: V_V 17 April 2014 03:57:26PM 0 points [-]

Ok.

Comment author: VipulNaik 16 April 2014 07:17:03PM 1 point [-]

Thanks for your thoughts.

You are assuming that the algorithm runs in polynomial time w.r.t. the input dimensions. That's a strong assumption.

I'm not assuming that.

In fact, my analysis isn't asymptotic at all, but rather, for fixed (but large) input sizes. In the asymptotic setting, moving to a new algorithm can yield an improvement that is no longer in the same order equivalence class (I could have elaborated more on this, but this was already a PS).

What do you mean by "improve any one factor"? In a complexity function, the variables represent the relevant dimensions of the input. How do you "improve" them?

I meant "factor" in the "part of a multiplication" sense. For instance, let's say you have an algorithm that has an outer loop A that calls an inner loop B every step. Then if a and b are the number of steps respectively, the total time is ab. Now, reducing either a or b by a given factor would reduce the product. That reduction by a factor could be through the identification of steps that turn out to be unnecessary.

And more generally, you seem to be under the impression that algorithms can be improved indefinitely, as evidenced by the fact that you are mentioning algorithm improvement in a post about exponential growth.

I am not under this impression. Sorry for the confusion.

Algorithms can't be improved indefinitely, nor can population or the economy. But we can still talk of exponential growth over a range of time as we chip away at the obvious issues.

Comment author: V_V 17 April 2014 03:22:43PM 0 points [-]

I'm not assuming that.

"let's assume that the time taken is a sum of products"

This is the definition of a polynomial, although you might have intended it to be a polynomial in something other than the input dimensions. In that case, I'm not sure I've clearly understood what you are arguing.

Comment author: V_V 16 April 2014 05:19:05PM 0 points [-]

For simplicity, let's assume that the time taken is a sum of products that are all of the same order as one another.

You are assuming that the algorithm runs in polynomial time w.r.t. the input dimensions. That's a strong assumption.

To improve a particular summand by a particular constant of proportionality, we may improve any one factor of that summand by that constant of proportionality. Or, we may improve all factors of that summand by constants that together multiply to the desired constant of proportionality.

What do you mean by "improve any one factor"? In a complexity function, the variables represent the relevant dimensions of the input. How do you "improve" them?

And more generally, you seem to be under the impression that algorithms can be improved indefinitely, as evidenced by the fact that you are mentioning algorithm improvement in a post about exponential growth.
They can't: complexity is bounded from below, which means that for each problem there exists a maximally efficient algorithm (for instance, no comparison-based sorting algorithm can do better than on the order of n log n comparisons). And similarly, once you specify the details of the hardware, there exists a maximally efficient program for that specific hardware. Once you get there you can't improve any further.
