Comment author: lukstafi 24 August 2013 06:19:16PM 1 point [-]

The relevant notion of consciousness we are concerned with is technically called phenomenal experience. Whole Brain Emulations will necessarily leave out some of the physical details, which means the brain processes will not unfold in exactly the same manner as in biological brains. Therefore a WBE will have a different consciousness (i.e., qualitatively different experiences), though one very similar to the corresponding human consciousness. I expect we will learn enough about consciousness to address the broader and more interesting issue of what kinds and degrees of consciousness are possible.

Comment author: HungryHippo 11 August 2013 07:17:47AM 2 points [-]

common-sense rationality dressed up as intimidating math.

I'd just like to note that Bayes' Rule is one of the first and simplest theorems you prove in an introductory statistics class, right after writing down the preliminary definitions/axioms of probability. It's typically taught in the first or second week, and you're expected to be comfortable using it from then on.
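For reference, the rule itself is a one-line consequence of the definition of conditional probability. A minimal numerical illustration (the medical-test numbers are the standard textbook example, not from this thread):

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E),
# with P(E) expanded by the law of total probability.
def bayes(p_e_given_h, p_h, p_e_given_not_h):
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Test with 99% sensitivity, 5% false-positive rate, 1% base rate:
# despite the positive result, the posterior is only about 17%.
print(round(bayes(0.99, 0.01, 0.05), 3))  # 0.167
```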

Comment author: lukstafi 11 August 2013 09:45:11AM 1 point [-]

I'd like to add that if the curriculum distinguishes between "probability" and "statistics", Bayes' Rule is taught in the "probability" class. Much later, the statistics class has a "frequentist" part and a "Bayesian" part.

Comment author: lukstafi 04 August 2013 06:32:07PM 1 point [-]

The inflationary multiverse is essentially infinite. But if you take a slice through (a part of) the multiverse, there are far more young universes than old ones. The proportion of universes of a given age falls off exponentially with age (as in a memoryless distribution). This resolves the doomsday paradox (because our universe is then very probably young relative to its lifespan). http://youtu.be/qbwcrEfQDHU?t=32m10s
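A quick numerical sketch of this point (assuming, purely for illustration, that universe ages follow a memoryless exponential distribution with an arbitrary mean):

```python
import random

random.seed(0)
MEAN_AGE = 1.0  # arbitrary units; only ratios matter

# Sample "universe ages" from a memoryless (exponential) distribution.
ages = [random.expovariate(1.0 / MEAN_AGE) for _ in range(100_000)]

# Under a memoryless distribution, most sampled universes are younger
# than the mean age: P(age < mean) = 1 - e^(-1) ≈ 0.632.
frac_young = sum(a < MEAN_AGE for a in ages) / len(ages)
print(f"fraction younger than the mean age: {frac_young:.3f}")
```

So a randomly chosen observer should not be surprised to find their universe young relative to its expected total lifespan.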

Another argument to similar effect would be to consider a measure over possible indices. Indices pointing to old times would be less probable -- since they need more bits to encode -- than indices pointing to young times.
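A sketch of this index-measure argument (the 2^(-2·bits) weighting is one illustrative choice of convergent, prefix-style code length, not the only one):

```python
# Weight each index n by 2^(-2 * bits(n)), where bits(n) is the length of
# n's binary encoding; longer encodings get exponentially less weight.
# (The factor 2 makes the series converge, as with a prefix-free code;
# a plain 2^(-bits(n)) weighting would not be normalizable.)
def weight(n):
    return 2.0 ** -(2 * n.bit_length())

N = 10**6
weights = [weight(n) for n in range(1, N + 1)]
total = sum(weights)

# Probability mass on the first 1000 indices ("young times"):
mass_young = sum(weights[:1000]) / total
print(f"mass on indices 1..1000: {mass_young:.4f}")
```

Out of a million candidate indices, well over 99% of the measure sits on the first thousand, i.e. on early observers.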

Our universe might be very old on this picture (relative to the measure), so the conclusion regarding the Fermi paradox is to update towards the "great filter in the past" hypothesis. (It becomes more probable that one is the first observer-philosopher entertaining these considerations in one's corner of the universe.)

See also http://www.youtube.com/watch?v=jhnKBKZvb_U

Comment author: lukstafi 08 July 2013 05:01:37PM *  0 points [-]

I'm glad to see Mark Waser cited and discussed; I think he was omitted in a former draft, but I might misremember. ETA: I misremembered; I confused it with http://friendly-ai.com/faq.html which has an explicitly narrower focus.

Comment author: lukstafi 28 June 2013 06:34:26PM 0 points [-]

We should continue growing so that we join the superintelligentsia.

Comment author: Qiaochu_Yuan 19 June 2013 04:31:57AM *  18 points [-]

I've been reading a little of the philosophical literature on decision theory lately, and at least some two-boxers have an intuition that I hadn't thought about before that Newcomb's problem is "unfair." That is, for a wide range of pairs of decision theories X and Y, you could imagine a problem which essentially takes the form "Omega punishes agents who use decision theory X and rewards agents who use decision theory Y," and this is not a "fair" test of the relative merits of the two decision theories.

The idea that rationalists should win, in this context, has a specific name: it's called the Why Ain'cha Rich defense, and I think what I've said above is the intuition powering counterarguments to it.

I'm a little more sympathetic to this objection than I was before delving into the literature. A complete counterargument to it should at least attempt to define what fair means and argue that Newcomb is in fact a fair problem. (This seems related to the issue of defining what a fair opponent is in modal combat.)

Comment author: lukstafi 19 June 2013 04:06:08PM 1 point [-]

You might be interested in reading TDT chapter 5 "Is Decision-Dependency Fair" if you haven't already.

Comment author: RichardKennaway 18 June 2013 12:37:15PM 2 points [-]

At some point you should also learn Prolog and Haskell to have a well-rounded education.

I'm not sure knowing Prolog is actually useful, and I speak as someone who has been teaching Prolog as part of an undergraduate AI course for the last few years, and who learned it way back when Marseille Prolog didn't even support negative numbers and I had to write my own code just to do proper arithmetic. (I'm not responsible for the course design; I'm just one of the few people in the department who knows Prolog.)

Functional languages, imperative languages, object-oriented languages, compiled languages, interpreted languages: yes. Even some acquaintance with assembler and the techniques that are used to execute all the other languages, just so you know what the machine is really doing. But Prolog remains a tiny, specialised niche, and I'm inclined to agree with what Paul Graham says of it: it's a great language for writing append, but after that it's all downhill.

Comment author: lukstafi 19 June 2013 10:57:50AM 0 points [-]

I mean learning Prolog the way it would be taught in a "Programming Languages" course, not as an attempt at doing AI. Two angles are important here: (1) programming-paradigm features: learning the concept of late-bound / dataflow / "logical" variables. http://en.wikipedia.org/wiki/Oz_(programming_language) is an OK substitute. (2) logic, which is also something to be taught in a "Programming Languages" context, not (only) in an AI context. With Prolog, this means learning about SLD-resolution and perhaps making some broader forays from there. But one could also explore the connections between functional programming and intuitionistic logics.
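To give a flavor of what "logical" variables amount to, here is a toy unification sketch in Python (hypothetical helper names; Prolog's SLD-resolution builds its proof search on top of exactly this operation):

```python
class Var:
    """An initially unbound logical variable."""
    def __init__(self, name):
        self.name = name

def walk(term, subst):
    """Follow variable bindings until reaching a non-variable or an unbound var."""
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    """Return an extended substitution unifying a and b, or None on failure."""
    a, b = walk(a, subst), walk(b, subst)
    if a is b or a == b:
        return subst
    if isinstance(a, Var):          # bind the unbound variable
        return {**subst, a: b}
    if isinstance(b, Var):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):      # unify compound terms elementwise
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None                     # clash of distinct constants

X, Y = Var("X"), Var("Y")
# Unifying point(X, 2) with point(1, Y) binds X=1 and Y=2 in one step,
# with information flowing in both directions -- unlike ordinary assignment.
s = unify(("point", X, 2), ("point", 1, Y), {})
print(walk(X, s), walk(Y, s))  # 1 2
```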

Comment author: peterward 18 June 2013 03:28:03AM 2 points [-]

I'm in a similar boat; also starting with Python. Python is intuitive and flexible, which makes it easy to learn but also, in a sense, easy to avoid understanding how a language actually works. In addition I'm now learning Java and OCaml.

Java isn't a pretty language, but it's widely used and a relatively easy transition from Python. It also, I find, makes the philosophy behind object-oriented programming much more explicit, forcing the developer to create objects from scratch even to accomplish basic tasks.

OCaml is useful because of the level of discipline it imposes on the programmer, e.g. not being able to mix an integer with a floating point, or even -- strictly speaking -- to convert one to the other implicitly. It forces one to get it right the first time, contra Python's more anything-goes, fix-it-in-debugging approach. It also has features like lazy evaluation and functional programming, which Python supports not at all in the case of the former (as far as I know) and only as a kind of add-on in the case of the latter. Even if you never need or want these features, experience with them goes some way toward really understanding what programs fundamentally are.

Comment author: lukstafi 18 June 2013 11:19:05AM 0 points [-]

OCaml is my favorite language. At some point you should also learn Prolog and Haskell to have a well-rounded education.

Comment author: CarlShulman 16 June 2013 02:16:51AM *  3 points [-]

What is the lowest payoff ratio below at which you would one-box on Newcomb's problem, given your current subjective beliefs? [Or answer "none" if you would never one-box.]


Comment author: lukstafi 16 June 2013 09:46:13AM *  0 points [-]

Actually, the ratio alone is not sufficient, because there is a reward for two-boxing related to "verifying whether Omega was right" -- if Omega is right "a priori" then I see no point in two-boxing above a 1:1 ratio. I think the poll would be more meaningful if 1 stood for $1. ETA: actually, "verifying" or "being playful" might mean, for example, tossing a coin to decide.
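For reference, the standard expected-value comparison behind the ratio question (a sketch, normalizing the small box's value to 1 and ignoring any "verification" value of two-boxing):

```python
def one_box_better(ratio, p):
    """Is one-boxing better in expectation, given a big-box:small-box payoff
    `ratio` and subjective probability `p` that Omega predicts correctly?"""
    ev_one = p * ratio            # big box is full iff Omega foresaw one-boxing
    ev_two = (1 - p) * ratio + 1  # big box full only if Omega was wrong, plus small box
    return ev_one > ev_two

# Solving ev_one > ev_two gives the threshold ratio 1 / (2p - 1);
# at p = 0.9 that is 1.25.
print(one_box_better(1000, 0.9))  # True: a high ratio swamps predictor error
print(one_box_better(1.1, 0.9))   # False: below the 1.25 threshold
```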

Comment author: lukstafi 10 June 2013 12:11:47AM 0 points [-]

An interesting problem with CEV is demonstrated in chapter 5, "On the Rationality of Preferences", of Hilary Putnam's The Collapse of the Fact/Value Dichotomy and Other Essays. The problem is that a person might assign value to a choice among preferences -- underdetermined at a given time -- being made of her own free will.
