Quite possible. I didn't intend for that sentence to come across in a hostile way.
Since in Swedish we usually talk about the 1800s and the 1900s rather than the 19th and 20th centuries, I thought something could have been lost in translation somewhere between the original sources, the book by Kelly, and gwern's comment, which is itself ambiguous as to whether it means (set aside an island for growing big trees for making wooden warships) (in the 1900s) or (set aside an island for growing big trees for (making wooden warships in the 1900s)). (I assumed the former.)
If we assume a scenario without AGI and without a Hansonian upload economy, it seems quite likely that there are large, currently unforeseen obstacles to both AGI and uploading. Computing power seems to be just about sufficient right now (if we look at supercomputers), so it probably isn't the problem. So the obstacle will probably be a conceptual limitation for AGI, and a scanning or conceptual limitation for uploads.
Conceptual limitation for uploads seems unlikely, because we're just taking a system, cutting it up into smaller pieces, and solving differential equa...
a Scandinavian country which set aside an island for growing big trees for making wooden warships in the 1900s, which was completely wrong since by that point, warships had switched to metal, and so the island became a nature preserve;
This was probably Sweden planting lots of oaks in the early 19th century. 34 000 oaks were planted on Djurgården for shipbuilding in 1830. As it takes over a hundred years for the oak to mature, they weren't used, and that bit of the island is now a nature preserve. The funny part is that when the parliament was deciding this ...
Two points of relevance that I see are:
If we care about the nature of morphisms of computations only because of some computations being people, the question is fundamentally what our concept of people refers to, and if it can refer to anything at all.
If we view "isomorphic" as a kind of extension of our naïve view of "equals", we can ask what the appropriate generalisation is when we discover that "equals" does not correspond to reality and we need a new ontology, as in the linked paper.
van Dalen's Logic and Structure has a chapter on second order logic, but it's only 10 pages long.
Shapiro's Foundations without Foundationalism has as its main purpose to argue in favour of SOL. I've only read the first two chapters, which give philosophical arguments for SOL; they were quite good, but a bit too chatty for my tastes. Chapters 3 to 5 are where the actual logic lives, and I can't say much about them.
Which edition did you read? The image in the post is of the fifth edition, and some people (e.g. Peter Smith in his Teach Yourself Logic, §2.7 p. 24) claim that the earlier editions by just Boolos and Jeffrey are better.
Cutland's Computability and Mendelson's Introduction to Mathematical Logic between them look like they cover everything in this one, and they are both in MIRI's reading list. What is the advantage of adding Computability and Logic to them? (I.e. is it easier to start out with, does it cover some ground in between them that both miss, or is it just good to have alternatives?)
Cantor, who did the first work on infinite cardinals and ordinals, seems to have had a somewhat mystical point of view at times. He thought his ideas about transfinite numbers were communicated to him by God, whom he also identified with the absolute infinite (the cardinality of the cardinals, which is too big to itself be a cardinal). This was during the 19th century, so quite recently.
I'd say that much of the mysticism about foundational issues, like what numbers really are or what these possible infinities actually mean, has been abandoned by mathematicians ...
Coffee purchases seem to be done by near-mode thinking (at least for me), while having children is usually quite planned.
Personally I like giving myself quite a bit of leniency when it comes to impulsive purchases in order to direct my cognitive energy to long-term issues with higher returns. Compare and contrast to the idea of premature optimization in computer science.
Understanding the OS to be able to optimize better sounds somewhat useful to a self-improving AI.
Understanding the OS to be able to reason properly about probabilities of hardware/software failure sounds very important to a self-improving AI that does reflection properly. (obviously it needs to understand hardware as well, but you can't understand all the steps between AI and hardware if you don't understand the OS)
My largest problem with the Dark Lord == Death theory is that it doesn't really square with Quirrelmort being another super-rationalist and Eliezer's First Law of Fanfiction (you can't make Frodo a Jedi unless you give Sauron the Death Star). Either Quirrelmort is a henchman or personification of Death, which is unlikely considering he is afraid of dying and the dementor tries to frighten him in the Humanism arc. Or Quirrelmort is not the Sauron of this story but will help Harry defeat the main bad guy, Death. This could be a really cool ending, but I doubt that it would fit in the remaining arcs.
If they run your function from within theirs, they simply tell the computer to start reading those instructions, possibly with a timer for stopping, as detailed in other parts of the comments. If they implement a VM from scratch, they can mess with how the library functions work, for instance giving you a clock that moves much faster so that your simulation must stop within 0.1s instead of 10, and they can run your code 100 different times to deal with randomness. Now, implementing your own VM is probably not the optimal way to do this; you probably just want to do a transformation of the source code to use your own secret functions instead of the standard time ones.
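A minimal sketch of that idea in Python (hypothetical names; it assumes the untrusted function reads the clock via the standard `time` module): rather than building a whole VM, you shadow the timing functions the opponent's code sees, so its clock runs, say, 100× faster than yours.

```python
import time

SPEEDUP = 100  # simulated seconds per real second (assumed factor, from the 10s -> 0.1s example)

_real_time = time.time
_start = _real_time()

def _fast_time():
    """Drop-in replacement for time.time(): reports time passing SPEEDUP times faster."""
    return _start + (_real_time() - _start) * SPEEDUP

def run_untrusted(func, *args):
    """Run the opponent's function while it sees the sped-up clock,
    so its nominal 10 s budget costs us only 0.1 s of real time."""
    original = time.time
    time.time = _fast_time      # shadow the standard clock
    try:
        return func(*args)
    finally:
        time.time = original    # always restore the real clock
```

This only fools code that reads the clock through `time.time`; a more serious version would also have to cover `time.monotonic`, `time.sleep`, and so on, which is exactly why rewriting the opponent's source to call your own secret functions is the more robust route. Running their code 100 separate times to average out randomness then costs about as much real time as a single honest run would have.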
Not strictly speaking. Warning, what follows is pure speculation about possibilities which may have little to no relation to how a computational multiverse would actually work. It could be possible that there are three computable universes A, B & C, such that the beings in A run a simulation of B appearing as gods to the intelligences therein, the beings in B do the same with C, and finally the beings in C do the same with A. It would probably be very hard to recognize such a structure if you were in it because of the enormous slowdowns in the simulati...
Your question is not well specified. Even though you might think that the proposition "its favorite ball is blue" has a clear meaning, it is highly dependent on the precision to which it will be able to see colours, how wide the interval defined as blue is, and how it considers multicoloured objects. If we suppose it would categorise the observed wavelength into one of 27 possible colours (one of those being blue), and further suppose that it knew the ball to be of a single colour and not patterned, and further not have any backgro...
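Under those assumptions, and assuming a uniform prior over the 27 colours (my guess at where the cut-off comment is heading), the answer would simply be:

```latex
\[
P(\text{the ball is blue}) = \frac{1}{27} \approx 0.037 .
\]
```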
That is true. However, in my experience you don't need to spend much time in the library itself if you know what you're looking for (you can always stay for the atmosphere). What takes time is getting to and from the library. The value of this time obviously depends on a lot of parameters: is the library close to your route to/from some other place, are you currently very busy, do you enjoy city walks/bike rides, etc.
I've now tried f.lux for the past week or so, and now I'm disabling it. I like working late at night, and being a student in a term of revision with no lectures, I'm very flexible about when I have to wake up. So it made me tired when I didn't want to be, which was annoying. It did work very well at getting me to bed, though, so I'll definitely re-enable it when I want to go to bed earlier.
How is this in any way relevant?
If someone were to write the same proposal from the point of view of a sequence on how to most effectively maximize animal welfare through research and optimal philanthropy, it would hardly be relevant to discuss whether it is unconditionally good to maximize animal welfare. Sure, this discussion might be useful to have, but when an article starts with "Suppose that", you don't start by fighting this hypothetical.
When I searched, the first hit was the Malaysian town called Miri. Looks like an example of filter bubbles.
Let N = 3^^^^^^3. Surely N nice worlds + another nice world is better than N nice worlds + a torture world. Why? Because another nice world is better than a torture world, and the prior existence of the N previous worlds shouldn't matter to that decision.
What about the probability of actually being in the torture world, which is a tiny 1/(N+1)? Surely the expected negative utility from this is so small it can be neglected? Sure, but equally the expected utility of being the master of a torture world with probability 1/(N+1) can be neglected.
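Spelled out in my own notation, the symmetry being pointed at is:

```latex
\[
\frac{1}{N+1}\,\bigl|U_{\text{tortured}}\bigr| \approx 0
\qquad\text{and, equally,}\qquad
\frac{1}{N+1}\,\bigl|U_{\text{master}}\bigr| \approx 0,
\qquad\text{where } N = 3\uparrow^{6}3 ,
\]
```

so whichever of these terms you feel entitled to neglect, you must neglect the other on exactly the same grounds.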
What this post tells me is that I'm still very very confused about reality fluid.
I would have done the following if I had been asked that: calculate which numbers I would have time to count up to before I was thrown out/got bored/died/the earth ended/the universe ran out of negentropy. I would probably have to answer "I don't know" or "I think X is a number" for some of them, but it's still an answer, and until recently people could not say whether "the smallest n > 2 such that there are positive integers a, b, c satisfying a^n + b^n = c^n" was a number or not.
I'm not advocating any kind of finitism, but I agree that the position should be taken seriously.
The standard approach in the foundations of mathematics is to consider a special first-order theory called ZFC; it describes sets, whose elements are themselves sets. Inside this theory you can encode all other mathematics using sets, for example by the von Neumann construction of the ordinals. Then you can restrict yourself to the finite ordinals and verify the Peano axioms, including the principle of induction, which you can now formulate using sets. So everything turns out to be unique and pinned down inside your set theory.
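For concreteness, the von Neumann encoding mentioned above represents each natural number as the set of its predecessors:

```latex
\[
0 = \varnothing,\quad
1 = \{0\} = \{\varnothing\},\quad
2 = \{0,1\} = \{\varnothing,\{\varnothing\}\},\quad
\dots,\quad
n+1 = n \cup \{n\},
\]
```

with m < n defined as m ∈ n; the set ω of all finite ordinals then stands in for the natural numbers, and induction over it becomes an ordinary statement about sets.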
What about pinning down your set theory...
I think Sobel's fourth objection is confused about what an idealized/extrapolated agent would actually want. If it had the potential for such perfect experience that it makes the human condition look worse than death in comparison, then the obvious advice is not suicide, but rather to uplift the ordinary human to its own level. This should always be possible, since we must already have achieved this to create the extrapolated agent making the decision, so we can just repeat the process at full resolution on the original human.
The reason compactness is not provable from ZF is that you need choice to handle some kinds of infinite sets of sentences. You don't need choice for countable sets (if you have a way of mapping them into the integers, that is). You can get a proof of compactness for any countable set of axioms by proving completeness for any countable set of axioms, which can be done by constructing a model as in Johnstone's Notes on Logic and Set Theory, p. 25.
The hypotheses that should interest an AI are not necessarily those it can compute, but those it can test. A hypothesis is useless if it does not tell us something about how the world looks when it's true as opposed to when it's false. So if there is a way for the AI to interact with the world such that it expects different probabilities of outcomes depending on whether the (possibly uncomputable) hypothesis holds or not, then it is something worth having a symbol for, even if the exact dynamics of this universe cannot be computed.
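A minimal sketch of that point in Python (made-up numbers): for updating, all the agent needs from a hypothesis is the probability it assigns to the observation, not the ability to simulate its internal dynamics, so an uncomputable hypothesis is fine as long as it constrains those likelihoods.

```python
def posterior(prior_h, p_obs_given_h, p_obs_given_not_h):
    """Bayes' rule: the update only consumes likelihoods, so it never
    matters whether the hypothesis itself could be simulated."""
    evidence = prior_h * p_obs_given_h + (1 - prior_h) * p_obs_given_not_h
    return prior_h * p_obs_given_h / evidence

# Hypothetical example: H is some hypothesis with uncomputable dynamics that
# nonetheless predicts the observed outcome with probability 0.9,
# versus 0.5 if H is false.
print(posterior(prior_h=0.1, p_obs_given_h=0.9, p_obs_given_not_h=0.5))  # ~0.167
```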
Let's consi...
Consider, instead of time traveling from time T' to T, that you were given a choice at time T of which universe you would prefer: A or B. If B was better, you would clearly pick it. Now consider someone gave you the choice instead between B and "B plus A until time T', when it gets destroyed". If A is by itself a better universe than nothing, surely having A around for a short while is better than not having A around at all. So "B plus A until time T', when it gets destroyed" is better than B, which in turn is better than A. So if you w...
("Lord Martin Rees is a British cosmologist and astrophysicist. He has been Astronomer Royal since 1995 and Master of Trinity College, Cambridge since 2004. He was President of the Royal Society between 2005 and 2010". For anyone like me who didn't know.)
The fact that if we put any two objects into the same (previously empty) basket as any other two objects, we will have four objects in that basket, is true before we make any definitions. But the statement 2 + 2 = 4 does not make any sense before we have invented: (a) the numerals 2 and 4, (b) the symbol + for addition and (c) the symbol = for equality. When we have invented meanings for these symbols (symbols, as things we use in formal manipulations, are quite different from words and were invented quite late, much later than we started to actually solve ...
Another data point: in Cambridge the first course in logic done by mathematics undergraduates is in third year. It covers completeness and soundness of propositional and predicate logic and is quite popular. But in third year people are already so specialised that probably way less than half of us take it.
I think the division into problems and exercises usually seen in mathematics texts would be useful: a task is considered an exercise if it's a routine application of previous material; it's a problem if it requires some kind of insight or originality. So far most of the koans have seemed more like problems than like exercises, but depending on the content both may be useful. I might be slightly biased towards this as I greatly enjoy mathematics texts and am used to that style.
"Problem" suggests something different in philosophy than in math. A philosophy "problem" is a seeming dilemma, e.g. Gettier, Newcomb's, or Trolley. So I'd suggest "exercise" here.
"Exercise" dominates "kōan" in that both have the sense of something to stop and think about and try to solve, but ① "exercise" avoids the misconstrual of Zen practice (the purpose of a Zen kōan is not to come up with a solution, nor to set up for an explanation), ② the Orientalism (the dubiosity of saying something in J...
Hodges claims that Turing at least had some interest in telepathy and prophecies:
...