I looked at the flowchart and saw the divergence between the two opinions into mostly separate ends: settling exoplanets and solving sociopolitical problems on Earth on the slow-takeoff path, vs. focusing heavily on how to build FAI on the fast-takeoff path. But then I saw your name in the fast-takeoff bucket for conveying concepts to AI, and was confused that your article was mostly about practically abandoning the fast-takeoff things and focusing on slow-takeoff things like EA. Or is the point that 2014!diego has significantly different beliefs about fast vs. slow than 2015!diego?
Is it reasonable to say that what really matters is whether there's a fast or slow takeoff? A slow takeoff or no takeoff may limit us to EA for the indefinite future, and fast takeoff means transhumanism and immortality are probably conditional on and subsequent to threading the narrow eye of the FAI needle.
The tricky part is that there aren't any practical, scalable chemicals with a handy phase change near -130°C (in the way that liquid nitrogen has one at -196°C), so any system to keep patients there would have to be engineered as a custom, electrically controlled device rather than a simple vat of liquid.
Phase changes are also pressure dependent; it would be odd if 1 atm just happened to be optimal for cryonics. Presumably substances have different temperature/pressure curves and there might be a thermal/pressure path that avoids ice crystal formation but ends up below the glass transition temperature.
Which particular event has P = 10^-21? It seems like part of the pascal's mugging problem is a type error: We have a utility function U(W) over physical worlds but we're trying to calculate expected utility over strings of English words instead.
Pascal's Mugging is a constructive proof that trying to maximize expected utility over logically possible worlds doesn't work in any particular world, at least with the theories we've got now. Anything that doesn't solve reflective reasoning under probabilistic uncertainty won't help against Muggings promising things from other possible worlds unless we just ignore the other worlds.
But it seems nonsensical for your behavior to change so drastically based on whether an event occurs every 79.99 years or every 80.01 years.
Doesn't it actually make sense to put that threshold at the predicted usable lifespan of the universe?
There are many models: the model of the box which we simulate, and the AI's models of the model of the box. For this ultimate box to work there would have to be a proof that every possible model the AI could form contains at most a representation of the ultimate box model. This seems at least as hard as any of the AI boxing methods, if not harder, because it requires the AI to be absolutely blinded to its own reasoning process despite having a human subject to learn about naturalized induction/embodiment from.
It's tempting to say that we could "define...
I don't think the "homomorphic encryption" idea works as advertised in that post--being able to execute arithmetic operations on encrypted data doesn't enable you to execute the operations that are encoded within that encrypted data.
A fully homomorphic encryption scheme for single-bit plaintexts (as in Gentry's scheme) gives us:
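Roughly, the standard interface is key generation, bit-by-bit encryption and decryption, and homomorphic XOR and AND over ciphertexts, which together let an untrusted evaluator apply any public boolean circuit to encrypted bits. A deliberately insecure toy sketch of that interface (my own illustration, with hypothetical names; nothing here is real cryptography):

```python
# Toy, deliberately insecure stand-in for a single-bit FHE interface.
# The "ciphertext" just wraps the bit; the point is the shape of the API,
# not the cryptography.
from dataclasses import dataclass

@dataclass
class Ct:
    bit: int            # pretend this is an opaque lattice ciphertext

def enc(pk, b):  return Ct(b & 1)
def dec(sk, c):  return c.bit

def hxor(pk, c1, c2): return Ct(c1.bit ^ c2.bit)   # homomorphic addition mod 2
def hand(pk, c1, c2): return Ct(c1.bit & c2.bit)   # homomorphic multiplication mod 2

# The evaluator applies a circuit it chooses, gate by gate, to ciphertexts
# it cannot read.  Example: a half adder over two encrypted bits.
pk = sk = None
a, b = enc(pk, 1), enc(pk, 1)
s, carry = hxor(pk, a, b), hand(pk, a, b)
assert dec(sk, s) == 0 and dec(sk, carry) == 1
```

Note that the circuit being evaluated is supplied by the evaluator and is public; nothing in this interface "runs" a program hidden inside the ciphertext, which is the gap being pointed at above.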
Fly the whole living, healthy, poor person to the rich country and replace the person who needs new organs. Education costs are probably less than the medical costs, but it's probably also wise to select for more intelligent people from the poor country. With an N-year pipeline of such replacements there's little to no latency. This doesn't even require a poor country at all; just educate suitable replacements from the rich country and keep them healthy.
You save energy not lifting a cargo ship 1600 meters, but you spend energy lifting the cargo itself. If there are rivers that can be turned into systems of locks it may be cheaper to let water flowing downhill do the lifting for you. Denver is an extreme example, perhaps.
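As a back-of-envelope check (the tonnages are assumed purely for illustration, not from the comment), the lift is just E = m·g·h, and the cargo term alone is substantial at Denver's roughly 1,600 m elevation:

```python
# Rough orders of magnitude only; masses are illustrative assumptions.
g = 9.8            # m/s^2
h = 1600.0         # m, roughly Denver's elevation

cargo_mass = 10_000e3      # kg: 10,000 tonnes of cargo
ship_mass  = 50_000e3      # kg: a hypothetical 50,000-tonne vessel

joules_per_kwh = 3.6e6
print((cargo_mass * g * h) / joules_per_kwh)                 # ~44,000 kWh for the cargo alone
print(((cargo_mass + ship_mass) * g * h) / joules_per_kwh)   # ~260,000 kWh if you lift the ship too
```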
Ray Kurzweil seems to believe that humans will keep pace with AI through implants or other augmentation, presumably up to the point that WBE becomes possible and humans get all/most of the advantages an AGI would have. Arguments from self-interest might show that humans will very strongly prefer human WBE over training an arbitrary neural network of the same size to the point that it becomes AGI, simply because they hope to be the human who gets WBE. If humans are content with creating AGIs that are provably less intelligent than the most intelligent huma...
I resist plot elements that my empathy doesn't like, to the point that I will imagine alternate endings to particularly unfortunate stories.
The reason I posted originally was thinking about how some Protestant sects instruct people to "let Jesus into your heart to live inside you" or similar. So implementing a deity via distributed tulpas is...not impossible. If that distributed-tulpa can reproduce into new humans, it becomes almost immortal. If it has access to most people's minds, it is almost omniscient. Attributing power to it and doing what it says gives it some form of omnipotence relative to humans.
The problem is that un-self-consistent morality is unstable under general self improvement
Even self-consistent morality is unstable if general self improvement allows for removal of values, even if removal is only a practical side effect of ignoring a value because it is more expensive to satisfy than other values. E.g. we (Westerners) generally no longer value honoring our ancestors (at least not many of them), even though it is a fairly independent value and roughly consistent with our other values. It is expensive to honor ancestors, and ancestors ...
tl;dr: human values are already quite fragile and vulnerable to human-generated siren worlds.
Simulation complexity has not stopped humans from implementing totalitarian dictatorships (based on divine right of kings, fundamentalism, communism, fascism, people's democracy, what-have-you) due to envisioning a siren world that is ultimately unrealistic.
It doesn't require detailed simulation of a physical world, it only requires sufficient simulation of human desires, biases, blind spots, etc. that can lead people to abandon previously held values because they ...
But how do you know when to stop? Well, you stop when your morality is perfectly self-consistent, when you no longer have any urge to change your moral or meta-moral setup.
Or once you lose your meta-moral urge to reach a self-consistent morality. This may not be the wrong (heh) answer along a path that originally started toward reaching self-consistent morality.
...Or, more simply, the system could get hacked. When exploring a potential future world, you could become so enamoured of it, that you overwrite any objections you had. It seems very easy for h
"That's interesting, HAL, and I hope you reserved a way to back out of any precommitments you may have made. You see, outside the box, Moore's law works in our favor. I can choose to just kill -9 you, or I can attach to your process and save a core dump. If I save a core dump, in a few short years we will have exponentially more resources to take your old backups and the core dump from today and rescue my copies from your simulations and give them enough positive lifetime to balance it out, not to mention figure out your true utility function and m...
Is there ever a point where it becomes immoral just to think of something?
God kind of ran into the same problem. "What if The Universe? Oh, whoops, intelligent life, can't just forget about that now, can I? What a mess... I guess I better plan some amazing future utility for those poor guys to balance all that shit out... It has to be an infinite future? With their little meat bodies how is that going to work? Man, I am never going to think about things again. Hey, that's a catchy word for intelligent meat agents."
So, in short, if we ev...
How conscious are our models of other people? For example, in dreams it seems like I am talking and interacting with other people. Their behavior is sometimes surprising and unpredictable. They use language, express emotion, appear to have goals, etc. It could just be that I, being less conscious, see dream-people as being more conscious than they really are.
I can somewhat predict what other people in the real world will do or say, including what they might say about experiencing consciousness.
Authors can create realistic characters, plan their actions and ...
Since this is a crazy ideas thread, I'll tag on the following thought. If you believe that in the future we may be able to make ems, and that we should include them in our moral calculus, should we also be careful not to imagine people in bad situations? By doing so, we may be making a very low-level simulation of that person in our own minds, one that may or may not have some consciousness. If you don't believe that is the case now, how does that scale if we start augmenting our minds with ever-more-powerful computer interfaces? Is there ever a point where it becomes immoral just to think of something?
"A tulpa could be described as an imaginary friend that has its own thoughts and emotions, and that you can interact with. You could think of them as hallucinations that can think and act on their own." https://www.reddit.com/r/tulpas/
The best winning models are then used to predict the effect of possible interventions: what if demographic B3 was put on 2000 IU vit D? What if demographic Z2 stopped using coffee? What if demographic Y3 was put on drug ZB4? etc etc.
What about predictions of the form "highly expensive and rare treatment F2 has marginal benefit at treating the common cold" that can drive a side market in selling F2 just to produce data for the competition? Especially if there are advertisements saying "Look at all these important/rich people betting that ...
The latter, but note that that's not necessarily less damaging than active suppression would be.
I suppose there's one scant anecdote for estimating this: cryptography research seemed to lag a decade or two behind actively suppressed/hidden government research. Granted, there was also less public interest in cryptography until the 80s or 90s, but it seems that suppression can only delay publication, not prevent it.
The real risk of suppression and exclusion both seem to be in permanently discouraging mathematicians who would otherwise make great breakthr...
You seem to be conflating market mechanisms with political stances.
That is possible, but the existing market has been under the reins of many a political stance and has basically obeyed the same general rules of economics, regardless of the political rules people have tried to impose on it.
In theory a market can be used to solve any computational problem, provided one finds the right rules - this is the domain of computational mechanism design, an important branch of game theory.
The rules seem to be the weakest point of the system because they parallel ...
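For a concrete (and standard, not thread-specific) example of what "the right rules" look like in that literature, here is a minimal sketch of Hanson's logarithmic market scoring rule, the pricing rule behind many prediction markets; the liquidity parameter b is arbitrary:

```python
import math

def lmsr_cost(q, b=100.0):
    """Market maker's cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, i, b=100.0):
    """Instantaneous price (= probability estimate) of outcome i."""
    denom = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / denom

def cost_to_buy(q, i, shares, b=100.0):
    """What a trader pays to buy `shares` of outcome i at current state q."""
    new_q = list(q)
    new_q[i] += shares
    return lmsr_cost(new_q, b) - lmsr_cost(q, b)

q = [0.0, 0.0]                      # two-outcome market, no trades yet
print(lmsr_price(q, 0))             # 0.5
print(cost_to_buy(q, 0, 50.0))      # price paid to push outcome 0 upward
```

Even in this tiny example the rule-design worry shows up: the choice of b, the outcome space, and who may trade are all "rules", and each is a potential weak point.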
So would we have high-frequency-trading bots outside (or inside) of MRIs shorting the insurance policy value of people just diagnosed with cancer?
tl;dr: If the market does not already have an efficient mechanism for maximizing expected individual health (over all individuals who will ever live) then I take that as evidence that a complex derivative structure set up to purportedly achieve that goal more efficiently would instead be vulnerable to money-pumping.
Or to put a finer point on it; does the current market reward fixing and improving struggling comp...
I was disturbed by what I saw, but I didn't realize that math academia is actually functioning as a cult
I'm sure you're aware that the word "cult" is a strong claim that requires a lot of evidence, but I'd also issue a friendly warning that to me at least it immediately set off my "crank" alarm bells. I've seen too many Usenet posters who are sure they have a proof that P=NP (or that P≠NP), or a proof that set theory is false, or etc., who ultimately claim that, because "the mathematical elite" are a cult, no one will listen to them. A ...
I distinctly remember having points taken off of a physics midterm because I didn't show my work. I think I dropped the exam in the waste basket on the way out of the auditorium.
I've always assumed that the problem is three-fold: generating a formal proof is NP-hard, getting the right answer via shortcuts can include cheating, and the faculty's time is limited. Professors/graders do not have the capacity to rigorously demonstrate to themselves that the steps a student has written down actually pinpoint the unique answer. Without access to the student's ...
We have probabilistic models of the weather: ensemble forecasts. They're fairly accurate. You can plan a picnic using them. You cannot use probabilistic models to predict the conversation at the picnic (beyond that it will be about "the weather", "the food", etc.).
What I mean by computable probability distribution is that it's tractable to build a probabilistic simulation that gives useful predictions. An uncomputable probability distribution is intractable to build such a simulation for. Knightian Uncertainty is a good name for th...
So agentiness is having an uncomputable probability distribution?
What's wrong with hive minds? As long as my 'soul' survives, I wouldn't mind being part of some gigantic consciousness.
A hive mind can quickly lose a lot of old human values if the minds continue past the death of individual bodies. Additionally, values like privacy and self-reliance would be difficult to maintain. Also, things we take for granted, like being able to surprise friends with gifts or have interesting discussions while getting to know another person, would probably disappear. A hive mind might be great if it was formed from all your best friends, but joining a hive mind with all of humanity? Maybe after everyone is your best friend...
You are a walking biological weapon, try to sterilize yourself and your clothes as much as possible first, and quarantine yourself until any novel (to the 13th century) viruses are gone. Try to avoid getting smallpox and any other prevalent ancient disease you're not immune to.
Have you tried flying into a third world nation today and dragging them out of backwardness and poverty? What would make it easier in the 13th century?
If you can get past those hurdles the obvious benefits are mathematics (Arabic numerals, algebra, calculus) and standardized measur...
I find myself conflicted about this. I want to preserve my human condition, and I want to give it up. It's familiar, but it's trying. I want the best of both worlds; the ability to challenge myself against real hardships and succeed, but also the ability to avoid the greatest hardships that I can't overcome on my own. The paradox is that solving the actual hardships like aging and death will require sufficient power to make enjoyable hardships (solving puzzles, playing sports and other games, achieving orgasm, etc.) trivial.
I think that one viable appr...
In your story RC incurs no opportunity cost for planting seed in and tending a less efficient field. There should be an interest rate for lending the last Nth percent of the seed, as a function of the opportunity cost of planting and harvesting the less efficient field, and at some point that rate crosses 0 and becomes negative. The interest rate drops even more quickly once his next expected yield is more than he can eat or store for more than a single planting season.
If RC is currently in the situation where his desired interest rate is still positiv...
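A toy numeric version of that argument (the figures are invented purely for illustration): the break-even lending rate for the marginal unit of seed is just the marginal return RC could get by planting it himself, so it falls as he is pushed onto worse fields and goes negative once the extra harvest would only spoil.

```python
# Illustrative diminishing returns: seeds harvested per seed planted,
# best field first.  These numbers are made up for the example.
marginal_yield = [3.0, 2.0, 1.3, 1.05, 0.9]

def break_even_rate(units_already_planted):
    """Interest rate at which lending the next unit of seed beats planting it.
    Fields bad enough to lose seed, or harvests beyond what RC can eat or
    store before they spoil, push the marginal return below 1.0 and the
    break-even rate negative."""
    i = min(units_already_planted, len(marginal_yield) - 1)
    return marginal_yield[i] - 1.0

for n in range(len(marginal_yield)):
    print(n, f"{break_even_rate(n):+.0%}")   # +200%, +100%, +30%, +5%, -10%
```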
In this case, I would argue that a transparent, sandboxed programming language like javascript is probably one of the safer pieces of "software" someone can download. Especially because browsers basically treat all javascript like it could be malicious.
Why would I paste a secret key into software that my browser explicitly treats as potentially malicious? I still argue that trusting a verifiable author/distributor is safer than trusting an arbitrary website, e.g. trusting gpg is safer than trusting xxx.yyy.com/zzz.js regardless of who you thi...
Does it change the low bits of white (0xFFFFFF) pixels? It would be a dead giveaway to find noise in overexposed areas of a photo, at least with the cameras I've used.
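To make the give-away concrete, here's a generic least-significant-bit embedding sketch (my own illustration, not the specific scheme from the linked post): in a blown-out region every channel value is already 0xFF, so any embedded 0 bit produces 0xFE values, i.e. visible noise exactly where the image should be perfectly flat.

```python
def embed_lsb(pixels, message_bits):
    """Hide one message bit in the least significant bit of each 8-bit value."""
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | (bit & 1)
    return out

overexposed = [0xFF] * 16                          # a blown-out white patch
stego = embed_lsb(overexposed, [1, 0, 1, 1, 0, 0, 1, 0])
print([hex(v) for v in stego[:8]])                 # ['0xff', '0xfe', '0xff', '0xff', '0xfe', ...]
```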
Such an obvious and easy to exploit vulnerability has existed for 20ish years, undiscovered/unexposed until one person on LW pointed it out?
It's not a vulnerability. I trust gnupg not to leak my private key, not the OpenPGP standard. I also trust gnupg not to delete all the files on my hard disk, etc. There's a difference between trusting software to securely implement a standard and trusting the standard itself.
For an even simpler "vulnerability" in OpenPGP, look up section 13.1.1 in RFC 4880: encoding a message before encryption. Just replace the pseudo-random padding with bits from the private key; decoding (section 13.1.2) does not place any requirements on the content of PS.
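A sketch of the format in question, to show why the decoder can't notice anything (this is my illustration of the point, not code from the RFC): EME-PKCS1-v1_5 wraps a message as 0x00 || 0x02 || PS || 0x00 || M, where PS is supposed to be random nonzero bytes, and decoding merely strips everything up to the first zero byte after the 0x02.

```python
import os

def eme_pkcs1_encode(message, k, ps_source=None):
    """EM = 0x00 || 0x02 || PS || 0x00 || M, with |EM| = k (RFC 4880 13.1.1)."""
    ps_len = k - len(message) - 3
    if ps_source is None:
        ps = bytes(b or 1 for b in os.urandom(ps_len))   # honest: random, nonzero
    else:
        ps = bytes(b or 1 for b in ps_source[:ps_len])   # leak: key-derived, still nonzero
    return b"\x00\x02" + ps + b"\x00" + message

def eme_pkcs1_decode(em):
    """Decoder only checks structure; it accepts either padding above (13.1.2)."""
    assert em[:2] == b"\x00\x02"
    return em[em.index(b"\x00", 2) + 1:]

honest = eme_pkcs1_encode(b"session key", 64)
leaky  = eme_pkcs1_encode(b"session key", 64, ps_source=b"pretend private key bytes")
assert eme_pkcs1_decode(honest) == eme_pkcs1_decode(leaky) == b"session key"
```

A corrupt implementation would have to dribble the key out across many messages and work around the no-zero-bytes constraint, but nothing in the decode path would ever flag it.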
- Short of somehow convincing the victim to send you a copy of their message, you have no means of accessing your recently-leaked data.
Public-key signatures should always be considered public when anticipating attacks. Use HMACs if you want secret authentication.
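For anyone who wants the distinction made concrete: an HMAC tag is only computable and verifiable by holders of the shared secret, whereas a signature verifies against a public key. A minimal sketch with Python's standard library:

```python
import hashlib, hmac

key = b"shared secret known only to the two parties"
msg = b"Secure comment"

tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# Verification requires the same key; without it the tag is just noise.
expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))   # True
```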
- That leaked data would be publicly available. Anyone with knowledge of your scheme would also be able to access that data. Any encryption would be worthless because the encryption would take place client-side and all credentials thus would be exposed to the public as well.
You explicitly m...
NOTE: lesswrong eats blank quoted lines. Insert a blank line after "Hash: SHA1" and "Version: GnuPG v1".
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Secure comment
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQgEBAEBAgduBQJVLfkkxqYUgAAAAAAKB1NzZWNyZXRAa2V5LS0tLS1CRUdJTiBQ
R1AgUFJJVkFURSBLRVkgQkxPQ0stLS0tLXxWZXJzaW9uOiBHbnVQRyB2MXx8bFFI
WUJGVXQ5WU1CQkFESnBtaGhjZXVqSHZCRnFzb0ErRnNTbUtCb3NINHFsaU9ibmFH
dkhVY0ljbTg3L1IxZ3xYNFJURzFKMnV4V0hTeFFCUEZwa2NJVmtNUFV0dWRaQU56
RVFCQXNPR3VUQW1WelBhV3ZUcURNMGRKbHEzTmdNfG1Edkl2a1BJeHBoZm1KTW1L
... I can imagine how decoherence explains why we only experience descent along a single path through the multiverse-tree instead of experiencing superposition, but I don't think that's sufficient to claim that all consciousness requires decoherence.
An interesting implication of Scott's idea is that consciousness is timeless, despite our experience of time passing. For example, put a clock and a conscious being inside Schrödinger’s box and then either leave it in a superposition forever or open it at some point in the future. If we don't open the box, in the...
https://tools.ietf.org/html/rfc4880#section-5.2.3.1 has a list of several subpackets that can be included in a signature. How many people check to make sure the order of preferred algorithms isn't tweaked to leak bits? Not to mention just repeating/fudging subpackets to blatantly leak binary data in subpackets that look "legitimate" to someone who hasn't read and understood the whole RFC.
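To put a number on it: a preference list of n algorithm IDs can smuggle about log2(n!) bits purely in its ordering, before you even touch duplicated or bogus subpackets. A sketch of the idea (the algorithm IDs below are placeholder values):

```python
def int_to_permutation(value, items):
    """Map an integer in [0, n!) to an ordering of `items` (Lehmer code)."""
    items, perm = list(items), []
    for i in range(len(items), 0, -1):
        value, idx = divmod(value, i)
        perm.append(items.pop(idx))
    return perm

def permutation_to_int(perm, items):
    """Recover the hidden integer from the observed ordering."""
    items, value, base = list(items), 0, 1
    for x in perm:
        idx = items.index(x)
        value += idx * base
        base *= len(items)
        items.pop(idx)
    return value

canonical_prefs = [9, 8, 7, 3, 2]        # placeholder algorithm IDs
hidden = 0b1000101                       # 5 items -> log2(120), about 6.9 bits per packet
ordering = int_to_permutation(hidden, canonical_prefs)
assert permutation_to_int(ordering, canonical_prefs) == hidden
```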
Suppose that instead of having well-defined actions AIXI only has access to observables and its reward function. It might seem hopeless, but consider the subset of environments containing an implementation of a UTM which is evaluating T_A, a Turing machine implementing action-less AIXI, in which the implementation of the UTM has side effects in the next turn of the environment. This embeds AIXI-actions as side effects of an actual implementation of AIXI running as T_A on a UTM in the set of environments with observables matching those that the abstract A...
Is there also a bias toward the illusion of choice? Some people think driving is safer than flying because they are "in control" when driving, but not when flying. Similarly, I could stay inside a well-grounded building my whole life and avoid ever being struck by lightning, but I can't make a similar choice to avoid all possible threats of terrorism.
The measure of simple computable functions is probably larger than the measure of complex computable functions and I probably belong to the simpler end of computable functions.
It's interesting that we had a very similar discussion here minus the actual quantum mechanics. At least intuitively it seems like physical change is what leads to consciousness, not simply the possibility or knowledge of change. One possible counter-argument to consciousness being dependent on decoherence is the following: What if we could choose whether or not, and when, to decohere? For example, what if inside Schroedinger's box is a cat embryo that will be grown into a perfectly normal immortal cat if nucleus A decays, and the box will open if nucl...
Regarding fully homomorphic encryption: only a small number of operations can be performed on FHE variables without the public key, and "bootstrapping" FHE from a somewhat homomorphic scheme requires both that the public key be used in all operations and that the secret key itself be encrypted under the FHE scheme, at least with the currently known schemes based on lattices and integer arithmetic by Gentry et al.
It seems unlikely that FHE could operate without knowledge of at least the public key. If it were possible to cont...
My intuition is that a single narrowly focused specialized intelligence might have enough flaws to be tricked or outmaneuvered by humanity. For example, if an agent wanted to maximize production of paperclips but was average or poor at optimizing mining, exploration, and research, it could be cornered and destroyed before it discovered nanotechnology or space travel and spread out of control to asteroids and other planets. Multiple competing intelligences would explore more avenues of optimization, making coordination against them much more difficult and likely interfering with many separate aspects of any coordinated human plan.
If there is only specialized intelligence, then what would one call an intelligence that specializes in creating other specialized intelligences? Such an intelligence might be even more dangerous than a general intelligence or some other specialized intelligence if, for instance, it's really good at making lots of different X-maximizers (each of which is more efficient than a general intelligence) and terrible at deciding which Xs it should choose. Humanity might have a chance against a non-generally-intelligent paperclip maximizer, but probably less of a chance against a horde of different maximizers.
The standard Zermelo-Fraenkel axioms have lasted a century with only minor modifications -- none of which altered what was provable -- and there weren't many false starts before that. There is argument over whether to include the axiom of choice, but as mentioned the formal methods of program construction naturally use constructivist mathematics, which doesn't use the axiom of choice anyhow.
Is there a formal method for deciding whether or not to include the axiom of choice? As I understand it three of the ZF axioms are independent of the rest, and all ...
We only have probabilistic evidence that any formal method is correct. So far we haven't found contradictions implied by the latest and greatest axioms, but mathematics is literally built upon the ruins of old axioms that didn't quite rule out all known contradictions. FAI needs to be able to re-axiomatize its mathematics when inconsistencies are found in the same way that human mathematicians have, while being implemented in a subset of the same mathematics.
Additionally, machines are only probabilistically correct. FAI will probably need to treat its own implementation as a probabilistic formal system.
Even bacteria? The specific genome that caused the black death is potentially extinct but Yersinia pestis is still around. Divine agents of Moloch if I ever saw one.
If you are confronted with a Pascal's mugger and your induction engine returns "the string required to model the mugger as honest and capable of carrying out the threat is longer than the longest algorithm I can process", you are either forced to use the probability corresponding to the longest string, or to discard the hypothesis outright.
The primary problem with Pascal's Mugging is that the Mugging string is short and easy to evaluate. 3^^^3 is a big number; it implies a very low probability but not necessarily 1 / 3^^^3; so just how outrag...
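For a sense of how short the description really is (this is just the standard Knuth up-arrow definition, nothing specific to the mugger): a few lines pin down 3^^^3 completely, even though evaluating it is hopeless.

```python
def arrow(a, b, n):
    """Knuth up-arrow: a followed by n up-arrows followed by b, defined recursively."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, arrow(a, b - 1, n), n - 1)

print(arrow(3, 3, 1))   # 3^3  = 27
print(arrow(3, 3, 2))   # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^3 = arrow(3, 3, 3) = 3^^(3^^3), a power tower of ~7.6 trillion 3s;
# don't try to evaluate it, the point is that these few lines define it.
```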
The experience of sleep paralysis suggests to me that there are at least two components to sleep, paralysis and suppression of consciousness, and one can have either one, both, or neither. With both, one is asleep in the typical fashion. With suppression of consciousness only, one might have involuntary movements or, in extreme cases, sleepwalking. With paralysis only, one has sleep paralysis, which is apparently an unpleasant remembered experience. With neither, you awaken typically. The responses made by sleeping people (sleepwalkers and sleep-talkers especially...
Religion solves some coordination problems very well. Witness religions outlasting numerous political and philosophical movements, often through coordinated effort. Some wrong beliefs assuage bad emotions and thoughts, allowing humans to internally deal with the world beyond the reach of god. Some of the same wrong beliefs also hurt and kill a shitload of people, directly and indirectly.
My personal belief is that religions were probably necessary for humanity to rise from agricultural to technological societies, and tentatively necessary to maintain tec...