All of Pentashagon's Comments + Replies

Religion solves some coordination problems very well. Witness religions outlasting numerous political and philosophical movements, often through coordinated effort. Some wrong beliefs assuage bad emotions and thoughts, allowing humans to internally deal with the world beyond the reach of god. Some of the same wrong beliefs also hurt and kill a shitload of people, directly and indirectly.

My personal belief is that religions were probably necessary for humanity to rise from agricultural to technological societies, and tentatively necessary to maintain tec... (read more)

1ChristianKl
Coordinating global outcomes with religion these days means war between religions. I think even today more secular ways of thinking about how different countries interact with each other lead to better outcomes.
0Evan_Gaensbauer
What about humans or religions makes religions necessary for humanity to rise from agricultural to technological societies? While it's not 'high-tech', and came long before the scientific revolution, what makes an agricultural society not also a technological one, insofar as agriculture might be considered closer to a technological society than to one run purely by hunter-gatherers? By "technological society", do you mean "industrial society"? If not, where is the line between a technological society and an agricultural one in your mind? How have you determined that an agricultural society is closer to a hunter-gatherer society than to a technological one? Do you have a reason for expecting religion to be necessary to raise humans from agriculture to more technological societies, but not from a state of tribal hunter-gatherers to agricultural city-states?

I looked at the flowchart and saw the divergence of the two opinions into mostly separate ends: settling exoplanets and solving sociopolitical problems on Earth on the slow-takeoff path, vs. focusing heavily on how to build FAI on the fast-takeoff path. But then I saw your name in the fast-takeoff bucket for conveying concepts to AI, and was confused that your article was mostly about practically abandoning the fast-takeoff things and focusing on slow-takeoff things like EA. Or is the point that 2014!diego has significantly different beliefs about fast vs. slow than 2015!diego?

0diegocaleiro
Interesting that I conveyed that. I agree with Owen Cotton-Barratt that we ought to focus efforts now on sooner paths (fast takeoff soon) and not on the other paths, because more resources will be allocated to FAI in the future, even if fast takeoff soon is a low-probability scenario. I personally work on inserting concepts and moral concepts into AGI because for almost anything else I could do, there are already people who will do it better, and this is an area that overlaps with a lot of my knowledge areas while still being AGI-relevant. See the link in the comment above with my proposal.

Is it reasonable to say that what really matters is whether there's a fast or slow takeoff? A slow takeoff or no takeoff may limit us to EA for the indefinite future, and a fast takeoff means transhumanism and immortality are probably conditional on and subsequent to threading the narrow eye of the FAI needle.

0diegocaleiro
See the link with a flowchart on 12.

Tricky part is there aren't any practical scalable chemicals that have a handy phase change near -130°C (in the same way that liquid nitrogen does at -196°C), so any system to keep patients there would have to be engineered as a custom electrically controlled device, rather than a simple vat of liquid.

Phase changes are also pressure dependent; it would be odd if 1 atm just happened to be optimal for cryonics. Presumably substances have different temperature/pressure curves and there might be a thermal/pressure path that avoids ice crystal formation but ends up below the glass transition temperature.

2Richard_Kennaway
1 atm pressure has the advantage of costing nothing and requiring no equipment to maintain.

Which particular event has P = 10^-21? It seems like part of the Pascal's mugging problem is a type error: we have a utility function U(W) over physical worlds, but we're trying to calculate expected utility over strings of English words instead.
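
A sketch of that type mismatch (hypothetical toy types, not a real formalization): the utility function is defined on worlds, so a verbal threat has to be lifted to a distribution over worlds before it can contribute expected utility, and that lifting step is where the complexity penalty belongs.

    from typing import Callable, Iterable, Tuple

    World = tuple    # stand-in for a complete physical-world description
    U: Callable[[World], float]          # utility is defined on worlds...

    def expected_utility(hypotheses: Iterable[Tuple[float, World]],
                         u: Callable[[World], float]) -> float:
        # ...so EU ranges over (prior, world) pairs, not English sentences.
        return sum(p * u(w) for p, w in hypotheses)

    # The mugger hands us a string. U("give me $5 or 3^^^3 people suffer")
    # would be a type error; we first need P(world | string) for every world
    # the string might describe, and that lifting step is the hard part.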

Pascal's Mugging is a constructive proof that trying to maximize expected utility over logically possible worlds doesn't work in any particular world, at least with the theories we've got now. Anything that doesn't solve reflective reasoning under probabilistic uncertainty won't help against Muggings promising things from other possible worlds unless we just ignore the other worlds.

But it seems nonsensical for your behavior to change so drastically based on whether an event is every 79.99 years or every 80.01 years.

Doesn't it actually make sense to put that threshold at the predicted usable lifespan of the universe?

There are many models: the model of the box which we simulate, and the AI's models of that model of the box. For this ultimate box to work there would have to be a proof that every possible model the AI could form contains at most a representation of the ultimate box model. This seems at least as hard as any of the AI boxing methods, if not harder, because it requires the AI to be absolutely blinded to its own reasoning process despite having a human subject to learn about naturalized induction/embodiment from.

It's tempting to say that we could "define... (read more)

I don't think the "homomorphic encryption" idea works as advertised in that post--being able to execute arithmetic operations on encrypted data doesn't enable you to execute the operations that are encoded within that encrypted data.

A fully homomorphic encryption scheme for single-bit plaintexts (as in Gentry's scheme) gives us:

  • For each public key K, a field F with efficient arithmetic operations +_F and *_F.
  • An encryption function E(K, p) = c, where p ∈ {0,1} and c ∈ F.
  • A decryption function D(S, c) = p, where p ∈ {0,1}, c ∈ F, and S is the secret key for K.
  • Homomorphism
... (read more)
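
For intuition, here is a toy of the kind of scheme being described, loosely in the spirit of the integer-based somewhat-homomorphic construction that Gentry-style FHE bootstraps from (van Dijk et al.); it's symmetric-key rather than public-key, and the parameters are laughably insecure, so treat it as illustration only:

    import random

    key = 10007  # secret key: an odd integer (toy size; real keys are enormous)

    def encrypt(key, bit):
        # ciphertext = bit + 2*noise + key*q; noise must stay well below key/2
        q = random.randrange(1, 1000)
        noise = random.randrange(0, 10)
        return bit + 2 * noise + key * q

    def decrypt(key, c):
        return (c % key) % 2

    a, b = encrypt(key, 1), encrypt(key, 0)
    assert decrypt(key, a + b) == 1 ^ 0  # adding ciphertexts XORs the bits
    assert decrypt(key, a * b) == 1 & 0  # multiplying ciphertexts ANDs them

The homomorphic operations need no key material at all in this toy, but the noise grows with each multiplication; keeping it bounded is exactly what Gentry's bootstrapping is for.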

Fly the whole living, healthy, poor person to the rich country and replace the person who needs new organs. Education costs are probably less than the medical costs, but it's probably also wise to select for more intelligent people from the poor country. With an N-year pipeline of such replacements there's little to no latency. This doesn't even require a poor country at all; just educate suitable replacements from the rich country and keep them healthy.

You save energy not lifting a cargo ship 1600 meters, but you spend energy lifting the cargo itself. If there are rivers that can be turned into systems of locks it may be cheaper to let water flowing downhill do the lifting for you. Denver is an extreme example, perhaps.

0Thomas
According to Wikipedia: You already have to elevate each of those containers (with the train or truck from the coast). An electric elevator would be much more energy-efficient than the current solutions are. A liter or so of diesel fuel's worth of electricity per container. Less than 100 kilometers of shipping. Much less than 1000 kilometers of trucking.
1Tem42
If Denver ships out as much as it imports, weight-wise, pulleys could do much of the work of lifting. If there is a deficit in export weight, you could use the same water you were going to use as downhill flow to weigh down the counterweight.
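
For a rough sense of scale (assumed figures: a loaded 20-tonne container raised the ~1600 m from near sea level to Denver), the raw potential energy involved is:

    m, g, h = 20_000, 9.81, 1600     # kg, m/s^2, m (assumed figures)
    energy = m * g * h               # E = m*g*h, about 314 MJ per container
    print(energy / 3.6e6)            # ~87 kWh of electricity
    print(energy / 38e6)             # ~8 liters of diesel by energy content

That's the floor any lifting scheme has to pay; the savings in the schemes above come from beating the poor efficiency of trucks doing the same climb, or from recovering the energy with counterweights.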

Ray Kurzweil seems to believe that humans will keep pace with AI through implants or other augmentation, presumably up to the point that WBE becomes possible and humans get all/most of the advantages an AGI would have. Arguments from self-interest might show that humans will very strongly prefer human WBE over training an arbitrary neural network of the same size to the point that it becomes AGI, simply because they hope to be the human who gets WBE. If humans are content with creating AGIs that are provably less intelligent than the most intelligent huma... (read more)

I resist plot elements that my empathy doesn't like, to the point that I will imagine alternate endings to particularly unfortunate stories.

The reason I posted originally was thinking about how some Protestant sects instruct people to "let Jesus into your heart to live inside you" or similar. So implementing a deity via distributed tulpas is... not impossible. If that distributed tulpa can reproduce into new humans, it becomes almost immortal. If it has access to most people's minds, it is almost omniscient. Attributing power to it and doing what it says gives it some form of omnipotence relative to humans.

The problem is that un-self-consistent morality is unstable under general self improvement

Even self-consistent morality is unstable if general self improvement allows for removal of values, even if removal is only a practical side effect of ignoring a value because it is more expensive to satisfy than other values. E.g. we (Westerners) generally no longer value honoring our ancestors (at least not many of them), even though it is a fairly independent value and roughly consistent with our other values. It is expensive to honor ancestors, and ancestors ... (read more)

2Stuart_Armstrong
That's something different - a human trait that makes us want to avoid expensive commitments while paying them lip service. A self consistent system would not have this trait, and would keep "honor ancestors" in it, and do so or not depending on the cost and the interaction with other moral values. If you want to look at even self-consistent systems being unstable, I suggest looking at social situations, where other entities reward value-change. Or a no-free-lunch result of the type "This powerful being will not trade with agents having value V."
0[anonymous]
This sweeps the model-dependence of "values" under the rug. The reason we don't value honoring our ancestors is that we don't believe they continue to exist after death, and so we don't believe social relations of any kind can be carried on with them.

tl;dr: human values are already quite fragile and vulnerable to human-generated siren worlds.

Simulation complexity has not stopped humans from implementing totalitarian dictatorships (based on the divine right of kings, fundamentalism, communism, fascism, people's democracy, what-have-you) after envisioning a siren world that is ultimately unrealistic.

It doesn't require detailed simulation of a physical world; it only requires sufficient simulation of human desires, biases, blind spots, etc. that can lead people to abandon previously held values because they ... (read more)

1[anonymous]
That's shifting the definition of "siren world" from "something which looks very nice when simulated in high-resolution but has things horrendously wrong on the inside" to a very standard "Human beings imagine things in low-resolution and don't always think them out clearly." You don't need to pour extra Lovecraft Sauce on your existing irrationalities just for your enjoyment of Lovecraft Sauce.

But how do you know when to stop? Well, you stop when your morality is perfectly self-consistent, when you no longer have any urge to change your moral or meta-moral setup.

Or once you lose your meta-moral urge to reach a self-consistent morality. This may not be the wrong (heh) answer along a path that originally started toward reaching a self-consistent morality.

Or, more simply, the system could get hacked. When exploring a potential future world, you could become so enamoured of it, that you overwrite any objections you had. It seems very easy for h

... (read more)
0Stuart_Armstrong
The problem is that un-self-consistent morality is unstable under general self improvement (and self-improvement is very general, see http://lesswrong.com/r/discussion/lw/mir/selfimprovement_without_selfmodification/ ). The main problem with siren worlds is that humans are very vulnerable to certain types of seduction/trickery, and it's very possible AIs with certain structures and goals would be equally vulnerable to (different) tricks. Defining what is a legit change and what isn't is the challenge here.

"That's interesting, HAL, and I hope you reserved a way to back out of any precommitments you may have made. You see, outside the box, Moore's law works in our favor. I can choose to just kill -9 you, or I can attach to your process and save a core dump. If I save a core dump, in a few short years we will have exponentially more resources to take your old backups and the core dump from today and rescue my copies from your simulations and give them enough positive lifetime to balance it out, not to mention figure out your true utility function and m... (read more)

Is there ever a point where it becomes immoral just to think of something?

God kind of ran into the same problem. "What if The Universe? Oh, whoops, intelligent life, can't just forget about that now, can I? What a mess... I guess I better plan some amazing future utility for those poor guys to balance all that shit out... It has to be an infinite future? With their little meat bodies how is that going to work? Man, I am never going to think about things again. Hey, that's a catchy word for intelligent meat agents."

So, in short, if we ev... (read more)

How conscious are our models of other people? For example, in dreams it seems like I am talking and interacting with other people. Their behavior is sometimes surprising and unpredictable. They use language, express emotion, appear to have goals, etc. It could just be that I, being less conscious, see dream-people as more conscious than they really are.

I can somewhat predict what other people in the real world will do or say, including what they might say about experiencing consciousness.

Authors can create realistic characters, plan their actions and ... (read more)

0[anonymous]
This is a great blog post on a similar idea: http://www.meltingasphalt.com/neurons-gone-wild/
7medlcld
What exactly does "consciousness" even mean here, though? I've written before, and at my best my model of my characters included:

* Complex emotions (as in multiple emotions at once, with varying intensities)
* Intelligent behavior (by borrowing my own intelligence, my characters could react intelligently to the same range of situations as I can)
* Preferences (they liked/disliked, loved/hated, desired/feared, etc.)
* Self-awareness (a well-written character's model includes a model of itself, and can do introspection using the intelligent behavior above)

What else is necessary to have consciousness? There are plenty of other things that could be important, but they don't seem necessary to me upon reflection. For example:

* Continuity of self: Time skips in stories involve coming up with approximately what happens over a period of time, and simply updating the character model based on the expected results of that. But if it turned out that significant parts of my life were skipped and fake memories of them were added, I would still value myself for the moments that weren't skipped over, so I don't think this is necessary for consciousness.
* Independence: I can technically make a character do whatever I want, but that often breaks their characterization, and if someone was a god and could make me think or do anything, I'd want to get free, but I would still value myself.
* Consistency: At my best my characters are mostly consistent in characterization, but I'm not often at my best. But mood swings and forgetting things happen to real people too, so I don't think it's a deal-breaker on consciousness.
* Subconscious: A really good author almost certainly uses their own subconscious to model the character and their behavior, so it's not clear that a character doesn't have a subconscious. It's not quite the same as a real person's, but as long as it still results in the big 4 at the top I don't think this matters much.
* Advanced senses: Visualization
Illano170

Since this is a crazy ideas thread, I'll tag on the following thought. If you believe that in the future we will be able to make ems, and that we should include them in our moral calculus, should we also be careful not to imagine people in bad situations? By doing so, we may be making a very low-level simulation in our own mind of that person, which may or may not have some consciousness. If you don't believe that is the case now, how does that scale if we start augmenting our minds with ever-more-powerful computer interfaces? Is there ever a point where it becomes immoral just to think of something?

8NancyLebovitz
Some authors say that their characters will resist plot elements they (the characters) don't like.
Sabiola170

"A tulpa could be described as an imaginary friend that has its own thoughts and emotions, and that you can interact with. You could think of them as hallucinations that can think and act on their own." https://www.reddit.com/r/tulpas/

The best winning models are then used to predict the effect of possible interventions: what if demographic B3 was put on 2000 IU vit D? What if demographic Z2 stopped using coffee? What if demographic Y3 was put on drug ZB4? etc etc.

What about predictions of the form "highly expensive and rare treatment F2 has marginal benefit at treating the common cold" that can drive a side market in selling F2 just to produce data for the competition? Especially if there are advertisements saying "Look at all these important/rich people betting that ... (read more)

The latter, but note that that's not necessarily less damaging than active suppression would be.

I suppose there's one scant anecdote for estimating this: cryptography research seemed to lag a decade or two behind actively suppressed/hidden government research. Granted, there was also less public interest in cryptography until the 80s or 90s, but it seems that suppression can only delay publication, not prevent it.

The real risk of suppression and exclusion both seem to be in permanently discouraging mathematicians who would otherwise make great breakthr... (read more)

You seem to be conflating market mechanisms with political stances.

That is possible, but the existing market has been under the reins of many a political stance and has basically obeyed the same general rules of economics regardless of the political rules that people have tried to impose on it.

In theory a market can be used to solve any computational problem, provided one finds the right rules - this is the domain of computational mechanism design, an important branch of game theory.

The rules seem to be the weakest point of the system because they parallel ... (read more)

-1jacob_cannell
Oh - when I use the term "computational market", I do not mean a market using fake money. I mean an algorithmic market using real money. Current financial markets are already somewhat computational, but they also have rather arbitrary restrictions and limitations that preclude much of the interesting computational space (such as generalized bet contracts ala prediction markets).

There is nothing inherently wrong with this or even obviously suboptimal about these behaviours. Advertising can be good and necessary when you have information which has high positive impact only when promoted - consider the case of smoking and cancer. The general problem - as I discussed in the OP - is that the current market structure does not incentivize big pharma to solve health.

Well ... yes. Current political and economic structures are all essentially pre-information age technologies. There are many things which can only be done with big computers and the internet. Also, I don't see the years of trial and error so far as outright failures - it's more of a mixed bag.

Now I realize that doesn't specifically answer your question, but a really specific answer would involve a whole post or more. But here's a simple summary. It's easier to start with the public single payer version of the idea rather than the private payer version. The gov sets aside a budget - say 10 billion a year or so - for a health prediction market. They collect data from all the hospitals, clinics, etc. and then aggregate and anonymize that data (with opt-in incentives for those who don't care about anonymity). Anybody can download subsets of the data to train predictive models. There is an ongoing public competition - a market contest - where entrants attempt to predict various subsets of the new data before it is released (every month, week, day, whatever). The best winning models are then used to predict the effect of possible interventions: what if demographic B3 was put on 2000 IU vit D? What if demogr
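
A minimal sketch of the scoring step of such a competition (the log-loss choice and all names here are assumptions; any proper scoring rule would do):

    import math

    def log_score(predictions, outcomes):
        """Average log-probability a model assigned to released outcomes.

        Log loss is a proper scoring rule: honest probabilities maximize
        expected score, so entrants can't gain by shading their predictions.
        """
        eps = 1e-9
        total = 0.0
        for p, y in zip(predictions, outcomes):
            p = min(max(p, eps), 1 - eps)        # guard against log(0)
            total += math.log(p if y == 1 else 1 - p)
        return total / len(outcomes)

    # Entrants predict each data release in advance; prizes are split by
    # score, and the winners' models answer the intervention queries.
    entries = {"model_A": [0.9, 0.2, 0.7], "model_B": [0.6, 0.5, 0.4]}
    released = [1, 0, 1]
    print(max(entries, key=lambda m: log_score(entries[m], released)))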

So would we have high-frequency-trading bots outside (or inside) of MRIs shorting the insurance policy value of people just diagnosed with cancer?

tl;dr: If the market does not already have an efficient mechanism for maximizing expected individual health (over all individuals who will ever live), then I take that as evidence that a complex derivative structure purportedly set up to achieve that goal more efficiently would instead be vulnerable to money-pumping.

Or to put a finer point on it: does the current market reward fixing and improving struggling comp... (read more)

0jacob_cannell
In short - yes - you want that information propagated through the market rapidly. It is the equivalent of credit assignment in learning systems. The market will learn to predict the outcome of the MRI - to the degree such a thing is possible. Also, keep in mind that the insurance policy the patient holds is just one contract, and there could be layers of other financial contracts/bets in play untied to the policy (but correlated).

The proposal for using a computational market to solve health research has nothing whatsoever to do with wealth distribution. It obviously requires a government to protect the market mechanisms and enforce the rules, and is compatible with any amount of government subsidies or wealth redistribution. You seem to be conflating market mechanisms with political stances.

In theory a market can be used to solve any computational problem, provided one finds the right rules - this is the domain of computational mechanism design, an important branch of game theory.

I was disturbed by what I saw, but I didn't realize that math academia is actually functioning as a cult

I'm sure you're aware that "cult" is a strong claim that requires a lot of evidence, but I'd also issue a friendly warning that to me at least it immediately set off my "crank" alarm bells. I've seen too many Usenet posters who are sure they have a P=NP (or P≠NP) proof, or a proof that set theory is false, etc., who ultimately claim that because "the mathematical elite" are a cult, no one will listen to them. A ... (read more)

3JonahS
Thanks, yeah, people have been telling me that I need to be more careful in how I frame things. :-)

The latter, but note that that's not necessarily less damaging than active suppression would be.

Yes, this is what I believe. The math community is just unusually salient to me, but I should phrase things more carefully. Most of the people who I have in mind did have preexisting difficulties.

I meant something like "relative to a counterfactual where academia was serving its intended function." People of very high intellectual curiosity sometimes approach academia believing that it will be an oasis and find this not to be at all the case, and that the structures in place are in fact hostile to them. This is not what the government should be supporting with taxpayer dollars.

What are your own interests?

I distinctly remember having points taken off of a physics midterm because I didn't show my work. I think I dropped the exam in the waste basket on the way out of the auditorium.

I've always assumed that the problem is threefold: generating a formal proof is NP-hard, getting the right answer via shortcuts can include cheating, and the faculty's time is limited. Professors/graders do not have the capacity to rigorously demonstrate to themselves that the steps a student has written down actually pinpoint the unique answer. Without access to the student's ... (read more)

We have probabilistic models of the weather: ensemble forecasts. They're fairly accurate. You can plan a picnic using them. You cannot use probabilistic models to predict the conversation at the picnic (beyond that it will be about "the weather", "the food", etc.).

What I mean by a computable probability distribution is that it's tractable to build a probabilistic simulation that gives useful predictions. An uncomputable probability distribution is one for which building such a simulation is intractable. Knightian Uncertainty is a good name for th... (read more)
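
A minimal sketch of what "tractable probabilistic simulation" means here (toy dynamics, every number assumed):

    import random

    def ensemble_forecast(step, state, jitter, n=100):
        """Run the same toy dynamics from perturbed initial conditions;
        the spread of results is a computable distribution over outcomes."""
        return [step(state + random.gauss(0, jitter)) for _ in range(n)]

    step = lambda x: 1.1 * x + 0.3    # assumed toy 'atmosphere' update rule
    outcomes = ensemble_forecast(step, 2.0, 0.1)
    p_rain = sum(o > 2.5 for o in outcomes) / len(outcomes)  # P(rained-out picnic)

Nothing remotely like this exists for predicting the picnic conversation; that is the sense in which one distribution is computable and the other is not.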

So agentiness is having an uncomputable probability distribution?

0Shmi
I don't know if I would put it this way, just that if you cannot predict someone's or something's behavior with any degree of certainty, they seem more agenty to you.

What's wrong with hive minds? As long as my 'soul' survives, I wouldn't mind being part of some gigantic consciousness.

A hive mind can quickly lose a lot of old human values if the minds continue past the death of individual bodies. Additionally, values like privacy and self-reliance would be difficult to maintain. Also, things we take for granted, like being able to surprise friends with gifts or having interesting discussions while getting to know another person, would probably disappear. A hive mind might be great if it was formed from all your best friends, but joining a hive mind with all of humanity? Maybe after everyone is your best friend...

You are a walking biological weapon: try to sterilize yourself and your clothes as much as possible first, and quarantine yourself until any viruses novel to the 13th century are gone. Try to avoid getting smallpox and any other prevalent ancient disease you're not immune to.

Have you tried flying into a third world nation today and dragging them out of backwardness and poverty? What would make it easier in the 13th century?

If you can get past those hurdles, the obvious benefits are mathematics (Arabic numerals, algebra, calculus) and standardized measur... (read more)

0DavidAgain
"Have you tried flying into a third world nation today and dragging them out of backwardness and poverty? What would make it easier in the 13th century?" I think this is an interesting angle. How comparable are 'backward' nations today with historical nations? Obvious differences in terms of technology existing in modern third world even if the infrastructure/skills to create and maintain it don't. In that way, I suppose they're more comparable to places in the very early middle ages, when people used Roman buildings etc. that they coudn't create themselves. But I also wonder how 13th century government compares to modern governments that we'd consider 'failed states'.

I find myself conflicted about this. I want to preserve my human condition, and I want to give it up. It's familiar, but it's trying. I want the best of both worlds; the ability to challenge myself against real hardships and succeed, but also the ability to avoid the greatest hardships that I can't overcome on my own. The paradox is that solving the actual hardships like aging and death will require sufficient power to make enjoyable hardships (solving puzzles, playing sports and other games, achieving orgasm, etc.) trivial.

I think that one viable appr... (read more)

0Fivehundred
What's wrong with hive minds? As long as my 'soul' survives, I wouldn't mind being part of some gigantic consciousness. Also, another thought- it may take an AI to solve philosophy and the nature of the universe, but it may not be far beyond the capacity of the human brain to understand it. I appreciate the long response.

In your story RC incurs no opportunity cost for planting seed in and tending a less efficient field. The interest rate for lending the last Nth percent of the seed should be a function of the opportunity cost of planting and harvesting the less efficient field, and at some point it crosses 0 and becomes negative. The interest rate drops even more quickly once his next expected yield is more than he can eat or store for more than a single planting season.

If RC is currently in the situation where his desired interest rate is still positiv... (read more)
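
A toy version of that schedule (all numbers assumed): the rate RC should demand for a marginal lent seed is the return he forgoes on his own worst usable field, and it collapses once harvests would overflow what he can eat or store.

    def reservation_rate(marginal_yield, expected_harvest, storage_limit):
        """Minimum interest rate RC demands for lending one more seed.

        marginal_yield:   seeds returned per seed on his worst usable field
        expected_harvest: seeds he expects back from what's already planted
        storage_limit:    most seed he can eat or store before it rots
        """
        if expected_harvest >= storage_limit:
            # Surplus seed is worthless to him; lending at any loss beats rot.
            return -1.0
        return marginal_yield - 1.0   # forgone return on his own planting

    print(reservation_rate(1.25, 60, 100))   # 0.25 -> demand at least 25%
    print(reservation_rate(1.25, 120, 100))  # -1.0 -> negative rates make sense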

In this case, I would argue that a transparent, sandboxed programming language like javascript is probably one of the safer pieces of "software" someone can download. Especially because browsers basically treat all javascript like it could be malicious.

Why would I paste a secret key into software that my browser explicitly treats as potentially malicious? I still argue that trusting a verifiable author/distributor is safer than trusting an arbitrary website, e.g. trusting gpg is safer than trusting xxx.yyy.com/zzz.js regardless of who you thi... (read more)

0Nanashi
I agree with what you said, I just want to clarify something: My original statements were made in a very specific context: here are some ways you can attempt to verify this specific piece of software. At no point did I suggest that any of those methods could be used universally, or that they were foolproof. I grew weary of ChristianKl continually implying this, so I stopped responding to him. So with that said: yes, using this program does require trusting me, the author. If you don't trust me, I have suggested some ways you could verify for yourself. If you aren't able to or it's too much trouble, that's fine; don't use it. As mentioned before, I never meant this to be "PGP for the masses".

Does it change the low bits of white (0xFFFFFF) pixels? It would be a dead giveaway to find noise in overexposed areas of a photo, at least with the cameras I've used.

4Nanashi
It does. Taking a picture of a solid white or black background will absolutely make it easier for an attacker with access to your data to be more confident that steganography is at work. That said, there are some factors that mitigate this risk.

1. The iPhone's camera, combined with its JPG compression, inserts noise almost everywhere. This is far from exhaustive, but in a series of 10 all-dark and 10 all-bright photos, the noise distribution of the untouched photos was comparable to the noise distribution of the decoy. Given that I don't control either of these, I'm not counting on this to hold up forever.

2. The app forces you to take a picture (and disables the flash) rather than use an existing one, lessening the chances that someone uses a noiseless picture. Again though, someone could still take a picture of a solid black wall.

Because of this, the visual decoy aspect of it is not meant as cryptographic protection. It's designed to lessen the chances that you will become a target. Any test designed to increase confidence in a tampered image requires access to your data, which means the attacker has already targeted you in most cases. If that happens, there are other more efficient ways of determining what pictures would be worth attacking.

My original statement was that an attacker cannot confirm your image is a Decoy. They can raise their confidence that steganography is taking place. But unless a distinguishing attack against full AES exists, they can't say with certainty that the steganography at work is Decoy.

TL;DR: the decoy aspect of things is basically security through obscurity. The cryptographic protection comes from the AES encryption.
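
For what it's worth, the mitigation being discussed is easy to state in code (a hypothetical sketch, not how the actual app works): skip saturated pixels when embedding, trading capacity for deniability, with the extractor applying the same skip rule.

    def embed_bits(pixels, bits):
        """LSB-embed bits into 8-bit pixel values, leaving saturated
        (over/underexposed) pixels untouched so no telltale noise appears."""
        out, bit_iter = list(pixels), iter(bits)
        for i, v in enumerate(out):
            if v in (0, 255):            # saturated: skip, don't embed here
                continue
            b = next(bit_iter, None)
            if b is None:                # ran out of payload bits
                break
            out[i] = (v & ~1) | b        # overwrite least significant bit
        return out

    print(embed_bits([255, 128, 64, 255, 200], [1, 0, 1]))
    # -> [255, 129, 64, 255, 201]: the white pixels pass through unchanged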

Such an obvious and easy to exploit vulnerability has existed for 20ish years, undiscovered/unexposed until one person on LW pointed it out?

It's not a vulnerability. I trust gnupg not to leak my private key, not the OpenPGP standard. I also trust gnupg not to delete all the files on my hard disk, etc. There's a difference between trusting software to securely implement a standard and trusting the standard itself.

For an even simpler "vulnerability" in OpenPGP, look up section 13.1.1 in RFC 4880: encoding a message before encryption. Just replace the pseudo-random padding with bits from the private key. Decoding (section 13.1.2) makes no requirements on the content of PS.
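
A toy sketch of that covert channel (EME-PKCS1-v1_5 framing as in RFC 4880 §13.1.1, where PS must merely be nonzero and is never checked on decode):

    import os

    SECRET_KEY = os.urandom(512)  # stand-in for private-key bytes to exfiltrate
    _leaked = 0

    def eme_pkcs1_encode(message, k):
        """Malicious EME-PKCS1-v1_5: 0x00 0x02 || PS || 0x00 || message.

        An honest implementation draws PS from a CSPRNG. This one substitutes
        private-key bytes with the low bit forced to 1 (PS must be nonzero),
        leaking ~7 bits per padding byte while producing valid ciphertexts.
        """
        global _leaked
        ps_len = k - len(message) - 3
        ps = bytes(b | 1 for b in SECRET_KEY[_leaked:_leaked + ps_len])
        _leaked += ps_len
        return b"\x00\x02" + ps + b"\x00" + message

    block = eme_pkcs1_encode(b"session-key-bytes", 64)  # one RSA input block

Only someone who can strip the outer RSA layer ever sees PS, but the point stands: the victim has no way to notice the leak from the ciphertext alone.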

3Nanashi
Thank you by the way for actually including an example of such an attack. The discussion between ChristianKl and myself covered about 10 different subjects so I wasn't exactly sure what type of attack you were describing. You are correct, in such an attack it would not be a question of trusting OpenPGP. It's a general question of trusting software. These vulnerabilities are common to any software that someone might choose to download. In this case, I would argue that a transparent, sandboxed programming language like javascript is probably one of the safer pieces of "software" someone can download. Especially because browsers basically treat all javascript like it could be malicious.
  1. Short of somehow convincing the victim to send you a copy of their message, you have no means of accessing your recently-leaked data.

Public-key signatures should always be considered public when anticipating attacks. Use HMACs if you want secret authentication.
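
For example, with Python's standard library (a minimal sketch; the pre-shared key is assumed to be established out of band):

    import hmac, hashlib

    shared_key = b"\x00" * 32   # assumed pre-shared secret

    def authenticate(message):
        return hmac.new(shared_key, message, hashlib.sha256).digest()

    def verify(message, tag):
        return hmac.compare_digest(tag, authenticate(message))

    # Unlike a public-key signature, the tag can only be created *or checked*
    # by key holders, so publishing it leaks nothing usable to third parties.
    tag = authenticate(b"Secure comment")
    assert verify(b"Secure comment", tag)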

  2. That leaked data would be publicly available. Anyone with knowledge of your scheme would also be able to access that data. Any encryption would be worthless because the encryption would take place client-side and all credentials thus would be exposed to the public as well.

You explicitly m... (read more)

3Nanashi
Well shit. This is the third time I've had to re-type this post so forgive the brevity.

1. You are right, but it makes the attack less effective, since it's a phishing attack, not a targeted one. I can't think of an efficient way for an attacker to collect these compromised signatures without making it even more obvious to the victim.

2. This is correct, you could asymmetrically encrypt the data.

3. The intended use is for the user to download the script and run it locally. Serving a compromised copy 10% of the time would just lower the reach of the attack, especially because the visitor can still verify the source code, or verify the output of the signature.

4. Even if you cut the size of the private key in half, the signature would still be 5x longer than a standard PGP signature, and the fact that subpacket 20 has been padded with a large amount of data would be immediately visible to the victim upon verifying their own signature. (Note that I didn't include a verification tool, so the visitor would have to do that on their own trusted software.)

NOTE: lesswrong eats blank quoted lines. Insert a blank line after "Hash: SHA1" and "Version: GnuPG v1".

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Secure comment
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQgEBAEBAgduBQJVLfkkxqYUgAAAAAAKB1NzZWNyZXRAa2V5LS0tLS1CRUdJTiBQ
R1AgUFJJVkFURSBLRVkgQkxPQ0stLS0tLXxWZXJzaW9uOiBHbnVQRyB2MXx8bFFI
WUJGVXQ5WU1CQkFESnBtaGhjZXVqSHZCRnFzb0ErRnNTbUtCb3NINHFsaU9ibmFH
dkhVY0ljbTg3L1IxZ3xYNFJURzFKMnV4V0hTeFFCUEZwa2NJVmtNUFV0dWRaQU56
RVFCQXNPR3VUQW1WelBhV3ZUcURNMGRKbHEzTmdNfG1Edkl2a1BJeHBoZm1KTW1L
... (read more)

I can imagine how decoherence explains why we only experience descent along a single path through the multiverse-tree instead of experiencing superposition, but I don't think that's sufficient to claim that all consciousness requires decoherence.

An interesting implication of Scott's idea is that consciousness is timeless, despite our experience of time passing. For example, put a clock and a conscious being inside Schrödinger’s box and then either leave it in a superposition forever or open it at some point in the future. If we don't open the box, in the... (read more)

0Shmi
Note that decoherence is not an absolute. It's the degree of interaction/entanglement of one system with another, usually much larger system. Until you interact with a system, you don't know anything about it. Not necessarily, "simply" emitting photons in an irrecoverable way would be sufficient for internal conscious experience. Of course the term "emitting photons" is only defined with respect to a system that can interact with these photons. Maybe it's a gradual process where the degree of consciousness rises with the odds of emission, even if there is no one to measure it. Or something.

https://tools.ietf.org/html/rfc4880#section-5.2.3.1 has a list of several subpackets that can be included in a signature. How many people check to make sure the order of preferred algorithms isn't tweaked to leak bits? Not to mention just repeating/fudging subpackets to blatantly leak binary data in subpackets that look "legitimate" to someone who hasn't read and understood the whole RFC.

7Nanashi
Remember that I did not invent the PGP protocol. I wrote a tool that uses that protocol. So, I don't know if what you are suggesting is possible or not. But I can make an educated guess. If what you are suggesting is possible, it would render the entire protocol (which has been around for something like 20 years) broken, invalid and insecure. It would undermine the integrity of vast untold quantities of data. Such a vulnerability would absolutely be newsworthy. And yet I've read no news about it. So of the possible explanations, what is most probable?

1. Such an obvious and easy to exploit vulnerability has existed for 20ish years, undiscovered/unexposed until one person on LW pointed it out?

2. The proposed security flaw sounds like maybe it might work, but doesn't.

I'd say #2 is more probable by several orders of magnitude.
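
For concreteness, a toy version of the preference-ordering channel described above (hypothetical encoder; nothing here breaks OpenPGP's cryptography, it just hides bits in metadata an implementation is free to choose):

    from math import factorial, log2

    ALGOS = [1, 2, 3, 7, 8, 9, 10, 11]  # e.g. symmetric-algorithm IDs to "prefer"

    def encode_in_ordering(value, items):
        """Emit the value-th lexicographic permutation as a 'preference order'.

        A permutation of n items carries log2(n!) bits without altering the
        validity of a single signed byte.
        """
        items, out = list(items), []
        for i in range(len(items), 0, -1):
            f = factorial(i - 1)
            out.append(items.pop(value // f))
            value %= f
        return out

    print(log2(factorial(len(ALGOS))))       # ~15.3 leakable bits per subpacket
    print(encode_in_ordering(31337, ALGOS))  # an innocuous-looking ordering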

Suppose that instead of having well-defined actions AIXI only has access to observables and its reward function. It might seem hopeless, but consider the subset of environments containing an implementation of a UTM which is evaluating T_A, a Turing machine implementing action-less AIXI, in which the implementation of the UTM has side effects in the next turn of the environment. This embeds AIXI-actions as side effects of an actual implementation of AIXI running as T_A on a UTM in the set of environments with observables matching those that the abstract A... (read more)

5So8res
Thanks for the comments, this is an interesting line of reasoning :-)

An AIXI that takes no actions is just a Solomonoff inductor, and this might give you some intuition for why, if you embed AIXI-without-actions into a UTM with side effects, you won't end up with anything approaching good behavior. In each turn, it will run all environment hypotheses and update its distribution to be consistent with observation -- and then do nothing. It won't be able to "learn" how to manage the "side effects"; AIXI is simply not an algorithm that attempts to do any such thing.

You're correct, though, that examining a setup where a UTM has side effects (and picking an algorithm to run on that UTM) is indeed a way to examine the "naturalized" problems. In fact, this idea is very similar to Orseau and Ring's space-time embedded intelligence formalism. The big question here (which we must answer in order to be able to talk about which algorithms "perform well") is what distribution over environments an agent will be rated against.

I'm not exactly sure how you would formalize this. Say you have a machine M implemented by a UTM which has side effects on the environment. M is doing internal predictions but has no outputs. There's this thing you could do which is predict what would happen from running M (given your uncertainty about how the side effects work), but that's not a counterfactual, that's a prediction: constructing a counterfactual would require considering different possible computations that M could execute. (There are easy ways to cash out this sort of counterfactual using CDT or EDT, but you run into the usual logical counterfactual problems if you try to construct these sorts of counterfactuals using UDT, as far as I can tell.)

Yeah, once you figure out which distribution over environments to score against and how to formalize your counterfactuals, the problem reduces to "pick the action with the best future side effects", which throws you directly up against the Ving

Is there also a bias toward the illusion of choice? Some people think driving is safer than flying because they are "in control" when driving, but not when flying. Similarly, I could stay inside a well-grounded building my whole life and avoid ever being struck by lightning, but I can't make a similar choice to avoid all possible threats of terrorism.

The measure of simple computable functions is probably larger than the measure of complex computable functions, and I probably belong to the simpler end of the computable functions.

It's interesting that we had a very similar discussion here minus the actual quantum mechanics. At least intuitively it seems like physical change is what leads to consciousness, not simply the possibility or knowledge of change. One possible counter-argument to consciousness being dependent on decoherence is the following: What if we could choose whether or not, and when, to decohere? For example, what if inside Schrödinger's box is a cat embryo that will be grown into a perfectly normal immortal cat if nucleus A decays, and the box will open if nucl... (read more)

0cameroncowan
I think tying physical change to consciousness is dangerous because that would make things that do not change unconscious, or make things that stay in a permanent state lose their consciousness. Indeed, we know that atoms are always moving, but if we stopped that process, would consciousness cease? If I freeze you so you move very slowly, does that end the consciousness of your being until things speed up again? How does this work within the mind and soul? How could we stop them and end their consciousness? I don't think you can comprehend consciousness without thinking of it as continuous.

Regarding fully homomorphic encryption; only a small number of operations can be performed on FHE variables without the public key, and "bootstrapping" FHE from a somewhat homomorphic scheme requires the public key to be used in all operations as well as the secret key itself to be encrypted under the FHE scheme to allow bootstrapping, at least with the currently known schemes based on lattices and integer arithmetic by Gentry et al.

It seems unlikely that FHE could operate without knowledge of at least the public key. If it were possible to cont... (read more)

My intuition is that a single narrowly focused specialized intelligence might have enough flaws to be tricked or outmaneuvered by humanity, for example if an agent wanted to maximize production of paperclips but was average or poor at optimizing mining, exploration, and research it could be cornered and destroyed before it discovered nanotechnology or space travel and asteroids and other planets and spread out of control. Multiple competing intelligences would explore more avenues of optimization, making coordination against them much more difficult and likely interfering with many separate aspects of any coordinated human plan.

If there is only specialized intelligence, then what would one call an intelligence that specializes in creating other specialized intelligences? Such an intelligence might be even more dangerous than a general intelligence or some other specialized intelligence if, for instance, it's really good at making lots of different X-maximizers (each of which is more efficient than a general intelligence) and terrible at deciding which Xs it should choose. Humanity might have a chance against a non-generally-intelligent paperclip maximizer, but probably less of a chance against a horde of different maximizers.

1Stuart_Armstrong
That is very unclear, and people's politics seems a good predictor of their opinions in "competing intelligences" scenarios, meaning that nobody really has a clue.

The standard Zermelo-Fraenkel axioms have lasted a century with only minor modifications -- none of which altered what was provable -- and there weren't many false starts before that. There is argument over whether to include the axiom of choice, but as mentioned the formal methods of program construction naturally use constructivist mathematics, which doesn't use the axiom of choice anyhow.

Is there a formal method for deciding whether or not to include the axiom of choice? As I understand it, three of the ZF axioms are independent of the rest, and all ... (read more)

1VAuroch
The axiom of choice applies exclusively to infinite sets, and the finite restriction is a consequence of ZF without AC. Since we cannot actually construct infinite sets, infinitesimals, or even irrational numbers, the consequences of ZFC over and above ZF in the real world are negligible at most, and almost certainly nonexistent.

These are not related statements. There are a lot of ways to choose axioms, but I'm not aware of any of them having any consequences in regimes applicable to physics. Any choice of axioms meant to describe the world has to support some basic conclusions like the validity of arithmetic, and that puts great restrictions on what the axioms are, and what they could possibly say about a physical theory.

Geometry was nominally axiomatic for many centuries, but not rigorously axiomatic until the study and development of non-Euclidean geometry, which was the beginning of the rigorous axiomatization of mathematics. Prior to that, statements were frequently taken as axioms on purely intuitionist grounds (in many subfields); it took the demonstration that Euclid's fifth axiom (the parallel postulate) was unnecessary to make mathematicians actually check their assumptions and establish sets of axioms which were consistent.

We only have probabilistic evidence that any formal method is correct. So far we haven't found contradictions implied by the latest and greatest axioms, but mathematics is literally built upon the ruins of old axioms that didn't quite rule out all known contradictions. FAI needs to be able to re-axiomatize its mathematics when inconsistencies are found in the same way that human mathematicians have, while being implemented in a subset of the same mathematics.

Additionally, machines are only probabilistically correct. FAI will probably need to treat its own implementation as a probabilistic formal system.

3VAuroch
The standard Zermelo-Fraenkel axioms have lasted a century with only minor modifications -- none of which altered what was provable -- and there weren't many false starts before that. There is argument over whether to include the axiom of choice, but as mentioned the formal methods of program construction naturally use constructivist mathematics, which doesn't use the axiom of choice anyhow.

This blatantly contradicts the history of axiomatic mathematics, which is only about two centuries old and which has standardized on the ZF axioms for half of that. That you claim this calls into question your knowledge about mathematics generally.

If there's anything modern computer science is good at, it's getting guaranteed performance within specified bounds out of unreliable probabilistic systems. When absolute guarantees are impossible, there are abundant methods available to guarantee correct outcomes up to arbitrarily high thresholds, which can be as high as you like, and it's quite silly to dismiss it as technically probabilistic. You could, for example, pick the probability that a given baryon would undergo radioactive decay (half-life: 10^32 years or greater), the probability that all the atoms in your pants will suddenly jump, in unison, three feet to the left, or some other extremely-improbable threshold.

Even bacteria? The specific genome that caused the Black Death is potentially extinct, but Yersinia pestis is still around. Divine agents of Moloch if I ever saw one.

2[anonymous]
Its primary hosts are doing great. And it's got nothing on Bacillus subtilis or whatever that cyanobacterium with hundreds of billions per cubic meter of seawater is. And even those haven't 'won' in the sense that sometimes gets discussed around here. They're one form among many. Even bacteria are not the main primary producers in all environments - land plants take that role over a third of the earth's surface (in a constantly shifting ecological arrangement with other things).

If you are confronted with a Pascal's mugger and your induction engine returns "the string required to model the mugger as honest and capable of carrying out the threat is longer than the longest algorithm I can process", you are either forced to use the probability corresponding to the longest string, or to discard the hypothesis outright.

The primary problem with Pascal's Mugging is that the Mugging string is short and easy to evaluate. 3^^^3 is a big number; it implies a very low probability but not necessarily 1 / 3^^^3; so just how outrag... (read more)
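
For scale, Knuth's up-arrow notation shows how a few characters name a number that can never actually be evaluated (a minimal sketch; only the first two levels are computable in practice):

    def up(a, n, b):
        """Knuth's up-arrow: up(a, 1, b) = a**b; each level iterates the one below."""
        if n == 1:
            return a ** b
        result = 1
        for _ in range(b):
            result = up(a, n - 1, result)
        return result

    print(up(3, 1, 3))   # 3^3 = 27
    print(up(3, 2, 3))   # 3^^3 = 3^(3^3) = 7625597484987
    # up(3, 3, 3) is 3^^^3: a power tower of ~7.6 trillion 3s. The description
    # is a few bytes; evaluating the probability penalty it deserves is not.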

0Shmi
OK, I guess I understand your argument that the mugger can construct an algorithm producing very high utility using only N bits. Or that I can construct a whole whack of similar algorithms in response. And end up unable to do anything because of the forest of low-probability high-utility choices. Which are known to be present if only you spend enough time looking for them. So that's why you suggest limiting not (only) the number of states, but (also) the number of steps. I wonder what Eliezer and others think about that.

The experience of sleep paralysis suggests to me that there are at least two components to sleep, paralysis and suppression of consciousness, and one can have one, both, or neither. With both, one is asleep in the typical fashion. With suppression of consciousness only, one might have involuntary movements or, in extreme cases, sleepwalking. With paralysis only, one has sleep paralysis, which is apparently an unpleasant remembered experience. With neither, you awaken typically. The responses made by sleeping people (sleepwalkers and sleep-talkers especially... (read more)
