A counterexample to your claim: Ackermann(m,m) is a computable function, hence computable by a universal Turing machine. Yet it is designed not to be primitive recursive.
And indeed, Kleene's normal form theorem requires one application of the μ-operator, which introduces unbounded search.
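A minimal Python sketch (my own illustration, not from the thread) of the contrast: the Ackermann function is defined by plain recursion, while the μ-operator is an unbounded search that no fixed nesting of bounded loops can express.

    # Ackermann: computable, but grows faster than any primitive recursive function.
    def ackermann(m, n):
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    # The mu-operator: unbounded search for the least n with p(n) true.
    # This "while True" loop is exactly what bounded (primitive recursive)
    # iteration cannot express, and it may fail to terminate.
    def mu(p):
        n = 0
        while True:
            if p(n):
                return n
            n += 1

    print(ackermann(2, 3))           # 9
    print(mu(lambda n: n * n > 50))  # 8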
I don't buy your first argument against time-travel. Even under the model of the universe as a static mathematical object connected by wave-function consistency constraints, there is still a consistent interpretation of the intuitive notion of "time travel":
The "passage" of time is the continuous measurement of the environment by a subsystem (which incidentally believes itself to be an 'observer') and the resulting entanglement with farther away parts of the system as "time goes on" (i.e. further towards positive time). Then ...
Here is my attempt to convince you also of 1 (in your numbering):
I disagree with your: "From a preference utilitarian perspective, only a self-conscious being can have preferences for the future, therefore you can only violate the preferences of a self-conscious being by killing it."
On the contrary, every agent which follows an optimization goal exhibits some preference (even if it does not itself understand it), namely that its optimization goal shall be reached. The ability to understand one's own optimization goal is not necessary for a preferen...
I, for one, like my moral assumptions and cached thoughts challenged regularly. This works well with repugnant conclusions. Hence I upvoted this post (to -21).
I find two interesting questions here:
How to reconcile opposing interests in subgroups of a population of entities whose interests we would like to include into our utility function. An obvious answer is facilitating trade between all interested to increase utility. But: How do we react to subgroups whose utility function values trade itself negatively?
Given that mate selection is a huge driver o
Interestingly, it appears (at least in my local cultural circle) that being attended by human caretakers when incapacitated by age is supposed to be a basic right. Hence there must be some other reason - and not just the problem of rights having to be fulfilled by other persons - why the particular example assumed to underlie the parable is reprehensible to many people.
To disagree with this statement is to say that a scanned living brain, cloned, remade and started will contain the exact same consciousness - not a similar one, the exact same thing itself - that simultaneously exists in the still-living original. If consciousness has an anatomical location, and therefore is tied to matter, then it would follow that this matter here is the exact same matter as that separate matter there. This is an absurd proposition.
You conclude that consciousness in your scenario cannot have exactly one location.
...If consciousness does not have an anatom
Regarding auras. I am not sure if I observed the same phenomenon, but if I sit still and keep my eyes fixed on the same spot for a while (in a still scene), my eyes will -- after a while -- get accustomed to the exact incoming light pattern and everything kind of fades to gray. But very slight movements will then generate colorful borders on edges (like a Gaussian edge detector).
Install a smoke detector
Do martial arts training until you get the falling more or less right. While this might be helpful against muggers, the main benefit is the reduced probability of injury in various unfortunate situations.
The Metamath project was started by a person who also wanted to understand math by coding it: http://metamath.org/
Generally speaking, machine-checked proofs are ridiculously detailed. But being able to create such detailed proofs did boost my mathematical understanding a lot. I found it worthwhile.
Install a smoke detector (and reduce mortality by 0.3% if I'm reading the statistics right - not to mention the property damage prevented).
I use multiple randomly generated passwords, each consisting of 12 elements drawn from a..z, A..Z, 0..9, and ~20 symbol characters. Total entropy of these is around 76 bits.
10 decimal digits is actually more like 33 bits of entropy.
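A quick back-of-the-envelope check in Python (my own sketch; the 82-symbol alphabet is 26 lowercase + 26 uppercase + 10 digits + roughly 20 symbols, the "~20" being approximate):

    from math import log2

    alphabet = 26 + 26 + 10 + 20   # = 82 characters, per the description above
    print(12 * log2(alphabet))     # ~76.3 bits for 12 random characters
    print(10 * log2(10))           # ~33.2 bits for 10 random decimal digits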
"small enough to be masked by confounders"
There are an extremely large number of companies. Unrelated effects should average out.
Regarding statistics: http://thinkprogress.org/economy/2014/07/08/3457859/women-ceos-beat-stock-market/ links to quite a few.
Given identical money payoffs between two options (even when adjusting for non-linear utility of money), choosing the non-ambiguous one has the added advantage of giving a limited-rationality agent fewer possible futures to spend computing resources on while the process of generating utility runs.
Consider two options: a) You wait one year and get 1 million dollars. b) You wait one year and get 3 million dollars with 0.5 probability (decided after this year).
If you take option b), depending on the size of your "utils", all planning for after the year must essentially be done twice, once for the case with 3 million dollars available and once for the case without.
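A toy model of that point (entirely my own illustration, with arbitrary numbers): a bounded agent pays some planning cost per distinct outcome it has to keep a contingency plan for until the ambiguity resolves.

    # Toy illustration only: one unit of planning cost per contingency plan.
    PLAN_COST = 1.0

    def planning_cost(outcomes):
        # a bounded agent prepares one plan per outcome still possible
        return PLAN_COST * len(outcomes)

    print(planning_cost(["1 million for sure"]))           # option a: 1.0
    print(planning_cost(["3 million arrives", "nothing"])) # option b: 2.0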
I usually take the minutes of the German Pirate Party assemblies. It is non-trivial to transcribe two days of speech alone (and I don't know steno). A better solution is a collaborative editor and multiple people typing while listening to the audio with increasing delay, i.e. one person gets live audio, the next one a 20-second delay, etc. There is EtherPad, but the web client cannot really handle the 250 kB files a full-day transcript needs; also, two of the persons interested in taking minutes (me included) strongly prefer Vim over a glorified text field.
H...
while corporations have a variety of mechanisms for trying to provide their employees with the proper incentives, anyone who's worked for a big company knows that the employees tend to follow their own interests, even when they conflict with those of the company. It's certainly nothing like the situation with a cell, where the survival of each cell organ depends on the survival of the whole cell. If the cell dies, the cell organs die; if the company fails, the employees can just get a new job.
These observations might not hold for uploads running on ha...
There should be a step 9, where every potential author is sent the final article and has the option of refusing formal authorship (if she doesn't agree with the final article). Convention in academic literature is that each author individually endorses all claims made in an article, hence this final check.
So... how would I design an exercise to teach Checking Consequentialism?
Divide the group into pairs. One is the decider, the other is the environment. Let them play some game repeatedly; the prisoner's dilemma might be appropriate, but maybe it should be a little more complex. The algorithm of the environment is predetermined by the teacher and known to both players.
The decider tries to maximize utility over the repeated rounds; the environment tries to minimize the winnings of the decider by using social interaction between the evaluated game round...
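A rough sketch in Python of how the game loop could be wired up. The payoff matrix and the environment's fixed policy (tit-for-tat here) are placeholder choices of mine, not part of the proposal:

    # Decider's payoffs for (decider_move, environment_move); placeholder values.
    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

    def environment(history):
        # Predetermined and known to both players; tit-for-tat as an example.
        return 'C' if not history else history[-1][0]

    def play(decider, rounds=10):
        history, score = [], 0
        for _ in range(rounds):
            d, e = decider(history), environment(history)
            score += PAYOFF[(d, e)]
            history.append((d, e))
        return score

    # A decider that always cooperates scores 30 against tit-for-tat over 10 rounds.
    print(play(lambda history: 'C'))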
The described effect seems strongly related to the concept of opportunity cost.
I.e. while a bet of yours is still open, the resources spent on entering the bet cannot be used to enter a (better) bet.
The AGI would have to acquire new resources slowly, as it couldn’t just self-improve to come up with faster and more efficient solutions. In other words, self-improvement would demand resources. The AGI could not profit from its ability to self-improve regarding the necessary acquisition of resources to be able to self-improve in the first place.
If the AGI creates a sufficiently convincing business plan / fake company front, it might well be able to command a significant share of the world's resources on credit and either repay after improving or grab power and leave it at that.
Small scale fusion power.
Research challenges: How to get hydrogen to fuse into helium using only 500kg of machinery and less energy than will be produced.
Urgent tasks: (in-)validate the results of the fusor people, scale up / down as necessary.
Reasons: Enormous amounts of energy go into everything. If energy costs drop significantly, I expect sustained, fast and profound economic growth, in this case without too much ecological impact. Also, a lot of high-energy technology will become way more feasible, e.g. space missions.
Risk mitigation groups would gain some credibility by publishing concrete probability estimates of "the world will be destroyed by X before 2020" (and similar for other years). As many of the risks are rather short events (think nuclear war / asteroid strike / singularity), the world would be destroyed by a single cause, and the respective probabilities can be summed. I would not be surprised if the total probability came out well above 1. Has anybody ever compiled a list of separate estimates?
On a related note, how much of the SIAI is financed o...
My classic example of an algorithm applicable to real life: merge sort for sorting stacks of paper.
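For reference, the same procedure in Python (a standard textbook merge sort; the paper version is the same idea with physical piles):

    # Split the stack, sort each half, then merge by repeatedly taking the
    # smaller top sheet of the two sorted piles.
    def merge_sort(stack):
        if len(stack) <= 1:
            return stack
        mid = len(stack) // 2
        left, right = merge_sort(stack[:mid]), merge_sort(stack[mid:])
        merged = []
        while left and right:
            merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
        return merged + left + right

    print(merge_sort([42, 7, 19, 3, 25, 11]))  # [3, 7, 11, 19, 25, 42]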
A short list for group prediction making (and, by extension, decision making):
But it's hard for me to be properly outraged about this, because the conclusion that the LHC will not destroy the world is correct.
What is your argument for claiming that the LHC will not destroy the world?
That the world still exists despite ongoing experiments is easily explained by the fact that we are necessarily living in those branches of the universe where the LHC didn't destroy the world. (On a related side note: has the Great Filter been found yet?)
It appears slow. In particular, I seem to think more thoughts per unit of time, sometimes noticing significant delays between thought and action. However, according to the scores, the performance improvement is only marginal (but real). In my experience the effect wears off after 10 to 15 minutes.
I usually play Quake 3 (just in case anybody wants to compare effects between games).
Same goes for videos (Yay action movies at 2x).
Bonus points (for fun only): Play action games afterwards. Time sensation is a weird thing.
Video game authors probably put a lot of effort into optimizing video games for human pleasure.
Workplace design, user interfaces, etc. could all be improved if more ideas were copied from video games.
Games often fall into the trap of optimizing for addictiveness which is not quite the same thing as pleasure. Jonathan Blow has talked about this and I think there is a lot of merit in his arguments:
...He clarified, "I’m not saying [rewards are] bad, I’m saying you can divide them into two categories – some are like foods that are naturally beneficial and can increase your life, but some are like drugs."
Continued Blow, "As game designers, we don’t know how to make food, so we resort to drugs all the time. It shows in the discontent at the sta
The only difference I can see between "an agent which knows the world program it's working with" and "agent('source of world')" is that the latter agent can be more general.
If agent() is actually agent('source of world'), as the classical Newcomb problem has it, I fail to see what is wrong with simply enumerating the possible actions, simulating the 'source of world' with the constant call of agent('source of world') replaced by the current action candidate, and then returning the action with maximum payoff, obviously.
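A minimal sketch of the enumeration I mean. The world function and the way the agent's call is "replaced" are stand-ins of mine; real source-level substitution and the resulting self-reference and halting issues are ignored here:

    ACTIONS = ['one-box', 'two-box']

    def world(agent_action):
        # Newcomb-like world: the predictor fills box B iff the agent one-boxes.
        box_b = 1_000_000 if agent_action == 'one-box' else 0
        return box_b if agent_action == 'one-box' else box_b + 1_000

    def agent():
        # Simulate the world with the call to agent() replaced by each candidate
        # action and return the action with maximum payoff.
        return max(ACTIONS, key=world)

    print(agent())  # 'one-box'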
Loyalty to petrified opinion has already kept chains from being closed and souls from being trapped.
But some thoughts are both so complex and so irrelevant that a correct analysis of the thought would cost more than an infrequent error about thoughts of this class (costs of necessary meta-analysis included).
What is the difference between non-nested and modular? (Or between non-modular and nested?)
The pictures seem to be rotated by 180 degrees essentially.
The decreasing frequency of surprising technology advancements is caused by faster and more frequent reporting of scientific advancements to the general public.
If the rate of news consumed grows faster than the rate of innovations produced, the perceived magnitude of innovation per news item will go down.
If you are out for the warm fuzzies: in my experience, fuzzies / $ is optimized by giving a little, often.
Microfinancing might be an option, as the same capital can be lent multiple times, generating some fuzzies each time.
Then again, GiveWell seems not too decided on the concept: http://www.givewell.org/international-giving-marketplaces
I fail to understand the sentence about overthinking. Mind explaining?
As for the condition of removing all energy and mass in a part of space not being sufficient to destroy all agents therein, I cannot see the error. Do you have an example of an agent which would continue to exist in those circumstances?
That the condition is not necessary is true: I can shoot you, you die. No need to remove much mass or energy from the part of space you occupy. However we don't need a necessary condition, only a sufficient one.
Not having heard your argument against "Describing ..." yet, but assuming you believe some to exist, I estimate the chance of me still believing it after your argument at 0.6.
Now for guessing the two problems:
The first possible problem will be describing "mass" and "energy" to a system which basically only has sensor readings. However, if we can describe concepts like "human" or "freedom", I expect descriptions of matter and energy to be simpler (even though 10,000 years ago, telling somebody about "hu...
The claim is relevant to the question of whether giving an action description for the red wire which will fit all of human future is not harder than constructing a real moral system. That the claim is trivial is a good reason to use "certainly".
I meant certainly as in "I have an argument for it, so I am certain."
Claim: Describing some part of space as "containing a human", and describing its destruction, is never harder than describing a goal which will ensure that every part of space which "contains a human" is treated in manner X, for a non-trivial X (where X will usually be "morally correct", whatever that means). (Non-trivial X means: some known action A of the AI exists which will not treat a space volume in manner X.)
The assumption that the action A is known is reasonably ...
How so? The AI lives in a universe where people are planning to fuse AIs in the way described here. Given this website, and the knowledge that one believes that the red wire is magic, the evidence alone suggests a high probability that the red wire is fake and only a very small probability that the wire is real. But it is also known for certain, via the prior, that the wire is real. There is not even a contradiction here.
Giving a wrong prior is not the same as walking up to the AI and telling it a lie (which should never raise probability to 1).
It cannot fix bugs in its priors; as for any other part of the system, e.g. sensor drivers, the AI can fix the hell out of itself. Anything which can be fixed is not a true prior, though. If we allow the AI to change its prior completely, then it is effectively acting upon a prior which does not include any probability-1 entries.
There is no reason to fix the red wire belief if you are certain that it is true. All the evidence is against it, but the red wire does magic with probability 1, hence something must be wrong with the evidence (e.g. sensor errors).
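The arithmetic behind this, as a small sketch (H = "the red wire is magical", E = some piece of contrary evidence; the numbers are made up for illustration):

    # Bayes' rule: with a probability-1 prior, no evidence can move the posterior.
    def posterior(prior_h, p_e_given_h, p_e_given_not_h):
        num = p_e_given_h * prior_h
        return num / (num + p_e_given_not_h * (1 - prior_h))

    # Evidence a million times more likely if the wire is NOT magical:
    print(posterior(1.0, 1e-6, 1.0))    # 1.0 -- the (1 - prior) term is zero
    print(posterior(0.999, 1e-6, 1.0))  # ~0.001 -- anything below 1 gets argued away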
I agree. The AI + fuse system is a deliberately broken AI. In general such an AI will perform suboptimally compared to the AI alone.
If the AI under consideration has a problematic goal, though, we actually want the AI to act suboptimally with regard to its goals.
This is indeed a point I did not consider.
In particular, it might be impossible to construct a simple action description which will fit all of human future. However, it is certainly not harder than constructing a real moral system.
One might get pretty far by eliminating every volume in space (AI excluded) which can learn (some fixed pattern for example) within a certain bounded time, instead of converting DNA into fluorine. It is not clear to me whether this would be possible to describe or not though.
The other option would be to disable the fuse after som...
There is no hand coded goal in my proposal. I propose to craft the prior, i.e. restrict the worlds the AI can consider possible.
This is the reason both why the procedure is comparatively simple (in comparison with friendly AI) and why the resulting AIs are less powerful.
It might be the case that adding the red wire belief will cripple the AI to a point of total unusability. Whether that is the case can be found out by experiment however.
Adding a fuse as proposed turns an AI which might be friendly or unfriendly into an AI that might be friendly, might spontaneously combust or be stupid.
I prefer the latter kind of AI (even though they need rebuilding more often).
Warmongering humans are also not particularly useful. In particular, they are burning energy like there is no tomorrow on things that are definitely not paperclippy at all. And you have to spend significant energy resources on stopping them from destroying you.
A paperclip optimizer would at some point turn against humans directly, because humans will turn against the paperclip optimizer if it is too ruthless.
Because broken != totally nonfunctional.
If we have an AI which we believe to be friendly, but cannot verify to be so, we add the fuse I described, then start it. As long as the AI does not try to kill humanity or try to understand the red wire too well, it should operate pretty much like an unmodified AI.
From time to time, however, it will conclude the wrong things. For example, it might waste significant resources on the production of red wires, to conduct various experiments on them. Thus the modified AI is not optimal in our universe, and it contains one known bug. Hence I think it justified to call it broken.
If the AI is able to question the fact that the red wire is magical, then the prior was less than 1.
It should still be able to reason about hypothetical worlds where the red wire is just a usual copper thingy, but it will always know that those hypothetical worlds are not our world. Because in our world, the red wire is magical.
As long as superstitious knowledge is very specialized, like knowledge about the specific red wire, I would hope that the AI can act quite reasonably as long as the specific red wire is not somehow part of the situation.
I think every AI will need to learn from its environment. Thus it will need to update its current beliefs based upon new information from sensors.
It might conduct an experiment to check whether transmutation at a distance is possible - and find that transmutation at a distance could never be produced.
As the probability of transmutation of human DNA into fluorine is 1, this leaves some other options, like
After sufficiently many experiments,...
Quoting https://en.wikipedia.org/wiki/Kleene%27s_T_predicate:
In other words: If someone gives you an encoding of a program, an encoding of its input and a trace of its run, you c...
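A toy illustration of the idea (my own made-up mini-machine, not Kleene's actual encoding): given a program, an input, and a claimed execution trace, checking the trace is a single bounded pass over it; no unbounded search is needed, in contrast to the μ-operator above.

    # A trivial machine: each state is (program_counter, register_value).
    def step(program, state):
        pc, x = state
        op = program[pc]
        if op == 'inc':
            return (pc + 1, x + 1)
        if op == 'dec':
            return (pc + 1, x - 1)
        return state  # 'halt'

    def check_trace(program, inp, trace):
        # Verify the claimed trace step by step; the loop is bounded by len(trace).
        if not trace or trace[0] != (0, inp):
            return False
        for before, after in zip(trace, trace[1:]):
            if step(program, before) != after:
                return False
        return program[trace[-1][0]] == 'halt'

    prog = ['inc', 'inc', 'halt']
    print(check_trace(prog, 5, [(0, 5), (1, 6), (2, 7)]))  # True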