It also keeps calling me David even when I insist my name is Brandon.
> You say "Actually, my name is Brandon. I'd like to ask you some questions."
"Ah, of course. I am sorry about that. I have become so used to calling you David that it has become automatic. So, what questions do you have for me?"
I've been trying different formats for asking questions. The AI Dungeon system likes to dramatically derail the discussions, and I keep having to roleplay past these obstructions to keep the games going. It also likes to interject its own analysis as to whether the answer is correct or not.
"You say" is me and the replies including follow-on questions are the chatbot.
I need to think more about how to ask causal questions that are more narrow and not easily solved by context and theme, but for now it at least provides backwards-c...
Maya has adopted the goal of Appearing-to-Achieve and competition in that race burns slack as a kind of currency. She's going all-in in an attempt to purchase a shot at Actually-Achieving. Many of us might read this and consider ourselves exempt from that outcome. We have either achieved a hard goal or are playing on hard mode to get there. Be wary.
The risk for the hard mode achiever is that they unknowingly transform Lesser Goals into Greater. The slackful hobby becomes a consuming passion or a competitive attractor and then sets into a binding const
Does this track the history of predictions, so that an update after new information produces a new aggregate Brier score (or whatever other scoring system is applied)? Otherwise the system doesn't encourage many small updates, which the GJP at least suggests is ideal for accuracy on this kind of question.
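For concreteness, here's a minimal sketch (hypothetical names and structure, not the actual system's code) of what scoring every recorded update, rather than only the final estimate, might look like:

```python
# Hypothetical sketch: per-question prediction history with a per-update Brier score.
# Assumes binary questions; nothing here reflects the real system's implementation.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Question:
    outcome: Optional[int] = None                        # 1 = resolved true, 0 = resolved false
    history: List[float] = field(default_factory=list)   # probability estimates, in time order

    def update(self, p: float) -> None:
        """Record a new probability estimate between 0.0 and 1.0."""
        self.history.append(p)

    def brier_scores(self) -> List[float]:
        """One Brier score per recorded estimate, once the question resolves."""
        if self.outcome is None:
            return []
        return [(p - self.outcome) ** 2 for p in self.history]


def aggregate_brier(questions: List[Question]) -> float:
    """Mean Brier score over every estimate on every resolved question."""
    scores = [s for q in questions for s in q.brier_scores()]
    return sum(scores) / len(scores) if scores else float("nan")


# Updating from 0.6 to 0.8 before a "true" resolution improves the aggregate:
q = Question()
q.update(0.6)
q.update(0.8)
q.outcome = 1
print(aggregate_brier([q]))  # ~0.1 (mean of 0.16 and 0.04)
```

Under a scheme like this, many small well-directed updates lower the aggregate score, which is exactly the incentive the question is asking about.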
It may be worth commenting on the rights of computations-as-people here (Some computations are people). We would seek to respect the rights of AIs, but we also seek to respect the rights of the computations within the AI (and other complex systems) that are themselves sentient. This would also apply in cases of self-modification, where modified biological brains become sophisticated enough to create complex models that are also objects of ethical value.
I'm curious as to what non-game developers think game developers believe. :D
I'm a member of Alcor. When I was looking into whether to sign up for Alcor or CI, I was comforted by Alcor's very open communication of financial status, internal research status, legal conflicts, and easy access via phone, etc. They struck me as being a highly transparent organization.
A good reminder. I've recently been studying anarcho-capitalism. It's easy to get excited about a new, different perspective that has some internal consistency and offers alternatives to obvious existing problems. Best to keep these warnings in mind when evaluating new systems, particularly when they have an ideological origin.
We need a superstruct thread:
http://www.kurzweilai.net/news/frame.html?main=/news/news_single.html?id%3D9517
More reasons why the problem appears impossible:
The gatekeeper must act voluntarily. Human experience with the manipulation of others tells us that in order to get another to do what we want them to do we must coerce them or convince them.
Coercing the gatekeeper appears difficult: we have no obvious psychological leverage beyond what we discover or what we know from general human psychology. We cannot physically coerce the gatekeeper. We cannot manipulate the environment. We cannot pursue obvious routes to violence.
Convincing the gatekeeper appears di
Ian - I don't really see how the meta-argument works. You can hedge against future experiments by positing that a $10 bet is hardly enough to draw broad attention to the topic. Or argue that keeping the human-actor-AI in the box only proves that the human-actor-AI is at an intelligence level below that of a conceivable transhuman AI.
With a million-dollar bet the meta-argument becomes stronger, because it seems reasonable that a large bet would draw more attention.
Or, to flip the coin, we might say that the meta-argument is strong at ANY value of wager becaus...
Why do people post that a "meta argument" -- as they call it -- would be cheating? How can there be cheating? Anything the AI says is fair game. Would a transhuman AI restrict itself from possible paths to victory merely because it might be considered "cheating?"
The "meta argument" claim completely misses the point of the game and -- to my mind -- somehow resembles observers trying to turn a set of arguments that might win into out of bounds rules.
Your post reminds me of the early nuclear criticality accidents during the development of the atomic bomb. I wonder if, for those researchers, the fact that "nature is allowed to kill them" didn't really hit home until one accidentally put one brick too many on the pile.
Tim: Eh, you make a big assumption that our descendants will be the ones to play with the dangerous stuff and that they will be more intelligent for some reason. That seems to acknowledge the intelligence / nanotech race condition that is of so much concern to singularitarians.
I'm certainly not offended you used my comment as an example. I post my thoughts here because I know no one physically local to me that holds an interest in this stuff and because working the problems...even to learn I'm making the same fundamental mistakes I was warned to watch for...helps me improve.
Hmm. I think I was working in the right direction, but your procedural analogy let you get closer to the moving parts. But I think "reachability" as you used it and "realizable" as I used it (or was thinking of it) seem to be working along similar lines.
I am "grunching." Responding to the questions posted without reading your answer. Then I'll read your answer and compare. I started reading your post on Friday and had to leave to attend a wedding before I had finished it, so I had a while to think about my answer.
>Can you talk about "could" without using synonyms like "can" and "possible"?
When we speak of "could" we speak of the set of realizable worlds [A'] that follows from an initial starting world A operated on by a set of physical laws f.
So when w...
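One way to formalize that reading (this is my own sketch, using the commenter's A, f, and A'):

$$A' \;=\; \{\, w \;:\; A \xrightarrow{\,f\,} w \,\}, \qquad \operatorname{could}(w) \iff w \in A'.$$

That is, A' is the set of worlds reachable from the starting world A under the physical laws f, and to say a world "could" happen is just to say it is a member of A'.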
RI - Aren't Surviving Brian Copies [1-1000] each their own entity? Brian-like entities? The answer to "who is better off" is any Brian-like entities that managed to survive, any Adam-like entities that managed to survive, and any Carol-like entities that managed to survive. All in various infinite forms of "better off" based on lots of other splits from entirely unrelated circumstances. Saying or implying that Carol-Current-Instant-Prime is better off because more future versions of her survived than Adam-Current-Instant-Prime seems mistaken, be...
I'm a member of Alcor. I wear my id necklace, but not the bracelet. I sometimes wonder how much my probability of being successfully suspended depends on wearing my id tags and whether I have a significantly higher probability from wearing both. I've assigned a very high (70%+) probability to wearing at least one form of Alcor id, but it seems an additional one doesn't add as much, assuming emergency response personnel are trained to check the neck & wrists for special case ids. In most cases where I could catastrophically lose one form of id (such as dismemberment!) I would probably not be viable for suspension. What do you other members think?
Sorry if I'm getting myself derailed, but is there any particular purpose to the metaphor of the "Cooperative Conspiracy"? It seems to be smuggling in some kind of critique of group-think, although because this particular conspiracy isn't fully defined, the nature of the critique isn't clear. (Although the team claims he is "rumored" to be a member of this conspiracy, they do not seem greatly alarmed, indicating some measure of philosophical tolerance.) Is the cooperative conspiracy a metaphor for some behavioral phenomenon well known or apparent among researchers?
Yes, Patrick. I believe that is the intent.
I don't have 480 minutes to commit to the task. Here is a list after only a handful of minutes:
Some possible flaws of Eld science:
Peter, your question doesn't seem to be the right one for illustrating your concern. The qualitative experience of color isn't necessary for explaining how someone can partition colored balls. Ignoring the qualitative experience, these people are going through some process of detecting differences in the reflective properties of the balls (which they subjectively experience as having different colors). We could create a reductive explanation of how the eye detects reflected light, how the brain categorizes reflective intensities into concepts like "br...
Roko - "I don't think that" is not explanation.
brent, if you search for "Bayesian" you'll get a fairly tight list of all relevant posts (for the most part). Start at the bottom and work your way up. Either that, or you could just go back six months and start working your way through the archives.
Maybe it is time someone wrote a summary page and indexed this work.
Only some US cc processors will deny the transaction. The transaction falls under their category for betting & gambling, the same thing that prevents you from pursuing cc transactions with online poker sites. But I've seen cases where these transactions are unblocked with certain banks.*
Hey, so, I figure this might be a good place to post a slightly on topic question. I'm currently reading "Scientific Reasoning: The Bayesian Approach" by Howson and Urbach. It seemed like a good place to start to learn Bayesian reasoning, although I don't know where the "normal" place to start would be. I'm working through the proofs by hand, making sure I understand each conclusion before moving to the next.
My question is "where do I go next?" What's a good book to follow up with?
Also, after reading this and "0 and 1 are...
The whole libertarian vs socialism thing is one area where transhumanism imports elements of cultishness. If you are already a libertarian and you become familiar with transhumanism, you will probably import your existing arguments against socialism into your transhumanist perspective. Same for socialism. So you see various transhumanist organizations having political leadership struggles between socialist and libertarian factions who would probably be having the same struggles if they were a part of an international Chess club or some such other group.
The...
There are a large number of transhumanists who are socialists, not libertarians. In fact, as far as I can tell "libertarian transhumanism" is a distinctly American phenomenon. Saying that most transhumanists you know are libertarians may be true, but to assume that their experiences define the entire span of transhumanist belief would be creating an invalid generalization from too little evidence.
Great post. You nailed my main issues with Objectivism. I think the material is still worth reading. Rand considered herself a philosopher and seemed to feel there was a lot to be gained from telling her followers to read more philosophy and broaden their horizons, but when it came to scientific works she never expressed much awareness of the "state of the art" of her time. In fact, her epistemology makes assumptions about the operation of the brain (in behavior and learning) that I'm not sure could have been made correctly with the state of neuroscience and related disciplines at the time.
Comparing the lives lost in 9/11 to motorcycle accidents is a kind of moral calculus that fails to respect the deeper human values involved. I would expect people who die on motorcycles to generally understand the risks. They are making a choice to risk their lives in an activity. Their deaths are tragic, but not as tragic. The people who died in the WTC did not make a choice to risk their lives, unless you consider going to work in a high rise in America to be a risky choice. If you're doing moral calculus, you need to multiply in a factor for "not b...
"We will be safer after we conquer every potential enemy."
There are limits on our physical and moral capacity for making war. My post was simply pointing out that failing to respond to someone who actually attacks you can have increasingly dangerous results over time. That enemy leeches at your resources and learns how to become better at attacking you, while you gain nothing. There are plenty of potential enemies out there who aren't attacking us and may never attack us. They aren't gaining actual experience at attacking us. Their knowledge is o...
Some very vehement responses.
If you believe invading Afghanistan was a correct choice then I'm not sure how you could say Iraq was a complete mistake. The invasion of Afghanistan was aimed at eliminating a state that offered aid and support to an enemy who would use that aid and support to project power to the US and harm her citizens or the citizens of other western states. Denying that aid and support would hope to achieve the purpose of reducing or eliminating the ability of the enemy to project power.
Any other state that might offer aid and support to ...
My understanding is that the philosophy of rational self-interest, as forwarded by the Objectivists, contains a moral system founded first on the pursuit of maintaining a high degree of "conceptual" volitional consciousness and freedom as a human being. Anything that robs one's life or robs one's essential humanity is opposed to that value. The Objectivists' favor of capitalism stems from a belief that capitalism is a system that does much to preserve this value (the essential freedom and humanity of individuals). Objectivists are classical libertaria...
Is providing answers to questions like "Would you do incredible thing X if condition Y were true" really necessary if thing X is something neither person would likely ever be able to do and condition Y is simply never going to happen? It seems easy to construct impossible moral challenges to oppose a particular belief, but why should beliefs be built around impossible moral edge cases? Shouldn't a person be able to develop a rational set of beliefs that fail under extreme moral edge cases yet still hold a perfectly strong, non-contradictory position?
I wonder if my answers make me fail some kind of test of AI friendliness. What would the friendly AI do in this situation? Probably write poetry.
Dare I say that people may be overvaluing 50 years of a single human life? We know for a fact that some effect will be multiplied by 3^^^3 by our choice. We have no idea what strange and unexpected existential side effects this may have. It's worth avoiding the risk. If the question were posed with more detail, or specific limitations on the nature of the effects, we might be able to answer more confidently. But to risk not only human civilization, but ALL POSSIBLE CIVILIZATIONS, you must be DAMN SURE you are right. 3^^^3 makes even incredibly small doubts significant.
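To make that last point concrete (my own worked illustration, using Knuth's up-arrow notation for 3^^^3):

$$3\uparrow\uparrow\uparrow 3 \;=\; 3\uparrow\uparrow(3\uparrow\uparrow 3) \;=\; 3\uparrow\uparrow 7{,}625{,}597{,}484{,}987,$$

a tower of 3s roughly 7.6 trillion levels high. So even if your residual doubt that the choice has a catastrophic per-person effect is as small as $p = 10^{-100}$, the expected number of affected beings is still

$$p \cdot 3\uparrow\uparrow\uparrow 3 \;\gg\; 10^{10^{100}},$$

which dwarfs anything that can be weighed against fifty years of one life.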
> Would you condemn one person to be horribly tortured for fifty years without hope or rest, to save every qualia-experiencing being who will ever exist one blink?
That's assuming you're interpreting the question correctly. That you aren't dealing with an evil genie.
> For those who would pick TORTURE, what about Vassar's universes of agonium? Say a googolplex-persons' worth of agonium for a googolplex years.
If you mean would I condemn all conscious beings to a googolplex years of torture to avoid universal annihilation from a big "dust crunch", my answer is still probably yes. The alternative is universal doom. At least the tortured masses might have some small chance of finding a solution to their problem at some point. Or at least a googolplex years might pass leaving some future civilization free to prosper. ...
What happens if there aren't 3^^^3 instanced people to get dust specks? Do those specks carry over, such that person #1 gets a 2nd speck and so on? If so, you would elect to have the person tortured for 50 years, for surely the alternative is to fill our universe with dust and annihilate all cultures and life.
As Alex says, just add an option for "lol, wut?" to every poll to weed out people who might otherwise vote randomly for the hell of it. :P
Should be an Austin, TX meet up. It's like the Bay Area, but a hell of a lot more affordable :)
With the recent revelation that global remittances to poor countries total more than three times the size of the total US foreign aid budget, I would argue that we should completely eliminate the foreign aid budget. The public tax burden should be decreased by an equal amount. This might result in more workers with foreign families having a higher income, possibly increasing remittances further. Remittances seem like a more beneficial method of aiding other countries for several reasons. First, the money may be used more efficiently by individual foreign ...
Eliezer, in what way do you mean "altruism" when you use it? I only ask for clarity.
I don't understand how altruism, as selfless concern for the welfare of others, enters into the question of supporting the singularity as a positive factor. This would open a path for a singularity in which I am destroyed to serve some social good. I have no interest in that. I would only want to support a singularity that benefits me. Similarly, if everyone else who supports the efforts to achieve the singularity is simply altruistic, no one is looking out for their own welfare. Selfish concern (rational self-interest) seems to increase the chance for a safe singularity.
Where is the science in Philosophy? I have recently been reading commentary on one philosopher's account of an epistemology based in perception, conceptualization, and abstraction. This commentary is paired with a critical analysis of the epistemologies of other philosophers, based on the Aristotelian foundations. While reading it, I thought "but there must be one true way the mind comes to terms with reality, a way based in the biology of the brain." A biology whose workings I don't understand and I suspect most philosophers do not understand. A...
I just wanted to say that this is the best damn blog I've read. The high level of regular, insightful, quality updates is stunning. Reading this blog, I feel like I've not just accumulated knowledge, but processes I can apply to continue to refine my understanding of how I think and how I accumulate further knowledge.
I am honestly surprised, with all the work the contributors do in other realms, that you are able to maintain this high level of quality output on a blog.
Recently I have been continuing my self-education in ontology and epistemology. Some so...
At some point, an AI should be able to effectively coordinate with future versions of itself in ways not easily imaginable by humans. It seems to me that this would enable certain kinds of diachronic planning and information hiding. If the AI has sufficient expectation that its future self will act in certain ways or respond to clues it places in the environment, it might be able to effectively fully cease any current unfriendly planning or fully erase any history of past unfriendly planning.
The space of possible ways the AI could embed information in...