All of Transfuturist's Comments + Replies

Most of those who haven't ever been on Less Wrong will provide data for that distinction. It isn't noise.

This is a diaspora survey, for the pan-rationalist community.

0gjm
Hence "have been". Maybe I'm misunderstanding, but usually what "X diaspora" means is "people who have been X but have now moved elsewhere".

I have taken the survey. I did not treat the metaphysical probabilities as though I had a measure over them, because I don't.

[anonymous]100

Similarly, I gave self-conscious nonsense numbers when asked for subjective probabilities for most things, because I really did not have an internal model with few-enough free parameters (and placement of causal arrows can be a free parameter!) to think of numerical probabilities.

So I may be right about a few of the calibration questions, but also inconsistently confident, since I basically put down low (under 33%) chances of being correct for all the nontrivial ones.

Also, I left everything about "Singularities" blank, because I don't consider th...

I guess the rejection is based more on the fact that his message seems to violate deep-seated values on your end about how reality should work than on his work being bullshit.

Lumifer rejects him because he thinks Simon Anholt simply isn't a serious person but a hippy.

How about you let Lumifer speak for Lumifer's rejection, rather than tilting at straw windmills?

1ChristianKl
I think it's valuable for discussion to make clear statements. That allows other people to either agree with them or reject them. Both of those move the discussion forward. Being too vague to be wrong is bad. There's nothing straw about the analysis of how most people who are not aware of who Simon Anholt happens to be will pattern-match the argument. Simon Anholt makes his case based on non-trivial empirical research that Lumifer is very likely unaware of. If he were aware of the research, I don't think he would have voted down the post and called it bullshit. I even believe that's a charitable interpretation of Lumifer's writing.

The equivocation on 'created' in those four points is enough to ignore it entirely.

I'm curious why this was downvoted. The last statement, which has political context?

1RowanE
I didn't downvote because it was already at minus one, but it seemed to apply mainly to government policies rather than private donations and to miss the point because of it, and "miss the point so as to bring up politics in your response" is not good.
5Lumifer
I downvoted this because it was content-free bullshit. You asked :-/
0ChristianKl
I'm not exactly sure. My first guess would be karma splash damage from other conversations.

Are there any egoist arguments for (EA) aid in Africa? Does investment in Africa's stability and economic performance offer any instrumental benefit to a US citizen that does not care about the welfare of Africans terminally?

0knb
There are definitely social benefits to being seen as generous. Also, a lot of infectious diseases originate in Africa, which might eventually spread into other countries if we don't help control them in Africa. Overall I doubt the selfish benefits are sufficient to make it a good deal for a typical person.
1ChristianKl
If you are talking about egoistic in the sense that you, as a US citizen, want outcomes that are generally good for US citizens: in his TED talk, government consultant Simon Anholt argues that a country doing a lot of good in the world results in a positive brand. The better reputation then makes a lot of things easier. You are treated better when you travel in foreign countries. A lot of positive economic trade happens on the back of good brand reputations. Good reputations reduce war and terrorism. Spending money on EA interventions likely has better returns for US citizens, on a per-dollar basis, than spending money on waging wars like the Iraq war.

We don't need to describe the scenarios precisely physically. All we need to do is describe them in terms of the agent's epistemology, with the same sort of causal surgery as described in Eliezer's TDT. Full epistemological control means you can test your AI's decision system.

This is a more specific form of the simulational AI box. The rejection of simulational boxing I've seen relies on the AI being free to act and sense with no observation possible, treating it like a black box, and somehow gaining knowledge of the parent world through inconsistencies and probabilities and escaping using bugs in its containment program. White-box simulational boxing can completely compromise the AI's apparent reality and actual abilities.

Stagnation is actually a stable condition. It's "yay stability" vs. "boo instability," and "yay growth" vs. "boo stagnation."

2WalterL
Those are true words you wrote. I lounge corrected.

(Ve could also be copied, but it would require copying the whole world.)

Why would that be the case? And if it were the case, why would that be a problem?

Resurrect one individual, filling gaps with random quantum noise.

Resurrect all possible individuals with all combinations of noise.

That is a false trichotomy. You're perfectly capable of deciding to resurrect some sparse coverage of the distribution, and those differences are not useless. In addition, "the subject is almost exactly resurrected in one of the universes" is true of both two and three, and you don't have to refer to spooky alternate histories to do it in the first place.

0turchin
So, as I understood you, you stand for resurrecting a "sparse coverage of the distribution", which will help prevent an exponential explosion in the number of copies but will cover the most peculiar parts of the landscape of possible copies? While I can support this case, I see the following problem: for example, I have a partner X, who would be better preserved via cryonics, but my information will be partly lost. If 1000 semi-copies of me are created to cover the distribution, 999 of them will be without partner X, and partner X will also suffer, because ve will now care for my other copies. (Ve could also be copied, but it would require copying the whole world.) If it were my choice, I would prefer to lose some of my memories or personal traits rather than live in a world with many copies of me.

Quals are the GRE, right?

3[anonymous]
Nope. No.

...Okay? One in ten sampled individuals will be gay. You can do that. Does it really matter when you're resurrecting the dead?

Your own proposal is to only sample one, and call the inaccuracy "acausal trade," which isn't even necessary in this case. The AI is missing 100 bits. You're already admitting many-worlds. So the AI can simply draw those 100 bits out of quantum randomness, and in each Everett branch, there will be a different individual. The incorrect ones you could call "acausal travelers," even though you're just wrong. There w...

0turchin
I think that there are 3 options in the case of incomplete information. 1. Do not resurrect at all. 2. Resurrect one individual, filling gaps with random quantum noise. 3. Resurrect all possible individuals with all combinations of noise. I suggest choosing variant 2. In this case everybody is happy. The subject is almost exactly resurrected in one of the universes. Each universe gets a person who corresponds to its conditions and does not get useless semi-copies of the subject.
0turchin
If a gap concerns a very important feature or a secret event, it could be two completely different people. Like if we don't know whether a person of interest was gay.

No, that's easy to grasp. I just wonder what the point is. Conservation of resources?

0turchin
If we don't know 100 bits of information, we need to create 2^100 copies to fill all gaps. Even for an FAI it may be difficult. Also it may be unpleasant to the copies themselves, as it would dilute their value to the outside world.
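(A quick scale check of my own, not part of the original exchange, just spelling out the 2^100 figure above: one copy per way of filling the unknown bits.)

```python
# 100 unknown bits means 2**100 distinct combinations, i.e. one copy per combination.
print(2 ** 100)  # 1267650600228229401496703205376, roughly 1.27e30 copies
```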

The evidence provided of any dead person produces a distribution on human brains, given enough computation. The more evidence there is, the more focused the distribution. Given post-scarcity, the FAI could simply produce many samples from each distribution.

This is certainly a clever way of producing mind-neighbors. I find problems with these sorts of schemes for resurrection, though. Socioeconomic privilege, tragedy of the commons, and data rot, to be precise.

0turchin
It could be solved by acausal trading between parallel worlds. I tried to explain it in the map, under the title that DI stacks well with many-worlds immortality. If we have infinitely many worlds with the same evidence about the person (but the person is different in different worlds; only the evidence is the same), we could create only one resurrection in each world which is in agreement with this evidence, AND it will be an exact resurrection of the person from another world. BUT the person from this world will be exactly resurrected in another world, so each person will have an exact resurrection in some world, and each world will have only one person which matches its evidence. (So, no problems with ethics and resources.) I think that it may be difficult to explain this in several lines, but I hope you grasp the idea. If it is not clear I could try a better explanation.

You're confusing the intuitive notion of "simple" with "low Kolmogorov complexity"

I am using the word "simple" to refer to "low K-complexity." That is the context of this discussion.

It does if you look at the rest of my argument.

The rest of your argument is fundamentally misinformed.

Step 1: Simulate the universe for a sufficiently long time.

Step 2: Ask the entity now filling up the universe "is this an agent?".

Simulating the universe to identify an agent is the exact opposite of a short refere...

It reminded me of reading Simpsons comics, is all.

3gjm
Kolmogorov's, which is of course the actual reason for my initial "k"s.

Doesn't that undermine the premise of the whole "a godless universe has low Kolmogorov complexity" argument that you're trying to make?

Again, there is a difference between the complexity of the dynamics defining state transitions, and the complexity of the states themselves.

But, the AGI can. Agentiness is going to be a very important concept for it. Thus it's likely to have a short referent to it.

What do you mean by "short referent?" Yes, it will likely be an often-used concept, so the internal symbol signifying the concept is li...

-4VoiceOfRa
You're confusing the intuitive notion of "simple" with "low Kolmogorov complexity". For example, the Mandelbrot set is "complicated" in the intuitive sense, but has low Kolmogorov complexity since it can be constructed by a simple process. It does if you look at the rest of my argument. Step 1: Simulate the universe for a sufficiently long time. Step 2: Ask the entity now filling up the universe "is this an agent?". What do you mean by that statement? Kolmogorov complexity is a property of a concept. Well, "reducing entropy" as a concept does have low Kolmogorov complexity.
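(An editorial aside, not from the thread: the Mandelbrot example can be made concrete. The sketch below is my own; the iteration cap and grid resolution are arbitrary choices. The point is only that the "complicated-looking" set is pinned down by a few lines iterating z → z² + c, which is what low Kolmogorov complexity means here.)

```python
# The Mandelbrot set is intricate to look at but is generated by a tiny program.

def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Return True if c appears to lie in the Mandelbrot set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # escaped: c is definitely outside the set
            return False
    return True         # did not escape within max_iter iterations

# Coarse ASCII rendering of the set, using the same few lines of logic.
if __name__ == "__main__":
    for im in range(20, -21, -1):
        print("".join(
            "#" if in_mandelbrot(complex(re / 30, im / 20)) else " "
            for re in range(-60, 31)
        ))
```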

An AGI has low Kolmogorov complexity since it can be specified as "run this low Kolmogorov complexity universe for a sufficiently long period of time".

That's a fundamental misunderstanding of complexity. The laws of physics are simple, but the configurations of the universe that runs on them can be incredibly complex. The amount of information needed to specify the configuration of any single cubic centimeter of space is literally unfathomable to human minds. Running a simulation of the universe until intelligences develop inside of it is not th...

I could stand to meet a real-life human. I've heard they exist, but I've had such a hard time finding one!

I don't think mind designs are dependent on their underlying physics. The physics is a substrate, and as long as it provides general computation, intelligence would be achievable in a configuration of that physics. The specifics of those designs may depend on how those worlds function, like how jellyfish-like minds may be different from bird-like minds, but not the common elements of induction, analysis of inputs, and selection of outputs. That would mean the simplest a priori mind would have to be computed by the simplest provision of general computation,...

Two or three people confused about K-complexity doesn't herald the death of LW.

The non-triviality arises from technical considerations

The laws of physics as we know them are very simple, and we believe that they may actually be even simpler. Meanwhile, a mind existing outside of physics is somehow a more consistent and simple explanation than humans having hardware in the brain that promotes hypotheses involving human-like agents behind everything, which explains away every religion ever? Minds are not simpler than physics. This is not a technical controversy.

I especially like the question "Is it ethical to steal truffle mushrooms and champagne to feed your family?" That's an intuitive concept fairly voiced. Calculating the damage to the trolley is somewhat ridiculous, however.

The Open Thread is for all posts that don't necessitate their own thread. The media thread is for recommendations on entertainment. I don't see why comments should be necessary to bring a paper to LWs attention, especially in the Open Thread, and others clearly disagree with your opinion on this matter.

2Elo
There is a combination of multiple factors that I have an issue with:

1. anonymous user
2. multiple posts
3. no commentary or discussion started
4. link outbound
5. repeated process each week

Alone I have no problem with any of these things (or a few of them); together they add up to "that which by slow decay" destroys the nature of Lesswrong as we had it before. There are multiple reasonable solutions to this problem, which include:

* me leaving Lesswrong for greener pastures
* changing any of the 5 things:
  * making an account
  * posting less each week
  * starting a discussion
  * not linking outbound (but also not starting content that links outbound)
  * not doing it every week

But I would like the 5 changes to be tackled first before I leave.

I believe in this instance he was reasoning alethically. De facto you are not necessarily correct.

As an example of number 10, consider the Optimalverse. The friendliest death of self-determination I ever did see.

Unfortunately, I'm not quite sure of the point of this post, considering you're posting a reply to news articles on a forum filled with people who understand the mistakes they made in the first place. Perhaps as a repository of rebuttals to common misconceptions posited in the future?

2Stuart_Armstrong
As an article to link to when the issue comes up.

This is no such advancement for AI research. This only provides the possibility of typechecking your AI, which is neither necessary nor sufficient for self-optimizing AI programs.

2[anonymous]
I like how you made this comment, and then emailed me the link to the article, asking whether it actually represents something for self-modifying systems. Now, as to whether this actually represents an advance... let's go read the LtU thread. My guess is that the answer is, "this is an advancement for self-modifying reasoning systems iff we can take System U as a logic in which some subset of programs prove things in the Prop type, and those Prop-typed programs always terminate." So, no.
0Gunnar_Zarncke
This is indeed no major result. Neither necessary nor sufficient. But if you want safe self-optimizing AI, you (and the AI) need to reason about the source. If you don't understand how the AI reasons about itself, you can't control it. If you force the AI to reason in a way you can too, e.g. by piggybacking on a sufficiently strong type system, then you at least have a chance to reason about it. There may be other ways to reason about self-modifying programs that don't rely on types, but these are presumably either equivalent to such types - and thus the result is helpful in that area too - or more general - in which case proofs likely become more complicated (if feasible at all). So some equivalent of these types is needed for reasoning about safe self-modifying AI.

She is preserving paperclip-valuing intelligence by protecting herself from the potential threat of non-paperclip-valuing intelligent life, and can develop interstellar travel herself.

It's a lonely job, but someone has to make the maximum possible amount of paperclips. Someone, and only one. Anyone else would be a waste of paperclip-material.

3skeptical_lurker
It does say she would die too - "wipe out all life on Earth" - otherwise I would agree.

It is not irrational, because preferences are arational. Now, Gal might be mistaken about her preferences, but she is the current best source of evidence on her preferences, so I don't see how her actions in this case are irrational either. She's an engine specialist and a mechanic, so it's perfectly understandable that she would want something she knew how to maintain and repair.

That doesn't get rid of randomness, it pushes it into the observer.

You seem to be opposed to the nature of your species. This can't be very good for your self-esteem.

What use is such an AI? You can't even use the behavior of its utility function to predict a real-world agent because it would have such a different ontology. Not to mention the fact that GoL boards of the complexity needed for anything interesting would be massively intractable.

Hardcoding has nothing to do with it.

Well, actually, I think it could. Given that we want the AI to function as a problem solver for the real world, it would necessarily have to learn about aspects of the real world, including human behavior, in order to create solutions that account for everything the real world has that might throw off the accuracy of a lesser model.

1Houshalter
A comment above had an interesting idea of putting it in Conway's game of life. A simple universe that gives absolutely no information about what the real world is like. Even knowing it's in a box, the AI has absolutely no information to go on to escape.
-2tailcalled
I would have assumed that we would let it learn about the real world, but I guess it's correct that if enough information about the real world is hardcoded, my idea wouldn't work. ... which means my idea is an argument for minimizing how much is hardcoded into the AI, assuming the rest of the idea works.
3[anonymous]
1. How about no theory of justice? :) Philosophers should learn from scientists here: if you have no good explanation, none at all is more honest than a bad but seductive one. As a working hypothesis we could consider our hunger for justice and fairness an evolved instinct, a need, emotion, a strong preference, something similar to the desire for social life or romantic love; it is simply one of the many needs a social engineer would aim to satisfy. The goal is, then, to make things "feel just" enough to check that box.

2. "To each his own": reading Rawls and Rawlsians, I tend to sense a certain, how to put it, overly collective feeling. That there is one heavily interconnected world, that it is the property of all humankind, and that there is collective, democratic decision-making on how to make it suitable for all. So in this kind of world there is nothing exempt from politics, nothing like "it is mine and mine alone and not to be touched by others". The question is, is it a hard reality derived from the necessities of the dynamics of a high-tech era? Or just a preference? My preferences are way more individualistic than that. The attitude that everything is collective and to be shaped and formed in a democratic way is IMHO way too often a power play by "sophists" who have a glib tongue, are good at rhetoric, and can easily shape democratic opinion. I am an atheist but "culturally catholic" enough to find the parable of the snake offering the fruit useful: that not only through violence, but also through glib, seductive persuasion, through 100% consent, a lot of damage can be done. This is something not properly understood in the modern world: we understand how violence, oppression or outright fraud can be bad, but do not really realize how much harm a silver tongue can cause without even outright lying, because we already live in societies where silver-tongued intellectuals are already the ruling class, so they underplay their own power by lionizing consent

I disagree with John Rawls's veil-of-ignorance theory and even find it borderline disgusting (he is just assuming everybody is a risk-averse coward)

Um, what? What's wrong with risk-aversion? And what's wrong with the Veil of Ignorance? How does that assumption make the concept disgusting?

9[anonymous]
First of all, there is the meta-level issue of whether to engage the original version or the pop version, as the first is better but the second is far, far more influential. This is an unresolved dilemma (same logic: should an atheist debate with Ed Feser or with what religious folks actually believe?) and I'll just try to hover in between. A theory of justice does not simply describe a nice-to-have world. It describes ethical norms that are strong enough to warrant coercive enforcement. (I'm not even libertarian, I just don't like pretending democratic coercion is somehow not one.) Rawls is asking us to imagine, e.g., what if we are born with a disability that requires really a lot of investment from society to make its members live an okay life; let's call the hypothetical Golden Wheelchair Ramps. Depending on whether we look at it rigorously: in a more "pop" version, Rawls is saying our pre-born self would want GWRs built everywhere even when it means that, if we are born able and rich, we are taxed through the nose to pay for them; or, in a more rigorous version, a 1% chance of being born with this illness would mean we want 1% of GWRs built. Now, this is all well if it is simply understood as the preferences of risk-averse people. After all, we have a real, true veil of ignorance after birth: we could get poor, disabled etc. any time. It is easy to lose birth privileges, well, many of them at least. More risk-taking people will say: I don't really want to pay for GWRs, I am taking my gamble that I will be born rich and able, in which case I won't need them and I would rather keep that tax money. (This is a horribly selfish move, but Rawls set up the game so that it is only about fairness emerging out of rational selfishness, and altruism is not required in this game, so I am just following the rules.) However, since it is a theory of justice, it means the preferences of risk-averse people are made mandatory, turned into social policy and enforced with coercion. And that is
0seer
The problem is that Rawls asserts that everyone is maximally risk-averse.

This was very informative, thank you.

Not so. I'm trying to figure out how to find the maximum entropy distribution for simple types, and recursively defined types are a part of that. This does not only apply to strings; it applies to sequences of all sorts, and I'm attempting to allow the possibility of error correction in these techniques. What is the point of doing statistics on coin flips? Once you learn something about the flip result, you basically just know it.

0Kindly
Well, in the coin flip case, the thing you care about learning about isn't the value in {Heads, Tails} of a coin flip, but the value in [0,1] of the underlying probability that the coin comes up heads. We can then put an improper prior on that underlying probability, with the idea that after a single coin flip, we update it to a proper prior. Similarly, you could define here a family of distributions of string lengths, and have a prior (improper or otherwise) about which distribution in the family you're working with. For example, you could assume that the length of a string is distributed as a Geometric(p) variable for some unknown parameter p, and then sampling a single string gives you some evidence about what p might be. Having an improper prior on the length of a single string, on the other hand, only makes sense if you expect to gain (and update on) partial evidence about the length of that string.
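(A concrete sketch of the setup Kindly describes, written by me for illustration; the class and parameter names are mine. It models string lengths as Geometric(p) with P(L = k) = (1 − p)^k · p, puts a Beta(a, b) prior on the unknown termination probability p, and updates on observed lengths. The Beta prior is conjugate here: observing lengths k_1..k_n turns Beta(a, b) into Beta(a + n, b + sum(k_i)).)

```python
from dataclasses import dataclass

@dataclass
class GeometricLengthModel:
    a: float = 1.0   # Beta pseudo-count for "terminator seen"
    b: float = 1.0   # Beta pseudo-count for "non-terminator symbol seen"

    def update(self, length: int) -> None:
        """Condition on one observed string length."""
        self.a += 1          # one terminator per string
        self.b += length     # `length` non-terminator symbols

    def mean_p(self) -> float:
        """Posterior mean of the termination probability p."""
        return self.a / (self.a + self.b)

    def expected_length(self) -> float:
        """Expected string length at the posterior mean of p."""
        p = self.mean_p()
        return (1 - p) / p

model = GeometricLengthModel()
for observed in [3, 7, 5, 4]:
    model.update(observed)
print(model.mean_p(), model.expected_length())
```

With the uniform Beta(1, 1) starting point, these four example observations give a posterior mean termination probability of 0.2 and an expected length of 4.0, in line with the sample.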

The result of some built-in string function length(s), which, depending on the implementation of the string type, either returns the header integer stating the size, or counts the length until the terminator symbol and returns that integer.

0Kindly
That doesn't sound like something you'd need to do statistics on. Once you learn something about the string length, you basically just know it. Improper priors are not useful on their own: the point of using them is that you will get a proper distribution after you update on some evidence. In your case, after you update on some evidence, you'll just have a point distribution, so it doesn't matter what your prior is.

If this post doesn't get answered, I'll repost in the next open thread. A test to see if more frequent threads are actually necessary.

I'm trying to make a prior probability mass distribution for the length of a binary string, and then generalize to strings of any quantity of symbols. I'm struggling to find one with the right properties under the log-odds transformation that still obeys the laws of probability. The one I like the most is P(len(x)) = 1/(x+2), as under log-odds it requires log(x)+1 bits of evidence for strings of len(x) to meet even odds. For...

6Kindly
Here is a different answer to your question, hopefully a better one. It is no coincidence that the prior that requires log(x)+1 bits of evidence for length x does not converge. The reason for this is that you cannot specify using only log(x)+1 bits that a string has length x. Standard methods of specifying string length have various drawbacks, and correspond to different prior distributions in a natural way. (I will assume 32-bit words, and measure length in words, but you can measure length in bits if you like.) Suppose you have a length-prefixed string. Then you pay 32 bits to encode the length; but the length can be at most 2^32-1. This corresponds to the uniform distribution that assigns all lengths between 0 and 2^32-1 equal probability. (We derive this distribution by supposing that every bit doing length-encoding duty is random and equally likely.) Suppose you have a null-terminated string. Then you are paying a hidden linear cost: the 0 word is reserved for the terminator, so you have only 2^32-1 words to use in your message, which means you only convey log(2^32-1) bits of information per 32 bits of message. The natural distribution here is one in which every bit conveys maximal information, so each word has a 1 in 2^32 chance of being the terminator, and so the length of your string is Geometric with parameter 1/2^32. A common scheme for big-integer types is to have a flag bit in every word that is 1 if another word follows, and 0 otherwise. This is very similar to the null-terminator scheme, and in fact the natural distribution here is also Geometric, but with parameter 1/2 because each flag bit has a probability of 1/2 of being set to 0, if chosen randomly. If you are encoding truly enormous strings, you could use a length-prefixed string in which the length is a big integer. This is much more efficient and the natural distribution here is also much more heavy-tailed: it is something like a smoothed-out version of 2^(32 Geometric(1/2)). We have come
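(Again my own illustration, not Kindly's code: a short sketch of the correspondence described above. It computes the prior probability each encoding scheme implicitly assigns to a string of a given length in 32-bit words, treating every length-encoding bit as uniformly random.)

```python
WORD_BITS = 32

def prior_length_prefixed(length_words: int) -> float:
    # Fixed 32-bit length prefix: uniform over lengths 0 .. 2^32 - 1.
    return 2.0 ** -WORD_BITS if 0 <= length_words < 2 ** WORD_BITS else 0.0

def prior_null_terminated(length_words: int) -> float:
    # Each word is the terminator with probability 2^-32:
    # length is Geometric(1/2^32).
    p = 2.0 ** -WORD_BITS
    return (1 - p) ** length_words * p

def prior_flag_bit(length_words: int) -> float:
    # One flag bit per word, 0 ends the integer: length is Geometric(1/2),
    # counting from 1 word.
    return 0.5 ** length_words if length_words >= 1 else 0.0

for k in (1, 10, 1000):
    print(k, prior_length_prefixed(k), prior_null_terminated(k), prior_flag_bit(k))
```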
0Kindly
What sort of evidence about x do you expect to update on?

The dark side is Voldemort's thought patterns. In other words, Voldemort is constantly in the dark side.

Ah, so about as large as it takes for a fanfic to be good. :P
