All of ACrackedPot's Comments + Replies

Another crackpot physics thing:

My crackpot physics just got about 10% less crackpot.  As it transpires, one of the -really weird- things in my physics, which I thought of as a negative dimension, already exists in mathematics - it's the Riemann sphere.  (Thank you, Pato!)

This "really weird" thing is kind of the underlying topology of the universe in my crackpot physics - I analogized the interaction between this topology and mass once to an infinite series of Matryoshka dolls, where every other doll is "inside out and backwards".  Don't ask me... (read more)

The issue arises specifically in the situation of recursive self-improvement: You can't prove self-consistency, from within the framework, for mathematical frameworks of "sufficient complexity" (that is, ones capable of expressing the rules of arithmetic) - this is Gödel's second incompleteness theorem.

What this cashes out to is that, considering AI as a mathematical framework, and the next generation of AI (designed by the first) as a secondary mathematical framework - you can't actually prove that there are no contradictions in an umbrella mathematical framework that comprises both of them, if they are of "sufficient complex... (read more)

Alternatively - we communicate about the things that pose the most danger to us, in a manner intended to minimize that danger.

In a typical Level-4 society, people don't have a lot to fear from lions and they aren't in imminent danger of starvation.  The bottom half of Maslow's hierarchy is pretty stable.

It's the social stuff where our needs run the risk of being unfulfilled; it is the social stuff that poses the most danger.  So of course most of the communication that takes place is about social stuff, in manners intended to reinforce our own social status.  This isn't a simulacrum of reality - it is reality, and people suffer real harms for being insufficient to the task.

I think a substantial part of the issue here is the asymmetry created when one party is public, and one party is not.

Suppose a user is posting under their real name, John Doe, and another user is posting under a pseudonym, Azure_Pearls_172.  An accusation by Azure against John can have real-world implications; an accusation by John against Azure is limited by the reach of the pseudonym.  Azure can change their pseudonym, and leave the accusations behind; John cannot.

Doxxing can make a situation more symmetrical in this case.  Whether or not i... (read more)

8jefftk
I agree that the situations you're describing are complex, but they're not the situation I'm trying to talk about here. I'm talking about a case where someone starts posting under a pseudonym to make accusations.

I am, unapologetically, a genius.  (A lot of people here are.)

My experience of what it is like being a genius: I look at a problem and I know an answer.  That's pretty much it.  I'm not any faster at thinking than anybody else; I'd say I'm actually a somewhat slower thinker, but make up for it by having "larger" thoughts; most people seem to have fast multi-core processors, and I'm running a slightly slow graphics card.  Depending on what you need done, I'm either many orders of magnitude better at it - or completely hopeless.  It ... (read more)

Take a step back and try rereading what I wrote in a charitable light, because it appears you have completely misconstrued what I was saying.

A major part of the "cooperation" involved here is in being able to cooperate with yourself.  In an environment with a well-mixed group of bots each employing differing strategies, and some kind of reproductive rule (if you have 100 utility, say, spawn a copy of yourself), Cooperate-bots are unlikely to be terribly prolific; they lose out against many other bots.

In such an environment, a stratagem of defecting ag... (read more)

Evolution gave us "empathy for the other person", and evolution is a reasonable proxy for a perfectly selfish utility machine, which is probably good evidence that this might be an optimal solution to the game theory problem.  (Note: Not -the- optimal solution, but -an- optimal solution, in an ecosystem of optimal solutions.)

2Dagon
If you think evolution has a utility function, and that it's the SAME function that an agent formed by an evolutionary process has, you're not likely to get me to follow you down any experimental or reasoning path.  And if you think this utility function is "perfectly selfish", you've got EVEN MORE work cut out in defining terms, because those just don't mean what I think you want them to. Empathy as a heuristic to enable cooperation is easy to understand, but when normatively modeling things, you have to deconstruct the heuristics to actual goals and strategies.

Note that it is possible to deceive others by systematically adjusting predictions upward or downward to reflect how desirable it is that other people believe those predictions, in a way which preserves your score.

This is true even if you bucket your scores; say you're evaluating somebody's predictive scores.  You see that when they assign a 60% probability to an event, that event occurs 60% of the time.  This doesn't mean that any -specific- prediction they make of 60% probability will occur 60% of the time, however!  They can balance out t... (read more)
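
A minimal sketch of how that balancing can work, with made-up rates: label two different kinds of events "60%", one that really happens ~80% of the time and one that really happens ~40% of the time, and the bucket still looks perfectly calibrated.

import random

random.seed(0)

# Hypothetical forecaster: both kinds of events below were announced as "60%".
kind_a = [random.random() < 0.8 for _ in range(5000)]  # true rate ~0.8, talked down
kind_b = [random.random() < 0.4 for _ in range(5000)]  # true rate ~0.4, talked up

bucket = kind_a + kind_b
print(sum(bucket) / len(bucket))   # ~0.60: the 60% bucket looks calibrated
print(sum(kind_a) / len(kind_a))   # ~0.80: but these specific predictions were biased down
print(sum(kind_b) / len(kind_b))   # ~0.40: and these were biased up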

1nikos
Agreed. I think a strong reason why this might work at all is that forecasters are primarily judged by some other strictly proper scoring rule - meaning that they wouldn't have an incentive to fake calibration if it makes them come out worse in terms of e.g. Brier or log score. 

How does one correctly handle multi-agent dilemmas, in which you know the other agents follow the same decision theory? My implementation of "UDT" defects in a prisoner's dilemma against an agent that it knows is following the same decision procedure. More precisely: Alice and Bob follow the same decision procedure, and they both know it. Alice will choose between cooperate/defect, then Bob will choose between cooperate/defect without knowing what Alice picked, then the utility will be delivered. My "UDT" decision procedure reasons as follows for Alice: "i

... (read more)
1justinpombrio
Ah, so I'm interested in normative decision theory: how one should ideally behave to maximize their own utility. This is what e.g. UDT&FDT are aiming for. (Keep in mind that "your own utility" can, and should, often include other people's utility too.) Minimizing runtime is not at all a goal. I think the runtime of the decision theories I implemented is something like doubly exponential in the number of steps of the simulation (the number of events in the simulation is exponential in its duration; each decision typically involves running the simulation using a trivial decision theory). That's an interesting approach I hadn't considered. While I don't care about efficiency in the "how fast does it run" sense, I do care about efficiency in the "does it terminate" sense, and that approach has the advantage of terminating. You're going to defect against UDT/FDT then. They defect against cooperate-bot. You're thinking it's bad to defect against cooperate-bot, because you have empathy for the other person. But I suspect you didn't account for that empathy in your utility function in the payoff matrix, and that if you do, you'll find that you're not actually in a prisoner's dilemma in the game-theory sense. There was a good SlateStarCodex post about this that I can't find.
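
As a toy illustration of the constraint being discussed (my own sketch, not justinpombrio's implementation): if both agents are known to run the identical decision procedure in a symmetric prisoner's dilemma, only the diagonal of the payoff matrix is reachable, and mutual cooperation beats mutual defection.

# Standard prisoner's dilemma payoffs; (my_action, their_action) -> my_utility.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_action_under_symmetry():
    # Both agents provably choose alike, so only (C, C) and (D, D) are attainable.
    return max(["C", "D"], key=lambda a: PAYOFF[(a, a)])

print(best_action_under_symmetry())  # 'C'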

The point there is that there is no contradiction because the informational content is different.  "Which is the baseline" is up to the person writing the problem to answer.  You've asserted that the baseline is A vs B; then you've added information that A is actually A1 and A2.

The issue here is entirely semantic ambiguity.

Observe what happens when we remove the semantic ambiguity:

You've been observing a looping computer program for a while, and have determined that it shows three videos.  The first video portrays a coin showing tails.  ... (read more)

If you have two options, A and B, 50% odds is maximal ignorance; you aren't saying they have equivalent odds of being true, you're saying you have no information by which to make an inference as to which is true.

If you then say we can split A into A1 and A2, you have added information to the problem.  Like the Monty Hall problem, information can change the odds in unexpected ways!

There's no contradiction here - you have more information than when you originally assigned odds of 50/50.  And the information you have added should, in real situations, info... (read more)
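
A minimal sketch of the two assignments, depending on which description you treat as the baseline (exactly the ambiguity at issue):

from fractions import Fraction

# Baseline 1: maximal ignorance over {A, B}, then split A afterwards.
p_A, p_B = Fraction(1, 2), Fraction(1, 2)
p_A1, p_A2 = p_A / 2, p_A / 2
print(p_A1, p_A2, p_B)        # 1/4 1/4 1/2

# Baseline 2: maximal ignorance over the finer partition {A1, A2, B}.
p_A1, p_A2, p_B = Fraction(1, 3), Fraction(1, 3), Fraction(1, 3)
print(p_A1 + p_A2, p_B)       # 2/3 1/3 - "A" as a whole is now favoured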

2Chris_Leong
“If you then say we can split A into A1 and A2, you have added information to the problem. Like the Monty Hall problem, information can change the odds in unexpected ways!” - It’s not clear which is the baseline.

I tried potassium supplementation.  The very first thing I noticed is that a significant portion of hunger was immediately converted into thirst; to be specific, where normally at time X I would be hungry, at time X I was instead thirsty.  There was an immediate and overall reduction of calories in.

This suggests to me that I had a slight potassium deficiency which my body was compensating for by increasing the amount of food I was consuming.

Cursory research suggests potassium content in fresh foods has declined ~20% over the past century ... (read more)

1CuoreDiVetro
Thanks for this info. Ya this really goes in the direction of what I think is happening. 

Instead of further elaborations on my crackpot nonsense, something short:

I expect that there is some distance from a magnetic source between 10^5 meters and 10^7 meters at which there will be magnetic anomalies; in particular, there will be a phenomenon by which the apparent field strength drops much faster than expected and passes through zero into the negative (reversed polarity).

I specifically expect this to be somewhere in the vicinity of 10^6 meters, although the specific distance will vary with the mass of the object.


There should be a second magnetic... (read more)

Yes, but then it sounds like those who have no such altruistic desire are equally justified as those who do. An alternative view of obligation, one which works very well with utilitarianism, is to reject personal identity as a psychological illusion. In that case there is no special difference between "my" suffering and "your" suffering, and my desire to minimize one of these rationally requires me to minimize the other. Many pantheists take such a view of ethics, and I believe its quasi-official name is "open individualism".

Yes.

I think this requires an as... (read more)

1Shiroe
The obligation in this theory is conditional on you wanting to end your own suffering. If you don't care about your own suffering, then you have no reason to care about the suffering of others. However, if you do care, then you must also care about the suffering of others.

Where you see neutrality, he would see obligation.

 

In what sense is it an obligation?  By what mechanism am I obligated?  Do I get punished for not living up to it?

You use that word, but the only meaningful source of that obligation, as I see it, is the desire to be a good person.  Good, not neutral.

I disagree, and I think that you are more of a relativist than you are letting on. Ethics should be able to teach us things that we didn't already know, perhaps even things that we didn't want to acknowledge.

This is a point of divergence, an... (read more)

1Shiroe
Yes, but then it sounds like those who have no such altruistic desire are equally justified as those who do. An alternative view of obligation, one which works very well with utilitarianism, is to reject personal identity as a psychological illusion. In that case there is no special difference between "my" suffering and "your" suffering, and my desire to minimize one of these rationally requires me to minimize the other. Many pantheists take such a view of ethics, and I believe its quasi-official name is "open individualism". You would prefer that we had the ethical intuitions and views of the first human beings, or perhaps of their hominid ancestors?
4JBlack
Some moral theories have zero "slack": everything that is not mandatory (morally good) is forbidden (morally evil). It seems that yours is not one of them, but they do exist. I suppose that people who adhere to them think that any other system is morally repugnant, and they can have that opinion if they want, but it seems completely impractical and downright counterproductive even if there was some absolute standard by which they could be said to be "correct".

Utility, as measured, is necessarily relative.  By this I don't mean that it is theoretically impossible to have an objective measure of utility, only that it is practically impossible; in reality / in practice, we measure utility relative to a baseline.  When calculating the utility of doing something nice for somebody, it is impractical to calculate their current utility, which would include the totality of their entire experience as summed in their current experience.

Rule utilitarianism operates in the same fashion much more straightforwardly,... (read more)

1Shiroe
Total Act Utilitarianism is what comes to mind when I think of a "standard" utilitarian theory. Your theory seems like a kind of rule or non-total variant. Your alterations would be much unliked by someone like Peter Singer, who thinks that we have an obligation to help people simply because us doing so could improve their lives. Where you see neutrality, he would see obligation. I disagree, and I think that you are more of a relativist than you are letting on. Ethics should be able to teach us things that we didn't already know, perhaps even things that we didn't want to acknowledge. As for someone who murders fewer people than he saves, such a person would be superior to me (who saves nobody and kills nobody) and inferior to someone who saves many and kills nobody.

I suspect there might be a qualia differential.

What is your internal experience of morality?

1Shiroe
"Evil is not just a goodness minimization problem" makes sense, but "Badness is not just a goodness minimization problem" doesn't make sense to me. Your analysis hinges on a concept of evil as distinct from merely bad. In the trolley problem, sacrificing the one to save the five will always lead to less badness, because fewer people are dead in the resulting state of the world than would otherwise be. This is why utilitarianism always chooses to sacrifice the one to save the five, ceteris paribus. Whether less badness is the same thing as less evilness is not considered, because utilitarianism has only one concept of utility. There may be additional contextual facts, e.g. that the person making the decision is employed as a switch operator. But unless these facts influence the resulting world-state (i.e. the number of casualties), they will not factor into the utility calculation. Therefore, I do not think that your analysis works with utilitarianism. Though it may work for other ethical systems.

The tax should, in fact, cause some landlords / landowners to just abandon their land.  This is a critical piece of Georgism; the idea that land is being underutilized, in particular as an investment which is expected to pay off in terms of higher land values / rents later, but also in terms of things like parking lots, where the current value of the use of the land may exceed the current taxes (which include only a portion of the value of the land and the improvements combined) while being lower than the Georgist taxes (which include the entire value... (read more)
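
A toy illustration with made-up numbers (not from the post): the land's current use can clear today's property tax while falling well short of a tax on the full land rent.

income_from_current_use = 50_000   # hypothetical yearly parking revenue
land_rent_value = 100_000          # hypothetical full annual rental value of the land
current_property_tax = 20_000      # taxes only a fraction of land + improvement value
georgist_tax = land_rent_value     # taxes (approximately) the entire land rent

print(income_from_current_use - current_property_tax)  # +30,000: worth holding as-is today
print(income_from_current_use - georgist_tax)          # -50,000: abandon, sell, or redevelop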

Related: https://www.lesswrong.com/posts/57sq9qA3wurjres4K/ruling-out-everything-else

I do not think the linked post goes anywhere near far enough.  In particular, it imagines that people share a common concept-space.  The degree to which thought is arbitrary is, basically, total.

I'm a crackpot.

Self-identified as such. Part of the reason I self-identify as a crackpot is to help create a kind of mental balance, a pushback against the internal pressure to dismiss people who don't accept my ideas: Hey, self, most people who have strong beliefs similar to or about the thing you have strong beliefs about are wrong, and the impulse to rage against the institution and people in it for failing to grasp the obvious and simple ideas you are trying to show them is exactly the wrong impulse.

The "embitterment" impulse can be quite strong; when ... (read more)

Why are you using what I presume is your real name here?

I'm not actually interested in whether or not it is your real name, mind; mostly I'd like to direct your attention to the fact that the choice of username was in fact a choice.  That choice imparts information.  By choosing the username that you did, you are, deliberately or not, engaging in a kind of signaling.

In particular, from a particular frame of reference, you are engaging in a particular kind of costly signaling, which may serve to elevate your relative local status, by tying any rep... (read more)

I get the impression, reading this and the way you and commenters classify people, that the magnitude of days is to some extent just equivalent to an evaluation of somebody's intellectual ability, and the internal complexity of their thoughts.

So if I said your article "Ruling Out Everything Else" is the 10-day version of a 10000-day idea, you might agree, or you might disagree, but I must observe that if you agree, it will be taken as a kind of intellectual humility, yes?  And as we examine the notion of humility in this context, I think it should be ... (read more)

2Duncan Sabien (Deactivated)
No Closer I think there's going to be a higher minimum bar for higher magnitudes; I think that there are fewer people who can cut it wrestling with e.g. fundamental philosophical questions about the nature of existence (a 100,000+ day question) than there are who can cut it wrestling with e.g. questions of social coordination (a 10-100 day question in many cases). But I think that there are a very large number of people who could, in principle, qualify to be higher-order monks, who instead apply prodigious intelligence to smaller questions one after the other all the time. So, like, higher orders will have a higher density of smarter people, but there are ~equally upper-echelon smart people at all levels. The 10-day version of a 10,000-day idea is an unusually valuable thing; as the old adage goes, "if I had had more time, I would have composed a shorter letter."  Distillations are difficult, especially distillations that preserve all of the crucial elements, rather than sacrificing them. So to the extent that I might sometimes write 10-day distillations of 10,000-day ideas, this is a pretty high-status claim, actually.  It's preserving the virtues of both orders. It's more that they are wrong about different things, in systematically different ways.  A 10-day monk is right, about 10-day concerns viewed through 10-day ontologies, about as often as a 1-day monk or a 100-day monk, in their respective domains.

It isn't the thing that the KL divergence is measuring, it is an analogy for it.  The KL divergence measures informational entropy (more precisely, relative entropy between distributions); strictly speaking, zipping a file has no effect in those terms.

However, we can take those examples more or less intact and place them in informational-entropy terms; the third gets a little weird in the doing, however.

So, having an intuition for what the ZIP file does, the equivalent "examples":

Example 1: KLE(Reference optimizer output stage, ineffective optimizer output) is 0; KLE(Reference final stage,... (read more)
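
A minimal sketch of the underlying quantity, using toy distributions of my own rather than the post's formal setup: the bits of optimization are the KL divergence, in bits, between the distribution over final states with the optimizer acting and the reference distribution without it.

import numpy as np

def kl_bits(p, q):
    # KL(p || q) in bits; assumes q > 0 wherever p > 0.
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

reference = [0.25, 0.25, 0.25, 0.25]   # final-state distribution, optimizer absent
optimized = [0.85, 0.05, 0.05, 0.05]   # final-state distribution, optimizer acting

print(kl_bits(optimized, reference))   # ~1.15 bits of optimization
print(kl_bits(reference, reference))   # 0.0 - an ineffective optimizer contributes nothing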

Stepping into a real-world example, consider a text file, and three cases, illustrating different things:

First case: Entirely ineffective ZIP compression, (some processes), effective ZIP compression.  If we treat the ineffective ZIP compression as "the optimizer", then it is clear that some compression happens later in the sequence of processes; the number of bits of optimization increased.  However, the existence or non-existence of the first ineffective ZIP compression has no effect on the number of bits of optimization, so maybe this isn't qui... (read more)

1tgbrooks
I'm intrigued by these examples but I'm not sure it translates. It sounds like you are interpreting "difference of size of file in bits between reference and optimized versions" as the thing the KL divergence is measuring, but I don't think that's true. I'm assuming here that the reference is where the first step does nothing and outputs the input file unchanged (effectively just case 1).  Let's explicitly assume that the input file is a randomly chosen English word. Suppose a fourth case where our "optimizer" outputs the file "0" regardless of input. The end result is a tiny zip file. Under the "reference" condition, the original file is zipped and is still a few bytes, so we have reduced the file size by a few bytes at most. However, the KL divergence is infinite! After all "0" is not an English word and so its zip never appears in the output distribution of the reference but occurs with probability 1 under our optimizer. So the KL divergence is not at all equal to the number of bits of filesize reduced. Obviously this example is rather contrived, but it suffices to show that we can't directly translate intuition about filesizes to intuition about bits-of-optimization as measured by KL divergence. Were you going for a different intuition with these examples?

Is this assuming all optimization happens at "the optimizer" (and thus implicitly assuming that no optimization can take place in any intermediate step)?

3johnswentworth
That wording is somewhat ambiguous, but the answer to the question which I think you're trying to ask is "yes". We're assuming that any differences between "optimized" and "not optimized" states/distributions are driven by differences in the optimizer, in the sense that the causal structure of the world outside the optimizer remains the same. Intuitively, this corresponds to the idea that the optimizer is "optimizing" some far-away chunk of the world, and we're quantifying the optimization "performed by the optimizer", not whatever optimization might be done by whatever else is in the environment.

I notice that my ethics, my morality, my beliefs, differ in many ways from those of the past; I expect these things to differ in many ways from my own, in the future.  I notice the relationship between these two concepts is reciprocal.

My grandfather talked to me, several times, about how he knew I had my own life, and that I wouldn't always want to spend a lot of time with him; he was explicitly giving me permission, I think, to do something that he himself regretted in his youth, but understood better with age.  He was telling me to live unfette... (read more)

What does it look like, when the optimization power is turned up to 11 on something like the air conditioner problem?

I think it looks exactly like it does now; with a lot of people getting very upset that local optimization often looks un-optimized from the global perspective.

If I needed an air-conditioner for working in my attic space, which is well-insulated from my living space and much, much hotter than either my living space or the outside air in the summer, the single-vent model would be more efficient.  Indeed, it is effectively combining the m... (read more)

You have a simplification in your "black swan awareness" column which I don't think is appropriate to carry over; in particular you'd need to rewrite the equation entirely to deal with an anti-Taleb, who doesn't believe in black swans at all.  (It also needs to deal with the issue of reciprocity; if somebody doesn't hang out with you, you can't hang out with them.)

You probably end up with a circle, the size of which determines what trends Taleb will notice; for the size of the apparent circle used for the fan, I think Taleb will notice a slight dow... (read more)

4tailcalled
I'm not sure what you are saying, could you create a simulation or something?

There's a phenomenon in multidimensional motion called "gimbal lock", in which the number of effective dimensions decreases over time under motion owing to local correlations between the dimensions, which I believe may be relevant here.

Yes, it does depend on the selection model; my point was that the selection model you were using made the same predictions for everybody, not just Taleb.  And yes, changing the selection model changes the results.

However, in both cases, you've chosen the selection model that supports your conclusions, whether intentionally or accidentally; in the post, you use a selection model that suggests Taleb would see a negative association.  Here, in response to my observation that that selection model predicts -everybody- would see a negative association,... (read more)

2tailcalled
No, if I use this modified selection model for Taleb, the argument survives. For instance, suppose he is 140 IQ - 2.67 sigma above average in g. That should mean that his selection expression should be black_swan_awareness - (g-2.67)**2 * 0.5 > 1. Putting this into the simulation gives the following results:

So ... smart people are worse than average at the task of evaluating whether or not smart people are worse than average at some generic task which requires intellectual labor to perform, and in fact smart people should be expected to be better than average at some generic task which requires intellectual labor to perform?

Isn't the task of evaluating whether or not smart people are worse than average at some generic task which requires intellectual labor to perform, itself a task which requires intellectual labor to perform?  So shouldn't we expect the... (read more)

3tailcalled
I mentioned this as a bias that a priori very much seems like it should exist. This does not mean smart people can't get the right answer anyway, by using their superior skills. (Or because they have other biases in favor of intelligence, e.g. self-serving biases.) Maybe they can, I wouldn't necessarily make strong predictions about it. I think it depends on the selection model. The simulation in the main post assumed a selection model of black_swan_awareness + g > 3. If we instead change that to black_swan_awareness - g**2 * 0.5 > 1, we get the following: This seems to exhibit a positive correlation. For convenience, here's the simulation code in case you want to play around with it:

import numpy as np
import matplotlib.pyplot as plt

N = 10000
g = np.random.normal(0, 1, N)
black_swan_awareness = np.random.normal(0, np.sqrt(1-0.3**2), N) + 0.3 * g
selected = black_swan_awareness - g**2 * 0.5 > 1  # g + black_swan_awareness > 3
iq = g * 15 + 100
plt.scatter(iq[~selected], black_swan_awareness[~selected], s=0.5, label="People Taleb fanboys avoid")
plt.scatter(iq[selected], black_swan_awareness[selected], s=0.5, label="People Taleb fanboys hangs out with")
plt.legend()
plt.xlabel("IQ")
plt.ylabel("black swan awareness")
plt.title("Hypothetical collider")
plt.show()

Another term for this pattern of behavior is "the script"; this terminology, and the related narrative-oriented way of framing the behavior, seems particularly common as arising from LSD usage, dating back something like sixty years at this point to an individual whose name I can't quite recall.

In this framing, people see themselves as characters living out a story; the grayed-out options are simply those things that are out of character for them.  Insofar as your character is "agent of chaos", as another commenter alludes to, you still have grayed-ou... (read more)

The topic question is "Why is Toby Ord's likelihood of human extinction due to AI so low?"

My response is that it isn't low; as a human-extinction event, that likelihood is very high.

You ask for a comparison to MIRI, but link to EY's commentary; EY implies a likelihood of human extinction of, basically, 100%.  From a Bayesian updating perspective, 10% is closer to 50% than 100% is to 99%; Ord is basically in line with everybody else, it is EY who is entirely off the charts.  So the question, why is Ord's number so low, is being raised in the conte... (read more)
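
A quick check of that arithmetic in log-odds terms (my own calculation, since log-odds is the natural scale for Bayesian updates):

import math

def log_odds(p):
    return math.log(p / (1 - p))

print(abs(log_odds(0.10) - log_odds(0.50)))  # ~2.20 nats: the gap between 10% and 50%
print(log_odds(0.99))                        # ~4.60 nats from 50%; log_odds(1.0) is infinite,
                                             # so 99% is infinitely far from 100%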

Remember that mathematics is something we make up; mathematics isn't fundamental to, prior to, or indeed related to existence itself at all; mathematics is the process of formalizing rules and seeing what happens.  You can invent whatever rules you want, although the interesting stuff generally doesn't really happen unless the rules are consistent / satisfiable with respect to one another.

The fact that mathematics happens to be useful in describing reality doesn't imply that reality is fundamentally mathematical, except in the sense that reality does s... (read more)

Suppose astronomers detect an asteroid, and suggest a 10% chance of it hitting the Earth on a near-pass in 2082.  Would you regard this assessment of risk as optimistic, or pessimistic?  How many resources would you dedicate to solving the problem?

My understanding is that 10% isn't actually that far removed from what many people who are deeply concerned about AI think (or, for that matter, people who aren't that concerned about AI think - it's quite remarkable how differently people can see that 10%); they just happen to think that a 10% chance o... (read more)

2ChristianKl
I didn't speak about absolute optimism but said: "more optimistic then".  That's an argument you can make for spending much more money on alignment research.  It's however not an argument against stronger measures such as doing the kind of government regulation that would make it impossible to develop AGI.

I think this is a useful abstraction.

But I think the word you're looking for is "god".  In the "Bicameral Consciousness" sense - these egregores you refer to are gods that speak to us, whose words we know.  There's another word, zeitgeist, that refers to something like the same thing.

If you look in your mind, you can find them; just look for what you think the gods would say, and they will say it.  Pick a topic you care about.  What would your enemy say about that topic?  There's a god, right there, speaking to you.

Mind, in a sense... (read more)

"Anti-Slavic" is a bit reductionist and slightly skew of the truth, but basically, anti-Slavic, in the same way that an overly reductionist version of the Western perspective on what "Nazi" stands for would be "Anti-Semitic".

Note that the Russian perspective on what "Nazi" represents doesn't necessarily look the same as your perspective of what "Nazi" represents.

1arunto
What is, based on your understanding, the Russian perspective on what "Nazi" stands for?

If we consider the extra dimension(s) on which the amplitude of the wave function in the Schrodinger equation is defined, the wave function instead defines a topology (or possibly another geometric object, depending on exactly what properties end up being invariant).

If the topology can be evaluated over time by some alternative mathematical construct, that alternative mathematical construct may form the basis for a more powerful (in the sense of describing a wider range of potential phenomena) physics, because it should be constructable in such a way as to not ... (read more)

Suppose for a moment your washing machine is broken.

You have some options; you could ignore the problem.  You could try to fix it yourself.  You could call somebody to fix it.  This isn't intended to be a comprehensive list of options, mind, these are cached thoughts.

Each of these options in turn produce new choices; what to do instead, what to try to do to fix it, who to call.

Let's suppose for a moment that you decide to call somebody.  Who do you call?  You could dial random numbers into your phone, but clearly that's not a great... (read more)

What are you calling the "framing" of a decision?  Is it something other than a series of decisions about what qualities with regard to the results of that decision that you care about?

2Dagon
The "framing" of a decision is the identification that there's a decision to make, and the enumeration of the set or series of sub-decisions that describe the possible actions.

The point is that meaningful labor is increasingly "selection effort", the work involved in making a decision between multiple competing choices, and some starter thoughts about how society can be viewed once you notice the idea of making choices as meaningful labor (maybe even the only meaningful form of labor).

The idea of mapping binary strings to choices is a point that information is equivalent to a codification of a sequence of choices; that is, the process of making choices is in fact the process of creating information.  For a choice between N ... (read more)

2Dagon
I'm not sure if I'm just misunderstanding, or actively disagreeing. Whether you model something as a tree of binary choices, or a lookup table of options doesn't matter much on this level.  The tree is less efficient, but easier to modify, but that's a completely different level than your post seems to be about, and not relevant to whatever you're trying to show. But the hard and important point is NOT in making the decision or executing the choice(s) (whether a jump or a sequence of binary).  That just does not matter.  Actually identifying the options and decisions and figuring out what decisions are POSSIBLE is the only thing that matters. The FRAMING of decisions is massively information-producing.  Making decisions is also information-producing (in that the uncertainty of the future becomes the truth of the past), but isn't "information labor" in the same way that creating the model is.

Suppose you have a list of choices a selection must be made from, and that the decision theory axioms of orderability and transitivity apply.

It should then be possible to construct a binary tree representing this list of choices, such that a choice can be represented as a binary string.

Likewise, a binary string, in a certain sense, represents a choice.

In this specific sense, what computers automate is the process of selection, of choice.  Noticing this, and noticing that computers have automated away considerable amounts of "work", we must notice that... (read more)
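
A minimal sketch of that mapping (the option names are hypothetical): indexing an ordered list of N choices takes ceil(log2 N) bits, and each fixed-width binary string decodes back into a choice.

import math

options = ["ignore it", "fix it yourself", "call a repair service"]

bits_needed = math.ceil(math.log2(len(options)))   # 2 bits suffice for 3 options
encoding = {opt: format(i, f"0{bits_needed}b") for i, opt in enumerate(options)}
print(encoding)                 # {'ignore it': '00', 'fix it yourself': '01', 'call a repair service': '10'}

print(options[int("01", 2)])    # 'fix it yourself' - decoding a binary string recovers the choice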

2Dagon
Another phrase for "binary string" is "number".  A choice can be represented by a number, ok.  I think you're skipping the hard part - discovering the choices, and mapping them to numbers.   Then you lose me when you start talking about crowdsourcing and political beliefs and investment and such.  That's all the hard part of mapping.  And the resulting map is likely to be uncomputable given current limits (possibly even theoretically, if the computation includes the substrate on which it's computing).   I don't think there's any logical chain here - just rambling.
4Gunnar_Zarncke
Only loosely related but your first sentences prompted it: A way to convert complex decisions into a tree of binary choices for humans is Final Version Perfected.

Would you willingly go back in time and re-live your life from the beginning, with all the knowledge you have now?  Say, knowing what stocks to purchase, what cryptocurrencies are worth buying and when, being able to breeze through education and skip ahead in life, and all the other advantages you would have?

If the answer to that is yes, then observe that this is exactly the same thing.

The point of this being that you don't actually think of past-you, present-you, and future-you as you in the same sense.  You'll happily overwrite past-you with present-you, but you'd see it as a problem if future-you overwrote present-you - perhaps even one equatable to dying.

2Adam Zerner
Why do you say that? I don't see it as overwriting. I am 28 years old. The way I see it is, I live 28 years, then I go back to the time I was born, then I re-live those 28 years, and so I get to be alive for 56 years.

What, exactly, does it even mean for "you" to exist for 100k years?

Is the "you" from yesterday "you"?  Would you be comfortable with your conscious mind being replaced with the conscious mind of that entity?  What about the "you" from tomorrow"?  What about the "you" from 100k years in the future?  If that's still "you", should it be a problem for your mind to be erased, and that mind to be written in its place?

2Adam Zerner
I don't have a great grasp on the question of "what makes you you". However, I do feel solid about "yesterday you" = "present moment you" = "100k years from now you". In which case living for eg. 100k years, there isn't an issue where it isn't you that is alive 100k years from now. Yes, I see that as a problem because it'd still be a short lifespan. You wouldn't be alive and conscious from years 30 through 100k. I would like to maximize the amount of years that I am alive and conscious (and happy).

What does it mean, for a thing to move?

First, what phenomenon are we even talking about?  It's important to start here.  I'm going to start somewhat cavalierly: Motion is a state of affairs in which, if we measure two variables, X and T, where X is the position on some arbitrary dimension relative to some arbitrary point using some arbitrary scale, and T is the position in "time" as measured by a clock (also arbitrary), we can observe that X varies with T.

Notice there are actually two distinct phenomena here: There is the fact that "X" changed, w... (read more)

Well, if my crackpot physics is right, it actually kind of reduces the probability I'd assign to the world I inhabit being "real".  Seriously, the ideas aren't complicated, somebody else really should have noticed them by now.

But sure, it makes predictions.  There should be a repulsive force which can be detected when the distance between two objects is somewhere between the radius of the solar system and the radius of the smallest dwarf galaxy.  I'd guess somewhere in the vicinity of 10^12 meters.

Also electrical field polarity should invert ... (read more)

Does any of that make an observable difference?

Not really, no.  And that's sort of the point; the claim that the world is external is basically an empty claim.

1TAG
But you seem to favour a rather specific alternative involving fractals and stuff. Why wouldn't that be empty? Isn't evidence for realism, evidence against anti realism, and vice versa? If making predictions really is the only game in town, then your alternative physics needs to make predictions. Can it?

I think one of the more consistent reports of those who connect with that voice is that they lose that fear.

3Alex Flint
What fear is that, friend?

Because it's expensive, slow, and orthogonal to the purpose the AI is actually trying to accomplish.

As a programmer, I take my complicated mirror models, try to figure out how to transform them into sets of numbers, try to figure out how to use one set of those numbers to create another set of those numbers.  The mirror modeling is a cognitive step I have to take before I ever start programming an algorithm; it's helpful for creating algorithms, but useless for actually running them.

Programming languages are judged as helpful in part by how well they ... (read more)
