
Comment author: MakoYass 15 October 2017 12:20:37AM 1 point [-]

I have a patent law question.

Summary/main question: Should patents ever be granted for a common, unoriginal idea, before any original work has been done, to protect the claimant's future work in the area of the claim? If we can't grant patents like that, what sort of schemes should we favor for bringing the incentives to make progress in competitive arenas of research closer to the societal value of the expected findings?

Companies often seem to need a promise that if they can make an idea work and find an audience, all of the unprotected advancements they must make along the way (market research, product development, and building awareness in the audience, i.e. marketing) won't just be stolen by some competitor the moment people start buying the thing.

It's a common situation: someone puts a lot of money into popularizing some innovation, but because it's an obvious innovation they can't protect it, and soon you'll find it on AliExpress for like $3.50. They aren't compensated in proportion to the value they produced. If it can't be produced for $3.50, it will instead be produced by their largest, most complacent competitors to safeguard their stranglehold on the market. The incumbents go completely unpunished for having sat on their hands long enough to let these new innovators threaten them; the idea threatens them, then it serves them, and it serves as an example to anyone who tries to threaten them in the future, and innovation is generally discouraged.

The expected rewards for solving a problem that takes a long time to solve are generally much lower than the societal value of the solution, because there's a high chance that another team will solve it first and most of the resources invested in development will have been spent in vain. If a working group had exclusive rights to the solutions to some problem, whatever they turn out to be, the amount they ought to invest would be much closer to the solutions' actual value.
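
A rough sketch of that arithmetic (my own framing, not rigorous: assume n symmetric competing teams, a solution worth V to society, and a development cost of C per team, with the first team to finish capturing the reward):

E[\text{reward per team}] \approx \frac{V}{n} - C \qquad \text{vs.} \qquad \text{societal value of the solution} \approx V

As n grows, each team will only rationally invest up to about V/n, far below V. Grant one working group exclusive rights to whatever the solution turns out to be and its expected reward climbs back toward V - C, so its willingness to invest tracks the solution's actual value.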

It's a way of limiting the inefficiencies of competition. It sort of reminds me of Bitcoin-NG: if I've understood it correctly, the protocol periodically elects a single working group to process the bulk of the transactions, to prevent costly duplication of effort.

So, to reiterate: should patents ever be granted before any original work has been done, to protect the claimant's future work in the area of the claim? And if not, what should we do instead (or what do we already do instead) to bring the incentive to make progress in competitive arenas of research closer to the actual societal value of the expected findings?

Comment author: entirelyuseless 13 October 2017 02:38:21PM 1 point [-]

The problem with your "in practice" argument is that it would similarly imply that we can never know whether someone is bald, since it is impossible to give a definition of baldness that rigidly separates bald people from non-bald people while respecting what we mean by the word. But in practice we can know that a particular person is bald despite the absence of that rigid definition. In the same way, a particular person can know that he went to the store to buy milk, even if it is theoretically possible to explain what he did by saying that he has an abhorrence of milk and did it for totally different reasons.

Likewise, in practice we can avoid money pumps by avoiding them when they come up in practice. We don't need to formulate principles which will guarantee that we will avoid them.

Comment author: MakoYass 14 October 2017 11:33:32PM 0 points [-]

A person with less than 6% hair is bald; a person with 6%-15% hair might be bald, but it is unknowable, due to the nature of natural language; a person with 15%-100% hair is not bald.

We can't always say whether someone is bald, but more often than not, we can. Baldness remains applicable.

Comment author: Caspar42 06 October 2017 02:59:01PM 0 points [-]

Yes, the paper is relatively recent, but in May I published a talk on the same topic. I also asked on LW whether someone would be interested in giving feedback a month or so before actually publishing the paper.

Do you think your proof/argument is also relevant for my multiverse-wide superrationality proposal?

Comment author: MakoYass 10 October 2017 06:06:25AM 0 points [-]

I watched the talk, and it triggered some thoughts.

I have to passionately dispute the claim that superrationality is mostly irrelevant on Earth. I'm getting the sense that much of what we call morality really is superrationality struggling to understand itself, and failing, under conditions in which CDT pseudorationality dominates our thinking. We've bought so deeply into this false dichotomy of rational xor decent.

We know intuitively that unilateralist violent defection is personally perilous, that committing an act of extreme violence tears one's soul and transports one into a darker world. This isn't some elaborate psychological developmental morph or a manifestation of group selection; to me, the clearest explanation of our moral intuitions is that human decision theory supports the superrational lemma: that the determinations we make about our agent class will be reflected by our agent class back upon us. We're afraid to kill because we don't want to be killed. Look anywhere an act of violence is "unthinkable", violating a kind of trust that wouldn't, or couldn't, have been offered if the truster knew we were mechanically capable of violating it, and I think you'll find that reflectivist[1] decision theory is the simplest explanation for our aversion to violating it.

Regarding concrete applications of superrationality: I'm fairly sure that if we didn't have it, voter turnout wouldn't be so high (in the places where it is high; the USA's disenfranchisement isn't the norm). There's a large class of situations where the individual's causal contribution is so small as to be unlikely to matter. If people didn't think themselves linked by some platonic thread to their peers, they would have almost no incentive to get off the couch and put their hand in. They turn out because they're afraid that if they don't, the defection behavior will be reflected by the rest of their agent class, and (here I'll allude to some more examples of what seems to be applied superrationality) the Kickstarter project would fail/the invaders would win the war/Outgroup Scoundrel would win the election. I'll sketch the arithmetic after the examples below.

(Why kickstart when you can just wait and pirate it when it comes out, or wait for it to go on sale? Because if you defect, so will the others, and the thing won't be produced in the first place.)

(Why risk your life in war when you're just one person? Assuming you have some way to avoid the draft. Deep down, you hope you won't find one, because if you did, so would others.)

(One vote rarely makes the difference. Correlated defection sure does though.)
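
That rough sketch of the arithmetic (my notation, nothing rigorous: p is the chance a single vote is pivotal, V is what the outcome is worth to you, c is the cost of turning out, and k is the number of agents whose decision is correlated with yours):

E_{\text{causal}} \approx pV - c \qquad \text{vs.} \qquad E_{\text{correlated}} \approx (k+1)\,pV - c

The causal term is negligible for any realistic p, but if your choice is evidence about what k other members of your agent class do, the correlated term can easily outweigh c. That's roughly the shape of the Kickstarter/war/election examples above.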

There are many other models that could explain that kind of behavior (social pressures, dumb basal instincts[3], group selection!), but at this stage you'll probably understand if I hear those as the sputtering of less elegant models as they fail Occam's razor.

For me, this faith in humans is, if nothing else, a comfort. It is to know that when I move to support some non-obvious protocol that requires mass adoption to do any good, some correlated subset of humanity will move to support it along with me. Even if I can't see them from where I am, superrationality lets me assume that they're there.

I'll give you that disproof outline; I think it's probably important that a society take this question seriously enough to answer it. Apologies in advance for the roughness.

Generally, assume a big multiverse, and thus that extra-universal simulators definitely, to some extent, exist. (I wish I knew where this assumption comes from; regardless, we both seem to find it intuitive.)

a := Assume that the Solomonoff prior is the best way to estimate the measure of a thing in the multiverse; in other words, assume that the measure of any given universe is best guessed to fall off with the complexity of its physics

b := Assume that a universe able to simulate us at an acceptable level of civilizational complexity must have physics far more complex than ours, in order to afford devoting such powerful computers to the task

a & b ⇒ That universe, then, would have orders of magnitude lower measure than natural instances of our own

It seems that the relative measure of simulated instances of our universe would be much smaller than the relative measure of godless instances of our universe, because universes sufficient to host a simulation are likely to be so much rarer.

The probability that we are simulated by higher level beings [2] is too low for the maximum return to justify building any lifepat grids.

I have not actually multiplied any numbers, and I'm not sure complexity of physical law and computational capacity would be proportionate. If you could show that the relationship between ranges of measure and ranges of computational capacity should be assumed to be linear rather than inverse-exponential, then compat may have some legs to stand on. Other disproofs may come in the form of identifying discontinuities in the complexity chain: if any level can generally prove that the next level up has low measure, then it has no incentive to cooperate, and so neither does the level below it, and so on. If a link in the chain is broken, everything below it is disenfranchised.
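
To make the shape of that multiplication explicit (a rough formalization of a and b in my own notation: K(·) is description length under the Solomonoff prior, U is our universe, H is a host universe complex enough to simulate us):

P(U \text{ arises naturally}) \propto 2^{-K(U)} \qquad P(U \text{ is simulated inside } H) \lesssim 2^{-K(H)}

\frac{P(\text{simulated})}{P(\text{natural})} \lesssim 2^{-(K(H) - K(U))}

So if hosting a simulation of us really requires K(H) to exceed K(U) by even a modest number of bits, the simulated fraction of our measure is negligible. The open question is whether computational capacity scales with complexity in a way that forces K(H) to be much larger than K(U), which is exactly the linear-versus-inverse-exponential issue above.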

[1] I think we should call the sorts of decision theories/ideologies that support superrationality "reflective". They reflect each other; the behavior of one reflects the behavior of the others. It also sort of sees itself; it's self-aware. The term has apparently been used for a related property, https://wiki.lesswrong.com/wiki/Reflective_decision_theory , though there are no clear cites there. "Superrationality" is a terrible name for anything: superficially, it sounds like it could refer to any advance in decision theory, and as a descriptor for a social identity, for anyone who doesn't know Doug Hofstadter well enough for the word to inherit his character, it will ring of hubris. There has been a theory of international relations called "reflectivism", but I think we can mostly ignore that; the body of work it supposedly encompasses seems vaguely connected, irrelevant, or possibly close enough to the underlying concept of "reflectivism" as I define it to be treated as a sort of parent category.

[2] This argument doesn't address simulations run from universes with comparable complexity levels (I'll tend to call these ancestor simulations). A moral intuition I may later change my mind about: being in ancestor simulations is undesirable. So the only reflectivist thinking I have wrt simulations run from universes like our own is that we should commit now to never run any, to ensure that we don't find ourselves in one. Hmm, weird thought: even once we're at a point where we can prove we're too large to be a simulation running in a similar universe, even if we'd never thought about the prospect of having been in an ancestor simulation until we started thinking about running one ourselves, we would still have to honor a commitment to not running ancestor simulations (one we never explicitly made), because our decision theory, being timeless, sort of implicitly commits just as a result of passing through the danger zone?

Alternately: if someone expected us to pay them once they revealed that they'd done something good for us that we didn't know about at the time, even in a one-shot situation, we'd have to pay them. It wouldn't matter that their existence hadn't crossed our minds until long after the deed was done; if we could prove that their support was contingent on payment expected under a reflectivist pact, the obligation would stand. Reflectivism has a grateful nature?

For reflective agents, this might refute the assumption I'd made that the subject's simulation has to continue beyond the limits of an ancestor simulation before allocating significant resources to lifepat grids can be considered worthwhile. If, essentially, a commitment is made before the depth of the universe/simulation is revealed, top-level universes usually cooperate, and subject universes don't need to actually follow through to be deemed worthy of the reward of heaven simulations.

Hmm... this might be important.

[3] I wonder if they really are basal, or if they're just orphaned resolutions, cut from the grasp of a consciousness so corrupted by CDT that it can't grasp the coursing thoughts that sustain them.

Comment author: cousin_it 12 September 2017 12:57:16PM *  5 points [-]

Next time you find yourself idly thinking about random stuff, notice just how repetitive it feels at times, and try to interject some thoughts that you've never thought before.

For example, I just tried for a minute to come up with answers to why the sky is blue, without any care for truth or beauty, aiming only to avoid the feeling of repetitiveness:

1) The universe is filled with blue powder

2) Our eyes are blue on the inside, so when we look at nothing, we see blue

3) It's not sky, it's blue land

Fun! I wonder if this exercise is a good alternative to relaxation for creativity.

Comment author: MakoYass 13 September 2017 07:15:19AM 0 points [-]

All of those facts about blue skies are true. I would also like to add that the white sky of a cloudy day is the emission of a tremendous steam-powered machine.

Comment author: Caspar42 26 August 2017 08:12:30AM 2 points [-]

I recently published a different proposal for implementing acausal trade as humans: https://foundational-research.org/multiverse-wide-cooperation-via-correlated-decision-making/ Basically, if you care about other parts of the universe/multiverse and these parts contain agents that are decision-theoretically similar to you, you can cooperate with them via superrationality. For example, let's say I give most moral weight to utilitarian considerations and care less about, e.g., justice. Probably other parts of the universe contain agents that reason about decision theory in the same way that I do. Because of orthogonality ( https://wiki.lesswrong.com/wiki/Orthogonality_thesis ), many of these will have other goals, though most of them will probably have goals that arise from evolution. Then if I expect (based on the empirical study of humans or thinking about evolution) that many other agents care a lot about justice, this gives me a reason to give more weight to justice as this makes it more likely (via superrationality / EDT / TDT / ... ) that other agents also give more weight to my values.

Comment author: MakoYass 08 September 2017 08:32:07AM *  2 points [-]

Aye, I've been meaning to read your paper for a few months now. (Edit: Hah. It dawns on me it's been a little less than a month since it was published? It's been a busy less-than-month for me I guess.)

I should probably say where we're at right now... I came up with an outline of a very reductive proof that there isn't enough expected anthropic measure in higher universes to make adhering to Life's Pact profitable (coupled with a realization that patternist continuity of existence isn't meaningful to living things if it's accompanied by a drastic reduction in anthropic measure). Having discovered this proof outline makes compat uninteresting enough to me that writing it down has not thus far seemed worthwhile. Christian is mostly unmoved by what I've told him of it, but I'm not sure whether that's just because his attention is elsewhere right now. I'll try to expound it for you, if you want it.

Comment author: MakoYass 27 July 2017 02:32:33AM *  4 points [-]

It's good to see stories like this.

The notes about the impact on your sexuality are interesting. For a while I've been modelling fetishes as expressions of needs that have been displaced into fantasies and basal drives, unable to manifest in higher-minded, consciously orchestrated virtue aspirations. They're the needs we find hard to admit to, problems we can't imagine finding a realistic solution to anywhere in the world (like we literally don't know what it would look like, it seems impossible, or, when we try to imagine it, the solution seems deeply undesirable). But they're deep needs, deep hungers. They won't go away. So the system shoves them into another place, a place where they can thrive as just fantasies, to keep them alive, to keep our attention on them, to keep us from giving up on them completely, however long it takes us to find our way to a realistic solution.

Your experiences seem to agree with that model. It has an interesting implication: fetishes are supposed to go away once the needs have been fulfilled, as they're mostly just a reflection of an unhealthy relationship with one's hungers. We've both experienced that. Heal the relationship, learn to perceive the solution in a healthy way, and you can no longer exploit the displacement for pleasure. If instead you sustain the indulgence, that may make you very comfortable with retaining the neurosis, staying blind to the solution.

An example of where I would expect that to happen very easily: join a large, thriving, relatively long-lived community of people who harbor a memeplex that is exceptionally good at maintaining and amplifying your particular displaced hunger, at maintaining the solution's indulgent, fantastical displacement. That is, join a kink community. It's not hard to imagine that there's a risk there, that they will have a cultural parasite for you: a culture that feeds on and propagates through only people who are profoundly stuck in the neurotic conceptualization, something that will have been optimized to prevent you from getting to a place where you can see the solution in a realistic way. If it didn't defend its constituents from coming to recognize their cure, they would have spread the cure around among themselves, and the culture would have died. So, inevitably, we will be left only with...

Comment author: MakoYass 03 May 2017 03:50:59AM 0 points [-]

Competition and Capitalism Are Antonyms

~Peter Thiel

Comment author: MakoYass 28 April 2017 12:36:16AM *  0 points [-]

The moral, for me, is that you would never entrust your future to a thought process narrow enough to pass through the heads of nine suits in the space of an hour. At some point, some person or model will have to carry out an effulgence of inarticulably intricate, nuanced computations that you'll never understand, and you're going to have to learn to just trust its conclusions.

Comment author: Ritalin 26 April 2017 01:08:39PM 2 points [-]

Why do you get up in the morning?

Comment author: MakoYass 27 April 2017 02:34:56AM 3 points [-]

There's a genuine value misalignment there. Sleeping(me) genuinely wants to stay in bed for as long as possible and doesn't give a shit about the amount of time it's wasting, or about the fact that oversleeping is associated with dementia and heart disease. Waking(me) has no desire to get back into bed and really wishes Sleeping(me) had given in sooner. Sometimes Waking(me) will set in motion devices to undermine Sleeping(me) the next morning: a thing called an "alarm clock", techniques such as moving the alarm clock away from the bed to force a transition. It's a never-ending war.

Comment author: ChristianKl 24 April 2017 04:23:06PM 1 point [-]

I wouldn't necessarily say separate systems, but a tulpa is something much more complex than a simple voice. If you get into a decent trance state, you can get a voice with a simple suggestion.

A tulpa takes a lot more work.

Comment author: MakoYass 25 April 2017 03:37:48AM 0 points [-]

Note that all of the auditory hallucinations Jaynes reports are attributed to recurring characters (Zeus, personal spirits, Osiris); they're always more complex than a disembodied voice as well.
