All of ProgramCrafter's Comments + Replies

Systemic opacity, state-driven censorship, and state control of the media means AGI development under direct or indirect CCP control would probably be less transparent than in the US, and the world may be less likely to learn about warning shots, wrongheaded decisions, reckless behaviour, etc.

That's screened off by actual evidence, which is that top labs don't publish much no matter where they are, so I'd only agree with "equally opaque".

Aidan McLaughlin (OpenAI): ignore literally all the benchmarks
the biggest o3 feature is tool use. Ofc it’s smart, but it’s also just way more useful.
>deep research quality in 30 seconds
>debugs by googling docs and checking stackoverflow
>writes whole python scripts in its CoT for fermi estimates

McKay Wrigley: 11/10

Newline formatting is off (and also for many previous posts).

I'm not sure if this will be of any use, since your social skills will surely be warped when you expect to iterate on them (in the same manner that radical transparency reduces awareness of feelings).

In most cases, you could build a mini-quadcopter, "teach" it some tricks and try showcasing it, getting video as a side effect!

Browsing through recent rejects, I found an interesting comment that suggests an automatic system that would "build a prior on whether something is true by observing social interactions around it vs competing ideas".

@timur-sadekov While it will most certainly fail, I do remember that the ultimate arbiter is experiment, so I shall reserve judgement. Instead, I'm calling for you to test the idea on prediction markets (Manifold, for instance) and publish the results.

The AI industry is different—more like biology labs testing new pathogens: The biolab must carefully monitor and control conditions like air pressure, or the pathogen might leak into the external world.

This looks like a cached analogy, because given the previous paragraph this would fit better: "the AI industry is different, more like testing planes; their components can be internally tested, but eventually the full plane must be assembled and flown, and then it can damage the surroundings".

Same goes for AI: If we don’t keep a watchful eye, the AI might be able to

... (read more)
2sjadler
I appreciate the feedback. That’s interesting about the plane vs. car analogy - I tended to think about these analogies in terms of life/casualties, and for whatever reason, describing an internal test-flight didn’t rise to that level for me (and if it’s civilian passengers, that’s an external deployment). I also wanted to convey the idea not just that internal testing could cause external harm, but that you might irreparably breach containment. Anyway, appreciate the explanation, and I hope you enjoyed the post overall!

I suggest additional explanation.

The bigger the audience is, the more people there are who won't know a specific idea/concept/word (xkcd's comic #1053 "Ten Thousand" captures this quite succinctly), so you'll simply have to shorten.

I took the logarithm of sentence length and linearly fitted it against the logarithm of world population (that isn't really precise, since authors presumably mostly cared about their own society, but checking that would be more time-expensive).

Relevant lines of Python REPL

>>> import math
>>> wps = [49, 50, 42, 20, 21,

... (read more)
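
A minimal sketch of that fit (my reconstruction, not the original session; the population values below are placeholders, since the original data is truncated above):

>>> import math
>>> wps = [49, 50, 42, 20, 21]        # words per sentence (the visible prefix of the data)
>>> pops = [2e8, 3e8, 5e8, 2e9, 3e9]  # hypothetical world populations at matching dates
>>> xs = [math.log(p) for p in pops]
>>> ys = [math.log(w) for w in wps]
>>> mx = sum(xs) / len(xs)
>>> my = sum(ys) / len(ys)
>>> slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
>>> intercept = my - slope * mx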
5Kaj_Sotala
Wouldn't people not knowing specific words or ideas be equally compatible with "you can't refer to the concept with a single word so you have to explain it, leading to longer sentences"?

Given the gravity of Sam Altman's position at the helm of the company leading the development of an artificial superintelligence which it does not yet know how to align -- to imbue with morality and ethics -- I feel Annie's claims warrant a far greater level of investigation than they've received thus far. 

Then there's a bit of a shortage of something public... and ratsphere-adjacent... maybe prediction markets?

I can create (subsidize) a few, given resolution criteria.

Alphabet of LW Rationality, special excerpts

  • Cooperation: you need to cooperate with yourself to do anything. Please do so iff you are building ASI;
  • Defection: it ain't no defection if you're benefitting yourself and advancing your goals;
  • Inquiry: a nice way of making others work for your amusement or knowledge expansion;
  • Koan: if someone is not willing to do research for you, you can waste their time on pondering a compressed idea.

This concludes my April the 1st!

I claim that I exist, and that I am now going to type the next words of my response. Both of those certainly look true. As for whether these beliefs are provable, I do not particularly care; instead, I invoke the nameless:

Every step of your reasoning must cut through to the correct answer in the same movement. More than anything, you must think of carrying your map through to reflecting the territory.

My black-box functions yield a statement "I exist" as true or very probable, and they are also correct in that.

After all, if I exist, I do not want to deny my... (read more)

0milanrosko
Again, read the G-Zombie Argument carefully. You cannot deny your existence. Here is the original argument, more formally... (But there is a more formal version) https://www.lesswrong.com/posts/qBbj6C6sKHnQfbmgY/i-g-zombie If you deny your existence... and you don't exist... AHA! Well then we have a complete system. Which is impossible. But since nobody is reading the paper fully, and everyone makes loud-mouthed assumptions about what I want to show with EN... The G-Zombie is not the P-Zombie argument, but a far more abstract formulation. But these idiots don't get it.

What's useful about them? If you are going to predict (the belief in) qualia, on the basis of usefulness, you need to state the usefulness.

There might be some usefulness!

The statement I'd consider is "I am now going to type the next characters of my comment". This belief turns out to be true by direct demonstration; it is not provable, because I could as well leave the commenting until tomorrow and be thinking "I am now going to sleep"; it is not particularly justifiable in advance; and it is useful for making specific plans that branch less on my own actions.

I object to the original post because of probabilistic beliefs, though.

1milanrosko
Thanks for being thoughtful. To your objection: Again, EN knew that you would object. The thing is, EN is very abstract: it's like two halting machines who think that they are universal halting machines trying to understand what it means that they are not universal halting machines. They say: Yes, but if the halting problem is true, then I will say it's true. I must be a UTM.

I don’t think there is a universal set of emojis that would work on every human, but I totally think that there is a set of such emojis (or something similar) that would work on any given human at any given time, at least a large percentage of the time, if you somehow were able to iterate enough times to figure out what it is.

Probably something with more informational content than emojis, like images (perhaps slightly animated). Trojan Sky, essentially.

The core idea behind confidentiality is to stop social pressure (whether on the children or the parents).

Parents have a strong right to not use genomic engineering technology, and if they do use it then they have a strong right to not alter any given trait.

What better way to prevent all the judgement than making information sharing completely voluntary?

2TsviBT
Yeah this seems like an important question. I'm not sure what to think. Ideally someone with more background in medical ethics could address this. E.g. I'm not sure how to navigate what would happen if, for example, law enforcement claimed it needed access to some info (e.g. to enforce regulations about germline engineering, or to use in forensic investigation of a crime); or if there were a malpractice suit about a germline engineering clinic, or something. I'm also not sure what is standardly done, and what the good and bad results are, in situations where a child might have an interest in their parents not sharing some info about them. But certainly, in a list of innovation-positive ethical guidelines for scientists and clinicians regarding germline engineering, some sort of strong protection of privacy would have to be included. This is a good point, thanks.

Important, and would be nice even if passed as-is! Admittedly, there's some space to have even stronger ideas, like...

"Confidentiality of genomic interventions. Human has natural right for details of which aspects of their genome were custom-chosen, if any, to be kept confidential" (probably also prohibit parents/guardians from disclosing that, since knowledge cannot be sealed back into the box).

1ProgramCrafter
The core idea behind confidentiality is to stop social pressure (whether on the children or the parents). What better way to prevent all the judgement than making information sharing completely voluntary?

A nice scary story! How fortunate that it is fiction...

... or is it? If we get mind uploads, someone will certainly try to gradient-ascent various stimuli (due to simple hostility or the Sixth Law of Human Stupidity), and I do believe the underlying fact that a carefully crafted image could hijack mental processes to some extent.

It's 16:9 (modulo possible changes in the venue).

I have seen your banner and it is indeed one of the best choices out there! For announcing the event I preferred another one.

Hi! By any chance, do you have HPMOR banners (to display on screen, for instance)?

4Screwtape
What shape is the screen?  This one is probably my favourite for an event banner.

I get the impression that you're conflating two meanings of «personal» - «private» and «individual». The fact that I might feel uncomfortable discussing this in a public forum doesn’t mean it «only works for me» or that it «doesn’t work, but I’m shielded from testing my beliefs due to privacy». There are always anonymous surveys, for example. Perhaps you meant something else?

I meant to say that private values/things are unlikely to coincide between different people, though now I'm a bit less sure.

Moreover, even if I were to provide yet another table of my

... (read more)

Seeing you know the exact numbers, I wonder if you could connect with those other families? It's harder to get the best outcome if players do not cooperate and do not even know the wishes of others. Adding that this would be a valuable socializing opportunity would be somewhat hypocritical of me, but it's still true.

Upvoted as a good re-explanation of CEV complexity in simpler terms! (I believe LW will benefit from recalling long-understood things so that it has a chance of predicting the future in greater detail.)

In essence, you prove the claim "Coherent Extrapolated Volition would not literally include everything desirable happening effortlessly and everything undesirable going away". Would I be wrong to guess it argues against the position in https://www.lesswrong.com/posts/AfAp8mEAbuavuHZMc/for-the-sake-of-pleasure-alone?

That said, current wishes of many people includ... (read more)

8Richard_Kennaway
I suppose it does. That article was not in my mind at the time, but, well, let's just say that I am not a total hedonistic utilitarian, or a utilitarian of any other stripe. "Pleasure" is not among my goals, and the poster's vision of a universe of hedonium is to me one type of dead universe.

And introspectively, I don’t see any barriers to comparing love with orgasm, with good food, with religious ecstasy, all within the same metric, even though I can’t give you numbers for it.

Why not? It'd be interesting to hear valuations from your experience and experiments, if that isn't too personal.

(On the other hand, if it IS too personal, then who would choose to write the metric down for an automatic system optimizing it by their whims?)

2Greenless Mirror
I get the impression that you're conflating two meanings of «personal» - «private» and «individual». The fact that I might feel uncomfortable discussing this in a public forum doesn’t mean it «only works for me» or that it «doesn’t work, but I’m shielded from testing my beliefs due to privacy». There are always anonymous surveys, for example. Perhaps you meant something else? Moreover, even if I were to provide yet another table of my own subjective experience ratings, like the ones here, you likely wouldn’t find it satisfactory — such tables already exist, with far more respondents than just myself, and you aren’t satisfied. Probably because you disagree with the methodology — for instance, since measuring «what people call pleasurable» is subject to distortions like the compulsions mentioned earlier. But the very fact that we talk about compulsions suggests that there is a causal distinction between pleasure and «things that make us act as if we’re experiencing pleasure». And the more rational we become, the better we get at distinguishing them and calibrating our own utility functions. If we were to measure which brain stimuli would make a person press the «I AM HAPPY» button more forcefully, somewhere around the point of inducing a muscle spasm we’d quickly realize that we’re measuring the wrong thing. There are more complex traps as well. It doesn’t take much reflection to notice that compulsively scratching one’s hands raw for a few hours of relief does not reflect one’s true values. Many describe certain foods as not particularly tasty yet addictive — like eating one potato chip and then feeling compelled to finish the entire bag, even if you don’t actually like it. It takes a certain level of awareness to recognize that social expectations of happiness differ from one’s real happiness, yet psychotherapy seems to handle that successfully. There are systemic modeling errors, such as people preferring a greater amount of pain if its average intensity per epi

The follow-up post has a very relevant comment:

Can you just give every thief a body camera?

Well, of course this is illegal under current US law; however, it would help against being unjustly accused, as in your example of a secondary crime. It would also be helpful against repeat offences for a whole range of other crimes.

I'd maintain that those problems already exist in 20M-people cities and will not necessarily become much worse. However, by increasing city population you bring more people into the problems, which doesn't seem good.

2samuelshadrach
Got it. I understood what you're trying to say. I agree living in cities has some downsides compared to living in smaller towns, and if you could find a way to get the best of both instead it could be better than either.

Is there any engineering challenge such as water supply that prevents this from happening? Or is it just the lack of any political elites with willingness + engineering knowledge + governance of sufficient funds?

That dichotomy is not exhaustive, and I believe going through with the proposal will necessarily make the city's inhabitants worse off.

  1. Humans' social machinery is not suited to living in such large cities, as of the current generations. Whom to get acquainted with, in the first place? Isn't there a lot of opportunity cost to any event?
  2. Humans' biomachinery is not suited
... (read more)
1samuelshadrach
Sorry I didn’t understand your comment at all. Why are 1, 2 and 4 bigger problems in 1 billion population city versus say a 20 million population city?

[Un]surprisingly, there's already a Sequences article on this, namely Is That Your True Rejection?.

(I thought this comment would be more useful with a call to action - "so how should we rewrite that article and make it common knowledge for everyone who joined LW recently?" - but was too lazy to write it.)

In general if you "defect" because you thought the other party would that is quite sketchy. But what if proof comes out they really were about to defect on you?

By the way, if we consider game theory and logic to be at all relevant, then there's a corollary of Löb's Theorem: if you defect given proof that the counterparty will defect, and the other party defects given proof that you will, then you both will, logically, defect against each other, with no choice in the matter. (And if you additionally declare that you cooperate given proof that partner will cooper... (read more)
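
A sketch of that corollary, as I understand it (my reconstruction; it assumes the "defect given proof" clauses are provable biconditionals):

  Löb's theorem: if ⊢ □P → P, then ⊢ P (where □ means "provably").
  Suppose D_A ↔ □D_B and D_B ↔ □D_A, and let P = D_A ∧ D_B.
  Then □P → (□D_A ∧ □D_B) → (D_B ∧ D_A) = P, so ⊢ □P → P,
  and Löb yields ⊢ D_A ∧ D_B: mutual defection is provable.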

Weak-upvoted because I believe this topic merits some discussion, but the discourse level should be higher, since setting NSFW boundaries for users relates to many other topics:

  1. Estimating social effect of imposing a certain boundary.
    Will stopping rough roleplaying scenarios lead to fewer people being psychopaths? That seems to be an empirical question, since intuitively the effect might go either way - doing the same in the real world instead OR internalizing that rough and inconsiderate actions are not normal.
  2. Simulated people's opinion on being placed in the user-requeste
... (read more)

That's all conditional on P = NP, isn't it? Also, which part do you consider weaker: digital signatures or hash functions?

2Ape in the coat
Not necessarily. There may be a fast solution for some specific cases, related to vulnerabilities in the protocol. And then there is the question of brute-force computational power, due to having a Dyson swarm around the Sun.

Line breaks seem to be broken (unless it was your intention to list all the Offers-Mundane-Utility-s and so on in a single paragraph).

Acknowledged, it is not visible anymore!

Hi! I believe this post, not the one for the 2021 review, is meant to be pinned on the front page?

3Raemon
That... is not supposed to be visible at all. Is it still there when you refresh?

I'd like there to be a reaction of "Not Exhaustive", meant for a step where a comment (or top-level post, for that matter) misses an important case - how a particular situation could play out, perhaps, or an essential system component that is not listed. An example use: on the statement "to prevent any data leaks, one must protect how their systems transfer data and how they process it", with the missed component being protection of storage as well.

 

I recall wishing for it like three times since the New Year, with the current trigger being this comment:

Elon alr

... (read more)
4habryka
(I like it, seems like a cool idea and maybe worth a try, and indeed a common thing that people mess up)

Now I feel like rationality itself is an infohazard. I mean, rationality itself won't hurt you if you are sufficiently sane, but if you start talking about it, insufficiently sane people will listen, too. And that will have horrible consequences. (And when I try to find a way to navigate around this, such as talking openly only to certifiably sane people, that seems like the totally cultish thing to do.)

There is an alternative way, the other extreme: get more and more rationalists.
If the formed communities do not share the moral inclinations of LW communit... (read more)

5Viliam
From my experience, the rationality community in Vienna does not share any of the craziness in Bay Area that I read about, so yeah, it seems plausible that different communities will end up significantly different. I think there is a strong founder effect... the new members will choose whether they join or not depending on how comfortable they feel among the existing members. Decisions like "we have these rules / we don't have any rules", "there are people responsible for organization and safety / everyone needs to take care of themselves" once established, easily become "the way this is done here". But you are also limited by the pool you are recruiting the potential new members from. Could be, there are simply not enough people to make a local rationality community. Could be, the local memes are so strong (e.g. positive attitude towards drug use, or wokeness) that in practice you cannot push against them without actively rejecting most of wannabe members, which would be a weird dynamic. (You already need to push strongly against people who simply do not get what rationality means, but are trying to join anyway.)

Actually, AIs can be run on other kinds of land (to suggest off the top of my head: sky islands over oceans, or hot air balloons for a more compact option) which are not usable by humans. There would have to be a whole lot of datacenters to make people short on land - unless there are new large factories built.

It seems that an alternative to AI unlearning is often overlooked: just remove the dataset parts which contain sensitive (or, for that matter, false) information, or move training on them towards the beginning so they aid with language syntax only. I don't think a bit of inference throughout the dataset is any more expensive than training on it.
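
A minimal sketch of the preprocessing this describes (my own illustration; flag_sensitive stands in for a hypothetical one-pass classifier, not any real API):

>>> def prepare_corpus(documents, flag_sensitive, mode="drop"):
...     flagged = [d for d in documents if flag_sensitive(d)]
...     clean = [d for d in documents if not flag_sensitive(d)]
...     if mode == "drop":
...         return clean            # option 1: remove flagged parts entirely
...     return flagged + clean      # option 2: train on them first, for syntax only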

3saahir.vazirani
Typically, the information being unlearnt is from the initial training with mass amounts of data from the internet so it may be difficult to pinpoint what exactly to remove while training.

In practice I do not think this matters, but it does indicate that we’re sleeping on the job – all the sources you need for this are public, why are we not including them.

I'd reserve judgement on whether that matters, but I can attest that a large part of the content is indeed skipped... probably for the same reasons the market didn't react to DeepSeek in advance: people are just not good at taking in distant information which might still be helpful to them.

If you want to figure out how to achieve good results when making the AI handle various human conflicts, you can't really know how to adapt and improve it without actually involving it in those conflicts.


I disagree. There are a whole lot of conflicts (some kinds make it into writing, some just happen) of different scales, both in history and now; I believe they span the human conflict space almost fully. Just aggregating this information could lead to very good advice on handling everything, which the AI could act upon if it so needed.

But if I know that there are external factors, I know the bullet will deviate for sure. I don't know where but I know it will.

You assume that the blur kernel is non-monotonic, and this is our entire disagreement. I guess that different tasks have different noise structure (for instance, if somehow noise geometrically increased, we wouldn't ever return to an exact point we had left).

(Figure: visualization of circle and disk blur kernels.)

However, if the noise is composed of many i.i.d. small parts, then it has a normal distribution, which is monotonic in the relevant sense.

2AnthonyC
I mentioned this in my comment above, but I think it might be worthwhile to differentiate more explicitly between probability distributions and probability density functions. You can have a monotonically-decreasing probability density function F(r) (aka the probability of being in some range is the integral of F(r) over that range, integral over all r values is normalized to 1) and have the expected value of r be as large as you want. That's because the expected value is the integral of r*F(r), not the value or integral of F(r). I believe the expected value of r in the stated scenario is large enough that missing is the most likely outcome by far. I am seeing some people argue that the expected distribution is F(r,θ) in a way that is non-uniform in θ, which seems plausible. But I haven't yet seen anyone give an argument for the claim that the aimed-at point is not the peak of the probability density function, or that we have access to information that allows us to conclude that integrating the density function over the larger-and-aimed-at target region will not give us a higher value than integrating over the smaller-and-not-aimed-at child region

Can you think of any good reason why I should think that?

Intuition: imagine a picture with a bright spot in the center, and blur it. The brightest point will still be in the center (before rounding pixel values off to the nearest integer, that is; only then may a disk of exactly equiprobable points form).

My answer: because a strictly monotonic[1] probability distribution prior to accounting for external factors (either "there might be negligible aiming errors" or "the bullet will fly exactly where needed" is suitable) will remain strictly monotonic when blur... (read more)
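
As a quick check of that claim (a sketch of mine, not from the thread: blur a radially decreasing density and verify its peak stays at the aimed-at center; assumes numpy and scipy are available):

>>> import numpy as np
>>> from scipy.ndimage import gaussian_filter
>>> n = 101
>>> ys, xs = np.mgrid[:n, :n]
>>> density = np.exp(-((xs - n // 2) ** 2 + (ys - n // 2) ** 2) / 50.0)
>>> blurred = gaussian_filter(density, sigma=8.0)  # the "external factors" noise
>>> peak = np.unravel_index(np.argmax(blurred), blurred.shape)
>>> print(peak[0], peak[1])
50 50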

2Jim Buhler
Ok so that's defo what I think assuming no external factors, yes. But if I know that there are external factors, I know the bullet will deviate for sure. I don't know where but I know it will. And it might luckily deviate a bit back and forth and come back exactly where I aimed, but I don't get how I can rationally believe that's any more likely than it doing something else and landing 10 centimeters more on the right. And I feel like what everyone in the comments so far is saying is basically "Well, POI!", taking it for granted/self-obvious, but afaict, no one has actually justified why we should use POI rather than simply remain radically agnostic on whether the bullet is more likely to hit the target than the kid. I feel like your intuition pump, for example, is implicitly assuming POI and is sort of justifying POI with POI.

Like why are time translations so much more important for our general work than space translations?

I'd imagine that happens because we are able to coordinate our work across time (essentially, execute some actions), while coordinating work across space-separated instances is much harder (nowadays, it is part of IT's domain under the name of "scalability").

An interesting framing! I agree with it.

As another example: in principle, one could make a web server use an LLM connected to a database to serve any requests, without coding anything. It would even work... till the point someone convinced the model to rewrite the database to their whims! (A second problem is that a normal site should be focused on something, in line with the famous "if you can explain anything, your knowledge is zero".)

That article is suspiciously scarce on what micro-controls the units... well, glory to LLMs for decent macro management, then! (Though I believe that capability is still easier to get without text neural networks.)

Answer by ProgramCrafter10

In StarCraft II, adding LLMs (to do/aid game-time thinking) will not help the agent in any way, I believe. That is because inference has quite a large latency, especially as most of the prompt changes with all the units moving, so tactical moves are out; strategic questions like "what is the other player building" and "how many units do they already have" are better answered by card-counting - counting visible units and inferring the proportion of remaining resources (or scouting if possible).

I guess it is possible that bots' algorithms could be improved with LLMs, but that requires a high-quality insight; I'm not convinced that o1 or o3 give such insights.

9gwern
Ma et al 2023 is relevant here.

I don't think so, as I have had success explaining away the paradox with the concept of "different levels of detail" - saying that free will is a very high-level concept and further observations reveal a lower-level view, calling upon an analogy with algorithmic programming's segment tree.

(A segment tree is a data structure that replaces an array, allowing one to modify its values and compute a given function over all array elements efficiently. It is based on a tree of nodes, each of them representing a certain subarray; each position is therefore handled by several - specif... (read more)
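
For context, a minimal sketch of that data structure (my illustration, not from the comment), supporting point updates and range-sum queries in O(log n):

class SegmentTree:
    def __init__(self, values):
        self.n = len(values)
        self.tree = [0] * self.n + list(values)  # leaves sit at indices n..2n-1
        for i in range(self.n - 1, 0, -1):       # each inner node covers a subarray
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def update(self, i, value):
        i += self.n
        self.tree[i] = value
        while i > 1:                             # refresh the ancestors of leaf i
            i //= 2
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def query(self, lo, hi):                     # sum over the half-open range [lo, hi)
        res = 0
        lo += self.n
        hi += self.n
        while lo < hi:
            if lo & 1:
                res += self.tree[lo]
                lo += 1
            if hi & 1:
                hi -= 1
                res += self.tree[hi]
            lo //= 2
            hi //= 2
        return res

For example, SegmentTree([1, 2, 3, 4]).query(1, 3) returns 5; sum generalizes to any associative, commutative operation. The point of the analogy is that querying a high-level node and reading individual leaves are views of the same data at different levels of detail.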

Doesn't the "threat" to delete the model have to be DT-credible instead of "credible conditioned on being human-made", given that LW with all its discussion about threat resistance and ignoring is in training sets?

(If I remember correctly, a decision theory must ignore "you're threatened to not do X, and the other agent is claiming to respond in such a way that even they lose in expectation" and "another agent [self-]modifies/instantiates an agent making them prefer that you don't do X".)

The surreal version of the VNM representation theorem in "Surreal Decisions" (https://arxiv.org/abs/2111.00862) seems to still have a surreal version of the Archimedean axiom.

That's right! However, it is not really a problem unless we can obtain surreal probabilities from the real world; and if all our priors and evidence are just real numbers, updates won't lead us into the surreal area. (And it seems non-real-valued probabilities don't help us in infinite domains, as I've written in https://www.lesswrong.com/posts/sZneDLRBaDndHJxa7/open-thread-fall-2024?c... (read more)

1Pretentious Penguin
So would it be accurate to say that a preference over lotteries (where each lottery involves only real-valued probabilities) satisfies the axioms of the VNM theorem (except for the Archimedean property) if and only if that preference is equivalent to maximizing the expectation value of a surreal-valued utility function? Re the parent example, I agree that changing in an expectable way is problematic to rational optimizing, but I think "what kind of agent am I happy about being?" is a distinct question from "what kinds of agents exist among minds in the world?".

Yes, many people will have problems with the Archimedean axiom, because it implies that everything has a price (that any good option can be probability-diluted enough that a mediocre one is chosen instead), and people don't take it kindly when you tell them "you absolutely must have a trade-off between value A and value B" - especially if they really don't have a trade-off, but also if they don't want to admit or consciously estimate it.

Thankfully, that VNM property is not that critical for rational decision-making, because we can simply use surreal numbers ins... (read more)
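
(For reference, the standard statement of the axiom in question, as I recall it: for lotteries L ≻ M ≻ N, the Archimedean/continuity axiom requires some p, q ∈ (0, 1) such that pL + (1-p)N ≻ M ≻ qL + (1-q)N. The right half is exactly the "dilution" property above: mixing enough of the worst outcome N into the good option L drops it below the mediocre M.)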

4Pretentious Penguin
What is the precise statement for being able to use surreal numbers when we remove the Archimedean axiom? The surreal version of the VNM representation theorem in "Surreal Decisions" (https://arxiv.org/abs/2111.00862) seems to still have a surreal version of the Archimedean axiom. Re the parent example, I was imagining that the 2-priority utility function for the parent only applied after they already had children, and that their utility function before having children is able to trade off between not having children, having some who live, and having some who die. Anecdotally it seems a lot of new parents experience diachronic inconsistency in their preferences.

But technology is not a good alternative to good decision making and informed values.

After thinking on this a bit, I've somewhat changed my mind.

(Epistemic status: filtered evidence.)

Technology and other progress have two general directions: a) more power for those who are able to wield it; b) increasing forgiveness (distance to failure). For some reason, I thought that b) was a given, at least on average. However, it now came to mind that it's possible for someone to
1) get two dates to accidentally overlap (or before confirming with partners-to-be that poly ... (read more)

5jimmy
That sounds basically right to me, which is why I put effort into learning (and teaching) to enjoy the right things. I'm pretty proud of the fact that both my little girls like "liver treats".   I think that's right, but also "more distance to failure" doesn't help so much if you use your newfangled automobile to cover that distance more quickly. It's easier to avoid failure, but also easier to fail. A gun makes it easier to defend yourself, and also requires you to grow up until you can make those calls correctly one hundred percent of the time. With great power comes great responsibility, and all that. I'll take the car, and the gun, and the society that trusts people with cars and guns and other technologically enabled freedoms. But only because I think we can aspire to such responsibilities, and notice when they're not met. All the enabling with none of the sobering fear of recklessness isn't a combination I'm a fan of. With respect to the "why do you believe this" question on my previous comment about promiscuity being statistically linked with marital dissatisfaction, I'm not very good at keeping citations on hand so I can't tell you which studies I've seen, but here's what chatgpt found for me when I asked for studies on the correlation. https://www.jstor.org/stable/3600089 https://unews.utah.edu/u-researcher-more-sex-partners-before-marriage-doesnt-necessarily-lead-to-divorce/ https://ifstudies.org/blog/testing-common-theories-on-the-relationship-between-premarital-sex-and-marital-stability https://www.proquest.com/openview/46b66af73b830380aca0e6fbc3b597e3/1 I don't actually lean that hard on the empirical regularity though, because such things are complicated and messy (e.g. the example I gave of a man with a relatively high partner count succeeding because he took an anti-promiscuous stance). The main reason I believe that pills don't remove all the costs of promiscuity is that I can see some of the causal factors at work and have experience actual

I object to the framing of society being all-wise, and instead believe that for most issues it's possible to get the benefits of both ways given some innovators on that issue. For example, visual communication was either face-to-face or heavily resource-bounded till the computer era - then there were problems of quality and price, but those have been almost fully solved in our days.
Consequently, I'd prefer "bunch of candy and no diabetes still" outcome, and there are some lines of research/ideas into how this can be done.
As for "nonmarital sex <...> ... (read more)

jimmy163

I object to the framing of society being all-wise,

Society certainly is not all-wise, and I did not frame it as such. But it is wiser than the person who thinks "Trying heroin seems like a good idea", and then proceeds to treat heroin as if it's the most important thing in the universe.

Is it wiser than you, in some limited way in some limited context that you are unaware of? Is it less wise, in other ways? I'd bet on "both" before either.

Consequently, I'd prefer "bunch of candy and no diabetes still" outcome, and there are some lines of research/ideas into

... (read more)

Nicely written!

A nitpick: I believe "voluntary cooperation" shouldn't always be equated with "Pareto preferred". Consider an Ultimatum game, where two people have 10 goodness tokens to split; the first person suggests a split (just once), then the second may accept or reject (when rejecting, all tokens are discarded). 9+1 is Pareto superior to 0+0, but one shouldn't [100%] accept 9+1, lest that become all they are ever offered. Summarizable with "Don't do unto yourself what you wouldn't want to be done unto you", or something like that.
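
A toy illustration (my sketch; accepts(offer) is a hypothetical acceptance rule giving the probability that the responder agrees to keep that many tokens):

>>> def proposer_best_offer(accepts):
...     return max(range(11), key=lambda offer: accepts(offer) * (10 - offer))
...
>>> proposer_best_offer(lambda o: 1.0 if o >= 1 else 0.0)  # responder takes any positive offer
1
>>> proposer_best_offer(lambda o: 1.0 if o >= 5 else 0.0)  # responder rejects unfair splits
5

A responder who credibly rejects unfair splits ends up being offered the fair one, even though each individual rejection is Pareto-dominated.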

7Allison Duettmann
Agree. Upholding voluntary cooperation should be our meta-strategy whether or not it leads to Pareto-preferred outcomes, but it's a very nice additional feature that it often does :)