All of quiet_NaN's Comments + Replies

Unlike Word, the human genome is self-hosting. That means that it is paying fair and square for any complexity advantage it might have -- if Microsoft found that the x86 was not expressive enough to code in a space-efficient manner, they could likewise implement more complex machinery to host it.

Of course, the core fact is that the DNA of eukaryotes looks memory-efficient compared to the bloat of Word.

There was a time when Word was shipped on floppy disks. From what I recall, it came on multiple floppies, but on the order of ten, not a thousand. With these... (read more)

I think formally, the Kolmogorov complexity would have to be stated as the length of a description of a Turing Machine (not that this gets completely rid of any wiggle room).

Of course, TMs do not offer a great gaming experience.

"The operating system and the hardware" is certainly an upper bound, but also quite certainly to be overkill.

Your floating point unit or your network stack are not going to be very busy while you play Tetris.

If you cut it down to the essentials (getting rid of things like scores which have to be displayed as characters, or background g... (read more)

Agreed.

If the authors claim that adding randomness to the territory in classical mechanics requires making it more complex, they should also notice that for quantum mechanics, removing the probability from the territory (as Bohmian mechanics does) tends to make the theories more complex.

Also, QM is not a weird edge case to be discarded at leisure; it is, to the best of our knowledge, a fundamental aspect of what we call reality. Sidelining it is like arguing "any substance can be divided into arbitrarily small portions" -- sure, as far as everyda... (read more)

8Daniel Herrmann
I agree with both of you --- QM is one of our most successful physical theories, and we should absolutely take it seriously! We de-emphasized QM in the post so we could focus on the de Finetti perspective, and what it teaches us about chance in many contexts. QM is also very much worth discussing --- it would just be a longer, different, more nuanced post.

It is certainly true that certain theories of QM --- such as the GRW one mentioned in footnote 8 of the post --- do have chance as a fundamental part of the theory. Insofar as we assign positive probability to such theories, we should not rule out chance as being part of the world in a fundamental way. Indeed, we tried to point out in the post that the de Finetti theorem doesn't rule out chances, it just shows we don't need them in order to apply our standard statistical reasoning. In many contexts --- such as the first two bullet points in the comment to which I am replying --- I think that the de Finetti result gives us strong evidence that we shouldn't reify chance.

I also think --- and we tried to say this in the post --- that it is an open question and active debate how much this very pragmatic reduction of chance can extend to the QM context. Indeed, it might very well be that the last two bullet points above do involve chance being genuinely in the territory.

So I suspect we pretty much agree on the broad point --- QM definitely gives us some evidence that chances are really out there, but there are also non-chancey candidates. We tried to mention QM and indicate that things get subtle there without it distracting from the main text.

Some remarks on the other parts of the comments are below, but they are more for fun & completeness, as they get in the weeds a bit.

***

In response to the discussion of whether or not adding randomness or removing randomness makes something more complex, we didn't make any such claim. Complexity isn't a super motivating property for me in thinking about fundament

Okay. So from what I understand, you want to use a magnetic effect observed in plasma as a primary energy source.

Generally, a source of energy works by taking a fuel which contains energy and turning it into a less energetic waste product. For example, carbon and oxygen can be burned to form CO2. Or one can split some uranium nucleus into two fragments which are more stable and reap the energy difference as heat.

Likewise, a wind turbine will consume some of the kinetic energy of the air, and a solar panel will take energy from photons. For a fusion reactor... (read more)

1[anonymous]
Title: Response: "But Where Does the Energy Actually Come From?"

First, thanks for articulating this question so clearly—it's central to any proposed energy device. Let me restate it:

If we're not transmuting matter (like burning carbon or fusing hydrogen), and we're not tapping a natural flow (like sunlight or wind), then what "fuel" are we actually using to get net energy out?

Short answer: This concept is essentially a new mechanism to convert externally supplied magnetic or electrical energy into usable power via magnetic reconnection, rather than a new fundamental energy source. It's best viewed as a type of "pulsed power" device: you charge up the magnetic field, trigger reconnection, and then guide the released energy outside. That stored energy must come from somewhere—e.g., external coils or circuits that initially pump energy into the plasma's B-field.

Below is the longer explanation.

1. The Analogy: A Magnetic "Capacitor"

Think of the proposed device like a capacitor bank in an electrical circuit. Normally, you:
1. Use an external power supply to charge the capacitor.
2. Then discharge the capacitor into a load, harnessing the stored energy.

Net "new" energy does not magically appear; you are just transferring energy you paid for at step (1). If your charging and discharging steps are efficient, you might shape when and how energy is delivered in a useful way (e.g. short, high-power pulses).

Magnetic Field as Storage

In our "pulsed MHD" design, the magnetic field is effectively our "capacitor." You wind up big coils around the plasma vessel, feed them electrical current, and build a strong B-field inside. That energy is stored in the field (just as a capacitor stores energy in an electric field). Then, you deliberately induce magnetic reconnection events to discharge that stored energy in a short, intense pulse—and crucially, you set up boundary conditions so that the discharge primarily goes into a current t
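For a sense of scale of this magnetic "capacitor" (my numbers, not the commenter's), the stored energy is set by the field's energy density:

```latex
u = \frac{B^2}{2\mu_0} \approx 0.4\,\mathrm{MJ/m^3} \;\text{at}\; B = 1\,\mathrm{T},
\qquad
E_{\text{stored}} = \int \frac{B^2}{2\mu_0}\,\mathrm{d}V .
```

So even an ambitious 10 T field stores only ~40 MJ/m³, roughly the chemical energy of one liter of diesel per cubic meter of field volume, which is consistent with the "pulsed power device, not primary energy source" framing above.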
quiet_NaN*10

Seconded. Also, in the second picture, that line is missing, so it seems that it is just Zvi complaining about the "win probability"?

My guess is that the numbers (sans the weird negative sign) might indicate the returns in percent for betting on either team. Then, if the odds were really 50:50 and the bookmaker was not taking a cut, they should be 200 each? So 160 would be fair if the first team had a win probability of 0.625, while 125 would be fair if the other team had a win probability of 0.8. Of course, these add up to more than one, which is to be ex... (read more)

1Don P.
Those odds are in the confusing "American format", in which a positive number is "how much would you win (in addition to your bet amount) on a $100 bet", and the negative number is -- careful here! -- how much would you have to bet in order to win $100, again in addition to getting your bet back. There are calculators to get the equivalence, since -- especially for the negative odds -- it's not real intuitive. So a 50/50 event should be +100 each way, and of course it never is. In this case, -160 would be fair odds for a 61.5% chance event, and +125 would be fair for a 44.4% chance event, giving a "hold" of 5.9% (61.5 + 44.4 - 100), which honestly isn't terrible as such things go. I think the %win thing on that screen might be from a different source than the betting odds altogether in this case; the classic 50/50 bet is generally priced at -110/-110, which works out to a 4.8% hold.

You'll note that because of the way the +/- odds work, it's really hard to instantly grasp how good/bad most odds are. (It's not "halfway between the two", for one thing.) In Europe/Asia the common format is a decimal number telling you how much you multiply your stake by, including getting the stake back, so those would be [checks online calculator] 1.63 and 2.25. The single advantage of the American system is that you can see what a fair bet should be, because one is just the negation of the other. (For two-way results.) In this case it would be ±138.5.
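A small Python sketch of the conversions Don P. describes (the function names are mine), applied to the -160/+125 line:

```python
def american_to_prob(odds):
    """Implied win probability of American-format odds."""
    if odds < 0:
        return -odds / (-odds + 100)  # -160 -> 160/260 ~= 0.615
    return 100 / (odds + 100)         # +125 -> 100/225 ~= 0.444

def american_to_decimal(odds):
    """European-style decimal odds (payout multiple, stake included)."""
    return 1 + (100 / -odds if odds < 0 else odds / 100)

p_fav, p_dog = american_to_prob(-160), american_to_prob(+125)
hold = p_fav + p_dog - 1                  # bookmaker margin, ~0.059 here

# No-vig "fair" line: normalize the implied probabilities, convert back.
p_fair = p_fav / (p_fav + p_dog)
fair_line = -100 * p_fair / (1 - p_fair)  # ~ -138.5 (and +138.5 the other way)

print(american_to_decimal(-160), american_to_decimal(+125))  # 1.625 2.25
```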
quiet_NaN2-1

There is also a quantum version of that puzzle.

I have two identical particles of non-zero spin in identical states (except possibly for the spin direction). One of them is spin up. What is the probability that both of them are spin up?

For fermions, that probability is zero, of course. Pauli exclusion principle.

For bosons, ...

... the key insight is that you cannot distinguish them. The possible wave functions are either (spin-up, spin-up) or (spin-up, spin-down)=(spin-down, spin-up). Hence, you get p=1/2. (From this, we can conclude that boys (p=1/3) are made up of 2/3 bosons and 1/3 fermions.)

Let us assume that the utility of personal wealth is logarithmic, which is intuitive enough: 10k$ matter a lot more to you if you are broke than if your net worth is 1M$.
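To put rough numbers on that premise (a toy calculation; the wealth levels are illustrative):

```python
from math import log

def utility_gain(wealth, windfall):
    """Log-utility gain from receiving `windfall` on top of `wealth`."""
    return log(wealth + windfall) - log(wealth)

print(utility_gain(1_000, 10_000))      # ~2.40: life-changing when nearly broke
print(utility_gain(1_000_000, 10_000))  # ~0.01: barely noticeable at 1M$
```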

Then by your definition of exploitation, every transaction where a poor person pays a rich person and enhances their personal wealth in the process is exploitative. The worker surely needs the rent money more than the landlord, so the landlord should cut the rent to the point where he does not make a profit. Likewise the physician providing aid to the poor, or the CEO selling smartphones to ... (read more)

2Darmani
A gap in the proposed definition of exploitation is that it assumes some natural starting point of negotiation, and only evaluates divergence from that natural starting point. In the landlord case, fair-market value of rent is a natural starting point, and landlords don't have enough of a superior negotiating position to force rent upwards. (If they did by means of supply scarcity, then that higher point would definitionally be the new FMV.)  Ergo, no exploitation. On the other hand, if the landlord tried to increase the rent on a renewing tenant much more than is typical precisely because they know the tenant has circumstances (e.g.: physical injury) that make moving out much harder than normal, then that would be exploitative per this definition.

Some comments.

 

[...] We will quickly hit superintelligence, and, assuming the superintelligence is aligned, live in a post-scarcity technological wonderland where everything is possible.

Note, firstly, that money will continue being a thing, at least unless we have one single AI system doing all economic planning. Prices are largely about communicating information. If there are many actors and they trade with each other, the strong assumption should be that there are prices (even if humans do not see them or interact with them). Remember too that howev

... (read more)
quiet_NaN164

Some additional context.

This fantasy world is copied from a role-playing game setting—a fact I discovered when Planecrash literally linked to a Wiki article to explain part of the in-universe setting.

The world of Golarion is a (or the?) setting of the Pathfinder role playing game, which is a fork of the D&D 3.5 rules[1] (but notably different from Forgotten Realms, which is owned by WotC/Hasbro). The core setting is defined in some twenty-odd books which cover everything from the political landscape in dozens of polities to detailed rules for how m... (read more)

6FiftyTwo
The setting for Planecrash is called "Glowlarion" sometimes and is shared with a lot of other glowfics. It makes some systematic changes from the original Paizo canon, mostly in terms of making it more similar to real-life history of the same period, more internally coherent, and with the gods and metaphysics being more impactful. There's a brief outline of some of the changes here: https://docs.google.com/document/d/1ZGaV1suMeHrDlsYovZbG4c4tdMVgdq0HzgRX0HUGYkU/edit?tab=t.0

These are the two oldest non-crossover threads I can find: https://www.glowfic.com/posts/3456 by lintamande and apprenticebard (who wrote Korva in Planecrash) and https://www.glowfic.com/posts/3538 by lintamande and Alicorn (who wrote Luminosity, the HPMOR-style reworking of Twilight).

Incomplete list of other notable threads: https://glowficwiki.noblejury.com/books/dungeons-and-dragons/page/notable-threads
quiet_NaN202

One big aspect of Yudkowskian decision theory is how to respond to threats. Following causal decision theory means you can neither make credible threats nor commit to deterrence to counter threats. Yudkowsky endorses not responding to threats to avoid incentivising them, while also having deterrence commitments to maintain good equilibria. He also implies this is a consequence of using a sensible functional decision theory. But there's a tension here: your deterrence commitment could be interpreted as a threat by someone else, or vice versa.

I h... (read more)

4Martin Randall
Covered in the glowfic. Here is how it goes down in Dath Ilan: […] And in Golarion: […] They just ignore the effort difference and go for 50:50 splits. Fair over the long term, robust to deception and self-deception, low cognitive effort.

The Dath Ilani kids are wrong according to Shapley Values (confirmed as the Dath Ilan philosophy here). Let's suppose that Aylick and Brogue are paired up on a box where Aylick had to put in three jellychips worth of effort and Brogue had to put in one jellychip worth of effort. Then their total gains from trade are 12-4=8. The Shapley division is then 4 each, which can be achieved as follows:

* Aylick gets seven jellychips. Less her three units of effort, her total reward is four.
* Brogue gets five jellychips. Less his one unit of effort, his total reward is four.

The Dath Ilan Child division is nine to three, which I think is only justified with the politician's fallacy. But they are children.
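For concreteness, the two-player Shapley computation above can be written out (a sketch; the helper function is mine):

```python
def shapley_two_player(v_a, v_b, v_ab):
    """Shapley values from coalition values v({A}), v({B}), v({A,B})."""
    phi_a = (v_a + (v_ab - v_b)) / 2  # average of A's marginal contributions
    phi_b = (v_b + (v_ab - v_a)) / 2
    return phi_a, phi_b

# Alone, neither child opens the box; together the box is worth 12 jellychips
# minus 3 + 1 chips of effort, i.e. 8 net.
print(shapley_two_player(0, 0, 8))  # (4.0, 4.0) -> the 7:5 gross chip split
```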
5Ben Livengood
I think it might be as simple as not making threats against agents with compatible values. In all of Yudkowsky's fiction the distinction between threats (and unilateral actions removing consent from another party) and deterrence comes down to incompatible values. The baby-eating aliens are denied access to a significant portion of the universe (a unilateral harm to them) over irreconcilable values differences. Harry Potter transfigures Voldemort away semi-permanently non-consensually because of irreconcilable values differences. Carissa and friends deny many of the gods their desired utility over value conflict. Planecrash fleshes out the metamorality with the presumed external simulators who only enumerate the worlds satisfying enough of their values, with the negative-utilitarians having probably the strongest "threat" acausally by being more selective. Cooperation happens where there is at least some overlap in values and so some gains from trade to be made. If there are no possible mutual gains from trade then the rational action is to defect at a per-agent cost up to the absolute value of the negative utility of letting the opposing agent achieve their own utility. Not quite a threat, but a reality about irreconcilable values.
1Tapatakt
IIRC, it was covered in Planecrash also!

Relatedly, if you perform an experiment n times, and the probability of success is p, and the expected number of total successes np is much smaller than one, then np is a reasonable measure of the probability of getting at least one success, because the probability of getting more than one success can be neglected.

For example, if Bob plays the lottery for ten days, and each day has a 1:1,000,000 chance of winning, then overall he will have a chance of about 1:100,000 of winning once.

This is also why micromorts are roughly additive: if travelling by railway has a mortali... (read more)
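A quick numerical check of both the lottery number and the breakdown of the approximation (a sketch):

```python
# 1 - (1-p)**n vs the n*p approximation (Bob's lottery: p = 1e-6, n = 10)
p, n = 1e-6, 10
exact = 1 - (1 - p) ** n
print(exact, n * p)      # 9.99996e-06 vs 1e-05 -- about 1:100,000, as above

# The approximation breaks down once n*p ~ 1: at n*p = 1 the true
# probability saturates at 1 - 1/e ~ 0.63 (cf. the 63% in the reply below).
p, n = 1e-6, 10**6
print(1 - (1 - p) ** n)  # ~0.632
```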

2andrew sauer
So, travelling 1Tm with the railway you have a 63% chance of dying according to the math in the post

Getting down-voted to -27 is an achievement. Most things judged 'bad AI takes' only go to -11 or so; even that recent P=NP proof only got to -25. Of course, if the author is right, then downvoting further is providing helpful incentives to him.

I think that bullying is quite distinct from status hierarchies. The latter are unavoidable. There will always be some clique of cool kids in the class who will not invite the non-cool kids to their parties. This is ok. Sometimes, status is correlated with behaviors which are pro-social (kids not smoking;... (read more)

-1Alexej Gerstmaier
"Bullying has distinct negative connotation" I mention the concept of Russell Conjugation multiple times in my article. Did you read it? "Bullies have bad intentions" Intentions don't matter, results do. That's why capitalism works

I see this as less of an endorsement of linear models and more of a scathing review of expert performance. 

This. Basically, if your job is to make predictions, and the accuracy of your predictions is not measured, then (at least the prediction part of) your job is bullshit.

I think that if you compare simple linear models in domains where people actuall... (read more)

quiet_NaN3811

What I don't understand is why there should be a link between trapped priors and moral philosophy.

I mean, if moral realism were correct, i.e. if moral tenets such as "don't eat pork", "don't have sex with your sister", or "avoid killing sentient beings" had a universal truth value for all beings capable of moral behavior, then one might argue that the reason why people's ethics differ is that they have trapped priors which prevent them from recognizing these universal truths.

This might be my trapped priors talking, but I am a non-cognitivist... (read more)

1Christian Z R
'I simply believe that assigning truth values to moral sentences such as "killing is wrong" is pointless, and they are better parsed as prescriptive sentences such as "don't kill" or "boo on killing".'

Going to bring in a point I stole from David Friedman: If I see that an apple is red, and almost everybody else agrees that the apple is red, and the only person who disagrees also tends to disagree with most people about all colors and so is probably color blind, then it makes sense to say that it is true that the apple is red.

- Jesus, Muhammed and Luther: Muhammed did support offensive warfare, but apart from that his religious rules might have been a step up from earlier Arabic society. I have noticed that modern Islamic countries actually don't have a lot of peacetime violence or crime, compared to equally rich or developed countries. And Martin Luther was opposed to rebellions exactly because he thought anarchy and violent religious movements were worse than the status quo. He did support peaceful movements for peasant rights.

Finally, why would spirituality only help you overcome 'maladaptive' trapped priors? Might it not just as well cure adaptive, but unwanted ones?
Unreal120

My second point is that if moral realism were true, and one of the key roles of religion was to free people from trapped priors so they could recognize these universal moral truths, then at least during the founding of religions, we should see some evidence of higher moral standards before they invariably mutate into institutions devoid of moral truths. I would argue that either our commonly accepted humanitarian moral values are all wrong, or this mutation process happened almost instantly:

 

This is easy to research. 

I will name a few ways the Bud... (read more)

2Unreal
I don't claim to be a moral realist or any other -ist that we currently have words for. I do follow the Buddha's teachings on morals and ethics. So I will share from that perspective, which I have reason to believe to be true and beneficial to take on, for anyone interested in becoming more ethical, wise, and kind.

"Don't eat pork" is something I'd call an ethical rule, set for a specific time and place, which is a valid manifestation of morality. "Avoid killing" and "Avoid stealing" (etc.) are held, in Buddhism, as "ethical precepts." They aren't rules, but they're like...

a) Each precept is a game in and of itself with many levels.
b) It is generally considered good to use this life and future lives to deepen one's practice of each of the precepts (to take on the huge mission of perfecting our choices to be more in alignment with the real thing these statements are pointing at). It's also friendly to help others do the same.
c) It's not about being a stickler to the letter of the law. The deeper you investigate each precept, the more you actually have to let go of your ideas of what it means to "be doing it right." It's not about getting fixated on rules, heuristics, or norms. There's something more real and true being pointed to that cannot be predicted, pre-determined, etc.

Moral codes are not intrinsically subjective. But I would also not make claims about them being objective. We are caught in a sinkhole dichotomy between subjectivity and objectivity. Western thinking needs to find a way out of this. Too many philosophical discussions get stuck on these concepts. They're useful to a degree, but we need to be able to discard them when they become useless. "Killing is wrong" is a true statement. It's not subjectively true; it's not objectively true. It's true in a sense that doesn't neatly fit into either of those categories.
6JenniferRM
I'm not sure about the rest of it, but this caught my eye: I had a similar thought, and was trying to figure out if I could find a single good person to formally and efficiently coordinate with in a non-trivial pre-existing institution full of "safely good and sane people". I'm still searching. If anyone has a solid lead on this, please DM me, maybe?

Something you might expect is that many such "hypothetically existing hypothetically good people" would be willing to die slightly earlier for a good enough cause (especially late in life when their life expectancy is low, and especially for very high stakes issues where a lot of leverage is possible) but they wouldn't waste lives, because waste is ceteris paribus bad, and so... so... what about martyrs who are also leaders? This line of thinking is how I learned about Martin the Confessor, the last Pope to ever die for his beliefs. Since 655 AD is much much earlier than 2024 AD, it would seem that Catholicism no longer "has the sauce" so to speak?

Also, slightly relatedly, I'm more glad than I otherwise might be that in this timeline the bullet missed Trump. In other very nearby timelines I'm pretty sure the whole idea of using physical courage to detect morally good leadership in a morally good group would be much more controversial than the principle is here, now, in this timeline, where no one has trapped priors about it that are being actively pumped full of energy by the media, with the creation of new social traumas, and so on...

...not that elected secular leaders of mere nation states would have any obvious formal duties to specifically be the person to benevolently serve literally all good beings as a focal point. To get that formula to basically work, in a way that it kinda seems to work with US elections, since many US Presidents are assassinated in ways they could probably predict were possible (modulo this currently only working within the intrinsically "partial" nature of US elections, since these
2mako yass
The connection to moral systems could be due to the fact that curing people of trapped priors or other narcissism-like self-defending pathologies is hard and punishing, and you won't do it for them unless you have a lot of love and faith in you.

I wonder if it also has something to do with certain kinds of information being locally non-excludable goods: they have a cost to spread, but the value of the information is never obvious to a potential buyer until after the transfer has taken place. A person only pays their teacher back if the teacher can convey a sense of moral responsibility to do so.

Finally, Harari's definition of religion is just a system of ideas that brings order between people. This is usually a much more useful definition than definitions like "claims about the supernatural" or whatever. In this frame, many truths, "trade allows mutual benefit", or [the English language], or [how to not be cripplingly insane], are religious in that it benefits all of us a little bit if more people have these ideas installed.
4zhukeepa
Regarding your second point, I'm leaving this comment as a placeholder to indicate my intention to give a proper response at some point. My views here have some subtlety that I want to make sure I unpack correctly, and it's getting late here!
2zhukeepa
In response to your third point, I want to echo ABlue's comment about the compatibility of the trapped prior view and the evopsych view. I also want to emphasize that my usage of "trapped prior" includes genetically pre-specified priors, like a fear of snakes, which I think can be overridden. In any case, I don't see why priors that predispose us to e.g. adultery couldn't be similarly overridden. I wonder if our main source of disagreement has to do with the feasibility of overriding "hard-wired" evolutionary priors?
4zhukeepa
In response to your first point, I think of moral codes as being contextual more than I think of them as being subjective, but I do think of them as fundamentally being about pragmatism ("let's all agree to coordinate in ABC way to solve PQR problem in XYZ environment, and socially punish people who aren't willing to do so"). I also think religions often make the mistake of generalizing moral codes beyond the contexts in which they arose as helpful adaptations.  I think of decision theory as being the basis for morality -- see e.g. Critch's take here and Richard Ngo's take here. I evaluate how ethical people are based on how good they are at paying causal costs for larger acausal gains. 
ABlue138

The adulterer, the slave owner and the wartime rapist all have solid evolutionary reasons to engage in behaviors most of us might find immoral. I think their moral blind spots are likely not caused by trapped priors, like an exaggerated fear of dogs is.

I don't think the evopsych and trapped-prior views are incompatible. A selection pressure towards immoral behavior could select for genes/memes that tend to result in certain kinds of trapped prior.

Note: there is an AI audio version of this text over here: https://askwhocastsai.substack.com/p/eliezer-yudkowsky-tweet-jul-21-2024

I find the AI narrations offered by askwho generally OK: worse than what a skilled narrator (or team) could do, but much better than what I could accomplish.

[...] somehow humanity's 100-fold productivity increase (since the days of agriculture) didn't eliminate poverty.

That feels to me about as convincing as saying: "Chemical fertilizers have not eliminated hunger, just the other weekend I was stuck on a campus with a broken vending machine." 

I mean, sure, both the broken vending machine and actual starvation can be called hunger, just as both working 60h/week to make ends meet or sending your surviving kids into the mines or prostituting them could be called poverty, but the implication that either scour... (read more)

Critically, the gene editing of the red blood cells can be done in the lab; trying to devise an injectable or oral substance that would actually transport the gene-editing machinery to an arbitrary part of the body is much harder.

 

I am totally confused by this. Mature red blood cells don't contain a nucleus, and hence no DNA. There is nothing to edit. Injecting blood cells produced by gene-edited bone marrow in vitro might work, but would only be a therapy, not a cure: it would have to be repeated regularly. The cure would be to replace the bone marro... (read more)

I thought this first too. I checked on Wikipedia:

Adult stem cells are found in a few select locations in the body, known as niches, such as those in the bone marrow or gonads. They exist to replenish rapidly lost cell types and are multipotent or unipotent, meaning they only differentiate into a few cell types or one type of cell. In mammals, they include, among others, hematopoietic stem cells, which replenish blood and immune cells, basal cells, which maintain the skin epithelium [...].

I am pretty sure that the thing a skin cell makes by default when it splits is more skin cells, so you are likely correct.

See here. Of course, that article is a bit light on information on detection thresholds, false-positive rates and so on as compared to dogs, mass spectrometry or chemical detection methods. 

I will also note that humans have 10-20M olfactory receptor neurons, while bees have 1M neurons in total. Probably bees are under more evolutionary pressure to make optimal use of their olfactory neurons, though.

Dear Review Bot,

please avoid double-posting. 

On the other hand, I don't think voting you to -6 is fair, so I upvoted you. 

quiet_NaN2-2

My take on sniffer dogs is that frequently, what they are best at picking up is unconscious tells from their handler. Insofar as they do, they are merely science!-washing the (possibly meritful) biases of the police officer.

Packaging something really air-tight without outside contamination is indeed far from trivial. For example, the swipe tests taken at airports are useful because while it is certainly possible to pack a briefcase full of explosives without any residue on the outside, most of the people who could manage t... (read more)

7Ben
The idea that the sniffer dog picks up on what the handler is thinking and plays it out for them is very interesting, and maybe does indeed happen sometimes. But I think you are probably overcorrecting somewhat. Sniffer dogs do actually smell things. In much more low-stakes situations I have seen one in New Zealand successfully identify several people getting off a flight who had forgotten about food in their backpacks (they have strict laws against food coming in, in case you bring in a new blight or pest or whatever). So my read is that sniffer dogs are at least good enough at actual sniffing to demand some kind of response from would-be smugglers (e.g. extra plastic wrapping).

My first question is about the title picture. I have some priors on how a computed tomography machine for vehicles would look. Basically, you want to take x-ray images from many different directions. The medical setup, where you have a ring which contains the x-ray source on one side and the detectors on the other side, and rotate that ring to take images from multiple directions before moving the patient perpendicular to the ring to record the next slice, exists for a reason: high resolution x-ray detectors are expensive. If we scaled this up to a car ... (read more)

1FireStormOOO
You're very likely correct IMO.  The only thing I see pulling in the other direction is that cars are far more standardized than humans, and a database of detailed blueprints for every make and model could drastically reduce the resolution needed for usefulness.  Especially if the action on a cursory detection is "get the people out of the area and scan it harder", not "rip the vehicle apart".

The deliberately clumsy term "AInotkilleveryoneism" seems good for this, in any context you can get away with it. 

 

Hard disagree. The position "AI might kill all humans in the near future" is still quite some inferential distance away from the mainstream even if presented in a respectable academic veneer. 

We do not have weirdness points to spend on deliberately clumsy terms, even on LW. Journalists (when they are not busy doxxing people) can read LW too, and if they read that the worry about AI as an extinction risk is commonly called notkil... (read more)

2Seth Herd
I think you're right. Unfortunately I'm not sure "AI as an extinction risk" is much better. It's still a weird thing to posit, by standard intuitions.

It is also useful for a lot of practical problems, where you can treat $\hbar$ as being essentially zero and $c$ as being essentially infinite. If you want to get anywhere with any practical problem (like calculating how long a car will take to come to a stop), half of the job is to know which approximations ("cheats") are okay to use. If you want to solve the fully generalized problem (for a car near the Planck units or something), you will find that you would need a theory of everything (that is, quantum mechanics plus general relativity) to ... (read more)
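As a toy instance of such a cheat (my numbers: an assumed friction coefficient and the classical constant-deceleration model, with relativity and QM nowhere in sight):

```python
MU, G = 0.7, 9.81  # assumed rubber-on-asphalt friction coefficient; gravity in m/s^2

def stopping_time(v):
    """Time for a skidding car to stop, treating mechanics as purely classical."""
    return v / (MU * G)  # constant deceleration a = MU * G

print(stopping_time(30))  # ~4.4 s from 30 m/s (~108 km/h)
```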

I think that "AI Alignment" is a useful label for the somewhat related problems around P1-P6. Having a term for the broader thing seems really useful. 

Of course, sometimes you want labels to refer to a fairly narrow thing, like the label "Continuum Hypothesis". But broad labels are generally useful. Take "ethics", another broad field label. Normative ethics, applied ethics, meta-ethics, descriptive ethics, value theory, moral psychology, et cetera. If someone tells me "I study ethics", this narrows down what problems they are likely to work on, but not... (read more)

1particlemania
I would agree that it would be good and reasonable to have a term to refer to the family of scientific and philosophical problems spanned by this space. At the same time, as the post says, the issue is when there is semantic dilution, people talking past each other, and coordination-inhibiting ambiguity. Now take a look at something I could check with a simple search: an ICML workshop that uses the term alignment mostly to mean P3 (task-reliability): https://arlet-workshop.github.io/ One might want to use alignment one way or the other, and be careful of the limited overlap with P3 in our own registers, but by the time the larger AI community has picked up on the use-semantics of 'RLHF is an alignment technique' and associated alignment primarily with task-reliability, you'd need some linguistic interventions and deliberation to clear the air.
quiet_NaN2511

I think an AI is slightly more likely to wipe out or capture humanity than it is to wipe out all life on the planet.

While any true-Scotsman ASI is so far above us humans as we are above ants and does not need to worry about any meatbags plotting its downfall, as we don't generally worry about ants, it is entirely possible that the first AI which has a serious shot at taking over the world is not quite at that level yet. Perhaps it is only as smart as von Neumann and a thousand times faster.

To such an AI, the continued thriving of humans poses all so... (read more)

Cassette AI: “Dude I just matched with a model”

“No way”

“Yeah large language”


This made me laugh out loud.

Otherwise, my idea for a dating system would be that given that the majority of texts written will invariably end up being LLM-generated, it would be better if every participant openly had an AI system as their agent. Then the AI systems of both participants could chat and figure out how their user would rate the other user based on their past ratings of suggestions. If the users end up being rated among each other's five most viable candidates, 

Of c... (read more)

I was fully expecting to have to write yet another comment about how human-level AI will not be very useful for a nuclear weapon program. I concede that the dangers mentioned instead (someone putting an AI in charge of a reactor or nuke) seem much more realistic.

Of course, the utility of avoiding sub-extinction negative outcomes with AI in the near future is highly dependent on p(doom). For example, if there is no x-risk, then the first order effects of avoiding locally bad outcomes related to CBRN hazards are clearly beneficial. 

On the other han... (read more)

Edit: looks like this was already raised by Dacyn and answered to my satisfaction by Robert_AIZI. Correctly applying the fundamental theorem of calculus will indeed prevent that troublesome zero from appearing in the RHS in the first place, which seems much preferable to dealing with it later.

My real analysis might be a bit rusty, but I think defining I as the definite integral breaks the magic trick. 

I mean, in the last line of the 'proof', $I$ gets applied to the zero function.

Any definite integral of the zero function is zer... (read more)

quiet_NaN3-1

I think I have two disagreements with your assessment. 

First, the probability of a random independent AI researcher or hobbyist discovering a neat hack to make AI training cheaper and taking over. GPT-4 took 100M$ to train and is not enough to go FOOM. To train the same thing within the budget of the median hobbyist would require an algorithmic advantage of three or four orders of magnitude.

Historically, significant progress has been made by hobbyists and early pioneers, but mostly in areas which were not under intense scrutiny by established acade... (read more)

quiet_NaN53

Maybe GPT-5 will be extremely good at interpretability, such that it can recursively self improve by rewriting its own weights.

I am by no means an expert on machine learning, but this sentence reads weird to me. 

I mean, it seems possible that a part of a NN develops some self-reinforcing feature which uses gradient descent (or whatever is used in training) to move in a particular direction and take over the NN, like a human adrift on a raft in the ocean might decide to build a sail to make the raft go in a particular direction.

Or is that s... (read more)

1Joseph Miller
I was thinking of a scenario where OpenAI deliberately gives it access to its own weights to see if it can self improve. I agree that it would be more likely to just speed up normal ML research.
quiet_NaN41

I think that it is obvious that Middle-Endianness is a satisfactory compromise between Big and Little Endian. 

More seriously, it depends on what you want to do with the number. If you want to use it in a precise calculation, such as adding it to another number, you obviously want to process the least significant digits of the inputs first (which is what bit-serial processors literally do).

If I want to know if a serially transmitted number is below or above a threshold, it would make sense to transmit it MSB first (with a fixed length). 

Of c... (read more)
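To illustrate the adder point above (a toy sketch; the LSB-first list convention and helper are mine):

```python
def add_lsb_first(a_bits, b_bits):
    """Ripple-carry addition over bit streams, least-significant bit first.

    a_bits, b_bits: lists of 0/1, LSB first -- the order a bit-serial ALU
    wants, since each carry only affects *later* (more significant) bits.
    """
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s = a + b + carry
        out.append(s & 1)
        carry = s >> 1
    out.append(carry)
    return out  # LSB first, like the inputs

# 6 (binary 110) + 3 (binary 011), both LSB first:
print(add_lsb_first([0, 1, 1], [1, 1, 0]))  # [1, 0, 0, 1] -> 9
```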

quiet_NaN72

The sum of two numbers should have a precision no higher than the operand with the highest precision. For example, adding 0.1 + 0.2 should yield 0.3, not 0.30000000000000004.

I would argue that the precision should be capped at the lowest precision of the operands. In physics, if you add two lengths, 0.123m+0.123456m should be rounded to 0.246m.

Also, IEEE754 fundamentally does not contain information about the precision of a number. If you want to track that information correctly, you can use two floating point numbers and do interval arithmetic. There is ev... (read more)
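A minimal sketch of that idea (not any real library's API; `math.nextafter` requires Python 3.9+):

```python
import math

def interval_add(x, y):
    """Add intervals (lo, hi) + (lo, hi), rounding the bounds outward so the
    true mathematical sum is guaranteed to stay inside the result."""
    lo = math.nextafter(x[0] + y[0], -math.inf)
    hi = math.nextafter(x[1] + y[1], math.inf)
    return (lo, hi)

a = (0.1, 0.1)  # "0.1" really means: some value within rounding error of 0.1
b = (0.2, 0.2)
print(interval_add(a, b))  # a tight interval provably containing 0.3
```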

quiet_NaN20

In the subagent view, a financial precommitment another subagent has arranged for the sole purpose of coercing you into one course of action is a threat. 

Plenty of branches of decision theory advise you to disregard threats because consistently doing so will mean that instances of you will more rarely find themselves in the position to be threatened.

Of course, one can discuss how rational these subagents are in the first place. The "stay in bed, watch Netflix and eat potato chips" subagent is probably not very concerned with high-level abstract planning and might have a bad discount function for future benefits and not be overall that interested in the utility he gets from being principled.

quiet_NaN20

To whoever overall-downvoted this comment, I do not think that this is a troll.

Being a depressed person, I can totally see this being real. Personally, I would try to start slow with positive reinforcement. If video games are the only thing which you can get yourself to do, start there. Try to do something intellectually interesting in them. Implement a four-bit adder in Dwarf Fortress using cat logic. Play KSP with the Principia mod. Write a mod for a game. Use math or Monte Carlo simulations to figure out the best way to accomplish something in a ... (read more)

5Elizabeth
I don't think the original comment was a troll, but I also don't think it was a helpful contribution on this post. OP specifically framed the post as their own experience, not a universal cure. Comments explaining why it won't work for a specific person aren't relevant.
3CronoDAS
My depression is currently well-controlled at the moment, and I actually have found various methods to help me get things done, since I don't respond well to the simplest versions of carrot-and-stick methods. The most pleasant is finding someone else to do it with me (or at least act involved while I do the actual work). On the other hand, there have been times when procrastinating actually gives me a thrill, like I'm getting away with something. Mediocre video games become much more appealing when I have work to avoid.
quiet_NaN1-3

You quoted:

the vehicle can cruise at Mach 2.8 while consuming less than half the energy per passenger of a Boeing 747 at a cruise speed of Mach 0.81


This is not how Mach works. You are subsonic iff your Mach number is smaller than one. The fact that you would be supersonic if you were flying in a different medium has no bearing on your Mach number. 

 I would also like to point out that while hydrogen on its own is rather inert and harmless, its reputation in transportation as a gas which stays inert under all practical conditions is not entirely un... (read more)

2bhauth
Those Mach numbers are for the relevant speed in air. I would have written that differently, but that's how the cited paper worded things. Mostly-sealing against part of the tube before cutting it is less problematic than dealing with a large pressure difference. Aerodynamic support and propulsion in hydrogen is less expensive than magnetic propulsion and support in a vacuum-filled tube. Building an unpressurized tube is cheaper than a tube that doesn't buckle under compressive forces. And so on.
quiet_NaN20

If this was true, how could we tell? In other words, is this a testable hypothesis?

This. Physics runs on falsifiable predictions. If 'consciousness can affect quantum outcomes' is any more true than the classic 'there is an invisible dragon in my garage', then discovering that fact would seem easy from an experimentalist standpoint. Sources of quantum randomness (e.g. weak source+detector) are readily available, so any claimant who thinks they can predict or affect their outcomes could probably be tested initially for a few 100$. 
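Scoring such a test is simple arithmetic. A sketch with made-up numbers (1000 calls, 550 hits):

```python
from math import comb

def p_value_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): how often pure chance does this well."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# A claimant calls 1000 quantum coin flips and gets 550 right:
print(p_value_at_least(550, 1000))  # ~9e-4 -- hard to explain as luck
```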

 

General remark:... (read more)

1zhukeepa
Hmm, I notice I may have been a bit unclear in my original post. When I'd said "pseudorandom", I wasn't referring to the use of a pseudo-random number generator instead of a true RNG. I was referring to the "transcript" of relevant quantum events only appearing random, without being "truly random", because of the way in which they were generated (which I'm thinking of as being better described as "sampled from a space parameterizing the possible ways the world could be, conditional on humanity building superintelligence" rather than "close to truly random, or generated by a pseudo-random RNG, except with nudges toward ASI".)  Wouldn't this also serve as an argument against malign consequentialists in the Solomonoff prior, that may make it a priori more likely for us to end up in a world with particular outcomes optimized in their favor?  To be clear, it's also not clear to me that this would result in a lower K-complexity either. My main point is that (1) the null hypothesis of quantum events being independent of consciousness rests on assumptions (like assumptions about what the Solomonoff prior is like) that I think are actually pretty speculative, and that (2) there are speculative ways the Solomonoff prior could be in which our consciousness can influence quantum outcomes.  My goal here is not to make a positive case for consciousness affecting quantum outcomes, as much as it is to question the assumptions behind the case against the world working that way. 
1zhukeepa
Yes, I'm also bearish on consciousness affecting quantum outcomes in ways that are as overt and measurable in the way you're gesturing at. The only thing I was arguing in this post is that the effect size of consciousness on quantum outcomes is maybe more than zero, as opposed to obviously exactly zero. I don't think of myself as having made any arguments that the effect size should be non-negligible, although I also don't think that possibility has been ruled out for non-negligible effect sizes lying somewhere between "completely indistinguishable from no influence at all" and "overt and measurable to the extent a proclaimed psychic could reproducibly affect quantum RNG outcomes".
quiet_NaN40

Saliva causes cancer, but only if swallowed in small amounts over a long period of time.

(George Carlin)

 

For this to be a risk, the cancer risk would have to be superlinear in the acetaldehyde concentration. In a linear model, the high local concentrations would not matter overall, because the expected number of mutations you get would not depend on how you distribute the carcinogen among your body cells. 
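Spelling that out (with $k$ an illustrative per-dose mutation rate and $c_i$ the concentration cell $i$ sees):

```latex
\mathbb{E}[\text{mutations}] \;=\; \sum_i k\,c_i \;=\; k \sum_i c_i \;=\; k\,C_{\text{total}},
```

so under a linear model only the total dose matters, however it is distributed; only with a superlinear per-cell risk, e.g. $\sum_i k\,c_i^{\alpha}$ with $\alpha > 1$, do high local concentrations in the mouth and throat add extra risk.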

Or the cells in your mouth or throat could be especially vulnerable to cancer. 

From my understanding, having bacteria in your mouth which b... (read more)

quiet_NaN370

One thing to keep in mind is that the delta-v required to reach LEO is some 9.3km/s. (Handy map)

This is an upper limit on the delta-v that is militarily useful for ICBMs fighting on our rock.

Going from LEO to the moon requires another 3.1km/s. 

This might not seem like much, but it makes a huge difference in the payload-to-fuel ratio due to the rocket equation.
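To make the rocket-equation point concrete (assuming, for illustration, an exhaust velocity of about 3 km/s, roughly kerosene/LOX):

```python
import math

def mass_ratio(delta_v, v_e):
    """Tsiolkovsky rocket equation: initial/final mass for a given delta-v."""
    return math.exp(delta_v / v_e)

V_E = 3.0  # km/s, assumed exhaust velocity
print(mass_ratio(9.3, V_E))        # ~22x: reaching LEO
print(mass_ratio(9.3 + 3.1, V_E))  # ~62x: LEO plus trans-lunar injection
```

So the lunar-range missile needs nearly three times the launch mass for the same payload.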

If physics were different and the moon was within reach of ICBMs then I imagine it might have become the default test site for nuclear tipped ICBMs. 

Instead, the question was "do we wa... (read more)

7eukaryote
Thanks for the extra info - this is good stuff! I figured the moon difference might be, like, some extra rocketry on top of ICBMs, but not necessarily a lot - but this makes sense that it's in fact a pretty substantial difference. Yeah, I think people signing onto the OST really helped bury the idea. (It did not stop the USSR from violating it at one point in 1974-75 by attaching a 23mm gun to a space station. (For "self defense". It was never used.) This probably isn't that related to the larger nukes question, I just learned that recently and thought it was a fun fact.) I appreciate your excellent comment.
quiet_NaN21

I am sure that Putin had something like the Anschluss in mind when he started his invasion. 

Luckily for the west, he was wrong about that. 

From a Machiavellian perspective, the war in Ukraine is good for the West: for a modest investment in resources, we can bind a belligerent Russia while someone else does all the dying. From a humanitarian perspective, war is hell and we should hope for a peace where Putin gets whatever he has managed to grab while the rest of Ukraine joins NATO and will be protected by NATO nukes from further aggression. ... (read more)

quiet_NaN30

Anything related to the Israel/Palestine conflict is invoking politics, the mind-killer.

It is the hot button topic number one on the larger internet, from what I can tell. 

"Either the ministry made an honest mistake or the the statistical analysis did" does not seem like the kind of statement most people will agree on. 

Perhaps, but I also feel like this is a real misunderstanding of politics being the mind killer. Rationality is critically important in dealing with real world problems, and that includes problems that have become politicized. The important-to-me thing is that, at least here on Less Wrong, we stay focused, as much as possible, on questions of evidence and reasoning. Posts about whether Israel or Palestine is good/bad should be off limits, but posts about whether Israel or Palestine are making errors in their reporting of facts in ways that can be sussed ou... (read more)

quiet_NaN50

Link. (General motte content warning: this is a forum which has strong free speech norms, which disproportionally attracts people who would find it hard to voice their opinions elsewhere. On a bad day, you will read five paragraphs of a comment on the war in Gaza only to realize that this is just the introduction to the author's main pet topic of Holocaust denial. Also, content warning: discussion is meh.)

I am not sure it is the one I remember reading, not that I remember the discussion much. I normally read the CW thread, and vaguely remember the link going t... (read more)

quiet_NaN20

Regarding assisted suicide, the realistic alternative in the case of the 28-year-old would not be that she would live unhappily ever after. The alternative is a unilateral suicide attempt by her.

Unilateral suicide attempts impose additional costs on society. The patient can rarely communicate their decision to anyone close to them beforehand, because any confidant might have them locked up in a psychiatric ward instead. The lack of ability to talk about any particulars with someone who knows her real identity[1], especially her therapist, will in turn m... (read more)

quiet_NaN30

Anecdata: I have in my freezer deep-frozen cake which has been there for months. If it were in the fridge (and thus ready to eat), I would eat a piece every time I open the fridge. But I have no compulsion to further the unhealthy eating habits of future me; let that schmuck eat a proper meal instead!

Ice cream I eat directly from the freezer, so that effect is not there for me.

quiet_NaN30

The appropriate lesswrong-adjacent-adjacent place to post this would be the culture war thread of the motte. I think a tweet making similar claims was discussed there before. 

I have some hot takes on this but this is not the place for them.

2Brendan Long
Any chance you can link to that discussion? I'm really curious.
quiet_NaN10

Thanks, this is interesting. 

From my understanding, in no-limit games, one would want to only have some fraction of one's bankroll in chips on the table, so that one can re-buy after losing an all-in bluff. (I would guess that this fraction should be determined by the Kelly criterion or something.)
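For reference, the classic Kelly fraction for a simple binary bet (a toy model; tournament poker is not a sequence of independent fixed-odds bets):

```python
def kelly_fraction(p, b):
    """Optimal bankroll fraction to stake with win probability p and net
    odds b (you win b units per unit staked)."""
    return (b * p - (1 - p)) / b

print(kelly_fraction(0.55, 1.0))  # 0.10: win 55% at even money -> stake 10%
```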

On the other hand, from browsing Wikipedia, it seems like many poker tournaments prohibit or limit re-buying after going bust. This would indicate that one has limited amounts of opportunity to get familiar with the strategy of the opponents (which could very... (read more)

2Lukas_Gloor
Yeah, you need an enormous bankroll to play $10,000 tournaments. What a lot of pros do is sell action. Let's say you're highly skilled and have a, say, 125% expected return on investment. If you find someone with a big bankroll and they're convinced of your skills, you can sell them your action at a markup somewhere between 1 and 1.2 to incentivize them to make a profit. I'd say something like 1.1 markup is fairest, so you're paying them a good price to weather the variance for you.

At 1.1 markup, they pay 1.1x whatever it costs you to buy into the tournament. You can sell a large part of your action but not quite all of it, to keep an incentive to play well (if you sold everything at $11,000, you could, if you were shady, just pocket the extra $1,000, go out early on purpose, and register the next tournament where you sold action for another round of instant profit). So, let's say they paid you $8,800 to get 80% of your winnings, so they make an expected profit of ($8,000 * 1.25) - $8,800, which is $1,200.

And then you yourself still have 20% of your own action, for which you only paid $1,200 (since you got $800 from the 1.1 markup and you invest that into your tournament). Now, you're only in for $1,200 of your own money, but you have 20% of the tournament, so you'd already be highly profitable if you were just breaking even. In addition, as we stipulated, you have an edge on the field expecting 125% ROI, so in expectation, that $1,200 is worth $2,000*1.25, which is $2,500. This still comes with a lot of variance, but your ROI is now so high that Kelly allows you to play a big tournament in this way even if your net worth is <$100k.

(This analysis simplified things assuming there's no casino fee. In reality, if a tournament is advertised as a $10k tournament, the buy-in tends to be more like $10,500, and $500 is just the casino fee that doesn't go into the prize pool. This makes edges considerably smaller.)

Regarding busting a tournament with a risky bluf
quiet_NaN0-1

(sorry for thread necromancy)

Meta: I kind of wonder about the moderation score of gwern's comment. Karma -5, Agreement -10. So someone saw that comment at -4 and thought 'this is still rated too high'.

FWIW, I do not think his comment was bad. A bit tongue in cheek, perhaps, but I think his comment engages with the subject matter of the post more deeply than the parent comment. 

Or some subset of people voting on LW either really like Banana Taffy or really hate gwern, or both. 

quiet_NaN21

Not everyone is out to get you

If your BATNA to winning the bid on that wheelbarrow auction is to order it for 120$ off Amazon with free overnight shipping, then winning the auction for 180$ is net negative for you.

But if your BATNA is to carry bags of sand on your back all summer, then 180$ for a wheelbarrow is a bloody bargain.

Assuming a toy model where dating preferences follow a global preference ordering ('hotness'), any person showing any interest in dating you is proof that you can likely do better.[1] But if you follow that rul... (read more)

quiet_NaN50

Poker seems nice as a hobby, but terrible as a job, as discussed on the motte.

Also, if all bets were placed before the flop, the equilibrium strategy would probably be to bet along some fixed probability distribution depending on your position, the previous bets and what cards you have. Instead, the three rounds of betting after some cards are open on the table make the game much more complicated. If you know you have a winning hand, you do not want your opponent to fold, you want them to match your bet. So you kinda have to balance optimizing for the... (read more)

2Lukas_Gloor
This is pretty accurate. For simplicity, let's assume you have a hand that has a very high likelihood of winning at showdown on pretty much any runout. E.g., you have KK on a flop that is AK4, and your opponent didn't raise you before the flop, so you can mostly rule out AA. (Sure, if an A comes then A6 now beats you, or maybe they'll have 53s for a straight draw to A2345 with a 2 coming, or maybe they somehow backdoor into a different straight or flush depending on the runout and their specific hand – but those outcomes where you end up losing are unlikely enough to not make a significant difference to the math and strategy.)

The part about information leakage is indeed important, but rather than adjusting your bet sizing to prevent information leakage (i.e., "make the bet sizing smaller so it's less obvious that I've got a monster"), you simply add the right number of bluffs to your big-bet line to make your opponent's bluff-catching hands exactly indifferent. So, instead of betting small with KK to "keep them in" or "disguise the strength of your hand," you still bomb it, but you'd play the same way with a hand like J5ss (can pick up a 1-to-a-straight draw on all of the following turns: 2, 3, T, Q; and can pick up a flush draw on any turn with a spade if there was one spade already on the flop).

To optimize for the maximum pot at showdown and maximum likelihood of getting called for all the chips, you want to bet the same proportion of the pot on each street (flop, turn, and river) to get all-in with the last bet. (This is forcing your opponent to defend the most; if you make just one huge bet of all-in right away, your opponent mathematically has to call you with fewer hands to prevent you from automatically profiting with every hand as a bluff.) So, if the pot starts out at 6 big blinds (you raise 2.75x, get called by the big blind, and there's a 0.5 small blind in there as well). Your stack was 100 big blinds to start. If you were to bet 100% of the pot on ea
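The "same proportion of the pot on each street so the last bet is all-in" sizing can be solved for numerically. A sketch (the bisection helper and the exact stack figure are my own simplifications):

```python
def geometric_fraction(pot, stack, streets=3):
    """Pot fraction f to bet each street so the final bet is exactly all-in."""
    def total_bet(f):
        p, spent = pot, 0.0
        for _ in range(streets):
            bet = f * p
            spent += bet
            p += 2 * bet  # the bet gets called, so the pot grows by both bets
        return spent

    lo, hi = 0.0, 10.0
    for _ in range(60):  # bisection on the monotone total_bet
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if total_bet(mid) < stack else (lo, mid)
    return lo

# Pot of 6bb after the preflop action, roughly 97bb still behind:
print(geometric_fraction(6, 97.25))  # ~1.11x pot on flop, turn, and river
```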
Answer by quiet_NaN21

I think different people mean different things with "causation". 

On the one hand, we have things where A makes B vastly more likely. No lawyer tries to argue that while their client shot the victim in the head (A) and the victim died (B), it could still be the case that the cause of death was old age and their client was simply unlucky. This is the strictest useful definition of causation. 

Things get more complicated when A is just one of many factors contributing to B. Nine (or so) out of ten lung carcinomas are "caused" by smoking, we say. But for t... (read more)
