I think formally, the Kolmogorov complexity would have to be stated as the length of a description of a Turing Machine (not that this gets completely rid of any wiggle room).
Of course, TMs do not offer a great gaming experience.
"The operating system and the hardware" is certainly an upper bound, but also quite certainly to be overkill.
Your floating point unit or your network stack are not going to be very busy while you play tetris.
If you cut it down to the essentials (getting rid of things like scores which have to be displayed as characters, or background g...
Agreed.
If the authors claim that adding randomness to the territory in classical mechanics requires making it more complex, they should also notice that removing the probability from the territory for quantum mechanics (as Bohmian mechanics does) tends to make the theory more complex.
Also, QM is not a weird edge case to be discarded at leisure, it is to the best of our knowledge a fundamental aspect of what we call reality. Sidelining it is like arguing "any substance can be divided into arbitrarily small portions" -- sure, as far as everyda...
Okay. So from what I understand, you want to use a magnetic effect observed in plasma as a primary energy source.
Generally, a source of energy works by taking a fuel which contains energy and turning it into a less energetic waste product. For example, carbon and oxygen can be burned to form CO2. Or one can split some uranium nucleus into two fragments which are more stable and reap the energy difference as heat.
Likewise, a wind turbine will consume some of the kinetic energy of the air, and a solar panel will take energy from photons. For a fusion reactor...
Seconded. Also, in the second picture, that line is missing, so it seems that it is just Zvi complaining about the "win probability"?
My guess is that the numbers (sans the weird negative sign) might indicate the returns in percent for betting on either team. Then, if the odds were really 50:50 and the bookmaker was not taking a cut, they should be 200 each? So 160 would be fair if the first team had a win probability of 0.625, while 125 would be fair if the other team had a win probability of 0.8. Of course, these add up to more than one, which is to be ex...
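A quick sketch of that arithmetic, assuming the numbers really are returns per 100 staked (the reading itself is my guess):

```python
# Implied probabilities from the guessed "return per 100 staked" reading.
returns = [160, 125]

implied = [100 / r for r in returns]   # fair win probability for each team
print(implied)        # [0.625, 0.8]
print(sum(implied))   # 1.425 -- more than 1, i.e. the bookmaker's cut
```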
There is also a quantum version of that puzzle.
I have two identical particles of non-zero spin in identical states (except possibly for the spin direction). One of them is spin up. What is the probability that both of them are spin up?
For fermions, that probability is zero, of course. Pauli exclusion principle.
For bosons, ...
... the key insight is that you cannot distinguish them. The possible wave functions are either (spin-up, spin-up) or (spin-up, spin-down) = (spin-down, spin-up). Hence, you get p = 1/2. (From this, we can conclude that boys (p = 1/3) are made up of 2/3 bosons and 1/3 fermions.)
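Spelling the boson case out as a short math sketch (the assumption doing the work is that the two symmetric basis states get equal weight):

```latex
% Symmetric (bosonic) two-particle spin states with at least one spin up:
\[
  |\uparrow\uparrow\rangle
  \qquad\text{and}\qquad
  \tfrac{1}{\sqrt{2}}\bigl(|\uparrow\downarrow\rangle + |\downarrow\uparrow\rangle\bigr),
\]
% so, with equal weight on these two states,
\[
  P(\text{both up} \mid \text{at least one up}) = \tfrac{1}{2}.
\]
```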
Let us assume that the utility of personal wealth is logarithmic, which is intuitive enough: 10k$ matters a lot more to you if you are broke than if your net worth is 1M$.
Then by your definition of exploitation, every transaction where a poor person pays a rich person and enhances their personal wealth in the process is exploitive. The worker surely needs the rent money more than the landlord, so the landlord should cut the rent to the point where he does not make a profit. Likewise the physician providing aid to the poor, or the CEO selling smartphones to ...
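A minimal numerical sketch of how lopsided this is under log utility (the wealth levels are illustrative only):

```python
import math

def utility_gain_from_10k(wealth):
    """Gain in log-utility from receiving $10k at a given wealth level."""
    return math.log(wealth + 10_000) - math.log(wealth)

print(utility_gain_from_10k(1_000))      # ~2.40  -- nearly broke: enormous
print(utility_gain_from_10k(1_000_000))  # ~0.01  -- millionaire: negligible
```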
Some comments.
...
[...] We will quickly hit superintelligence, and, assuming the superintelligence is aligned, live in a post-scarcity technological wonderland where everything is possible.
Note, firstly, that money will continue being a thing, at least unless we have one single AI system doing all economic planning. Prices are largely about communicating information. If there are many actors and they trade with each other, the strong assumption should be that there are prices (even if humans do not see them or interact with them). Remember too that howev
Some additional context.
This fantasy world is copied from a role-playing game setting—a fact I discovered when Planecrash literally linked to a Wiki article to explain part of the in-universe setting.
The world of Golarion is a (or the?) setting of the Pathfinder role playing game, which is a fork of the D&D 3.5 rules[1] (but notably different from Forgotten Realms, which is owned by WotC/Hasbro). The core setting is defined in some twenty-odd books which cover everything from the political landscape in dozens of polities to detailed rules for how m...
One big aspect of Yudkowskian decision theory is how to respond to threats. Following causal decision theory means you can neither make credible threats nor commit to deterrence to counter threats. Yudkowsky endorses not responding to threats to avoid incentivising them, while also having deterrence commitments to maintain good equilibria. He also implies this is a consequence of using a sensible functional decision theory. But there's a tension here: your deterrence commitment could be interpreted as a threat by someone else, or vice versa.
I h...
Relatedly, if you perform an experiment n times, the probability of success is p, and the expected number of total successes np is much smaller than one, then np is a reasonable approximation of the probability of getting at least one success, because the probability of getting more than one success can be neglected.
For example, if Bob plays the lottery for ten days, and each day has a 1:1,000,000 chance of winning, then overall he will have roughly a 1:100,000 chance of winning once.
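A quick numerical check of that approximation (a sketch using the lottery numbers above):

```python
p = 1 / 1_000_000   # per-day win probability
n = 10              # days played

exact = 1 - (1 - p) ** n   # P(at least one win)
approx = n * p             # the n*p shortcut
print(exact)   # ~9.99996e-06 -- barely distinguishable from n*p
print(approx)  # 1e-05, i.e. roughly 1 in 100,000
```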
This is also why micromorts are roughly additive: if travelling by railway has a mortali...
Getting down-voted to -27 is an achievement. Most things judged 'bad AI takes' only go to -11 or so, even that recent P=NP proof only got to -25. Of course, if the author is right, then downvoting further is providing helpful incentives to him.
I think that bullying is quite distinct from status hierarchies. The latter are unavoidable. There will always be some clique of cool kids in the class who will not invite the non-cool kids to their parties. This is ok. Sometimes, status is correlated with behaviors which are pro-social (kids not smoking;...
I see this as less of an endorsement of linear models and more of a scathing review of expert performance.
- When an arithmetic model is calibrated, it is specifically by including feedback from the real-world effects of its predictions. Experts do not, as a rule, seek out any feedback on their calibration.
This. Basically, if your job is to do predictions, and the accuracy of your predictions is not measured, then (at least the prediction part of) your job is bullshit.
I think that if you compare simple linear models in domains where people actuall...
What I don't understand is why there should be a link between trapped priors and moral philosophy.
I mean, if moral realism was correct, i.e. if moral tenets such as "don't eat pork", "don't have sex with your sister", or "avoid killing sentient beings" had a universal truth value for all beings capable of moral behavior, then one might argue that the reason why people's ethics differ is that they have trapped priors which prevent them from recognizing these universal truths.
This might be my trapped priors talking, but I am a non-cognitivist...
My second point is that if moral realism was true, and one of the key roles of religion was to free people from trapped priors so they could recognize these universal moral truths, then at least during the founding of religions, we should see some evidence of higher moral standards before they invariably mutate into institutions devoid of moral truths. I would argue that either our commonly accepted humanitarian moral values are all wrong, or this mutation process happened almost instantly:
This is easy to research.
I will name a few ways the Bud...
The adulterer, the slave owner and the wartime rapist all have solid evolutionary reasons to engage in behaviors most of us might find immoral. I think their moral blind spots are likely not caused by trapped priors, like an exaggerated fear of dogs is.
I don't think the evopsych and trapped-prior views are incompatible. A selection pressure towards immoral behavior could select for genes/memes that tend to result in certain kinds of trapped prior.
Note: there is an AI audio version of this text over here: https://askwhocastsai.substack.com/p/eliezer-yudkowsky-tweet-jul-21-2024
I find the AI narrations offered by askwho generally ok, worse than what a skilled narrator (or team) could do but much better than what I could accomplish.
[...] somehow humanity's 100-fold productivity increase (since the days of agriculture) didn't eliminate poverty.
That feels to me about as convincing as saying: "Chemical fertilizers have not eliminated hunger, just the other weekend I was stuck on a campus with a broken vending machine."
I mean, sure, both the broken vending machine and actual starvation can be called hunger, just as working 60h/week to make ends meet, sending your surviving kids into the mines, or prostituting them could all be called poverty, but the implication that either scour...
Critically, the gene editing of the red blood cells can be done in the lab; trying to devise an injectable or oral substance that would actually transport the gene-editing machinery to an arbitrary part of the body is much harder.
I am totally confused by this. Mature red blood cells don't contain a nucleus, and hence no DNA. There is nothing to edit. Injecting blood cells produced by gene-edited bone marrow in vitro might work, but would only be a therapy, not a cure: it would have to be repeated regularly. The cure would be to replace the bone marro...
I thought this at first, too. I checked on Wikipedia:
Adult stem cells are found in a few select locations in the body, known as niches, such as those in the bone marrow or gonads. They exist to replenish rapidly lost cell types and are multipotent or unipotent, meaning they only differentiate into a few cell types or one type of cell. In mammals, they include, among others, hematopoietic stem cells, which replenish blood and immune cells, basal cells, which maintain the skin epithelium [...].
I am pretty sure that the thing a skin cell makes per default when it splits is more skin cells, so you are likely correct.
See here. Of course, that article is a bit light on information on detection thresholds, false-positive rates and so on as compared to dogs, mass spectrometry or chemical detection methods.
I will also note that humans have 10-20M olfactory receptor neurons, while bees have 1M neurons in total. Probably bees are under more evolutionary pressure to make optimal use of their olfactory neurons, though.
Dear Review Bot,
please avoid double-posting.
On the other hand, I don't think voting you to -6 is fair, so I upvoted you.
My take on sniffer dogs is that frequently, what they are best at picking up is unconscious tells from their handler. Insofar as they do, they are merely science!-washing the (possibly well-founded) biases of the police officer.
Packaging something really air-tight without outside contamination is indeed far from trivial. For example, the swipe tests taken at airports are useful because while it is certainly possible to pack a briefcase full of explosives without any residue on the outside, most of the people who could manage t...
My first question is about the title picture. I have some priors on what a computed tomography machine for vehicles would look like. Basically, you want to take x-ray images from many different directions. The medical setup, where you have a ring which contains the x-ray source on one side and the detectors on the other side, and rotate that ring to take images from multiple directions before moving the patient perpendicular to the ring to record the next slice, exists for a reason: high-resolution x-ray detectors are expensive. If we scaled this up to a car ...
The deliberately clumsy term "AInotkilleveryoneism" seems good for this, in any context you can get away with it.
Hard disagree. The position "AI might kill all humans in the near future" is still quite some inferential distance away from the mainstream even if presented in a respectable academic veneer.
We do not have weirdness points to spend on deliberately clumsy terms, even on LW. Journalists (when they are not busy doxxing people) can read LW too, and if they read that the worry about AI as an extinction risk is commonly called notkil...
It is also useful for a lot of practical problems, where you can treat very small quantities as being essentially zero and very large ones as being essentially infinite. If you want to get anywhere with any practical problem (like calculating how long a car will take to come to a stop), half of the job is to know which approximations ("cheats") are okay to use. If you want to solve the fully generalized problem (for a car near the Planck units or something), you will find that you would need a theory of everything (that is quantum mechanics plus general relativity) to ...
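For instance, a minimal sketch of the stopping-distance calculation with exactly those cheats applied (Newtonian mechanics, constant friction; the friction coefficient is an assumed illustrative value):

```python
# Stopping distance with quantum and relativistic effects "cheated" away.
g = 9.81        # m/s^2
mu = 0.7        # assumed tyre-road friction coefficient
v = 100 / 3.6   # 100 km/h in m/s

stopping_distance = v ** 2 / (2 * mu * g)
print(f"{stopping_distance:.0f} m")  # ~56 m
```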
I think that "AI Alignment" is a useful label for the somewhat related problems around P1-P6. Having a term for the broader thing seems really useful.
Of course, sometimes you want labels to refer to a fairly narrow thing, like the label "Continuum Hypothesis". But broad labels are generally useful. Take "ethics", another broad field label. Normative ethics, applied ethics, meta-ethics, descriptive ethics, value theory, moral psychology, et cetera. If someone tells me "I study ethics", this narrows down what problems they are likely to work on, but not...
I think an AI is slightly more likely to wipe out or capture humanity than it is to wipe out all life on the planet.
While any true Scotsman ASI is so far above us humans as we are above ants and does not need to worry about any meatbags plotting its downfall, as we don't generally worry about ants, it is entirely possible that the first AI which has a serious shot at taking over the world is not quite at that level yet. Perhaps it is only as smart as von Neumann and a thousand times faster.
To such an AI, the continued thriving of humans poses all so...
Cassette AI: “Dude I just matched with a model”
“No way”
“Yeah large language”
This made me laugh out loud.
Otherwise, my idea for a dating system would be that given that the majority of texts written will invariably end up being LLM-generated, it would be better if every participant openly had an AI system as their agent. Then the AI systems of both participants could chat and figure out how their user would rate the other user based on their past ratings of suggestions. If the users end up being rated among each other's five most viable candidates,
Of c...
I was fully expecting to have to write yet another comment about how human-level AI will not be very useful for a nuclear weapon program. I concede that the dangers mentioned instead (someone putting an AI in charge of a reactor or nuke) seem much more realistic.
Of course, the utility of avoiding sub-extinction negative outcomes with AI in the near future is highly dependent on p(doom). For example, if there is no x-risk, then the first order effects of avoiding locally bad outcomes related to CBRN hazards are clearly beneficial.
On the other han...
Edit: looks like this was already raised by Dacyn and answered to my satisfaction by Robert_AIZI. Correctly applying the fundamental theorem of calculus will indeed prevent that troublesome zero from appearing in the RHS in the first place, which seems much preferable to dealing with it later.
My real analysis might be a bit rusty, but I think defining I as the definite integral breaks the magic trick.
I mean, in the last line of the 'proof', the integral operator gets applied to the zero function.
Any definite integral of the zero function is zer...
I think I have two disagreements with your assessment.
First, the probability of a random independent AI researcher or hobbyist discovering a neat hack to make AI training cheaper and taking over. GPT-4 took 100M$ to train and is not enough to go FOOM. To train the same thing within the budget of the median hobbyist would require an algorithmic advantage of three or four orders of magnitude.
Historically, significant progress has been made by hobbyists and early pioneers, but mostly in areas which were not under intense scrutiny by established acade...
Maybe GPT-5 will be extremely good at interpretability, such that it can recursively self improve by rewriting its own weights.
I am by no means an expert on machine learning, but this sentence reads weird to me.
I mean, it seems possible that a part of a NN develops some self-reinforcing feature which uses the gradient descent (or whatever is used in training) to go into a particular direction and take over the NN, like a human adrift on a raft in the ocean might decide to build a sail to make the raft go into a particular direction.
Or is that s...
I think that it is obvious that Middle-Endianness is a satisfactory compromise between Big and Little Endian.
More seriously, it depends on what you want to do with the number. If you want to use it in a precise calculation, such as adding it to another number, you obviously want to process the least significant digits of the inputs first (which is what bit serial processors literally do).
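A toy sketch of why least-significant-bit-first is the natural order for addition (carries only ever propagate toward more significant bits):

```python
def serial_add_lsb_first(a_bits, b_bits):
    """Add two equal-length bit streams, least significant bit first,
    the way a bit-serial adder consumes them."""
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        total = a + b + carry
        out.append(total & 1)
        carry = total >> 1
    out.append(carry)
    return out  # result is also LSB first

# 6 (binary 110) + 3 (binary 011), both given LSB first:
print(serial_add_lsb_first([0, 1, 1], [1, 1, 0]))  # [1, 0, 0, 1] = 9
```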
If I want to know if a serially transmitted number is below or above a threshold, it would make sense to transmit it MSB first (with a fixed length).
Of c...
The sum of two numbers should have a precision no higher than the operand with the highest precision. For example, adding 0.1 + 0.2 should yield 0.3, not 0.30000000000000004.
I would argue that the precision should be capped at the lowest precision of the operands. In physics, if you add two lengths, 0.123m + 0.123456m should be rounded to 0.246m.
Also, IEEE754 fundamentally does not contain information about the precision of a number. If you want to track that information correctly, you can use two floating point numbers and do interval arithmetic. There is ev...
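A minimal sketch of the interval idea (outward rounding via nextafter; a real interval library does much more, so this is illustrative only):

```python
import math

def interval_add(a, b):
    """Add two intervals (lo, hi), rounding the bounds outward so the
    result is guaranteed to contain the exact sum of the inputs."""
    lo = math.nextafter(a[0] + b[0], -math.inf)
    hi = math.nextafter(a[1] + b[1], math.inf)
    return (lo, hi)

x = (0.1, 0.1)   # "0.1" as a degenerate interval
y = (0.2, 0.2)
print(interval_add(x, y))  # (0.3, 0.3000000000000001) -- brackets the sum
```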
In the subagent view, a financial precommitment another subagent has arranged for the sole purpose of coercing you into one course of action is a threat.
Plenty of branches of decision theory advise you to disregard threats because consistently doing so will mean that instances of you will more rarely find themselves in the position to be threatened.
Of course, one can discuss how rational these subagents are in the first place. The "stay in bed, watch Netflix and eat potato chips" subagent is probably not very concerned with high-level abstract planning, and might have a bad discount function for future benefits and not be all that interested in the utility he gets from being principled.
To whoever overall-downvoted this comment, I do not think that this is a troll.
Being a depressed person, I can totally see this being real. Personally, I would try to start slow with positive reinforcement. If video games are the only thing which you can get yourself to do, start there. Try to do something intellectually interesting in them. Implement a four bit adder in dwarf fortress using cat logic. Play KSP with the Principia mod. Write a mod for a game. Use math or Monte Carlo simulations to figure out the best way to accomplish something in a ...
You quoted:
the vehicle can cruise at Mach 2.8 while consuming less than half the energy per passenger of a Boeing 747 at a cruise speed of Mach 0.81
This is not how Mach works. You are subsonic iff your Mach number is smaller than one. The fact that you would be supersonic if you were flying in a different medium has no bearing on your Mach number.
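For reference, a one-line statement of the definition:

```latex
\[
  M = \frac{v}{a_{\text{local}}},
\]
% where $a_{\text{local}}$ is the speed of sound in the medium the craft is
% actually flying through, not in some other reference medium.
```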
I would also like to point out that while hydrogen on its own is rather inert and harmless, its reputation in transportation as a gas which stays inert under all practical conditions is not entirely un...
If this was true, how could we tell? In other words, is this a testable hypothesis?
This. Physics runs on falsifiable predictions. If 'consciousness can affect quantum outcomes' is any more true than the classic 'there is an invisible dragon in my garage', then discovering that fact would seem easy from an experimentalist standpoint. Sources of quantum randomness (e.g. weak source + detector) are readily available, so any claimant who thinks they can predict or affect their outcomes could probably be tested initially for a few hundred dollars.
General remark:...
Saliva causes cancer, but only if swallowed in small amounts over a long period of time.
(George Carlin)
For this to be a risk, the cancer risk would have to be superlinear in the acetaldehyde concentration. In a linear model, the high local concentrations would not matter overall, because the expected number of mutations you get would not depend on how you distribute the carcinogen among your body cells.
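A one-line sketch of why the distribution over cells drops out in the purely linear model (with $d_i$ the dose cell $i$ receives and $k$ an assumed per-dose mutation rate):

```latex
\[
  \mathbb{E}[\text{mutations}] \;=\; \sum_i k\, d_i \;=\; k \sum_i d_i \;=\; k\, D,
\]
% which depends only on the total dose $D$, not on how it is spread over cells.
```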
Or the cells in your mouth or throat could be especially vulnerable to cancer.
From my understanding, having bacteria in your mouth which b...
One thing to keep in mind is that the delta-v required to reach LEO is some 9.3km/s. (Handy map)
This is an upper limit for what delta-v can be militarily useful in ICBMs for fighting on our rock.
Going from LEO to the moon requires another 3.1km/s.
This might not seem much, but makes a huge difference in the payload-to-propellant ratio due to the rocket equation.
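A rough sketch with the Tsiolkovsky rocket equation (single stage, an assumed Isp of 300 s; both are simplifications, so treat the numbers as illustrative):

```python
import math

def mass_ratio(delta_v, isp=300.0, g0=9.81):
    """Required wet/dry mass ratio from the Tsiolkovsky rocket equation."""
    return math.exp(delta_v / (isp * g0))

print(mass_ratio(9_300))           # to LEO:            ~24
print(mass_ratio(9_300 + 3_100))   # LEO + trans-lunar: ~68
```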
If physics were different and the moon was within reach of ICBMs then I imagine it might have become the default test site for nuclear tipped ICBMs.
Instead, the question was "do we wa...
I am sure that Putin had something like the Anschluss in mind when he started his invasion.
Luckily for the west, he was wrong about that.
From a Machiavellian perspective, the war in Ukraine is good for the West: for a modest investment in resources, we can bind a belligerent Russia while someone else does all the dying. From a humanitarian perspective, war is hell and we should hope for a peace where Putin gets whatever he has managed to grab while the rest of Ukraine joins NATO and will be protected by NATO nukes from further aggression. ...
Anything related to the Israel/Palestine conflict is invoking politics, the mind-killer.
It is the hot button topic number one on the larger internet, from what I can tell.
"Either the ministry made an honest mistake or the the statistical analysis did" does not seem like the kind of statement most people will agree on.
Perhaps, but I also feel like this is a real misunderstanding of politics being the mind killer. Rationality is critically important in dealing with real world problems, and that includes problems that have become politicized. The important-to-me thing is that, at least here on Less Wrong, we stay focused, as much as possible, on questions of evidence and reasoning. Posts about whether Israel or Palestine is good/bad should be off limits, but posts about whether Israel or Palestine are making errors in their reporting of facts in ways that can be sussed ou...
Link. (General motte content warning: this is a forum which has strong free speech norms, which disproportionately attracts people who would find it hard to voice their opinions elsewhere. On a bad day you will read five paragraphs of a comment on the war in Gaza only to realize that this is just the introduction to the author's main pet topic of Holocaust denial. Also, content warning: discussion is meh.)
I am not sure it is the one I remember reading, not that I remember the discussion much. I normally read the CW thread, and vaguely remember the link going t...
Regarding assisted suicide, the realistic alternative in the case of the 28-year-old would not be that she would live unhappily ever after. The alternative is a unilateral suicide attempt by her.
Unilateral suicide attempts impose additional costs on society. The patient can rarely communicate their decision to anyone close to them beforehand, because any confidant might have them locked up in a psychiatric ward instead. The lack of ability to talk about any particulars with someone who knows her real identity[1], especially their therapist, will in turn m...
Anecdata: I have in my freezer deep-frozen cake which has been there for months. If it was in the fridge (and thus ready to eat), I would eat a piece every time I open the fridge. But I have no compulsion to further the unhealthy eating habits of future me; let that schmuck eat a proper meal instead!
Ice cream I eat directly from the freezer, so that effect is not there for me.
The appropriate lesswrong-adjacent-adjacent place to post this would be the culture war thread of the motte. I think a tweet making similar claims was discussed there before.
I have some hot takes on this but this is not the place for them.
Thanks, this is interesting.
From my understanding, in no-limit games, one would want to have only some fraction of one's bankroll in chips on the table, so that one can re-buy after losing an all-in bluff. (I would guess that this fraction should be determined by the Kelly criterion or something.)
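For reference, a minimal sketch of the Kelly fraction for a simple binary bet (real poker decisions are far messier, so this only gives the flavour of the calculation):

```python
def kelly_fraction(p_win, net_odds):
    """Kelly bet fraction f* = (b*p - q)/b for a binary bet with net odds b
    (profit per unit staked) and win probability p_win."""
    q = 1 - p_win
    return (net_odds * p_win - q) / net_odds

# Illustrative only: a 55% edge on an even-money all-in confrontation
# suggests risking about 10% of the bankroll.
print(kelly_fraction(0.55, 1.0))  # ~0.10
```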
On the other hand, from browsing Wikipedia, it seems like many poker tournaments prohibit or limit re-buying after going bust. This would indicate that one has a limited amount of opportunity to get familiar with the strategy of the opponents (which could very...
(sorry for thread necromancy)
Meta: I kind of wonder about the moderation score of gwern's comment. Karma -5, Agreement -10. So someone saw that comment at -4 and thought 'this is still rated too high'.
FWIW, I do not think his comment was bad. A bit tongue in cheek, perhaps, but I think his comment engages with the subject matter of the post more deeply than the parent comment.
Or some subset of people voting on LW either really like Banana Taffy or really hate gwern, or both.
Not everyone is out to get you.
If your BATNA to winning the bid on that wheelbarrow auction is to order it for 120$ from Amazon with free overnight shipping, then winning the auction for 180$ is net negative for you.
But if your BATNA is to carry bags of sand on your back all summer, then 180$ for a wheelbarrow is a bloody bargain.
Assuming a toy model where dating preferences follow a global preference ordering ('hotness'), then any person showing any interest in dating you is proof that you can likely do better.[1] But if you follow that rul...
Poker seems nice as a hobby, but terrible as a job as discussed on the motte.
Also, if all bets were placed before the flop, the equilibrium strategy would probably be to bet along some fixed probability distribution depending on your position, the previous bets and what cards you have. Instead, the three rounds of betting after some cards are open on the table make the game much more complicated. If you know you have a winning hand, you do not want your opponent to fold, you want them to match your bet. So you kinda have to balance optimizing for the...
I think different people mean different things with "causation".
On the one hand, we have things where A makes B vastly more likely. No lawyer tries to argue that while their client shot the victim in the head (A) and the victim died (B), it could still be the case that the cause of death was old age and their client was simply unlucky. This is the strictest useful definition of causation.
Things get more complicated when A is just one of many factors contributing to B. Nine (or so) out of ten lung carcinomas are "caused" by smoking, we say. But for t...
Unlike Word, the human genome is self-hosting. That means that it is paying fair and square for any complexity advantage it might have -- if Microsoft found that the x86 was not expressive enough to code in a space-efficient manner, they could likewise implement more complex machinery to host it.
Of course, the core fact is that the DNA of eukaryotes looks memory-efficient compared to the bloat of Word.
There was a time when Word was shipped on floppy disks. From what I recall, it came on multiple floppies, but on the order of ten, not a thousand. With these...