That's only true if the probability is a continuous function - perhaps the probability instantaneously went from below 28% to above 28%.
I’m claiming that we should only ever reason about infinity by induction-type proofs. Due to the structure of the thought experiment, the only thing it’s possible to count in this way is galaxies, so (I claim) counting galaxies is the only thing you’re allowed to use for moral reasoning. Since all of the galaxies in each universe are moral equivalents (either all happy but one or all miserable but one), how you rearrange galaxies doesn’t affect the outcome.
(To be clear, I agree that if you rearrange people under the concepts of infinity ...
I don’t think that it does? There are infinitely many arrangements, but the same proof by induction applies to any possible arrangement.
I have an argument for a way in which infinity can be used but which doesn't imply any of the negative conclusions. I'm not convinced of its reasonableness or correctness though.
I propose that infinity ethics should only be reasoned about by use of proof through induction. When done this way, the only way to reason about HEAVEN and HELL is by matching up galaxies in each universe, and doing induction across all of the elements:
Theorem: The universe HEAVEN that contains n galaxies is a better universe than HELL which contains n galaxies. We will formalize t...
One downside to using video games to measure "intelligence" is that they often rely on skills that aren't generally included in "intelligence", like how fast and precise you can move your fingers. If someone has poor hand-eye coordination, they'll perform worse on many video games than people who have good hand-eye coordination.
A related problem is that video games in general have a large element of a "shared language", where someone who plays lots of video games will be able to use skills from those when playing a new video game. I know people that ar...
often rely on skills that aren't generally included in "intelligence", like how fast and precise you can move your fingers
That's a funny example considering that (negative one times a type of) reaction time is correlated with measures of g-factor at about .
There's no direct rationality commentary in the post, but plenty of other posts on LW also aren't direct rationality commentary (for example, a large majority of the posts here about COVID-19). I think this post is a good fit because it provides tools for understanding this conflict and others like it, tools which I didn't possess before and now somewhat do.
It's not directly relevant to my life, but that's fine. I imagine that for some here it might actually be relevant, because of connections through things like effective altruism (maybe it helps grant makers decide where to send funds to aid the Sudanese people?).
Interesting post, thanks!
A couple of formatting notes:
This post gives a context to the deep dives that should be minimally accessible to a general audience. For an explanation of why the war began, see this other post.
It seems like there should be a link here, but there isn't one.
Also, none of the footnotes link properly, so currently one has to manually scroll down to the footnotes and then scroll back up. LessWrong has a footnote feature you could use, which makes the reading experience nicer.
It used to be called Find Friends on iOS, but they rebranded it, presumably because family was a better market fit.
There are others like that too, like Life360, and they’re quite popular. They solve the problem of parents wanting to know where their kids are. It’s perhaps overly zealous on the parents’ part, but it’s a real desire that the apps are solving.
Metaculus isn’t very precise near zero, so it doesn’t make sense to multiply it out.
Also, there’s currently a mild outbreak, while most of the time there’s no outbreak (or less of one), so the risk for the next half year is elevated compared to normal.
I'm not familiar with how Stockfish is trained, but does it get intentional training for how to play with queen odds? If not, then it might start trouncing you if it were trained for that, instead of having to "figure out" new strategies on its own.
Are there other types of energy storage besides lithium batteries that are plausibly cheap enough (with near-term technological development) to cover the multiple days of storage case?
(Legitimately curious, I'm not very familiar with the topic.)
Yes, compressed natural gas in underground caverns is cheap enough for seasonal energy storage.
But of course, you meant "storage that can be efficiently filled using electricity". That's a difficult question. In theory, thermal energy storage using molten salt or hot sand could work, and maybe a sufficiently cheap flow battery chemistry is possible. In theory, much better water electrolysis and hydrogen fuel cells are possible; there just currently aren't any plausible candidates for that.
But currently, even affordable 14-hour storage is rather challenging.
If you're on the open-air viewing platform, it might be feasible to use something like a sextant or shadow lengths to figure out the height from the platform to the top, and then use a different tool to figure out the height of the platform.
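The shadow-length version of this comes down to a single tangent. A minimal sketch with made-up numbers (the angle and shadow length here are hypothetical, not measurements of any real building):

```python
import math

# Hypothetical measurements: the sun's elevation angle (e.g. from a
# sextant reading) and the length of the building's shadow, paced off
# at ground level.
sun_elevation_deg = 40.0
shadow_length_m = 380.0

# The shadow, the building, and the sun's rays form a right triangle,
# so height = shadow length * tan(elevation angle).
height_m = shadow_length_m * math.tan(math.radians(sun_elevation_deg))
print(round(height_m, 1))
```

The same triangle works in reverse from the viewing platform: measure the angle down to a landmark at a known distance to get the height of the platform itself.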
I often realize that I've had a headache for a while and had not noticed it. It has real effects - I'm feeling grumpy, I'm not being productive - but it's been filtered out before my conscious brain noticed it. I think it's unreasonable to say that I didn't have a headache, just because my conscious brain didn't notice it, when the unconscious parts of my brain very much did notice it.
After a split-brain surgery, patients can experience something on one side of their body and not notice it with the portion of the brain that controls speaking, tha...
The problem is that prior to ~1990, there were lots of supposed photographs of Bigfoot, and now there are ~none. So Bigfoots would have to have previously been common close to humans but now be uncommon, or all the photos were fake while the other evidence was real. Plus, all of that other evidence has also died out (now that it's less plausible for there to be no photos at all). So it's still possible that Bigfoot exists, but you have to start by throwing out all of the evidence that people have for Bigfoot, and then why believe in Bigfoot?
I really enjoyed the parts of the post that weren't related to consciousness, and it helped me think more about the assumptions I have about how the universe works. The Feynman quote was new for me, so thank you for sharing that!
However, when you brought consciousness into the post, it brought along additional assumptions that the rest of the post wasn't relying on, weakening the post as a whole. Additionally, LessWrong has a long history of debating whether consciousness is "emergent" or not. Most readers here already hold fixed positions on the debate an...
Any position that could be considered safe enough to back a market is only going to appreciate in proportion to inflation, which would just make the market zero-sum after adjusting for inflation. Something like ETH or gold wouldn't be a good solution, because markets on questions correlated with the asset's performance would be massively distorted, plus there's always the possibility that the asset just goes down, which would be the opposite of what you want.
I haven't read Fossil Future, but it sounds like he's ignoring the option of combining solar and wind with batteries (and other types of electrical storage, like pumped water). The technology is available today and can be more easily deployed than fossil fuels at this point.
Parts of this are easily falsifiable through the fact that organ transplant recipients sometimes get donor’s memories and preferences
The citation is to a disreputable journal. Some of their sources might have a basis (though a lot of them also seem disreputable), but I wouldn't take this at face value.
There can also be meaning that the author simply didn't intend. In biblical interpretation, for instance, there have been many different (and conflicting!) interpretations given to texts that were written with a completely different intent. One reader reads the story of Adam and Eve as a text that supports feminism, another reader sees the opposite, and the original writer didn't intend to give either meaning. But both readers still get those meanings from the text.
Interestingly, it apparently used to be Zebra, but is now Zulu. I'm not sure why they switched over, but it seems to be the predominant choice since the early 1950s.
I understand that definition, which is why I’m confused about why you brought up the behavior of bacteria as evidence that bacteria have experience. I don’t think any non-animals have experience, and I think many animals (like sponges) also don’t. As I see it, bacteria are more akin to natural chemical reactions than they are to humans.
I brought up the simulation of a bacteria because an atom-for-atom simulation of a bacteria is completely identical to a bacteria - the thing that has experience is represented in the atoms of the bacteria, so a perfect simulation of a bacteria must also internally experience things.
If bacteria have experience, then I see no reason to say that a computer program doesn’t have experience. If you want to say that a bacteria has experience based on guesses from its actions, then why not say that a computer program has experience based on its words?
From a different angle, suppose that we have a computer program that can perfectly simulate a bacteria. Does that bacteria have experience? I don’t see any reason why not, since it will demonstrate all the same ability to act on intention. And if so, then why couldn’t a different computer progra...
If you look far enough back in time, humans are descended from animals akin to sponges, which seem to me like they couldn’t possibly have experience. They don’t even have neurons. If you go back even further, we’re the descendants of single-celled organisms that absolutely don’t have experience. But at some point along the line, animals developed the ability to have experience. If you believe in a higher being, then maybe it introduced it, or maybe some other metaphysical cause did, but otherwise it seems like qualia has to arise spontaneously from the evolut...
Nit: "if he does that then Caplan won't get paid back, even if Caplin wins the bet" misspells "Caplan" in the second instance.
Cable companies are forcing you to pay for channels you don’t want. Cable companies are using unbundling to mislead customers and charge extra for basic channels everyone should have.
I think this would be more acceptable if either everything was bundled or nothing was. But generally speaking companies bundle channels that few people want, to give the appearance of a really good deal, and unbundle the really popular channels (like sports channels) to profit. So you sign up for a TV package that has "hundreds of channels", but you get lots of channels that you don't care about and none of the channels you really want. You're screwed both ways.
I think you're totally spot on about ChatGPT and near term LLMs. The technology is still super far away from anything that could actually replace a programmer because of all of the complexities involved.
Where I think you go wrong is looking at the long term future AIs. As a black box, at work I take in instructions on Slack (text), look at the existing code and documentation (text), and produce merge requests, documentation, and requests for more detailed requirements (text). Nothing there requires some essentially human element - the AI just needs to be g...
Prediction market on whether the lawsuit will succeed:
https://manifold.markets/Gabrielle/will-the-justice-department-win-its
I’m not a legal expert, but I expect that this sort of lawsuit, involving coordination between multiple states’ attorneys general and the Department of Justice, would take months of planning and would have to have started before public-facing products like ChatGPT were even released.
The feared outcome looks something like this:
We're worried about AI getting too powerful, but logically that means humans are getting too powerful, right?
One of the big fears with AI alignment is that the latter doesn't logically follow from the former. If you're trying to create an AI that makes paperclips and then it kills all humans because it wasn't aligned (with any human's actual goals), it was powerful in a way that no human was. You definitely do need to worry about which goal the AI is aligned with, but even more important than that is ensuring that you can align an AI to any human's preferences at all, or else worrying about which goal is pointless.
The Flynn effect isn't really meaningful outside of IQ tests. Most medieval and early modern peasants were uneducated and didn't know much about the world far from their home, but they definitely weren't dumb. If you look at the actual techniques they used to run their farms, they're very impressive and require a good deal of knowledge and fairly abstract thinking to do optimally, which they often did.
Also, many of the weaving patterns that they've been doing for thousands of years are very complex, much more complex than a basic knitting stitch.
- At least 90% of internet users could solve it within one minute.
While I understand the reasoning behind this bar, any bar lower than something like 99.99% of internet users is strongly discriminatory and regressive. Captchas are used to gate parts of the internet that are required for daily life. For instance, almost all free email services require filling out captchas, and many government agencies now require you to have an email address to interact with them. A bar that cuts out a meaningful number of humans means that those humans become unable t...
Workers at a business are generally more aligned with each other than they are with the shareholders of the business. For example, if the company is debating a policy that has a 51% chance of doubling profit and a 49% chance of bankrupting the company, I would expect most shareholders to be in favor (since it's positive EV for them). But for worker-owners, that's a 49% chance of losing their job and a 51% chance of increasing salary but not doubling (since it's profit that is doubling, not revenue, and their salaries are part of the expenses), so I would e...
I think the biggest issue in software development is the winner-takes-all position with many internet businesses. For the business to survive, you have to take the whole market, which means you need to have lots of capital to expand quickly, which means you need venture capital. It's the same problem that self-funded startups have. People generally agree that self-funded startups are better to work at, but they can't grow quite as fast as VC-funded startups and lose the race. But that doesn't apply outside of the software sphere (which is why VC primarily ...
So Diplomacy is not a computationally complex game; it's a game about out-strategizing your opponents, where roughly all of the strategy is convincing your opponents to work with you. There are no new tactics to invent, and an AI can't really see deeper into the game than other players; it just has to be more persuasive and ally with the right people at the right time. You often have to plan your moves so that on a future turn someone else will choose to ally with you. The AI didn't do any specific psycholog...
What does this picture [pale blue dot] make you think about?
This one in particular seems unhelpful, since the picture is only meaningful if the viewer knows what it's a photo of. It's Sagan's description that imbues it with so much emotion.
That seems like a really limiting definition of intelligence. Stephen Hawking, even when he was very disabled, was certainly intelligent. However, his ability to be agentic was only possible thanks to the technology he relied on (his wheelchair and his speaking device). If that had been taken away from him, he would no longer have had any ability to alter the future, but he would certainly still have been just as intelligent.
I don’t have any experience with data centers or with deploying machine learning at scale. However, I would expect that for performance reasons it is much more efficient to have a local cache of the current data and then either have a manual redeploy at a fixed schedule or have the software refresh the cache automatically after some amount of time.
I would also imagine that reacting immediately could result in feedback loops where the AI overreacts to recent actions.
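A minimal sketch of what that automatic refresh might look like (the class and interval here are hypothetical, not any particular deployment system):

```python
import time

class TTLCache:
    """Serve a locally cached value, re-fetching it only after a fixed
    time-to-live has elapsed, instead of hitting the source on every request."""

    def __init__(self, fetch, ttl_seconds):
        self.fetch = fetch            # function that loads fresh data
        self.ttl = ttl_seconds
        self.value = None
        self.loaded_at = float("-inf")  # force a fetch on first access

    def get(self):
        # Only refresh once the cached value is older than the TTL.
        if time.monotonic() - self.loaded_at > self.ttl:
            self.value = self.fetch()
            self.loaded_at = time.monotonic()
        return self.value

# Hypothetical usage: data re-read from the source at most once per hour.
calls = []
cache = TTLCache(fetch=lambda: calls.append(1) or len(calls), ttl_seconds=3600)
cache.get()
cache.get()
print(len(calls))  # the expensive fetch only ran once
```

The TTL also acts as a damper: the system reacts to new data on a schedule rather than instantly, which limits the kind of immediate feedback loops described above.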
A mitigating factor for the criminality finding is that smarter people usually have less need to commit crimes. Society values conventional intelligence and usually pays well for it, so someone who is smarter will tend to get better jobs and make more money, and so won't need to resort to crime (especially petty crime).
My understanding of Spanish (also not a Spanish speaker) is that it's a palatal nasal /ɲ/, not a palatalized alveolar nasal /nʲ/. With a palatal nasal, you make the sound with the body of your tongue against the hard palate (the hard part at the top of your mouth, behind the alveolar ridge). With a palatalized nasal, the raising of the tongue body toward the hard palate is a "secondary" articulation, layered on top of the primary one at the alveolar ridge.
That said, the Spanish ñ is a good example of a palatal or palatalized sound for an English speaker.
Yeah, that's absolutely more correct, but it is at least a little helpful for a monolingual English speaker to understand what palatalization is.
Not sure I can explain it in text to a native English speaker what palatalization is; you would need to hear actual examples.
There are some examples in English. It's not quite the same as how Slavic languages work*, but it's close enough to get the idea: If you compare "cute" and "coot", the "k" sound in "cute" is palatalized while the "k" sound in "coot" is not. Another example would be "feud" and "food".
British vs American English differ sometimes in palatalization. For instance, in British English (RP), "tube" is pronounced with a palatalized "t" ...
The risk is a good point given some of the uncertainties we’re dealing with right now. I’d estimate maybe a 1% risk of those per year (more heavily weighted towards the latter half of the time frame, but I’ll assume it’s constant), so with a discount rate based on that, it would need to be more like $1400. That’s still much less than the assumption.
Looking at my consumption right now, I objectively would not spend the $1000 on something that lasts for more than 30 years, so I believe that shouldn’t be relevant. To make this more direct, we could phrase it as something like “a $1000 vacation now or a $1400 vacation in 30 years”, though that ignores consumption offsetting.
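Sanity-checking that figure: assuming a flat 1% annual chance that the payoff never arrives (the rate and horizon are taken from the estimate above, not from any model):

```python
annual_risk = 0.01   # assumed flat yearly chance the payoff never arrives
years = 30

survival = (1 - annual_risk) ** years   # probability the payoff does arrive
required = 1000 / survival              # amount needed to break even in expectation
print(round(survival, 2), round(required))
```

That works out to roughly $1350, in the same ballpark as the $1400 figure.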
For the point about smoothing consumption, does that actually work given that retirement savings are usually invested and are expected to give returns higher than inflation? For instance, my current savings plan means that although my income is going to go up, and my amount saved will go up proportionally, the majority of my money when I retire will be from early in my career.
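A quick sketch of why early contributions dominate, assuming a constant 6% real annual return (the ages and amounts are illustrative):

```python
# Final (inflation-adjusted) value at retirement of a single $1000
# contribution, under an assumed constant 6% real annual return.
def value_at_65(contribution_age, amount=1000, rate=0.06):
    return amount * (1 + rate) ** (65 - contribution_age)

early = value_at_65(25)   # saved at the start of a career
late = value_at_65(55)    # saved a decade before retirement
print(round(early), round(late))
```

A dollar saved at 25 ends up worth several times a dollar saved at 55, even though later-career salaries (and contributions) are larger.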
For a more specific example, consider two situations where I'm working until I'm 65 and have returns of 6% per annum (and taking all dollar amounts to be inflation adjusted):
if I put things in my cart, don't check out, and come back the next day, I'm going to be frustrated if the site has forgotten my selections!
Ironically, I get frustrated by the inverse of this. If I put something in my shopping cart, I definitely don’t still want it tomorrow. I keep accidentally ending up with items I don’t want in my order when I check out, and then I have to go back through all the order steps to remove them (since there’s hardly ever a remove button on the credit card screen). It’s so frustrating! I don’t want you to remember things about me from previous days; just get rid of it all.
A single human is always going to have a risk of a sudden mental break, or perhaps simply not having been trustworthy in the first place. So it seems to me like a system where the most knowledgeable person has the single decision is always going to be somewhat more risky than a situation where that most knowledgeable person also has to double check with literally anyone else. If you make sure that the two people are always together, it doesn’t hurt anything (other than the salary for that person, I suppose, but that’s negligible).
For political reasons, we ...
The policy could just be “at least one person has to agree with the President to launch the nuclear arsenal”. It probably doesn’t change the game that much, but it at least gets rid of the possible risk that the President has a sudden mental break and decides to launch missiles for no reason. Notably it doesn’t hurt the ability to respond to an attack, since in that situation there would undoubtedly be at least one aide willing to agree, presumably almost all of them.
Actually consulting with the aide isn’t necessary, just an extra button press to ensure that something completely crazy doesn’t happen.
What I’m referring to is the two-man rule: https://en.m.wikipedia.org/wiki/Two-man_rule
US military policy requires that for a nuclear weapon to actually be launched, two people at the silo or on the submarine have to coordinate to launch the missile. The decision still comes from a single person (the President), but the people who carry out the order are double-checked, so that a single crazy serviceman can’t launch a missile on his own.
It wouldn’t be crazy for the President to require a second person to help make the decision, since the President is going ...
I disagree with basically all of them.
As I see it, the large majority of government employees are neither incompetent nor corrupt, and the Federal government overall works extremely well given all of the tasks it's asked to do. The president is supposed to execute the will of the legislature according to the law (which he isn't doing: he's shutting down agencies that Congress has created and subverting other agencies so that they don't do what Congress has instructed them to do). Musk did a bad job of it with Twitter (it's less profitable now than it was when he bought...