All of gbear605's Comments + Replies

I disagree with basically all of them.

As I see it, the large majority of government employees are neither incompetent nor corrupt, and the Federal government overall works extremely well given all of the tasks that it's asked to do. The president is supposed to execute the will of the legislature according to the law (which he isn't doing: he's shutting down agencies that Congress has created and subverting other agencies so that they don't do what Congress has instructed them to do). Musk did a bad job of it with Twitter (it's less profitable now than it was when he bought... (read more)

2Simon Berens
Re twitter’s profitability, Musk about doubled EBITDA despite revenue halving, i.e. he more than tripled EBITDA margin https://www.teslarati.com/elon-musk-x-doubled-ebitda-since-2022-takeover-report/amp
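The margin arithmetic in that claim checks out; here is a quick sketch with illustrative round numbers (not actual Twitter/X financials):

```python
# Illustrative round numbers only -- not actual Twitter/X financials.
revenue_before, ebitda_before = 100.0, 10.0   # pre-takeover
revenue_after = revenue_before / 2            # revenue roughly halved
ebitda_after = ebitda_before * 2              # EBITDA roughly doubled

margin_before = ebitda_before / revenue_before  # 0.10
margin_after = ebitda_after / revenue_after     # 0.40

# Doubling EBITDA while halving revenue quadruples the margin,
# consistent with "more than tripled EBITDA margin".
print(margin_after / margin_before)  # 4.0
```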
2Maxwell Peterson
Well okay then :)! You giving a disagree-vote makes a lot of sense. Thanks for explaining.
gbear605

That's only true if the probability is a continuous function - perhaps the probability instantaneously went from below 28% to above 28%.
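Formally, the step being denied here is an application of the intermediate value theorem (my restatement, not part of the original exchange):

```latex
\text{If } p : [t_0, t_1] \to [0, 1] \text{ is continuous and } p(t_0) < 0.28 < p(t_1),
\text{ then there exists } t^* \in (t_0, t_1) \text{ with } p(t^*) = 0.28.
```

If $p$ may jump discontinuously, no such $t^*$ need exist.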

2Razied
Oh, true! I was going to reply that since probability is just a function of a physical system, and the physical system is continuous, then probability is continuous... but if you change an integer variable in C from 35 to 5343 or whatever, there's no real sense in which the variable goes through all intermediate values, even if the laws of physics are continuous.

I’m claiming that we should only ever reason about infinity by induction-type proofs. Due to the structure of the thought experiment, the only thing it is possible to count in this way is galaxies, so (I claim) counting galaxies is the only thing that you’re allowed to use for moral reasoning. Since all of the galaxies in each universe are moral equivalents (either all happy but one or all miserable but one), how you rearrange galaxies doesn’t affect the outcome.

(To be clear, I agree that if you rearrange people under the concepts of infinity ... (read more)

1omnizoid
Why is the only thing that we can use galaxies?  We can compare people in any way.   If you rearrange people, standard mathematics says that you can turn HEAVEN into HELL.  Infinity/1 billion = infinity.  You have to change the math of infinity, not just the math of ethics where you add up infinity.

I don’t think that it does? There are infinitely many arrangements, but the same proof by induction applies to any possible arrangement.

2omnizoid
Wait, do you agree that rearranged heaven gets hell?  If so, you either have to deny that HEAVEN>HELL or that arrangement matters.   You're assuming we're comparing them by galaxies.  But there's no natural way to individuate that explains why we should do that.  

I have an argument for a way in which infinity can be used but which doesn't imply any of the negative conclusions. I'm not convinced of its reasonableness or correctness though.

I propose that infinity ethics should only be reasoned about by use of proof through induction. When done this way, the only way to reason about HEAVEN and HELL is by matching up galaxies in each universe, and doing induction across all of the elements:

Theorem: The universe HEAVEN that contains n galaxies is a better universe than HELL which contains n galaxies. We will formalize t... (read more)
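My guess at how the truncated formalization might continue, as a sketch under the comment's own assumptions (this is a reconstruction, not the original text):

```latex
% Base case: with two galaxies each, the mostly-happy universe beats
% the mostly-miserable one.
\mathrm{HEAVEN}_2 > \mathrm{HELL}_2
% Inductive step: adding one happy galaxy to HEAVEN_n and one miserable
% galaxy to HELL_n preserves the inequality.
\mathrm{HEAVEN}_n > \mathrm{HELL}_n \implies \mathrm{HEAVEN}_{n+1} > \mathrm{HELL}_{n+1}
% Conclusion: HEAVEN_n > HELL_n for every finite n. The claim is that
% this finite comparison is the only licensed one, so rearrangement
% arguments over a completed infinity never get off the ground.
```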

1omnizoid
That implies that order matters!  If you rearrange heaven, you get hell.   There are other problems with ordering--some series can sum to any number depending on arrangement.
gbear605

One downside to using video games to measure "intelligence" is that they often rely on skills that aren't generally included in "intelligence", like how fast and precise you can move your fingers. If someone has poor hand-eye coordination, they'll perform less well on many video games than people who have good hand-eye coordination.

A related problem is that video games in general have a large element of a "shared language", where someone who plays lots of video games will be able to use skills from those when playing a new video game. I know people that ar... (read more)

often rely on skills that aren't generally included in "intelligence", like how fast and precise you can move your fingers

That's a funny example considering that (negative one times a type of) reaction time is correlated with measures of g-factor at about .

5Boris Kashirin
Have you played something like Slay the Spire? Or Mechabellum, which is popular right now? Deck builders don't require coordination at all but demand understanding of tradeoffs and managing risks. If anything those skills are neglected parts of intelligence. And how high is the barrier to entry to something like Super Auto Pets?

There's no direct rationality commentary in the post, but there are plenty of other posts on LW that also aren't direct rationality commentary (for example, a large majority of posts here about COVID-19). I think that this post is a good fit because it provides tools for understanding this conflict and others like it, which I didn't possess before and now somewhat do.

It's not directly relevant to my life, but that's fine. I imagine that for some here it might actually be relevant, because of connections through things like effective altruism (maybe it helps grant makers decide where to send funds to aid the Sudanese people?).

Interesting post, thanks!

A couple of formatting notes:

This post gives a context to the deep dives that should be minimally accessible to a general audience. For an explanation of why the war began, see this other post.

It seems like there should be a link here, but there isn't one.

Also, the footnotes don't link properly, so currently one has to manually scroll down to the footnotes and then scroll back up. LessWrong has a footnote feature that you could use, which makes the reading experience nicer.

It used to be called Find Friends on iOS, but they rebranded it, presumably because family was a better market fit.

There are others like that too, like Life360, and they’re quite popular. They solve the problem of parents wanting to know where their kids are. It’s perhaps overly zealous on the parents’ part, but it’s a real desire that the apps are solving.

Metaculus isn’t very precise near zero, so it doesn’t make sense to multiply it out.

Also, there’s currently a mild outbreak, while most of the time there’s no outbreak (or less of one), so the risk for the next half year is elevated compared to normal.

2avturchin
In the case of H5N1 we could expect exponential growth of adaptation to mammals and humans, as well as of the number of infected birds, and in that case the probability will be higher in the next few years.

I'm not familiar with how Stockfish is trained, but does it have intentional training for how to play with queen odds? If not, then it might be able to start trouncing you if it were trained to play with it, instead of having to "figure out" new strategies uniquely. 

1O O
Stockfish isn’t using deep learning afaik. It’s mostly just bruteforcing.

Are there other types of energy storage besides lithium batteries that are plausibly cheap enough (with near-term technological development) to cover the multiple days of storage case?

(Legitimately curious, I'm not very familiar with the topic.)

bhauth

Yes, compressed natural gas in underground caverns is cheap enough for seasonal energy storage.

But of course, you meant "storage that can be efficiently filled using electricity". That's a difficult question. In theory, thermal energy storage using molten salt or hot sand could work, and maybe a sufficiently cheap flow battery chemistry is possible. In theory, much better water electrolysis and hydrogen fuel cells are possible; there just currently aren't any plausible candidates for that.

But currently, even affordable 14-hour storage is rather challenging.

If you're on the open-air viewing platform, it might be feasible to use something like a sextant or shadow lengths to figure out the height from the platform to the top, and then use a different tool to figure out the height of the platform.

3nim
From the photo of the tower's shadow in this article, I have two further guesses about the relative heights of the viewing platform and the pointy bits: 1. At some time in the year, the building's shadow will probably show the viewing deck height and pointy bit height, so they could in theory be triangulated 2. Due to the surrounding urban development, it looks wildly unlikely that the shadow will hit any surface it's actually useful to measure it on. It might not be legal to use from the viewing platform to the pointy bits, and it might not work in broad daylight, but a laser distance meter with ~50m range can be had for around $20 at the low end and fits in a pocket ;) A sextant is much less likely to cause problems by interfering with other tech, though.

I often realize that I've had a headache for a while and had not noticed it. It has real effects - I'm feeling grumpy, I'm not being productive - but it's been filtered out before my conscious brain noticed it. I think it's unreasonable to say that I didn't have a headache, just because my conscious brain didn't notice it, when the unconscious parts of my brain very much did notice it. 

After a split-brain surgery, patients can experience something on one side of their body and not notice it with the portion of the brain that is controlling speaking, tha... (read more)

The problem is that prior to ~1990, there were lots of supposed photographs of Bigfoot, and now there are ~none. So Bigfoots would have to have previously been common close to humans but now be uncommon, or all the photos were fake but the other evidence was real. Plus, all of that other evidence has also died out, now that the excuse that no one could have taken a photo is less plausible. So it's possible still that Bigfoot exists, but you have to start by throwing out all of the evidence that people have that Bigfoot exists, and then why believe in Bigfoot?

I really enjoyed the parts of the post that weren't related to consciousness, and it helped me think more about the assumptions I have about how the universe works. The Feynman quote was new for me, so thank you for sharing that!

However, when you brought consciousness into the post, it brought along additional assumptions that the rest of the post wasn't relying on, weakening the post as a whole. Additionally, LessWrong has a long history of debating whether consciousness is "emergent" or not. Most readers here already hold fixed positions on the debate an... (read more)

1Neil
That makes sense, thanks a lot for the feedback! I will be much more careful next time and try to keep sweeping assumptions out. Some ideas in here could definitely have worked without bringing in one of the most fundamental and notoriously hardest questions to answer. Good day! 

Any position that could be considered safe enough to back a market is only going to appreciate in proportion to inflation, which would just make the market zero-sum after adjusting for inflation. Something like ETH or gold wouldn't be a good solution because it's going to be massively distorted on questions that are correlated with the performance of that asset, plus there's always the possibility that they just go down, which would be the opposite of what you want.

1Qumeric
Why does it have to be "safe enough"? If all market participants agree to bet using the same asset, it can bear any degree of risk.  I think I should have said that a good prediction market allows users to choose what asset a particular "pair" will use. It will cause a liquidity split, which is also a problem, but it's manageable and, in my opinion, it would be much closer to an imaginary perfect solution than "bet only USD".  I am not sure I understand your second sentence, but my guess is that this problem will also go away if each market "pair" uses a single (but customizable) asset. If I got it wrong, could you please clarify?

I haven't read Fossil Future, but it sounds like he's ignoring the option of combining solar and wind with batteries (and other types of electrical storage, like pumped water). The technology is available today and can be more easily deployed than fossil fuels at this point.

3ChristianKl
If you only have solar + wind + batteries, you have a problem when you have a week of bad weather. Batteries can effectively move energy that's produced at noon to the night but they are not cost effective for charging batteries in summer to be used in bad months in the winter. 
3Sable
While I think Epstein's treatment of solar/wind and batteries is too brief, his main points are: 1. Large portions of the energy we need have nothing to do with the grid. Specifically, transportation (global shipping, flight) and industrial process heat (to make steel, concrete, etc.) comprise a large percentage of our energy needs and solar/wind are pretty useless (far too inefficient) for meeting those needs. 2. Epstein also points out that replacing current fossil fuels with solar/wind + batteries will require massive amounts of a) batteries, b) transmission lines, and c) solar and wind farms, which the environmental movement seem to oppose locally whenever possible. Just because the technology exists doesn't mean we're capable, as a society, of deploying it at scale.

Parts of this are easily falsifiable through the fact that organ transplant recipients sometimes get donor’s memories and preferences

The citation is to a disreputable journal. Some of their sources might have some basis (though a lot of them also seem disreputable), but I wouldn't take this at face value.

There can also be meaning that the author simply didn't intend. In biblical interpretation, for instance, there have been many different (and conflicting!) interpretations given to texts that were written with a completely different intent. One reader reads the story of Adam and Eve as a text that supports feminism, another reader sees the opposite, and the original writer didn't intend to give either meaning. But both readers still get those meanings from the text.

1[anonymous]
But that's because the meaning is underdetermined, there is information (explicit meaning) within the texts that constraints the space of interpretations, but it still allows for several different ones. How much the text is underdetermined is both a function of the text and of the reader, the reader may lack (as I said) cultural or idiosyncratic context, acquaintance with the object of reference; or the text (which is what provides the new information) being too short to disambiguate.

Interestingly, it apparently used to be Zebra, but is now Zulu. I'm not sure why they switched over, but it seems to be the predominant choice since the early 1950s. 

2Cedar
somewhat far-fetched guess: internet -> everybody does astrology now -> zebra gets confused with Libra -> replacement with Zulu

I understand that definition, which is why I’m confused for why you brought up the behavior of bacteria as evidence for why bacteria has experience. I don’t think any non-animals have experience, and I think many animals (like sponges) also don’t. As I see it, bacteria are more akin to natural chemical reactions than they are to humans.

I brought up the simulation of a bacteria because an atom-for-atom simulation of a bacteria is completely identical to a bacteria - the thing that has experience is represented in the atoms of the bacteria, so a perfect simulation of a bacteria must also internally experience things.

If bacteria have experience, then I see no reason to say that a computer program doesn’t have experience. If you want to say that a bacteria has experience based on guesses from its actions, then why not say that a computer program has experience based on its words?

From a different angle, suppose that we have a computer program that can perfectly simulate a bacteria. Does that bacteria have experience? I don’t see any reason why not, since it will demonstrate all the same ability to act on intention. And if so, then why couldn’t a different computer progra... (read more)

If you look far enough back in time, humans are descended from animals akin to sponges that seem to me like they couldn’t possibly have experience. They don’t even have neurons. If you go back even further, we’re the descendants of single-celled organisms that absolutely don’t have experience. But at some point along the line, animals developed the ability to have experience. If you believe in a higher being, then maybe it introduced it, or maybe some other metaphysical cause, but otherwise it seems like qualia has to arise spontaneously from the evolut... (read more)

-1JacobW38
My disagreement is here. Anyone with a microscope can still look at them today. The ones that can move clearly demonstrate acting on intention in a recognizable way. They have survival instincts just like an insect or a mouse or a bird. It'd be completely illogical not to generalize downward that the ones that don't move also exercise intention in other ways to survive. I see zero reason to dispute the assumption that experience co-originated with biology. I find the notion of "half consciousness" irredeemably incoherent. Different levels of capacity, of course, but experience itself is a binary bit that has to either be 1 or 0.

Nit: "if he does that then Caplan won't get paid back, even if Caplin wins the bet" misspells "Caplan" in the second instance.

4lsusr
Fixed. Thank you.

Cable companies are forcing you to pay for channels you don’t want. Cable companies are using unbundling to mislead customers and charge extra for basic channels everyone should have.

I think this would be more acceptable if either everything was bundled or nothing was. But generally speaking companies bundle channels that few people want, to give the appearance of a really good deal, and unbundle the really popular channels (like sports channels) to profit. So you sign up for a TV package that has "hundreds of channels", but you get lots of channels that you don't care about and none of the channels you really want. You're screwed both ways.

I think you're totally spot on about ChatGPT and near term LLMs. The technology is still super far away from anything that could actually replace a programmer because of all of the complexities involved.

Where I think you go wrong is looking at the long term future AIs. As a black box, at work I take in instructions on Slack (text), look at the existing code and documentation (text), and produce merge requests, documentation, and requests for more detailed requirements (text). Nothing there requires some essentially human element - the AI just needs to be g... (read more)

Prediction market on whether the lawsuit will succeed:

https://manifold.markets/Gabrielle/will-the-justice-department-win-its

I’m not a legal expert, but I expect that this sort of lawsuit, involving coordination between multiple states’ attorneys general and the Department of Justice, would take months of planning and would have to have started before public-facing products like ChatGPT were even released.

1trevor
That actually goes a long way towards answering the question. This means that in order for it to be connected, the lawsuit would have been on the backburner and the OpenAI-MSFT partnership somehow was either the straw that broke the camel's back, or it mostly-by-itself triggered a lawsuit that was held in reserve against google. Highly relevant info either way, thank you.

The feared outcome looks something like this:

  • A paperclip manufacturing company puts an AI in charge of optimizing its paperclip production.
  • The AI optimizes the factory and then realizes that it could make more paperclips by turning more factories into paperclips. To do that, it has to be in charge of those factories, and humans won’t let it do that. So it needs to take control of those factories by force, without humans being able to stop it.
  • The AI develops a supervirus that will cause a pandemic and wipe out humanity.
  • The AI contacts a genetics lab and
... (read more)
1Program Den
I get the premise, and it's a fun one to think about, but what springs to mind is Phase 1: collect underpants Phase 2: ??? Phase 3: kill all humans As you note, we don't have nukes connected to the internet. But we do use systems to determine when to launch nukes, and our senses/sensors are fallible, etc., which we've (barely— almost suspiciously "barely", if you catch my drift[1]) managed to not interpret in a manner that caused us to change the season to "winter: nuclear style". Really I'm doing the same thing as the alignment debate is on about, but about the alignment debate itself. Like, right now, it's not too dangerous, because the voices calling for draconian solutions to the problem are not very loud.  But this could change.  And kind of is, at least in that they are getting louder.  Or that you have artists wanting to harden IP law in a way that historically has only hurt artists (as opposed to corporations or Big Art, if you will) gaining a bit of steam. These worrying signs seem to me to be more concrete than the, similar, but not as old, nor as concrete, worrisome signs of computer programs getting too much power and running amok[2].     1. ^ we are living in a simulation with some interesting rules we are designed not to notice 2. ^ If only because it hasn't happened yet— no mentats or cylons or borg history — tho also arguably we don't know if it's possible… whereas authoritarian regimes certainly are possible and seem to be popular as of late[3]. 3. ^ hoping this observation is just confirmation bias and not a "real" trend. #fingerscrossed

We're worried about AI getting too powerful, but logically that means humans are getting too powerful, right?

One of the big fears with AI alignment is that the latter doesn't logically follow from the former. If you're trying to create an AI that makes paperclips and then it kills all humans because it wasn't aligned (with any human's actual goals), it was powerful in a way that no human was. You do definitely need to worry about what goal the AI is aligned with, but even more important than that is ensuring that you can align an AI to any human's preferences at all, or else worrying about which goal is pointless.

1Program Den
I think the human has to have the power first, logically, for the AI to have the power. Like, if we put a computer model in charge of our nuclear arsenal, I could see the potential for Bad Stuff.  Beyond all the movies we have of just humans being in charge of it (and the documented near catastrophic failures of said systems— which could have potentially made the Earth a Rough Place for Life for a while).  I just don't see us putting anything besides a human's finger on the button, as it were.   By definition, if the model kills everyone instead of make paperclips, it's a bad one, and why on Earth would we put a bad model in charge of something that can kill everyone?  Because really, it was smart — not just smart, but sentient! — and it lied to us, so we thought it was good, and gave it more and more responsibilities until it showed its true colors and… It seems as if the easy solution is: don't put the paperclip making model in charge of a system that can wipe out humanity (again, the closest I can think of is nukes, tho the biological warfare is probably a more salient example/worry of late).  But like, it wouldn't be the "AI" unleashing a super-bio-weapon, right?  It would be the human who thought the model they used to generate the germ had correctly generated the cure to the common cold, or whatever.  Skipping straight to human trials because it made mice look and act a decade younger or whatnot. I agree we need to be careful with our tech, and really I worry about how we do that— evil AI tho? not so much so

The Flynn effect isn't really meaningful outside of IQ tests. Most medieval and early modern peasants were uneducated and didn't know much about the world far from their home, but they definitely weren't dumb. If you look at the actual techniques they used to run their farms, they're very impressive and require a good deal of knowledge and fairly abstract thinking to do optimally, which they often did. 

Also, many of the weaving patterns that they've been doing for thousands of years are very complex, much more complex than a basic knitting stitch.

-5Angela Pretorius
  • At least 90% of internet users could solve it within one minute.

While I understand the reasoning behind this bar, any bar that excludes more than something like 0.01% of internet users is strongly discriminatory and regressive. Captchas are used to gate parts of the internet that are required for daily life. For instance, almost all free email services require filling out captchas, and many government agencies now require you to have an email address to interact with them. A bar that cuts out a meaningful number of humans means that those humans become unable t... (read more)
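To see the scale involved, assuming a round figure of ~5 billion internet users and (pessimistically) that failing the bar means being locked out entirely:

```python
# Assumed round figure; actual estimates of internet users vary.
internet_users = 5_000_000_000

excluded_at_90 = internet_users * (1 - 0.90)      # ~500 million people
excluded_at_9999 = internet_users * (1 - 0.9999)  # ~500 thousand people

print(f"{excluded_at_90:,.0f} vs {excluded_at_9999:,.0f}")
```

Even the stricter 99.99% bar still leaves a population the size of a mid-sized city outside.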

5Bruce G
If only 90% can solve the captcha within one minute, it does not follow that the other 10% are completely unable to solve it and faced with "yet another barrier to living in our modern society". It could be that the other 10% just need a longer time period to solve it (which might still be relatively trivial, like needing 2 or 3 minutes) or they may need multiple tries. If we are talking about someone at the extreme low end of the captcha proficiency distribution, such that the person can not even solve in a half hour something that 90% of the population can answer in 60 seconds, then I would expect that person to already need assistance with setting up an email account/completing government forms online/etc, so whoever is helping them with that would also help with the captcha. (I am also assuming that this post is only for vision-based captchas, and blind people would still take a hearing-based alternative.)

Workers at a business are generally more aligned with each other than they are with the shareholders of the business. For example, if the company is debating a policy that has a 51% chance of doubling profit and a 49% chance of bankrupting the company, I would expect most shareholders to be in favor (since it's positive EV for them). But for worker-owners, that's a 49% chance of losing their job and a 51% chance of increasing salary but not doubling (since it's profit that is doubling, not revenue, and their salaries are part of the expenses), so I would e... (read more)
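The expected-value asymmetry can be made concrete; all of these numbers are hypothetical stand-ins:

```python
# Hypothetical firm facing the gamble: all values are illustrative.
p_win, p_bankrupt = 0.51, 0.49

# Diversified shareholder: stake doubles on a win, goes to zero on bankruptcy.
stake = 100
shareholder_ev = p_win * 2 * stake + p_bankrupt * 0
print(round(shareholder_ev, 2))  # 102.0 > 100, so shareholders favor the bet

# Worker-owner: salary rises by (say) 20% on a win, job disappears on bankruptcy.
salary = 100
worker_ev = p_win * 1.2 * salary + p_bankrupt * 0
print(round(worker_ev, 2))  # 61.2 < 100, so worker-owners oppose it
```

The exact payoff on a win is an assumption; the point is only that the worker's upside is capped well below 2x while the downside is the same total loss.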

2Dagon
I always like it when I can upvote and disagree :)  I think you have to be in VERY far mode, and still squint a bit, to think of that as "alignment" to the degree that distinguishes socialist from conventional organizations.  Sure, employees as a group will prefer higher median wages over more profits (though maybe not if they're actual owners to a great degree), but I have yet to see a large organization where workers care all that much about other workers (distant ones, with different roles, who compete for prestige and compensation even while cooperating for  delivery).   Conventional org owners/leaders care a lot about worker retention and productivity, which is often summarized as "satisfaction".  I have seen no evidence in my <mumble> years at companies big and small, including both tech and non-tech workers that office workers care more about warehouse workers than senior management does.  There is probably slightly more for warehouse workers caring about workers in other warehouses, but even then, there's cut-throat hatred for closing "my" warehouse rather than someone else's.

I think the biggest issue in software development is the winner-takes-all position with many internet businesses. For the business to survive, you have to take the whole market, which means you need to have lots of capital to expand quickly, which means you need venture capital. It's the same problem that self-funded startups have. People generally agree that self-funded startups are better to work at, but they can't grow quite as fast as VC-funded startups and lose the race. But that doesn't apply outside of the software sphere (which is why VC primarily ... (read more)

So Diplomacy is not a computationally complex game; it's a game about out-strategizing your opponents, where roughly all of the strategy is convincing your opponents to work with you. There are no new tactics to invent, and an AI can't really see deeper into the game than other players; it just has to be more persuasive and make decisions about the right people at the right time. You often have to plan your actions so that in a future turn someone else will choose to ally with you. The AI didn't do any specific psycholog... (read more)

 What does this picture [pale blue dot] make you think about?

This one in particular seems unhelpful, since the picture is only meaningful if the viewer knows what it's a photo of. Sagan's description does a lot to imbue it with emotion.

1T431
Thank you for your input on this. The idea is to show people something like the following image [see below] and give a few words of background on it before asking for their thoughts. I agree that this part wouldn't be too helpful for getting people's takes on the future, but my thinking is that it might be nice to see some people's reactions to such an image. Any more thoughts on the entire action sequence?

That seems like a really limiting definition of intelligence. Stephen Hawking, even when he was very disabled, was certainly intelligent. However, his ability to be agentic was only possible thanks to the technology he relied on (his wheelchair and his speaking device). If that had been taken away from him, he would no longer have had any ability to alter the future, but he would certainly still have been just as intelligent. 

2jacob_cannell
It's just the difference between potential and actualized.
Answer by gbear605

I don’t have any experience with data centers or with deploying machine learning at scale. However, I would expect that for performance reasons it is much more efficient to have a local cache of the current data and then either have a manual redeploy at a fixed schedule or have the software refresh the cache automatically after some amount of time.

I would also imagine that reacting immediately could result in feedback loops where the AI overreacts to recent actions.
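The cache-and-refresh pattern described above can be sketched like this (hypothetical names; a real serving system would be more involved):

```python
import time

class TTLCache:
    """Serve a locally cached value, refreshing only after a fixed TTL.

    Decoupling serving from the underlying data source this way avoids
    the tight feedback loops that reacting to every update could cause.
    """
    def __init__(self, fetch, ttl_seconds):
        self._fetch = fetch        # function that loads fresh data
        self._ttl = ttl_seconds
        self._value = None
        self._loaded_at = None     # None means never loaded

    def get(self):
        now = time.monotonic()
        if self._loaded_at is None or now - self._loaded_at >= self._ttl:
            self._value = self._fetch()
            self._loaded_at = now
        return self._value
```

A manual redeploy on a fixed schedule is the same idea, with the deployment interval playing the role of the TTL.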

A mitigating factor for the criminality is that smarter people usually have less need to commit crimes. Society values conventional intelligence and usually will pay well for it, so someone who is smarter will tend to get better jobs and make more money, and won't need to resort to crime (especially petty crime).

4Dagon
It could also be that smarter people get caught less often, for any given level of criminality.
3lalaithion
Additionally, if you have a problem which can be solved by either (a) crime or (b) doing something complicated to fix it, your ability to do (b) is higher the smarter you are.

My understanding of Spanish (also not a Spanish speaker) is that it's a palatal nasal /ɲ/, not a palatalized alveolar nasal /nʲ/. With a palatal nasal, you're making the sound with the body of your tongue at the hard palate (the part at the top of your mouth just behind the alveolar ridge). With a palatalized nasal, the palatalization is a "secondary" articulation: the tongue stays at the alveolar ridge while the body of the tongue raises toward the hard palate.

That said, the Spanish ñ is a good example of a palatal or palatalized sound for an English speaker.

2A1987dM
And Irish (Gaelic) has both! (/ɲ/ is slender ng, /nʲ/ is slender n)

Yeah, that's absolutely more correct, but it is at least a little helpful for a monolingual English speaker to understand what palatalization is.

4Viliam
Perhaps many Americans know at least some basics of Spanish? I think the Spanish ñ letter, as in "el niño", is proper palatalization. (But I do not speak Spanish.)

Not sure I can explain in text to a native English speaker what palatalization is; you would need to hear actual examples.

 

There are some examples in English. It's not quite the same as how Slavic languages work*, but it's close enough to get the idea: If you compare "cute" and "coot", the "k" sound in "cute" is palatalized while the "k" sound in "coot" is not. Another example would be "feud" and "food".

British vs American English differ sometimes in palatalization. For instance, in British English (RP), "tube" is pronounced with a palatalized "t" ... (read more)

8Measure
I would just call this an extra 'y' sound before the vowel. ([ˈkjuːt] vs. [ˈkuːt])
2Viliam
This explains something I was confused about, thank you.

The risk is a good point given some of the uncertainties we’re dealing with right now. I’d estimate maybe a 1% risk of those per year (weighted more towards the latter half of the time frame, but I’ll assume it’s constant), so with that as a discount rate it would need to be more like $1400, since surviving 30 years of 1% annual risk leaves only about a 74% chance the payout still matters. That’s still much less than the assumption.
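The arithmetic behind that figure, as a quick sketch (assuming a flat 1% annual risk over 30 years, per the estimate above):

```python
# Probability the payout still matters after 30 years of 1%/year risk.
p_survive = 0.99 ** 30       # ≈ 0.74
# Amount in 30 years that is risk-adjusted-equivalent to $1000 now.
equivalent = 1000 / p_survive
print(round(equivalent))     # ≈ 1352, i.e. roughly the $1400 ballpark
```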

Looking at my consumption right now, I objectively would not spend the $1000 on something that lasts for more than 30 years, so I believe that shouldn’t be relevant. To make this more direct, we could phrase it as something like “a $1000 vacation now or a $1400 vacation in 30 years”, though that ignores consumption offsetting.

For the point about smoothing consumption, does that actually work, given that retirement savings are usually invested and expected to give returns higher than inflation? For instance, under my current savings plan, although my income is going to go up and my amount saved will go up proportionally, the majority of my money when I retire will come from savings made early in my career.

For a more specific example, consider two situations where I'm working until I'm 65 and have returns of 6% per annum (and taking all dollar amounts to be inflation adjusted):

  • I s
... (read more)
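The timing effect can be sketched with a quick compound-interest comparison (assuming the 6% inflation-adjusted annual return and age-65 retirement mentioned above; the specific ages and the $1,000 amount are illustrative):

```python
# $1,000 saved at different ages, grown at 6%/year (inflation-adjusted) to age 65.
def value_at_65(amount, age_saved, rate=0.06, retirement_age=65):
    years = retirement_age - age_saved
    return amount * (1 + rate) ** years

early = value_at_65(1000, 25)   # 40 years of compounding
late = value_at_65(1000, 55)    # 10 years of compounding
print(round(early), round(late))  # roughly 10286 vs 1791
```

A dollar saved at 25 ends up worth several times a dollar saved at 55, which is why early contributions dominate the final balance even if later contributions are larger.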
6Andrew Currall
This sounds nuts to me. Firstly, what about risk? You might be dead in 30 years. We might have moved to a different economy where money is worthless. You might personally not value money (or not value the kind of things you can get with money) as much. Admittedly there's also some upside risk, but it's clearly lower than the downside.  We're ignoring investment possibilities, of course. But even then, in any case, if you have £1000 now, you can use it to buy something that would last more than 30 years and benefit you over that time. 

if I put things in my cart, don't check out, and come back the next day, I'm going to be frustrated if the site has forgotten my selections!

Ironically, I get frustrated by the inverse of this. If I put something in my shopping cart, I definitely don’t still want it tomorrow. I keep accidentally ending up with items I don’t want in my order when I check out, and then I have to go back through all the order steps to remove them (since there’s hardly ever a removal button on the credit card screen). It’s so frustrating! I don’t want you to remember things about me from previous days, just get rid of it all.

2jefftk
I see that for longer periods, but even overnight?

A single human is always going to carry some risk of a sudden mental break, or may simply not have been trustworthy in the first place. So it seems to me that a system where the most knowledgeable person makes the decision alone is always going to be somewhat riskier than one where that most knowledgeable person also has to double-check with literally anyone else. If you make sure that the two people are always together, it doesn’t hurt anything (other than the salary for the second person, I suppose, but that’s negligible).

For political reasons, we ... (read more)

1M. Y. Zuo
There is the problem of the less knowledgeable being deceived by a false alarm, or ignoring a genuine alarm. Since the consequences are so enormous in either case, due to competitive dynamics between multiple countries, it still doesn't seem desirable, or even credible, to entrust this to anything larger than a small group at best.

In the case of extreme time pressure, such as the hypothetical 5 minute warning, trying to coordinate between a small group of hastily assembled non-experts, under the most extreme duress imaginable, will likely increase the probability of both immensely undesirable scenarios (assuming they can even be assembled and communicate quickly enough). On the other hand, this removes the single point of failure, and leaving it to a single individual does have the other downsides you mentioned. So there may not be a clear answer, if we assume communication speeds are sufficient, leaving it to a political choice.

Perhaps this might have been feasible before the invention of the internet. Nowadays it seems practically impossible, as anyone competent enough to understand building half a weapon will very likely be capable of extrapolating to the full weapon in short order, and more than likely capable of bypassing any blocks society may establish to prevent communication between those with complementary knowledge. Even if it was split 10 ways, the delay may only be a few years to decades until the knowledge is reassembled.

The policy could just be “at least one person has to agree with the President to launch the nuclear arsenal”. It probably doesn’t change the game that much, but it at least gets rid of the possible risk that the President has a sudden mental break and decides to launch missiles for no reason. Notably it doesn’t hurt the ability to respond to an attack, since in that situation there would undoubtedly be at least one aide willing to agree, presumably almost all of them.

Actually consulting with the aide isn’t necessary, just an extra button press to ensure that something completely crazy doesn’t happen.

-1M. Y. Zuo
But the probability of a false alarm can never be reduced to zero.  In this case wouldn't it be most desirable to have the most knowledgeable person, with the best internal estimate of the probability of a false alarm, to make the final decision? Leaving it to anyone other than the person with the best estimate seems to be intentionally tolerating a higher than minimal possibility of senseless catastrophe.

What I’m referring to is the two-man rule: https://en.m.wikipedia.org/wiki/Two-man_rule

US military policy requires that for a nuclear weapon to actually be launched, two people at the silo or on the submarine have to coordinate to launch the missile. The decision still comes from a single person (the President), but the people who carry out the order are double-checked, so that a single crazy serviceman can’t launch a missile on his own.

It wouldn’t be crazy for the President to require a second person to help make the decision, since the President is going ... (read more)

1M. Y. Zuo
'Consulting' with whichever random aide happens to be nearest on duty seems even less desirable than making the decision alone. If you mean a rotating staff of knowledgeable military attaches or similar, maybe, if they literally stay nearby 24/7. But then wouldn't it be the military attache making the final decision, since they will always have more up-to-date knowledge that cannot be fully elaborated in a few minutes?
Load More