I'm going to give one of the easy answers that probably a substantial number of the respondents here could give: programming. Part of the point is to illustrate what sort of answer I am looking for.
One of the main tasks a programmer handles is fixing bugs. Often bugs are discovered because the user has encountered a situation where the program does something undesirable (for instance crashing or giving the wrong output). Usually this just annoys the user and nothing more happens, but sometimes the user reports the error and what they did before it happened to customer support, and then customer support enters it into a list of tasks that the programmers can look at.
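(To make that hand-off concrete: a bug ticket in such a task list usually ends up containing something like the following. This is a sketch of my own, not any particular company's system.)

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BugReport:
    """A minimal, hypothetical bug ticket as it might land in the programmers' task list."""
    title: str                     # short summary, e.g. "App crashes when exporting to PDF"
    steps_to_reproduce: List[str]  # what the user did before the problem appeared
    expected_behavior: str         # what should have happened
    actual_behavior: str           # what actually happened (crash, wrong output, ...)
    reported_by: str = "customer support"

ticket = BugReport(
    title="Export button crashes the app",
    steps_to_reproduce=["Open a document", "Click File > Export", "Choose PDF"],
    expected_behavior="A PDF file is saved",
    actual_behavior="The program closes with an error message",
)
```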
The programmers may estimate the priority of the various bugs that are reported. This priority can be estimated from various things, such as how commonly users report the bug (or how common it is, statistically, in logs emitted by the program), how severely it is described as affecting the usability of the software, and how much programmer time it is expected to take to fix.
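(As a rough sketch of how those factors might get weighed. The formula and weights here are made up for illustration; in practice this is often done informally in a meeting or a tracker.)

```python
def priority_score(reports_per_week: float, severity: int, estimated_hours: float) -> float:
    """Toy heuristic: frequent, severe, cheap-to-fix bugs float to the top of the list.

    severity: 1 (cosmetic) .. 5 (crash / data loss); the weighting is arbitrary.
    """
    return (reports_per_week * severity) / max(estimated_hours, 0.5)

# A crash reported 20 times a week that takes ~2 hours to fix outranks
# a rare cosmetic glitch that would take a full day.
print(priority_score(reports_per_week=20, severity=5, estimated_hours=2))  # 50.0
print(priority_score(reports_per_week=1, severity=1, estimated_hours=8))   # 0.125
```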
The bugs (and other tasks, such as new features) are then assigned to programmers on the team based on things like familiarity with the relevant parts of the software (often programmers mainly work with only one part of the programs they make), and the programmers work to fix them.
Typically the first step in fixing a bug is to reproduce it, so we know how to tell that it has been correctly fixed, and so we can inspect the program while it runs to figure out why the bug happens. The programmer may try to follow the steps that the user described in order to reproduce the bug, or they may use their knowledge of how the program works to infer different ways of reproducing the bug. If they cannot reproduce the bug, they may decide not to fix it, or send questions back to the customer to get more info on how to reproduce it.
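For a concrete (invented) example of what "reproducing" can look like: suppose a user reports that the program crashes when their shopping cart is empty. The programmer might boil the user's steps down to the smallest script that still triggers the failure:

```python
# Hypothetical buggy function somewhere in the codebase.
def average_item_price(prices):
    return sum(prices) / len(prices)  # crashes when the list is empty

# Minimal reproduction: confirms the bug and doubles as a definition of "fixed".
average_item_price([])  # raises ZeroDivisionError, matching the user's crash report
```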
In order to understand how the bug happens, they may read the code to see if they can deduce the logic that causes it. They may also run the program in a step-by-step manner, watching how the variables change and affect one another. At times this can be difficult, for <complex reasons that I'd ideally explain but I'm on my phone right now so cutting it out for brevity>.
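"Running the program in a step-by-step manner" usually means using a debugger. In Python, for example, the programmer might drop a breakpoint into the suspect function and then step through it line by line, inspecting variables as they change (the commands shown are the standard pdb ones; the function is the toy example from above):

```python
def average_item_price(prices):
    breakpoint()          # pauses execution here and drops into the debugger
    total = sum(prices)
    count = len(prices)   # printing `count` at this point would reveal it is 0
    return total / count

average_item_price([])
# At the debugger prompt one would then use commands like:
#   n        - execute the current line and stop at the next one
#   p count  - print the current value of `count`
#   c        - continue running until the next breakpoint or the crash
```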
Once the bug has been understood, the programmer has to come up with a fix. Sometimes fixes are simple, e.g. changing the code to eliminate a silly programmer mistake. Other times the bug follows inevitably from other logic, and one has to change that other logic to fix it. The bug can also occur due to missing code, e.g. maybe only a special case was handled in the past, but now a more general case should be handled.
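Continuing the invented example from above, this one would be a "missing code" bug: the original author only thought about carts that have items in them, and the empty case needs to be handled explicitly. What the right behavior for the empty case should be is itself a judgment call:

```python
def average_item_price(prices):
    # Previously only the non-empty case was handled; an empty cart crashed.
    if not prices:
        return 0.0  # deciding what "average of nothing" should mean is part of the fix
    return sum(prices) / len(prices)
```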
The fix must then be written into code, which involves a bunch of complex problem solving that I'm not even sure I know how to describe in non-technical terms. It's pretty important though so I should probably return to it after I'm off my phone. (Or if you feel like describing it, dear reader, I would encourage you to do so in a comment.)
Often programmers will also write a piece of code that tests the fix. This means that, over time, the project can end up accumulating an enormous number of automated tests. And that's the next step: typically after a change is made, all the tests are run to ensure that nothing breaks.
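In the running example, such a test might look like this (written with Python's built-in unittest; real projects use whatever test framework they've standardized on):

```python
import unittest

# In a real project this would be imported from wherever the fix lives;
# it is inlined here so the example runs on its own.
def average_item_price(prices):
    if not prices:
        return 0.0
    return sum(prices) / len(prices)

class TestAverageItemPrice(unittest.TestCase):
    def test_empty_cart_does_not_crash(self):
        # Regression test for the reported bug: an empty cart used to crash.
        self.assertEqual(average_item_price([]), 0.0)

    def test_normal_cart(self):
        self.assertAlmostEqual(average_item_price([1.0, 2.0, 3.0]), 2.0)

if __name__ == "__main__":
    unittest.main()
```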
Before the code is made a permanent part of the project, typically other programmers on the team will look at it and give comments on how to make it easier for them to understand, how to make it faster, and so on, such that the final code is reasonably good.
While doing all of these things, programmers typically have to manage a lot of technical things. For example, if the programmer is working on a web app, then the web app is typically run by multiple programs that work together. In order to run the app to e.g. reproduce a bug, the programmer may therefore need to configure the different pieces so they know how to find each other. Often configuration is partly handled by automatic tools, but these tools are often buggy and the programmer has to know how to navigate around those bugs.
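A small, made-up example of what "configuring the pieces so they know how to find each other" can look like: the web app reads the locations of its database and other services from environment variables, with defaults that happen to work on the programmer's own machine:

```python
import os

# Hypothetical configuration for running the app locally while reproducing a bug.
# In a shared or production environment, automated tooling would normally set these.
DATABASE_URL = os.environ.get("DATABASE_URL", "postgresql://localhost:5432/myapp_dev")
JOB_QUEUE_URL = os.environ.get("JOB_QUEUE_URL", "redis://localhost:6379/0")
API_BASE_URL = os.environ.get("API_BASE_URL", "http://localhost:8000")

print(f"Web app will talk to the database at {DATABASE_URL}")
print(f"Background jobs will be queued at {JOB_QUEUE_URL}")
print(f"The frontend will call the API at {API_BASE_URL}")
```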
This isn't a complete description of what programmers do, rather it is a description of one slice of programmer work (solving bugs).
Ok, then if we set finance aside, we're left with a set of resources that can be used for all sorts of purposes. Minerals, land, plants, animals, equipment, people, skills, knowledge, etc.
Then some entity creates some form of money, and we all agree to use it as the unit of account and medium of exchange and store of value. That can happen in different ways, like the one I mentioned in the previous comment. At this point, people want money because money is the tool that lets them get other things that have value to them.
People start producing things of their choice and selling them for a price of their choice. By this process they learn, individually and collectively, what prices other people are willing to pay. Scale it up and we get market prices and efficient markets for goods and services. Everything has a price relative to everything else. That price changes over time for all sorts of reasons.
The government can manipulate the overall level of prices, aka inflation and deflation, by altering the amount of money in circulation: unless we increase our productive capacity or decrease consumption, more money chasing the same goods and services means prices go up. It can also change relative prices through selective taxes and subsidies. Or by purchasing stuff, which affects prices the way private actors do, by raising demand for it.
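(A toy way to see the "more money chasing the same goods" point, using the textbook quantity-of-money identity M × V = P × Q rather than anything specific to the argument above: if the money supply grows while output and the pace of spending stay fixed, the price level has to absorb the difference.)

```python
# Quantity-of-money identity: M * V = P * Q
# M = money in circulation, V = how often each unit gets spent,
# Q = real output, P = the resulting price level.
def price_level(money_supply, velocity, real_output):
    return money_supply * velocity / real_output

before = price_level(money_supply=100, velocity=2, real_output=200)  # 1.0
after = price_level(money_supply=110, velocity=2, real_output=200)   # 1.1
print(f"10% more money, same output: prices rise {100 * (after / before - 1):.0f}%")
```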
If things I want to buy or normally buy get more expensive I either change what I buy, decrease total consumption, draw down savings, go into debt, or find a way to increase my income (aka start producing more or better value for sale in the labor market). Everyone else doing the same gradually shifts the patterns of consumption and production. Hopefully in ways that increase long-term productive capacity. Repeat forever. The whole edifice of finance and business and the relevant branches of law are built to facilitate this.
Interest rates are kind of a weird tool to use to affect the money supply, because it’s hard to know at any given moment what level they need to be above or below to actually decrease or increase the amount of money in circulation (which IIUC is one of the arguments in favor of switching to NGDP level targeting), but basically that’s the idea. Money gets created when the Fed lends it into existence, to the Treasury or to banks. It goes into circulation when banks lend it out or the Treasury spends it, and leaves circulation when the Fed gets paid back.
Interest rates are the price of money, and the higher the price of money, the less money people and governments can borrow and want to borrow. A house's list price looks a lot more attractive with a 5% mortgage than a 10% mortgage, so if mortgage rates go up, house prices go down until people buy them, or they don't get sold at all. In a well-functioning market, builders then build fewer new houses because they can't get as high a price for them. A factory that wants to buy a capex-heavy system to increase output might do it if it can pay the loan back at 5% interest over 10 years, but the output increase may not be worth it at 10%.
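To put rough numbers on the mortgage example (standard fixed-rate amortization formula; the loan size and term are made up): the same loan costs quite a bit more per month at 10% than at 5%, which is why buyers can afford less house at the higher rate.

```python
def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate loan payment formula."""
    r = annual_rate / 12              # monthly interest rate
    n = years * 12                    # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

loan = 300_000
print(f"At 5%:  ${monthly_payment(loan, 0.05, 30):,.0f}/month")  # ~ $1,610
print(f"At 10%: ${monthly_payment(loan, 0.10, 30):,.0f}/month")  # ~ $2,633
```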
(And if you're also wondering how the heck the IRA is supposed to fight inflation with lots of subsidies, it's industrial policy, an attempt to increase productive capacity by lowering the price of investing in things that let us make more and better stuff that we want people to make more and better of. This is a long-term strategy whose effects would show up over years, and IMO too many countries have been neglecting industrial policy needs for a long time.)