All of weathersystems's Comments + Replies

Heh, I got the same feeling from the Dutch people I met. My ex-wife once did a corporate training thing where they were learning about the power of "yes and" in improv and in working with others. She and one other European person (from Switzerland maybe?) were both kinda upset about it and decided to turn their improv into a "no but" version.

Ya I definitely took agreeableness == good as just an obvious fact until that relationship.

This isn't as strong of an argument as I once thought


What is the "this" you're referring to? As far as I can tell I haven't presented an argument.

1Noosphere89
Specifically, I was talking about the argument against technological progress based on the Vulnerable World Hypothesis.

Do you have a link to the job posting?

4jefftk
Sharing the link would disclose who I was talking to, sorry!

I would say it feels like my brain's built in values are mostly a big subgoal stomp, of mutually contradictory, inconsistent, and changeable values. [...]

it feels like my brain has this longing to find a small, principled, consistent set of terminal values that I could use to make decisions instead. 


Here's a slate star codex piece on our best guess on how our motivational system works: https://slatestarcodex.com/2018/02/07/guyenet-on-motivation/. It's essentially just a bunch of small mostly independent modules all fighting for control of the body to ... (read more)

1Konstantin Weitz
Thanks Viliam and weathersystems! Sorry it took me a little while to respond, I wanted to make sure I had read and understood the pointers to the related work you guys provided.

I spent some time digging deeper into inclusive fitness. Social Evolution and Inclusive Fitness Theory: An Introduction by James A. R. Marshall provides a good summary. There are indeed proofs which show that evolution selects for those individuals that always maximize inclusive genetic fitness in the moment. That said, these proofs assume that maximizing offspring in the moment won't hurt your chances of creating offspring in the future. So these proofs don't really apply to the situations that humans find themselves in. Raping the nearest person may give you the highest inclusive genetic fitness in the moment, but you will go to jail and won't be able to reproduce any further, so this behavior won't be favored by evolution in the long run (thank god). So yeah, definitely don't maximize short term inclusive genetic fitness.

So what about maximizing inclusive genetic fitness in the long term, say 1 billion years? I couldn't find any papers analyzing what evolution would do to a strategy like that, but it intuitively sounds a lot better. If that was your terminal goal, you would probably want to push science forward, advance technology, spread the human race as far out into the universe as you can, etc. Honestly, those sound like pretty reasonable things I'd be happy to support.

I completely agree that this is a possible (even likely) scenario. I really don't know, that's why I'm asking :)

I agree that it's going to be impossible to completely change my behavior, just because I change my value system. e.g. no matter what the terminal goal would be, I'd still spend many hours a day sleeping, I'd still need narcotics to go through surgery, and I still wouldn't be able to eat food that tastes absolutely terrible. That said, I'd say there's like 80% of my after tax income, 100% of m
Answer by weathersystems40

While these sound good, the rationale for why these are good goals is usually pretty hand wavy (or maybe I just don't understand it).

 

At some point you've just got to start with some values. You can't "justify" all of your values; you have to start somewhere. And there is no "research" that could tell you what values to start with.

Luckily, you already have some core values.

The goals you should pursue are the ones that help you realize those values. 

 

but there are a ton of important questions where I don't even know what the goal is

You seem to th... (read more)

7Viliam
The next step is to distinguish between terminal and instrumental values, or perhaps we could call them "goals" and "strategies". Which things you want because they feel intrinsically valuable, and which things you want because they seem like a good idea to help you achieve the former.

For example, a goal may be "to be respected by others", and a possible strategy is "get formal education". It may be a bit complicated to disentangle, but imagine something like this: Someone asks you "if you 100% knew that people will always respect you, would you still want to get formal education?" and you say "well, I am also curious about how things work, and I also want to get a good job with a good salary" and they say "ok, so imagine that you 100% knew that people will respect you, and you can always find everything clearly explained on Wikipedia or Khan Academy, and the jobs would accept you based on what you already accomplished, ignoring your diploma... would you still want to get formal education?" -- and if you say "no", then education was just your strategy, not your goal.

On the other hand, if someone asks "why do you want to be respected by others" and you say "I guess it makes people more likely to listen to me, and it makes me feel safe" and they say "so if you 100% knew that you are perfectly safe, and people would always listen to what you say, they just will really disrespect you, would that be okay for you?" and you say "no; I just want to feel respected even if it serves no specific purpose", then it is your goal.

Sometimes people go a bit too far and say "well, what everyone actually wants is to feel good, isn't it?". And while it is true that getting what you wanted usually makes you feel good, a mere feel-good pill is not what we actually want. If you feel disrespected, you wouldn't ask for a pill that makes you falsely believe that you are respected. Feelings are (imprecise) indicators of our values, not the values themselves.

Maybe a dumb question. What's an EM researcher? Google search didn't do me any good.

2Johannes C. Mayer
I added a link, that should have been there from the start, thanks.
6MSRayne
Em as in "age of Em" from Robin Hanson's book - emulated / uploaded human.
7Charlie Steiner
More typically written with capitalization "Em" or "em," it's short for "emulation" or "emulated human."

What do you think about the vulnerable world hypothesis? Bostrom defines the vulnerable world hypothesis as: 

If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition.

(There's a good collection of links about the VWH on the EA forum). And he defines "semi-anarchic default condition" as having 3 features:

1. Limited capacity for preventive policing. States do not have sufficiently r

... (read more)
1Noosphere89
Sorry for coming in late, but I think that this isn't as strong of an argument as I once thought, since there are issues with the solutions of VWH, which I'll describe here:

I'm not so sure I get your meaning. Is your knowledge of the taste of salt based on communication?

Usually people make precisely the opposite claim. That no amount of communication can teach you what something subjectively feels like if you haven't had the experience yourself.

I do find it difficult to describe "subjective experience" to people who don't quickly get the idea. This is better than anything I could write: https://plato.stanford.edu/entries/qualia/. 

1Kenny
I've updated somewhat – based on this video (of all things):
* Stephen Wolfram: Complexity and the Fabric of Reality | Lex Fridman Podcast #234 - YouTube

My tentative new idea is (along the lines of) 'subjective experience' is akin to a 'story that could be told' from the perspective (POV) of the 'experiencer'. There would then be a 'spectrum' of 'sentience' corresponding to the 'complexity' of stories that could be told about different kinds of things. The 'story' of a rock or a photon is very different, and much simpler, than even a bacterium, let alone megafauna or humans.

'Consciousness' tho would be, basically, 'being a storyteller'. But without consciousness, there can't be any awareness (or self awareness) of 'sentience' or 'subjective experience'. Non-conscious sentience just is sentient, but not also (self-)aware of its own sentience.

Consciousness does tho provide some (limited) way to 'share' subjective experiences. And maybe there's some kind of ('future-tech') way we could more directly share experiences; 'telling a story' is basically all we have now.

The quotes above are not the complete conversation. In the section of the discussion about AGI, Blake says:

Blake: Because the set of all possible tasks will include some really bizarre stuff that we certainly don’t need our AI systems to do. And in that case, we can ask, “Well, might there be a system that is good at all the sorts of tasks that we might want it to do?” Here, we don’t have a mathematical proof, but again, I suspect Yann’s intuition is similar to mine, which is that you could have systems that are good at a remarkably wide range of things, b

... (read more)
3Michaël Trazzi
Thanks for bringing up the rest of the conversation. It is indeed unfortunate that I cut out certain quotes from their full context. For completeness' sake, here is the full excerpt without interruptions, including my prompts. Emphasis mine.
4Yair Halberstadt
In that case it's still an extremely poor argument. He's successfully pointed out that something nobody ever cared about can't exist (due to the no free lunch theorem). We know this argument doesn't apply to humans, since humans are better at all the things he discussed than apes, and polymaths are better at all the things he discussed than your average human. So he's basically got no evidence at all for his assertion, and the no free lunch theorem is completely irrelevant.

Why would self-awareness be an indication of sentience? 

By sentience, do you mean having subjective experience? (That's how I read you)

I just don't see any necessary connection at all between self-awareness and subjective experience. Sometimes they go together, but I see no reason why they couldn't come apart. 

1Kenny
Hmmm I'm very confused by what "subjective experience" means – in a (possibly, hypothetically) technical sense. It seems/feels like our knowledge of subjective experiences is entirely dependent on communication (via something like human language) and that other exceptional cases rely on a kind of 'generalization via analogy'. If I had to guess, the 'threshold' of subjective experience would be the point beyond which a system could 'tell' something, i.e. either 'someone' else or just 'itself', about the 'experience'. Without that, how are we sure that image classifiers don't also have subjective experience? Maybe subjective experience is literally a 'story' being told.
Answer by weathersystems20
  • https://github.com/search for when stackoverflow fails me. Sometimes when I'm trying to figure out how to use some library with not great documentation, there are good examples in other people's code that aren't yet on stackoverflow. (See the sketch after this list for a scripted version of the same kind of search.)
  • product reviews on reddit (google search something like "light phone review site:reddit.com")
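If you end up doing this kind of code search a lot, here's a minimal Python sketch of scripting it against GitHub's code-search REST API instead of the web UI. This is purely my own illustration, not part of the original tip: it assumes you have a personal access token in a GITHUB_TOKEN environment variable (the code-search endpoint requires authentication), and the query string is just a hypothetical example.

```python
import os
import requests

# Minimal sketch: search public code on GitHub for usage examples of a library.
# Assumes a personal access token in GITHUB_TOKEN; the query is illustrative.
token = os.environ["GITHUB_TOKEN"]
query = '"requests.Session" language:python'  # hypothetical example query

resp = requests.get(
    "https://api.github.com/search/code",
    params={"q": query, "per_page": 10},
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    },
    timeout=30,
)
resp.raise_for_status()

for item in resp.json()["items"]:
    # Print where each hit lives so you can open it in the browser.
    print(item["repository"]["full_name"], item["path"])
    print("   ", item["html_url"])
```

The web search at https://github.com/search does the same thing interactively; the scripted version is only worth it if you want to filter or collect results programmatically.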

     
2Chloe Thompson
Oooo cool I didn't know this github trick!

Ah. Ya that makes sense. It sounds like it's not so much about what to do in the moment of panic as what to focus on throughout your day-to-day life. Let yourself be interested in and pay attention to things other than that you feel bad all the time. Don't let your pain be your main/only focus.

I read it as an analogy to a programming stack trace, but with motivations. Oftentimes you're motivated to do A in order to get B in order to get C, where one thing is desired only as a means to get something else. Presumably these chains of desire bottom out in some terminal desires: things that are desired for their own sake, not because of some other thing they get you.

So one example could be, "I want to get a job, in order to get money, in order to be able to feed myself." 
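To make the stack-trace reading concrete, here's a toy Python sketch (purely my own illustration, not something from the original comment). Each "in order to" becomes a function call, and printing the stack from the innermost frame shows the whole chain, with the terminal desire as the outermost frame it bottoms out in.

```python
import traceback

def get_job():
    # The most instrumental desire: wanted only for what it leads to.
    # Printing the stack here shows the chain of "in order to"s; the
    # outermost application frame (feed_myself) is the terminal desire.
    traceback.print_stack()

def get_money():
    get_job()  # I want a job in order to get money...

def feed_myself():
    get_money()  # ...in order to be able to feed myself.

feed_myself()  # the terminal desire, wanted for its own sake
```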

I'm not sure if that's what they meant. I'm often kind of skeptical of that... (read more)

4romeostevensit
Processes for doing this notably do not do it via explanation.

Thanks for writing this. As someone who went through something very similar, I largely agree with what you wrote here.

To make the "accept the panic" bit a more concrete: following someone's advice, when I'd start to panic, I'd sit down and imagine I was strapped to the chair. I'd imagine my feelings were a giant wave washing over me, but that I couldn't avoid them, because I was strapped to the chair. The wave wouldn't kill me though, just feel uncomfortable. I'd repeat that in my head "this is uncomfortable but not dangerous. this is uncomfortable but not... (read more)

2monkymind
I fully agree with your point that "Distract Yourself" seems like bad advice. I misremembered Claire's steps pretty significantly. Here they are: 1. Face the symptoms – do not run away. 2. Accept what is taking place – do not fight. 3. Float with your feelings – do not tense. 4. Let time pass – do not be impatient. I think I misremembered because of her exploration of step 3, "float with your feelings": she mentions engaging in an activity that isn't thinking about what you're experiencing. Not in hopes that you will distract yourself, but so that you can begin living a normal life, i.e. learn to do, while experiencing suffering. I'll edit the post. Thanks!

There are a few things that sound similar to what you're talking about. The first is the process of writing an RFC: https://github.com/inasafe/inasafe/wiki/How-to-write-an-RFC. Also, Wikipedia presumably needs to do many of the things you describe, so looking into how they reach consensus may be interesting for you. And there are attempts to have more of a direct democracy style of governance in the US, with certain procedures that you may want to look into: https://www.newyorker.com/news/the-future-of-democracy/politics-without-politicians

I do like ... (read more)

Answer by weathersystems30

I'm still not clear on what exactly you're wanting to do with Github. 

  • Can you give an example use case for your project?
  • What do you see the "templates" doing in this project?
1cod3d
Here's an article (https://arstechnica.com/tech-policy/2018/11/how-i-changed-the-law-with-a-github-pull-request/?comments=1) on how Washington DC is using GitHub to update and maintain its laws. The suggestion from the article is that citizens would be able to make changes and take a more active involvement in the creation of laws. I'm not necessarily suggesting the possibility, because there are a number of strong reasons why this might not be a good idea (if you read the comments). But could something similar be applied to collective reasoning?

The templates in this sense could be used as the format to reach consensus (like a law?). Let's say a group is discussing a political topic and all parties involved have mutually agreed to a number of objectives for the dialogue, including mutual respect for differing opinions and the need to uphold rigor and principles to maximize the chances of all agreeing and having the optimal outcome. In this sense, prior to the discussion, there would be formats to follow to reach an agreement. So, depending on the topic and which template is chosen, the chances of success are 'almost' guaranteed, because the underlying logic is agreed upon and already proven.

Therefore, the question could be: is there a format for taking differing opinions (inputs) at certain stages of an argument which, if the evidence and results (output) are agreed upon, can solve the initial topic's question and then be applied to any number of topics (if in a certain format)? In this sense, you would be 'coding' or adding to the original document your position and reasoning on certain subsets of the overarching logic of the argument. These would be agreed upon prior to when the template is chosen, meaning you could complete a number of reasoning practices before the different parties are actually engaged in the mental activities of evaluating judgment and critique etc. (arguing).
Answer by weathersystems10

What do you mean by "reach out to people"? Usually that just means contact them. But here you seem to mean something different.

1mukashi
You are right, I will clarify the question. Thank you!

Thanks. The "drawing what you see" vs "drawing what you think" distinction combined with the images helped me understand the idea better.

This seems somewhat related to what Scott Alexander called "concept-shaped holes." So you're saying that some people have a "concept of how to draw what you see"-shaped hole, and that Edwards has some techniques for helping you fill that gap.

Are you specifically looking for conceptual shifts that would allow you to do something better? Or is just being able to understand something you previously didn't understand enough? L... (read more)

2cousin_it
Specifically looking for conceptual shifts that allow you to do something better.

Thanks for writing up your thoughts here. I hope you won't mind a little push-back.

There's a premise underlying much of your thought that I don't think is true.

But as the world of Social Studies consists of the interactions of persons, places, and things, they are subject to the Laws of Physics, and so the tenets of Physics must apply.

I don't really see how the laws of physics apply to social interactions. To me it sounds like you're mixing up different levels of description without any reason.

Yes, at bottom we're all made up of physical stuff that physics... (read more)

I think some question in this area would work well for this collaboration I'm proposing: https://www.lesswrong.com/posts/oqSMn6WEXPdDEvyyt/what-question-would-you-like-to-collaborate-on

If you add a question there and it gets picked I'd be happy to work on this with you.

2DirectedEvolution
Hi weathersystems, I like this idea. I have a few reactions to it.

First, it sounds like to be a success, you just need to find one other person to collaborate with. If you can find that person, go for it!

Secondly, if your goal is to get more people interested and more questions submitted, I think it's worth taking more time to have individual conversations with specific people about topics you think they'd be interested in collaborating on based on their post history. Sussing out their level of interest, availability, and what sort of collaborative partner they'd like to find would be good.

When I think about who I'd like to collaborate with via LW, I think about other writers who have independently written insightful posts on topics very close to my own interests. To me, that signals potentially fruitful ground for collaboration.

I'm starting to lean away from the model that long comment chains are the best way to do LW discussion, and toward the model that the most meaningful conversation on LW happens with full, well-considered blog posts responding to other full blog posts.

Ya I thought it was worth a try. Looks like exactly one person is putting forward a question so far. Do you have any questions you'd be interested in working on?

Thanks for being the first person to submit a question! 

It turns people who have "no drawing talent" into people who can easily draw anything they see, not by strenuous exercise, but by a conceptual shift that can be achieved in a few hours.


Did that work for you, or do you know of any evidence that that's the case? I'm skeptical that a few hours can allow anyone to "draw anything they see" but would be happy to change my mind on that. I guess you didn't say how well they'd be able to draw after just a few hours of "conceptual shift." But I read you as... (read more)

3cousin_it
It worked on me. The change was surprisingly fast, in a couple days I went from "no drawing talent, stick figures only" to one-minute sketches similar to this or this (not mine, but should give the idea).

Getting to this level doesn't require any technique, it's purely a conceptual shift. You learn how to trick your mind into "drawing what you see" instead of "drawing what you think". Betty Edwards describes this shift very clearly and gives a couple counterintuitive exercises for achieving it. I wouldn't be surprised if some people got it in an hour.

The result isn't "drawing very well" (which takes more and different kinds of work), but I'm pretty confident that I can look at anything and make a pencil drawing that looks roughly similar. It doesn't even matter what! When you "draw what you see", you no longer care if it's a person or tree or car or whatever, it's all just a bunch of shapes in your visual field that you copy to paper.

In singing, there's a similar concept of "singing with breath support", which is also a kind of primitive indivisible feeling that good singers have. But as far as I know, nobody has found a description of it that would reliably work on beginners.
Answer by weathersystems10
  • going for a walk
  • taking a long bath or shower
  • going to the gym
  • taking a nap if I'm tired

I'm a bit worried that my question will be picked and then I'll be the only one working on it. So to give this thing a better chance of at least two people collaborating, I'm not submitting a question.

Thanks. I'd heard of wikispore, but not wikifunctions. That looks cool.

"I wrote first wrote"

Thanks for the post!

3Ruby
Thanks for mentioning the typo!

A really easy way to set up your own wiki is to use a github repo. You can make it private if you don't want people to see it. If you write your notes in markdown with the .md file extension, github will show the pages nicely and will even make links to other pages work.
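As an optional extra on top of that setup, here's a small Python sketch (my own illustration, not part of the original suggestion) that generates an index.md linking to every note in the repo, so GitHub renders a clickable table of contents. It assumes your notes are .md files somewhere under the repo root and that the script is run from there.

```python
from pathlib import Path

# Minimal sketch: build an index.md linking to every markdown note in the repo.
# GitHub renders relative .md links as clickable pages, so this gives the wiki
# a simple table of contents. Skips the index itself and anything under .git.
repo_root = Path(".")
links = [
    f"- [{p.stem}]({p.as_posix()})"
    for p in sorted(repo_root.rglob("*.md"))
    if p.name != "index.md" and ".git" not in p.parts
]
Path("index.md").write_text("# Notes\n\n" + "\n".join(links) + "\n", encoding="utf-8")
```

Commit and push, and index.md shows up as a browsable front page for the notes.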

do you ever go back to old free form notes and find yourself unable to reconstruct what you originally meant?

I don't think I've ever had that problem.

Or find the task of wading through your old free form notes unpleasant, since they're not polished?

I think it's fun. I've never found it unpleasant. And i... (read more)

Also make sure to check out the other posts with the note taking tag if you haven't seen them already: https://www.lesswrong.com/tag/note-taking

Answer by weathersystems30

I like using a wiki for notes. Something like this: http://evergreennotes.com/. There are a lot of ways to set up a wiki.
 

1) How consistently do you take notes when you're reading up on a new skill or subject?

I take notes for things that I want to eventually write something about, so for most things I don't end up taking notes.
 

2) Do you regularly refer back to old notes?

Sure. Especially keeping track of relevant sources is super useful for future me.
 

3) Do you approach note-taking differently for different subjects or purposes?

For notes tha... (read more)

2DirectedEvolution
I like the idea of setting up a wiki or using a wiki-like note taking app. I use Evernote a bit like that, crosslinking pages, but it's not really optimized for that. My main concern with using an app like Evergreen Notes is that a hobby project built by one person seems like a fragile place to leave a part of my brain. With your method, do you ever go back to old free form notes and find yourself unable to reconstruct what you originally meant? Or find the task of wading through your old free form notes unpleasant, since they're not polished?
4weathersystems
Also make sure to check out the other posts with the note taking tag if you haven't seen them already: https://www.lesswrong.com/tag/note-taking
Answer by weathersystems10

If you're just looking for the arguments, this is what you're looking for:
https://plato.stanford.edu/entries/moral-anti-realism

How do you know that disinterested (not game-theoretic or instrumental) altruism is irrational / doesn't make any sense?

What is "disinterested altruism"? And why do you think it's connected to moral anti-realism?

1Michele Campolo
I can't say I am an expert on realism and antirealism, but I have already spent time on metaethics textbooks and learning about metaethics in general. With this question I wanted to get an idea of what are the main arguments on LW, and maybe find new ideas I hadn't considered. I see a relation with realism. If certain pieces of knowledge about the physical world (how human and animal cognition works) can motivate a class of agents that we would also recognise as unbiased and rational, that would be a form of altruism that is not instrumental and not related to game theory.

Thx. I'll check it out.

I agree. My two questions with regards to that are:
 

  1. Would they accept this as a sister project? The last time they took on a sister project was something like 10 years ago (iirc)
  2. Would it be better placed as its own Wikimedia project, or could it be merged with Wikiversity?
4ChristianKl
Last year a new movement strategy for Wikimedia was decided, and part of it is the desire to add new forms of content. One new project that's at the moment in the process of formation is Wikifunctions. I don't know Wikiversity well. Given that WikiJournals decided to work within that umbrella, it might make sense to do this also under its umbrella. There's Wikispore that was created as a testbed for new Wikis.

StackExchange only flags duplicates, that's true, but the reason is so that search is more efficient, not less. The duplicate serves as a signpost pointing to the canonical question.


Ya I get that. But why keep all the answers and stuff from the duplicates? My idea with the question wiki was to keep the duplicate question page (because maybe it's worded a bit differently and would show up differently in searches), have a pointer to the canonical question, and remove the rest of the content on that page, combining it with the canonical question page.

Also, St

... (read more)
1Nathan Arthur
You may also be interested in the StackOverflow Documentation project (now defunct). I think it attempted to do something closer to what you're suggesting.

Ya I think you're basically right here. Which is why I'm not really hoping to "grow large enough to be comparable to Stack Exchange and still remain good." In fact even growing large enough and being sucky seems very hard.

My goal is just to make something that's useful to individuals. I figure if I get use out of the thing when working alone, maybe other people would too.

I'm not sure I'm getting your question.

I think mediawiki (the software that runs both wikipedia and this question wiki) only allows text by default. But there's no reason why the pages can't just link to relevant sources. And in fact probably some questions should be answered with just one link to the relevant wikipedia page. 

Ideally pages should synthesize relevant sources but I think just listing sources is better than nothing.

4Viliam
MediaWiki supports plugins, so in theory you could write your own plugin with any functionality you need... in practice, this could turn out to be a lot of work.

Sure. But the question is can you know everything it knows and not be as good as it? That is, does understanding the go bot in your sense imply that you could play an even game against it?

2DanielFilan
I imagine so. One complication is that it can do more computation than you.

Ah ya I see what you're saying. Ya that's definitely right. Certainly the most common kind of question-asker online just wants to put their question in front of as many of the most qualified people as possible, and that's it. Unless/until the site has a large user base, that won't really be possible on the wiki.

Still, I think as long as the thing is useful to some people it may be able to grow. But it may be useful to organize my thoughts better on exactly what the value is for single users.

One example that comes to mind is the polymath project. They found it useful to start a wiki to organize their projects. If anyone else wants to come along and do a similar thing, they can just use this wiki instead of making their own.

By "network effect" do you mean this? I take the network effect to be a problem here only if the wiki requires a large amount of people to be useful. 

My hope is that the wiki should be useful even for a very small number of people. For example, I get use out of it myself just as a place to put some notes that I want to show to people and as a way of organizing my own questions.

2benjaminikuta
It could very well be useful for that, but most people want to reach a larger audience when asking questions. 

I'm a bit confused. What's the difference between "knowing everything that the best go bot knows" and "being able to play an even game against a go bot"? I think they're basically the same. It seems to me that you can't know everything the go bot knows without being able to beat any professional go player.

Or am I missing something?

2DanielFilan
You could plausibly play an even game against a go bot without knowing everything it knows.

Hi y'all.

Recently I've become very interested in open research. A friend of mine gave me the tip to check out lesswrong. 

I found that lesswrong has been interested in trying to support collaborative open research (one, two, three) for a few years at least. That was the original idea behind lesswrong.com/questions. Recently Ruby explained some of their problems getting this sort of thing going with the previous approach and sketched a feature he's calling "Research Agendas." I think something like his Research Agendas seems quite useful. 

So that's... (read more)

I added in a few more of the questions from the template that seem relevant, including the one about possible difficulties. I think what's there covers your trade-off.

I was thinking that the template would be something where you could just keep the sections that seem relevant and delete the rest. 

But I guess even that would start to get annoying if the thing was super long. That's a good consideration to keep in mind.

Answer by weathersystems10

What factors do you expect have the highest likelihood of severely compromising your own quality and/or duration of life, within the next 1, 5, or 10 years?

A family member dying.

Contracting a serious disease, or becoming severely injured from an accident. 

Some incident (medical or otherwise) using up the rest of my savings and leaving me financially unstable.

How do these risks change your behavior compared to how you expect you'd act if they were less relevant to you?

I basically never think about these risks. I guess the money one I do a bit. I use fa... (read more)

I added "Given these problems, why are people still tolerating the status quo (if they are)?" to the template. Does that capture your idea well enough?

2nim
I think that's a good snapshot of the concept I'm trying to get at. It asks what benefits the status quo may silently be providing, which a competitor would have to match or exceed to gain acceptance.

You have spelled "stakeholders" as "steak-holders", which is charming but may reduce credibility in some circumstances.

Heh. Funny mistake. Thanks.

A suggested improvement to the template: When examining the status quo, also ask "for what related problems does the status quo have a built-in solution?".

I want to make sure I understand your point here. Is the idea that sometimes we see that a system isn't solving some problem well enough, and so try to fix it. But we don't take into account the fact that the system isn't just trying to solve that problem, but ... (read more)

1nim
Yikes, I see why -- I worded the concept quite poorly. The example I was trying to describe is in software engineering, where you have an ancient crufty mess that you're trying to rewrite in some snazzy new language. You think you can rewrite it and make it super simple, and so you write the new thing the simple way that "should work", but when you run the old code's tests against it (or when you put it to use in the real world...) you discover that the reason the old code was such a mess was partly that it had a bunch of logic to handle various edge cases that the application had hit in the past. An alternative phrasing might be: "Where are the gaps between how I think the status quo 'should' work, and how it actually does?". Often, established systems are silently compensating for all kinds of problems that happen infrequently enough for any one person to forget that the problem exists when trying to replace the system.
2karlkeefer
Not OP, but I read their comment about related problems as something more like this: The system in question likely already has feedback or correction mechanisms that respond to other potential problems - asking about those mechanisms might reveal strengths of the system that can be easily adapted for your purposes. I'm not sure how easy it will be to find these, though, as the best-functioning ones might be invisible if they actually eliminate the other problems completely. That might not be their intent, but I think it's also a useful consideration so even if my interpretation isn't matched I hope this comment is still useful :)
Answer by weathersystems10

Maybe it would help if you shared what you've been able to find out so far?
