Thanks for the advice, but as I mentioned in the comments, I'm stuck precisely at the "build the actual thing" stage. I know that bodging things together would be productive for most tasks, but I feel I won't gain real personal growth by doing only that.

"Personal" in the sense of "describing myself without fitting into an existing category." I don't mean that I'm atypical; in fact, I wish my problems were typical to some extent, so that I would have credible guides for resolving them.

Here is an example I personally experienced recently, with some personally identifying details omitted.

Coding: I'm relatively familiar with data processing in Python: reading from text/JSON/spreadsheet files, sanitizing, running queries and analyses (using libraries, and sometimes even interacting with scaffolded NN models), and formatting the results. These scripts are normally under 300 lines, mostly pieced together from sample code found in documentation, other projects, and IntelliSense. But when a temporary analysis pipeline becomes a production practice, I'm required to upgrade the whole thing to production level: procedural code must be wrapped in OOP abstractions so other components can call it; sanitization designed for static data must become generic over varied inputs; procedures must be made thread-safe and concurrency-safe; and so on. That requires a vast rewrite, and even when I manage to achieve some of these goals, one change often renders another unusable.
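As a concrete sketch of the gap I mean (all names here are hypothetical, assuming a simple record-cleaning step), the throwaway procedural version versus a more production-shaped wrapper might look like this:

```python
import json
import threading

# Throwaway version: one function with hard-coded assumptions about the input.
def clean_records(path):
    with open(path) as f:
        rows = json.load(f)
    return [r for r in rows if r.get("value") is not None]

# Production-shaped version: a class other components can call,
# with pluggable sanitization rules and a lock for thread safety.
class Sanitizer:
    def __init__(self, rules):
        # rules: callables that take a record dict and return a cleaned
        # record, or None to drop it.
        self._rules = rules
        self._lock = threading.Lock()

    def clean(self, rows):
        with self._lock:  # one instance can be shared across threads
            out = []
            for row in rows:
                for rule in self._rules:
                    row = rule(row)
                    if row is None:
                        break  # a rule rejected this record
                else:
                    out.append(row)
            return out

def drop_missing(record):
    return record if record.get("value") is not None else None

sanitizer = Sanitizer([drop_missing])
print(sanitizer.clean([{"value": 1}, {"value": None}]))  # [{'value': 1}]
```

The point is not that the second version is hard to write in isolation; it's that retrofitting every ad-hoc step of an existing pipeline into this shape at once is where the changes start conflicting with each other.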

I would like to write library-level projects in a modern, well-designed fashion, and my daily tasks often include work that would make this kind of perfection worthwhile, even critical. But as I stated in the question body, I'm unable to orchestrate it all together.

I'm relatively accustomed to this style of learning when building skills, but not mindsets. For example, I can't see how to get myself to practice design patterns on a fictional task rather than just doing it in the dirtiest way possible in plain procedural code. Yes, most learning materials include an example, but beyond following the author's thinking while already knowing "I'm learning this pattern, so the answer must use it," I can't apply the learned pattern to real tasks.
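For what it's worth, the contrast I have in mind looks something like this (a toy report-formatting task; all names are made up): the "dirtiest possible" procedural version, versus the same task pushed through a Strategy pattern.

```python
import json
from abc import ABC, abstractmethod

# Procedural, "dirtiest way possible": branch on a string flag.
def render(data, fmt):
    if fmt == "csv":
        return ",".join(str(v) for v in data)
    elif fmt == "json":
        return json.dumps(data)

# Strategy pattern: each format is an interchangeable object,
# so new formats can be added without touching existing code.
class Renderer(ABC):
    @abstractmethod
    def render(self, data): ...

class CsvRenderer(Renderer):
    def render(self, data):
        return ",".join(str(v) for v in data)

class JsonRenderer(Renderer):
    def render(self, data):
        return json.dumps(data)

def report(data, renderer: Renderer):
    return renderer.render(data)

print(report([1, 2], CsvRenderer()))   # 1,2
print(report([1, 2], JsonRenderer()))  # [1, 2]
```

On a fictional exercise, I already know the second version is "the answer," so writing it teaches me little; on a real task, the first version is what my hands produce.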

More often than not, I use a checklist to measure whether my creation meets its practical goals for usability, safety, and performance, but this doesn't produce a clean, concise coding style: I'm just patching from the very beginning.

Laws mandating "you must do this" will never be more effective than laws forbidding "you must not do this." There will always be cases where the purchaser refuses to provide the required service, even if law enforcement takes compulsory measures. If liquidating every asset the purchaser owns still cannot fulfill what the law demands, there will be problems. Not to mention the limited coverage law-enforcement departments may have, and the bureaucratic difficulties of handling such cases.

"Exploitation" is defined for those decisions because life rights (including one's actual life, as in suicide; significant and irreversible physical damage, as in selling a kidney; and special bodily rights, as in sex work), being the most special among granted rights, are the most prone to coercion. Exploitation includes being driven by circumstances such as bad financial conditions. Under such conditions, from the lawmaker's perspective, governments or other equitable organizations should be ready to help rather than let a person suffer.

Yes, we live in an imperfect world, so people will fall into desperate situations, but laws exist to prevent the world from degrading further.

From another perspective, if a life can legally be turned into profit, there will be incentives for others to coerce those who can't resist into doing so, essentially turning people into goods. If that is still not bad enough, then I assume massacres are also bearable?

After some reading, I found a post Scott wrote. I think it's the answer I needed, and it's quite similar to yours. Thanks!

Yes, after some reflection I recognized that my question is really "how to expand the rationalist thinking paradigm," a long-standing problem the rationalist community hasn't solved. I was just asking it incorrectly in another form.

"Educate" is the word I used because I found these kinds of discussions hard to conduct if the AI part is introduced before any actual progress has been made. Or, to frame it another way: my friends tend to panic when introduced to generative ML models and the related advances in artwork, and refuse to listen to any technical explanation that might improve their understanding. There was no manipulative intention, and I'm willing to change my approach if the current one seems manipulative.

And that raises a question: if there is no "absolute truth," then how "relative" is the truth that most people agree on (such as 1+1=2 in mathematics)?

Sorry if this question seems too naive; I'm at an early stage of exploring philosophy, and any view other than objectivity under positivism doesn't yet seem convincing to me.
