Comments

Inyuki · 6mo · -10

Such subliminal sum-manipulation is quite natural for a probabilist. Thinking about the cumulative effect of all the inputs that an individual we care about is exposed to comes naturally to lovers, parents, and big brothers; however, only the better-resourced can reliably afford to produce those inputs and carry out such attacks, rather than merely observe and care about these cumulative effects.

Inyuki · 5y · 10

Well, my main point was that an error can be of arbitrary type: one may be in modeling what is ("Map"), another in modeling what we want to be ("Territory"), and one can think of an infinite number of types of "error" (logical, ethical, pragmatic, moral, ecological, cultural, situational; the list goes on). And if we treat each type of error as a "suboptimality", then to "err less", or be "less wrong", is etymologically equivalent to "optimize". So we are a community for optimization. And that is, in effect, equivalent to intelligence.

No matter whether we are seeking truth or pragmatics, the methods of rationality remain largely the same: the general mathematical methods of optimization.

Inyuki · 5y · 20

New terminology only makes sense when the phenomena it describes have new qualities on top of the basic phenomena. Every process is an optimizer, because anything that changes state optimizes towards something (say, a new state). Thus "maximizer," "intelligent agent," etc. may be said to be redundant.

Inyuki · 5y · 10

> I feel a bit confused reading this. The notion of an expected utility maximiser is standard in game theory and economics. Or maybe you find the concept unsatisfactory in some other way?

The latter. Optimization is more general than expected utility maximization. In applying expected utility theory, one is trying to minimize the expected distance to one particular set of conditions (a goal), rather than the distance to a set of conditions (a state) in the abstract, general sense.
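A minimal sketch of that distinction in Python (the names and the toy setup are my own illustration, not from the thread): an expected-utility maximizer is one special case of the generic pattern "pick whatever brings you closest to some set of conditions".

```python
import random

def expected_utility_choice(actions, sample_outcome, utility, n_samples=1000):
    """Choose the action maximizing a Monte Carlo estimate of expected utility."""
    def estimated_eu(action):
        return sum(utility(sample_outcome(action)) for _ in range(n_samples)) / n_samples
    return max(actions, key=estimated_eu)

def generic_optimizer_choice(actions, next_state, distance, target):
    """Choose the action minimizing the distance from the resulting state to a target."""
    return min(actions, key=lambda action: distance(next_state(action), target))

# Toy usage: noisy outcomes, target condition "be near 10".
actions = [0, 5, 10]
noisy = lambda a: a + random.gauss(0, 1)
print(expected_utility_choice(actions, noisy, lambda s: -abs(s - 10)))              # -> 10
print(generic_optimizer_choice(actions, lambda a: a, lambda s, t: abs(s - t), 10))  # -> 10
```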

The original post (OP) is about refactoring the knowledge tree in order to make discussions less biased and more accessible across disciplines. For example, the use of abbreviations like "OP" may make a discussion less accessible to some audiences. Similarly, using well-defined concepts like "agent" may make discussions less accessible to those who know only the informal definitions (similar to how the mathematical abstractions of point and interval may confuse the uninitiated).

The concepts of "states" and "processes" may be less confusing because they are generic and don't seem to have competing interpretations in similar everyday domains, unlike "environments", "agents", "intervals", "points", and "goals".

Inyuki · 8y · 00

Yes, I do understand the phrase 'defining a process' broadly enough not to suggest temporality, just as defining an order on a set in mathematics doesn't require the concept of time.
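For instance, here is a tiny sketch (my own illustration): an order on a set is just a static relation, a set of pairs, with no notion of time involved.

```python
# A static order: defined all at once as a set of pairs, never "running".
S = {1, 2, 3, 6}
divides = {(a, b) for a in S for b in S if b % a == 0}  # partial order by divisibility
assert (2, 6) in divides      # 2 divides 6
assert (2, 3) not in divides  # 2 does not divide 3
```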

Indeed, just because we can show an example of how an illusion of time could be constructed in a system without time, it would not seem to follow that our world is also such a system.

So, yes, it doesn't make sense as long as you don't show that our perceived world is derived from a system with the same properties. (I'm referring to something like this: https://groups.google.com/d/msg/everything-list/3ZdcQpJCPpE/Kwfh69V4Y24J.)

You can view everything as one thing.

Inyuki · 8y · -20

I made a thought experiment with a system that has no time, yet which appears to have time. Take the sequence of natural numbers. It doesn't change, but it implies the existence of all positive rationals. The implication is instantaneous, but actually generating them requires defining a process. There is an eternity in an instant.
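As a minimal sketch of such a "defined process" that produces the rationals from the unchanging naturals, one can use the Calkin-Wilf recurrence (my choice of enumeration; the comment doesn't specify one), which visits every positive rational exactly once:

```python
from fractions import Fraction
from math import floor

def positive_rationals():
    """Yield every positive rational exactly once (Calkin-Wilf order)."""
    q = Fraction(1)
    while True:
        yield q
        q = 1 / (2 * floor(q) - q + 1)

gen = positive_rationals()
print([str(next(gen)) for _ in range(8)])
# ['1', '1/2', '2', '1/3', '3/2', '2/3', '3', '1/4']
```

The enumeration itself is fixed timelessly by the recurrence; only producing its terms one by one looks like a process unfolding.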

Inyuki · 8y · -20

We know that time is an illusion. Is "illusion" not the same as "simulation"?

Inyuki · 8y · -10

Is the inability to travel back in time evidence that we're a simulation? By the way, the wording "simulation in which we live" would imply that we're somehow separate from the simulation. It could well be that we ourselves do not exist without the simulation and are merely properties of the simulation: simulated beings.

Inyuki · 9y · -20

Great. I haven't read the book yet, but where I think we fail the most is in underestimating investment in new technologies. It is often through new technologies that we can solve a problem at large, and developing these new technologies may often cost much less than buying existing technology solutions in bulk, if only we could be a little more creative in our altruism. So I would like to propose another term: Effectively Creative Altruism (ECA).

ECA would rely on thinking about how to solve a problem once and for all, not just in some isolated case. For example, an effectively creative thinker who is strongly upset about the harm done by malaria-transmitting mosquitoes would tend to come up with more general solutions, like genetically modified mosquitoes that pass on lethal genes and destroy the whole population.

Instead of looking only at the simple numbers of how much investment saves how many lives according to the current best statistics, an ECA thinker would consider which technology under development would save many more lives if it received the little money it needs to be developed and scaled.

For example, how much funding do we need before we can mass-produce and introduce paper microscopes?

While a simple Effective Altruist relies on well-known statistics, an Effectively Creative Altruist would rely on as-yet-unrejected hypotheses that follow from well-founded creative reasoning and that need only a little financial support and effort to verify, and would donate toward such innovation.

My point is: we should not reject great ideas just because they have no statistical evidence yet.
