All of Bruno Mailly's Comments + Replies

For such (most) people, reality is social, rather than something you understand/control.

Reminiscent of [CODING HORROR] Separating Programming Sheep from Non-Programming Goats

Ask programming students what a trivial code snippet of an unknown language does.

  • Some form a consistent model.
    Right or wrong, these can learn programming.
  • Others imagine a different behavior every time they encounter the same instruction.
    These will fail no matter what.
    I suspect they treat it as a conversation, where repeating a question means a new answer is wanted.
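The kind of snippet that test uses can be illustrated with a hypothetical example (the original study used Java-style assignments; this Python version is just an illustration):

```python
# What are the values of a and b after these three assignments?
a = 10
b = 20
a = b

print(a, b)  # a consistent model of assignment gives: 20 20
```

Students who apply a consistent model, even a wrong one (e.g. "the values swap", or "b is copied into a"), give the same answer every time they see this construct; those who don't give a different answer each time.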

More generally, it seems we should avoid doing anything while distracted.

It makes sense that distraction would mess up our learning, as it makes attributing cause and consequence confusing.

But it may also mess up replaying our learned skills, as distraction is a big cause of accidents.

Advertisement.

AKA parasitic manipulation so normalized that it invades every medium and pollutes our minds: hogging our attention, numbing our moral sense of honesty, and preventing a factual information system from forming.

Trivial inconveniences are alive and kicking in digital piracy, where one always has to jump through hoops such as using obscure services, software, settings, or procedures.

I suspect it is to fend off the least motivated users: numerous enough to draw attention, and the most likely to expose the den in the wrong place.

I suspect it is a form of subtle "ancestral tribe police".

Throwing trivial inconveniences at offenders is a good way to hint they are out of line, avoiding:

  • Direct confrontation, with the risk of fuss and escalation.
  • Posing as an authority, with the risk of commitment or consequences.
  • Getting the tribe's policy wrong, as such enforcement requires repetition and consensus.
  • Misunderstandings, as a dim offender will eventually just give up, with no need to understand.

Anyways, if the first goal of an AI is to improve, why would it not happily give away its hardware to implement a new, better AI?

Even if there are competing AIs, if they are good enough they probably would agree on what is worth trying next, so there would be no or minimal conflict.

They would focus on transmitting what they want to be, not what they currently are.

...come to think of it, once genetic engineering has advanced enough, why would humans not do the same?

1TheWakalix
That's what self-improvement is, in a sense. See Tiling. (Also consider that improvement is an instrumental goal for a well-designed and friendly seed AI.) Except that whoever decides the next AI's goals Wins, and the others Lose - the winner has their goals instantiated, and the losers don't. Perhaps they'd find some way to cooperate (such as a values handshake - the average of the values of all contributing AIs, perhaps weighted by the probability that each one would be the first to make the next AI on their own), but that would be overcoming conflict which exists in the first place. Essentially, they might agree on the optimal design of the next AI, but probably not on the optimal goals of the next AI, and so each one has an incentive to not reveal their discoveries. (This assumes that goals and designs are orthogonal, which may not be entirely true - certain designs may be Safer for some goals than for others. This would only serve to increase conflict in the design process.) Yes, that is the point of self-improvement for seed AIs - to create something more capable but with the same (long-term) goals. They probably wouldn't have a sense of individual identity which would be destroyed with each significant change.
3habryka
This is correct, but only in so far as the better AI has the same goals as the current AI. If the first AI cares about maximizing Google's stock value, and the second better AI cares about maximizing Microsoft's stock value, then the first AI will definitely not want to stop existing and hand over all resources to the second one.
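The "values handshake" TheWakalix describes could be sketched as a probability-weighted average of the contributing AIs' value vectors. This is a toy illustration with made-up numbers, not anything from the thread:

```python
# Toy "values handshake": merge each AI's goals (a vector of value
# weights) into one vector, weighted by the probability that each AI
# would be the first to build the next AI on its own.
def values_handshake(goal_vectors, win_probs):
    total = sum(win_probs)
    merged = [0.0] * len(goal_vectors[0])
    for goals, p in zip(goal_vectors, win_probs):
        for i, g in enumerate(goals):
            merged[i] += g * p / total
    return merged

# Two AIs with orthogonal goals and a 75%/25% chance of winning the race:
merged = values_handshake([[1.0, 0.0], [0.0, 1.0]], [0.75, 0.25])
# merged == [0.75, 0.25]
```

Each AI still prefers its own vector to the merged one, which is habryka's point: the handshake is a way of managing conflict, not evidence that none exists.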
In fact the winged males look far more like females than they look like wingless males.

All the "3rd sexes" I can think of are like this: males in female form, for a direct reproductive advantage.

Not a big departure from 2 sexes.

Eusocial insects might be more interesting.

N Rays deserve an honorable mention.

Blondlot was very scientific (in appearance), and followed by some scientists (of the same nationality).

Other good candidates today would be: nanotech, the space elevator, anything that sounds too futuristic.

Yes it's going to happen some day, no it won't be like we imagine.

EY uses Bayes to frame reality ever closer, not just to answer abstract homework on paper and call it a day.

If you solve a given problem without spotting it is ill-formed, your answer is correct but not practical.

I would guess thinking "frequency" implies it happens, while "probability" might trigger the "But there is still a chance, right?" rationalization.

Others:

  • Almost no tracking of mistakes, failures, or even negative results for that matter.

We know it's bad, yet we keep sweeping valuable knowledge under the rug just because it's embarrassing. Confirmation bias, anyone?

  • No clear valuation of the work's utility.

One consequence is that researchers are somewhat expected to know what they will find before they even begin, a form of weak insurance on productivity. This discourages venturing into the unknown.

This is a design-stance explanation...

I worded poorly, but evolution does produce such apparent result.

The Hard Problem of Consciousness

Is way out of my league; I did not pretend to solve it: "It's a far cry from a proper explanation".

But pondering it led to another find: "Feeling conscious" looks like an incentive to better model oneself, by thinking oneself special, as having something to preserve... which looks a lot like the soul.

A simple, plausible explanation that dissolves a mystery works for me! (until better is offered)... (read more)

It would be stupid and dangerous to deliberately build a "naughty AI" that tests, by actions, its social boundaries, and has to be spanked. Just have the AI ask!

Pitfall: We tend to tell embellished, disguised, misguided, or sometimes plain wrong versions of reality.

An AI would have to see through that to make sense of it.

From the inside we can't judge the relative speed or power, but we can judge the efficiency.

And it's abysmal: the jumps from quarks to particles to atoms to molecules to cells to animals to stars to galaxies each throw orders of magnitude around like it's nothing.

What could this possibly tell us?

  • Reality just has that much resource.
  • The result of our reality was not designed.
  • The lords of the matrix are not very bright.
Otherwise there could be an abstract mathematical object structurally identical to this world, but with no experiences in it, because it doesn't exist. And papers that philosophers wrote about subjectivity wouldn't prove they were conscious, because the papers would also 'not exist'.

didn't you just solve the mystery of the First Cause?

My take:

A universe is not just math; it also needs processing to run.

Existence is not in the software or the processor, but in the processing.

So long as that universe is not run/simulated, its philosophers do not exist, and what they would write is unknown.

2TheWakalix
Processing is what you need to embed a mathematical process into your universe, I agree, but that doesn't necessarily imply that there is a Universal Processor in which our universe is embedded, or even that this hypothesis is meaningful. (For one, what universe does this processor live in? Processors bridge universes, in a sense - they don't explain existence, but pass it off to the "larger" world.)

Okay. Q: Why do I think I am conscious?

A: Because I feel conscious.

Q: Why?

A: Like all feelings, it was selected by evolution to signal an important situation and trigger appropriate behavior.

Q: What situation? What behavior?

A: Modeling oneself. Paying extra attention.

Q: And how?

A: I expect a kludge befitting the blind idiot god, like detecting when proprioception matches and/or drives agent modeling, probably with feedback loops. This would lower environment perception, inhibit attention-zapping, etc., leading to how consciousness feels.

It's a far... (read more)

2TAG
Again, you are assuming there is no big deal about "why do I feel (anything at all)", and therefore that the only issue is "why do I feel conscious".
5Said Achmiz
This is a design-stance explanation, which, firstly, is inherently problematic when applied to evolution (as opposed to a human designer), and, more importantly, doesn’t actually explain anything. The Hard Problem of Consciousness is the problem of giving a functional (physical-stance, more or less—modulo the possibility of lossless abstraction away from “implementation details” of functional units) explanation of why we “feel conscious” (and just what exactly that alleged “feeling” consists of). What’s more, even if we accept the rest of your (evolutionary) explanation, notice that it doesn’t actually answer the question, since everything you said about selection for certain functional properties, etc., would remain true even in the absence of phenomenal, a.k.a. subjective, consciousness (i.e., “what it is like to be” you). You have, in short, managed to solve everything but the Hard Problem!

It can do what the mind it is made from can. No more, no less.

How about: the logic of a system applies only within that system?

Variants of this are common in all sorts of logical proofs, and it stands to reason that elements outside a system do not follow the rules of that system.

A construct assuming something out-of-universe acting in-universe just can't be consistent.

I assume that I have an error per each inference step

This.

The further a chain of reasoning reaches, the more likely it is to be wrong.

Any step could be not accurate enough, fail to account for unknown effects in unusual situations, or rely on things we have no means of knowing.

Typical signs that it is drifting too far from reality:

  • Numbers way outside usual ranges.

Errors or imagination produce these easily; reality does not.

  • Making oneself pivotal to the known world.

One is central to one's map, not to reality.

  • An extremely small cause having a catastrophic effect.

If so, then why ... (read more)

How on earth can humans overcome this problem?

Why, eugenics, of course! The only way to change our nature.

First, selective breeding. Then genetic engineering.

Yes, there is a risk of botching it. No, we don't have a better solution.

3wizzwizz4
The ends don't justify the means.
why is there something instead of nothing

Don't forget the third alternative: why is there something instead of something else?

One idea is that there are unlimited potential universes, each running on different fundamental laws, most of them poor and sterile. But because of survivorship (existence?) bias, intelligent forms can only observe a universe rich enough to hold them.

Scientists went this way and imagined other laws in order to prove that ours are the only ones possible. Instead, they found that some alternative algebras, geometries, etc. do make sense.

Thi... (read more)

9TAG
That's starting at the finishing line. The hard problem of consciousness is about why there should be feelings at all, not about why we feel particular things.
abortion should be mandatory if the baby is the product of rape.

The more humane version: the rapist should be forced to pay for the child's upbringing, while being deprived of the usual paternal rights.

Extremely hard to argue against, and puts a limit on the bad action.

Still, it might not be enough...

Basically: in the ancestral environment, future gains were THAT unsure.

BTW, I would not be surprised if evolution led populations that endure bad seasons to become better at planning, especially long-term, and if this played a role in the Enlightenment and the Industrial Revolution.

Edit: Cold climates demand more intertemporal self-control than warm climates

  • Slavery has not been abolished, just subcontracted to cheap-labor countries.

  • Many people are unfit to handle freedom, and would be better off constrained.

  • Technological progress calls for authoritarianism.

    As it gives ever-easier access to more power, it raises the risk of misuse and worsens the consequences, so government has to step in and regulate, no matter how strongly, or else disaster happens.
    (already mentioned in this comment)

    It is already done for nuclear, explosives, "big" weapons and some chemicals.
    Next in line is driving.
    Then it

... (read more)
When I wrote "language", I meant

When I use a word… it means just what I choose it to mean

(Humpty Dumpty, in Lewis Carroll's Through the Looking-Glass)

We are not fooled: Moon landing hoaxers do not "present arguments"; they hammer and they monologue.

Nothing, absolutely NOTHING Buzz could have answered or done would have worked, because they are so deep down the spiral that absolutely EVERYTHING is taken as confirmation.

Yup, every culture has its own education numbering system (that makes no sense) and seems blind to the fact that it is not universal. Just like language, numbering, date/time formats, etc., except it seems particularly worse here, for reasons unclear.

I expected better from this author...

Being in water can get one dead really fast. Especially cold water, especially if immersed up to the head. So it makes sense that in that case evolution would select for turning off optimism, turning on realism, and adding a jolt on top.

The question is more "why do we have excessive optimism?" I think it paid off to make one grab opportunities before dying anyway of bad luck, in a world where so many things can kill.

Anyways, all mammals have the diving reflex, which alters respiration (as a whole). Evidence that evolution can, and did, lead to detecting immersion and to strong responses to it.

Carefully examining the justifications for actions is also important. If there are compelling reasons to do X, the fact that we've been "ordered" to do X is irrelevant, just as being ordered NOT to do X is.

Unfortunately, "doing what they say" tends to make people believe they are the top dog.

And a bit too many people are quick to get this idea, reluctant to abandon it, and abuse it to no end.

So, pragmatically, sometimes it's better to find another way to get the desired result, or at least delay action to diminish that bad association.

To me the logical answer is that it depends on how much value is attributed to "a" life versus respect for individual freedom/integrity.

It is fairly reasonable: do no evil, do not instrumentalize people, especially uninvolved ones; because this is a very slippery slope.

But it is unworkable to enter such a game of value accounting: Whose value system should be used? Apples-and-oranges values?

My practical answer meets yours: if one is ready to kill the stranger, one should have anticipated this and done something along those lines long ago, like killing a criminal or a comatose patient.

Wait... indoctrination/fanaticization techniques rely on making the person miserable, right?

...this is getting really uncomfortable.

Realistically, we often don't have the means to check the theory ourselves.

And in a modern world where any and everything is marketed to death, we distrust the pro-speech.

But pragmatically, I find that quickly checking the con-speech is very effective.

If it has a point, it will make it clear.

If it is flaky, that was probably the best it could do.

(this does require resistance to fallacies and bullshit)

If people updated their beliefs towards those around them, then people with agendas would loudly hammer their forged beliefs at every opportunit... wait, isn't this EXACTLY what they are doing?