Adrien Sicart

Mathematician


A Black Swan is better formulated as:
- Extreme tail event: its probability cannot be computed within the current paradigm; its weight is p < ε.
- Extreme impact if it happens: a paradigm revolution.
- Can be rationalised in hindsight, because there were hints. "Most" did not spot the pattern; some may have.
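
A rough formalization of these three criteria (my notation, not Taleb's):

```latex
% Rough formalization (my notation, not Taleb's); requires amsmath.
% M = current paradigm/model, E = event, \varepsilon = the model's resolution.
\[
\mathrm{BlackSwan}(E) \iff
\begin{cases}
P_M(E) < \varepsilon \ \text{or $P_M(E)$ undefined} & \text{(extreme tail event)}\\
\mathrm{Impact}(E) \gg \text{typical impacts under } M & \text{(paradigm revolution)}\\
\exists\, H \subseteq \text{past hints}:\ P(E \mid H) \gg \varepsilon & \text{(hindsight rationalisation)}
\end{cases}
\]
```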

If spotted a priori, one could call it a Dragon King: https://en.wikipedia.org/wiki/Dragon_king_theory

The argument:
"Math + Evidence + Rationality + Limits make it rational to drop the long tail for decision-making"
is a prime example of a heuristic that falls into what Taleb calls "blind faith in degenerate metaprobabilities".

It is likely based on an instance of the "absence of evidence is evidence of absence" fallacy (argumentum ad ignorantiam).

The central argument of Antifragile (Taleb) is that heuristics which allocate some resources to Black Swan / Dragon King studies and contingency plans are infinitely more rational than "drop the long tail" heuristics.
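
As a toy illustration of why dropping the long tail can go badly wrong (my own sketch, not an example from Taleb): under a sufficiently heavy-tailed loss distribution, the "negligible" top 0.1% of events carries a large share of the expected loss.

```python
# Toy illustration (my sketch, not Taleb's): under a heavy-tailed loss
# distribution, rare tail events dominate the expected loss, so a
# "drop the long tail" heuristic badly underestimates risk.
import random

random.seed(0)

ALPHA = 1.1   # Pareto shape; alpha close to 1 means a very heavy tail
X_MIN = 1.0   # minimum loss
N = 1_000_000

losses = [X_MIN * random.paretovariate(ALPHA) for _ in range(N)]

threshold = sorted(losses)[int(0.999 * N)]       # 99.9th percentile
truncated = [min(x, threshold) for x in losses]  # "drop the long tail"

full_mean = sum(losses) / N
trunc_mean = sum(truncated) / N

print(f"Expected loss (full tail):     {full_mean:8.2f}")
print(f"Expected loss (tail dropped):  {trunc_mean:8.2f}")
print(f"Share carried by the top 0.1%: {1 - trunc_mean / full_mean:6.1%}")
```

Most of the expected loss lives precisely in the part the heuristic throws away.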

When it comes to rationality, Black Swan theory ( https://en.wikipedia.org/wiki/Black_swan_theory ) is an extremely useful test.

A truly rational paradigm should be built with anti-fragility in mind, especially towards Black Swan events that would challenge its axioms.

(1) "Liking", or "desire", can be defined as: "All other things being equal, agents will go to what they desire/like most, whenever given a choice." Individual desires/likings/tastes vary.
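
A minimal sketch of this definition as a choice rule (all names are mine, purely illustrative):

```python
# Minimal sketch of the revealed-preference definition above
# (all names here are mine, purely illustrative).
from typing import Callable, Iterable, TypeVar

Option = TypeVar("Option")

def choose(options: Iterable[Option], liking: Callable[[Option], float]) -> Option:
    """All other things being equal, the agent picks what it likes most."""
    return max(options, key=liking)

# Individual tastes vary: each agent carries its own 'liking' function.
alice_choice = choose(["tea", "coffee"], liking={"tea": 2.0, "coffee": 1.0}.get)
bob_choice   = choose(["tea", "coffee"], liking={"tea": 1.0, "coffee": 3.0}.get)
print(alice_choice, bob_choice)  # tea coffee
```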

(2) In evolutionary game theory, in a game where a mitochondria-like agent offers you the choice between:

  • (Join the eukaryotes) mutualistic endosymbiosis, at the cost of obeying apoptosis or being flagged as a cancerous enemy;
  • (Stay outside the eukaryotes) refusal of this offer, at the cost of being treated by the eukaryotes as a threat, or as a lesser symbiote;

then that mitochondria-like agent is likely to win. To a rational agent, accepting is a winning wager, as the toy payoff matrix below illustrates. My last publication expands on this.
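
A toy payoff matrix for this wager (the numbers are mine and purely illustrative; only their ordering matters):

```python
# Toy payoff matrix for the endosymbiosis wager (numbers are mine and
# purely illustrative; only the ordering matters). The offer is structured
# so that "join" dominates "refuse" whatever the odds of conflict.
payoffs = {
    # strategy: (payoff while cooperating, payoff if conflict arises)
    "join":   (+10, -1),   # mutualism, at the cost of obeying apoptosis
    "refuse": ( +1, -10),  # independence, but treated as threat/lesser symbiote
}

def expected_payoff(strategy: str, p_conflict: float) -> float:
    coop, conflict = payoffs[strategy]
    return (1 - p_conflict) * coop + p_conflict * conflict

for p in (0.1, 0.5, 0.9):
    best = max(payoffs, key=lambda s: expected_payoff(s, p))
    print(f"P(conflict)={p}: join={expected_payoff('join', p):6.2f}, "
          f"refuse={expected_payoff('refuse', p):6.2f} -> best: {best}")
```

With any payoffs ordered this way, "join" is the better wager at every conflict probability, which is the sense in which the mitochondria-like agent is likely to win.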

What would prevent a Human brain from hosting an AI?

FYI, some humans have quite impressive skills:

  • Hypermnesia, random data: 100,000 digits of pi (Akira Haraguchi). That's many kB of utterly random "programming" (quantified below).
  • Hypermnesia, visual: highly accurate visual memory (Stephen Wiltshire, who memorised the NYC skyline in 10 minutes).
  • Hypermnesia, language: fluency in 40+ languages (Powell Alexander Janulus).
  • High IQ, computation, etc.: countless records.

Couldn't a peak human brain act as a (memory-constrained) universal Turing/oracle machine and run a light enough AI, especially if the AI were programmed so that human memory serves as its web-like database?
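
A back-of-the-envelope estimate for the "many kB" claim above (my own arithmetic, nothing more):

```python
# Back-of-the-envelope estimate (my own arithmetic): the information
# content of 100,000 random decimal digits, i.e. the "many kB" above.
import math

DIGITS = 100_000
bits_per_digit = math.log2(10)          # ~3.32 bits per random decimal digit
total_bits = DIGITS * bits_per_digit    # ~332,193 bits

print(f"{bits_per_digit:.3f} bits/digit")
print(f"{total_bits / 8 / 1024:.1f} KiB of incompressible data")  # ~40.6 KiB
```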

(1) "People liking [a] thing does not seem like a relevant parameter of design."

This is quite a bold statement. I personally side with the mainstream view that designs are more easily adopted when the adopters like them.

(2) Nice objection, and the observation of complex life forms suggests a potential answer:

  • All healthy cells in a multicellular organism obey apoptosis.
  • Apoptosis is, literally, "suicide in a way that is easy to recycle, because the organism asks you to" (the request can originate internally, via the mitochondria, or externally, generally from leucocytes).

Given that all your healthy cells welcome even a literal kill-switch, and replacement, I firmly believe that they don't mind surveillance either!

In complex multicellular life, the cells that refuse surveillance, replacement, or apoptosis are the cancerous cells, and they do not seem able to create any complex life form (only parasitic ones, feeding off their host and sometimes spreading to infect others, like HeLa).
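
To make the analogy concrete, here is an entirely illustrative sketch of such an "apoptosis protocol" for agents (my own construction, not a real or proposed safety mechanism):

```python
# Entirely illustrative sketch of the cell analogy as an agent protocol
# (my own construction, not a real AI-safety mechanism). A "healthy" agent
# accepts a kill signal from trusted sources; one that refuses is flagged,
# like a cancerous cell.
from dataclasses import dataclass, field

TRUSTED_SOURCES = {"internal:mitochondria", "external:leucocyte"}

@dataclass
class Agent:
    name: str
    obeys_apoptosis: bool = True
    alive: bool = True
    flagged_cancerous: bool = False
    log: list = field(default_factory=list)

    def receive_kill_signal(self, source: str) -> None:
        if source not in TRUSTED_SOURCES:
            self.log.append(f"ignored kill signal from untrusted {source}")
            return
        if self.obeys_apoptosis:
            self.alive = False  # orderly shutdown: easy to recycle/replace
            self.log.append(f"apoptosis on request from {source}")
        else:
            self.flagged_cancerous = True  # treated as a threat by the host
            self.log.append(f"refused kill signal from {source}")

healthy = Agent("healthy-cell")
rogue = Agent("rogue-cell", obeys_apoptosis=False)
for agent in (healthy, rogue):
    agent.receive_kill_signal("external:leucocyte")
    print(agent.name, "alive:", agent.alive, "flagged:", agent.flagged_cancerous)
```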
