All of MrMind's Comments + Replies

MrMind20

I should have written "algebraic complement", which becomes logical negation or set-theoretic complement depending on the model of the theory.

Anyway, my intuition on why open sets are an interesting model for concepts is this: "I know it when I see it" seems to describe a lot of the way we think about concepts. Often we don't have a precise definition that could adjudicate all the edge cases, but we pretty much have a strong intuition about when a concept does apply. This is what happens with recursively enumerable sets: if a number belongs to an R.E. set, you will find out... (read more)

MrMind80

It has also happened to me: I got to solve a problem that many have, and realized in retrospect that it was a combination of luck, knowing the right people, and skills that I don't know how to transfer, possibly because they are genetic traits. It must be frustrating to hear, after a question like "how have you conquered your social anxiety?", the condensed answer "mostly luck".

MrMind40

On the other hand, it makes you think when you realize how much these kinds of social status boosters have permeated every step of the hierarchical ladder of any large organization... and yet, somehow, things still work out

MrMind30

There is, at least at a mathematical / type theoretic level.

In intuitionistic logic, ¬A is translated to A → ⊥, which is the type of processes that turn an element of A into an element of ⊥; but since ⊥ is empty, the whole of A → ⊥ is absurd as long as A is instantiated (if not, then the only member is the empty identity). This is also why, constructively, A → ¬¬A holds but ¬¬A → A does not.
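A minimal sketch of this encoding in Haskell, with Data.Void playing the role of ⊥ (the names are mine, for illustration):

```haskell
import Data.Void (Void)

-- Negation as "A implies absurdity": a process that turns an A
-- into an element of the empty type.
type Not a = a -> Void

-- Constructively valid: A -> not (not A).
doubleNegIntro :: a -> Not (Not a)
doubleNegIntro x notX = notX x

-- The converse, not (not A) -> A, has no total implementation:
-- nothing lets us conjure an A out of a refutation of refutations.
-- doubleNegElim :: Not (Not a) -> a   -- no possible body
```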

Closely related to constructive logic is topology, and indeed if concepts are open sets, the logical complement is not a ... (read more)

1a gently pricked vein
I'm unsure if open sets (or whatever generalization) are a good formal underpinning of what we call concepts, but I agree that there seems to be a need for at least a careful reconsideration of the intuitions one takes for granted when working with a concept, when you're actually working with a negation-of-concept. And "believing in" might be one of those things that you can't really do with negation-of-concepts. Also, I think there's a typo: you said "logical complement", but I imagine you meant "set-theoretic complement". (This seems important to point out, since in topological semantics for intuitionistic logic the "logical complement" is in fact defined to be the interior of the set-theoretic complement, which guarantees an open set.)
MrMind190

One thing to remember when talking about distinction/defusion is that it's not a free operation: if you distinguish two things that you previously considered the same, you need to store at least one more bit of information than before. That is something that demands effort and energy. Sometimes, you need to store a lot more bits. You cannot simply become superintelligent by defusing everything in sight.

Sometimes, making a distinction is important, but some other times, erasing distinctions is more important. Rationality is about creating and erasing di... (read more)

2ChristianKl
In our community there are people who make a lot of distinctions in their uncertainty by putting different probabilities on different claims, but fail to distinguish levels of abstraction. In plenty of cases it's not necessary to make fine distinctions; in others it's essential for reasoning clearly. It's worthwhile to be able to go to fine distinctions where the occasion needs clear reasoning.
6abramdemski
Yeah, totally. I think I want to defend something like being capable of drawing as many distinctions as possible (while, of course, focusing more on the more important distinctions). One of the most distinction-heavy people I know is also one of the hardest to understand. Actually, I think the two people I know who are best at distinctions are also the two most communication-bottlenecked people I know. Nitpick: Not literally. It depends on the probability of the two things. At 50/50, it's 1 bit. The further it gets from that, the more we can use efficient encodings to average less than 1 bit per instance, approaching zero.
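(For reference, the exact cost is the binary entropy H(p) = −p·log₂(p) − (1−p)·log₂(1−p), which is 1 bit at p = 0.5 and approaches 0 as p approaches 0 or 1.)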

Yeah. Probably the reason why e.g. the experience of a raw sound and the interpretation of that sound are fused together by default, is that normally it's only the interpretation we care about, and we need to be able to react to it quickly if it carries any urgent information. I made a similar observation in the essay that Abram is referencing:

Cognitive fusion isn’t necessarily a bad thing. If you suddenly notice a car driving towards you at a high speed, you don’t want to get stuck pondering about how the feeling of danger is actually a mental construct p

... (read more)
Answer by MrMind10

I don't think you need the concept of evidence. In Bayesian probability, the concept of evidence is equivalent to the concept of truth: on one hand P(X|X) = 1, so whatever you consider evidence is true; on the other hand P(X) = 1 implies P(A ∧ X) = P(A|X), so you can treat true sentences as evidence without changing anything else.
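(Concretely, by the product rule P(A ∧ X) = P(A|X)·P(X), so setting P(X) = 1 collapses this to P(A ∧ X) = P(A|X).)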

Add to this that good rationalist practice is to never assign P(A) = 1 to anything, so that nothing is actually true or actually evidence. You can do epistemology exclusively in the hypothetical: what happens if I consider this true? And then derive consequences.

1Kenny
I don't think this helps, but that's because you can't reason without any assumptions (e.g. axioms, prior beliefs, etc.).
MrMind50

Well, I share the majority of your points. I think that in 30 years millions of people will try to relocate to more fertile areas. And I think that not even the firing of the clathrate gun will force humans to coordinate globally. Although I am a bit more optimistic about technology, the current status quo is broken beyond repair.

Answer by MrMind90

The result is surprising when coupled with the fact that particles do not have a definite spin direction before you measure it. The anti-correlation is maintained non-locally, but the directions are decided by the experiment.

A better example is: take two spheres, send them far away from each other, then make one sphere spin around any axis that you want. How much would you be surprised to learn that the other sphere spins around the same axis in the opposite direction?

4dvasya
This is the correct answer to the question. Bell and CHSH and all that are remarkable but more complicated setups. This (entanglement no matter which basis you end up measuring your particle in, a basis not known at the time of state preparation) is what's salient about the simple two-particle setup.
MrMind50

How probable is it that someone knows their internal belief structure? How probable is it that someone who knows their internal belief structure tells you about it truthfully instead of using a self-serving lie?

MrMind20

The causal order in the scenario is important. If the mother is instantly killed by the truck, then she cannot feel any sense of pleasure after the fact. But if you want to say that the mother feels the pleasure during the attempt or before, then I would say that the word "pleasure" here is assuming the meaning of "motivation", and the points raised by Viliam in another comment are valid: it becomes just a play on words, devoid of intrinsic content.

MrMind20

So far, Bayesian probability has been extended to infinite sets only as a limit of continuous transfinite functions. So I'm not quite sure what the official answer to that question is.

On the other hand, what I know is that even common measure theory cannot talk about the probability of a singleton if the support is continuous: no sigma-algebra on ℝ supports the atomic elements.

And if you're willing to bite the bullet, and define such an algebra through the use of a measurable cardinal, you end up with an ultrafilter that allows you to define ... (read more)

1Polytopos
I don't know enough math to understand your response. However, from the bits I can understand, it seems to leave open the epistemic issue of needing an account of demonstrative knowledge that is not dependent on Bayesian probability.
MrMind20

Under the paradigm of probability as extended logic, it is wrong to distinguish between empirical and demonstrative reasoning, since classical logic is just the limit of Bayesian probability with probabilities 0 and 1.

Besides that, category theory was born more than 70 years ago! Sure, very young compared to other disciplines, but not *so* young. Also, the work of Lawvere (the first to connect categories and logic) began in the 70's, so it dates at least forty years back.

That said, I'm not saying that category theory cannot in principle be used to reason about reasoning (the effective topos is a wonderful piece of machinery), it just cannot say that much right now about Bayesian reasoning

8Polytopos
Interesting. This might be somewhat off topic, but I'm curious how such a Bayesian analysis of mathematical knowledge would explain the fact that it is provable that randomly selected real numbers are non-computable with probability 1, yet this is not equivalent to a proof that all real numbers are non-computable. The real numbers 1, 1.4, √2, π, etc. are all computable, although the probability of such numbers occurring in a random sample from the domain is zero.
MrMind40

Yeah, my point is that they aren't truth values per se: not intuitionistic, nor linear, nor MV, nor anything else.

MrMind90

I've also dabbled in the matter, and I have two observations:

  • I'm not sure that probabilities should be understood as truth values. I cannot prove it, but my gut feeling is telling me that they are two different things altogether. Sure, operations on truth values should turn into operations on probabilities, but their underlying logic is different (probabilities after all should be measures, while truth values are algebras)
  • While 0 and 1 are not (good) epistemic probabilities, they are of paramount importance in any model of probability. For example, P(X|X) = 1, so 0 and 1 should be included in any model of probability.
3jollybard
My feeling is that the arguments I give above are pretty decent reasons to think that they're not truth values! As I wrote: "The thesis of this post is that probabilities aren't (intuitionistic) truth values."
MrMind20

The way it's used in the set theory textbooks I've read is usually this:

  • define a successor function on a set S: S⁺ = S ∪ {S};
  • assume the existence of an inductive set that contains a set and all its successors. This is a weak and very limited form of infinite induction.
  • Use Replacement on the inductive set to define a general form of transfinite recursion.
  • Use transfinite recursion and the union operation to define the step "taking the limit of a sequence".
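(In symbols, the inductive set assumed in the second step is the usual Axiom of Infinity: ∃I (∅ ∈ I ∧ ∀x (x ∈ I → x⁺ ∈ I)).)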

So, there is indeed the assumption of a kind of infinite process before th... (read more)

1Slider
I am pretty sure there is no obstacle to applying the successor function to the infinite set. And then there is the construction mirroring ω + ω. If you have the infinite set and it has many successors, what prevents one from doing the inductive-set trick again in this situation? I kind of know that if you assume a special inductive set, that is only one "permitted application" of it, and a "second application" would need a separate ad hoc axiom. Then if we have "full blown" transfinite recursion, we just allow that second-level application. New assumptions assume that there are old assumptions. If we just have a non-proof "I have a feeling it should be that way", we have a pre-axiomatic system at hand. If we don't aim to get the same theorems, then "minimal change to keep things intact" doesn't make sense. The connection here is whether some numbers "fakely exist", where a fake existence could be that some axiom says the thing exists but there is no proof/construction that results in it. A similar kind of stance could be that real numbers are just a fake way to talk about natural numbers and their relations. One could for example note that the reals are uncountable but proofs are discrete, so almost all reals are undefinable. If most reals are undefinable, then unconstructibility by itself doesn't make transfinites any less real. But if the real field can establish some kind of properness, then the same avenues of properness open up to make transfinites "legit". I am not that familiar with how limits connect to the fundamentals, but if that route-map checks out, then transfinites should not be any ickier than limits.
MrMind20

> Transfinite induction does feel a bit icky in that finite prooflines you outline a process that has infinitely many steps. But as limits have a similar kind of thing going on I don't know whether it is any ickier.

Well, transfinite induction / recursion is reduced (at least in ZF set theory) to the existence of an infinite set and the Replacement axioms (a class function on a set is a set). I suspect you don't trust the latter.

1Slider
The primary first need for transfinite recursion is to go from the successor construction to the natural numbers existing. Going with an approach that assumes an infinite set rather than proves it seems handy but weaker. Although I guess in reading surreal papers I take set theory as given, and while it doesn't feel like any super advanced features are used, there might be a lot of assumption baggage. It also feels like a dirty trick that we don't need to postulate the existence of zero and that we get surreals from not knowing any surreals. The surreal number definition references sets of surreal numbers. Don't know any? Worry not, there is the set that is of every type. And now that you have read the definition with that knowledge, you know a new surreal number, which enables you to read the definition again. So we get a lot of finite numbers without positing the existence of a single number, and we don't even need to explicitly define a successor relation. The base number construction only uses set formation and order and doesn't touch arithmetic operations, so on that level "the birthday" of mappings has yet to come, so it is of limited use. I have seen formulations of surreal theory written in a more axiomatic fashion, but a "process" style gives a lot of ground to realise connections between structures.
MrMind20

The first link in the article is broken...

2[anonymous]
Thanks! Should be fixed now.
MrMind20

Obviously, only the wolves that survive.

8lsusr
The global dog population is estimated at 900 million. There are two species of wild wolves: red wolves and grey wolves. Red wolves are critically endangered. It's hard to find exact numbers on grey wolf populations in 2020. According to Wikipedia, grey wolf populations were estimated to be 300 thousand in 2003.
Answer by MrMind110

Beware of the selection bias: even if veterans show more productivity, it could just be because the military training has selected those with higher discipline

2Gordon Seidoh Worley
This suggests we might look at data from countries like Israel and Singapore with mandatory military service (although it's complicated because I think both have alternative civil service for objectors and exceptions for certain groups), and look at how results of veterans there compare with similar populations to hold other cultural effects on discipline constant.
4khafra
Data from periods of forced conscription would correct for that bias, but would introduce the new bias of a 4-F control group. Is there a fancy statistical trick to combine the data and eliminate both biases?
MrMind40

The diagram at the beginning is very interesting. I'm curious about the arrow from relationships to results... care to explain? Does it refer to joint works or collaborations?

On the other hand, it's not surprising to me that AI alignment is a field that requires much more research and math than software-writing skills... the field is completely new and not very well formalized yet, so your skill set is probably misaligned with the needs of the market.

rmoehn100

Good point about the misaligned skillset.

Relationships to results can take many forms.

  • Joint works and collaborations, as you say.
  • Receive feedback on work products and use it to improve them.
  • Discussion/feedback on research direction.
  • Moral support and cheering in general.
  • Or someone who lights a fire under your bum, if that's what you need.
  • Access to computing resources if you have a good relationship with a university.
  • Mentoring.
  • Quick answers to technical questions if you have access to an expert.
  • Probably more.

This only lists the receiving side, wh

... (read more)
MrMind20

> The first thing that you must accept in order to seek sense properly is the claim that minds actually make sense

This is somewhat weird to me. Since Kahneman & Tversky, we have known that System 2 is mostly good at rationalizing the actions taken by System 1, to create a self-coherent narrative. Not only do minds generally not make any sense; my mind in particular lacks any sense. I'm here just because my System 1 is well adjusted to this modern environment, so I don't *need* to make any sense.

From this perspective, "making sense" appears to be a tiring and pointless exercise...

1Conor
System 1 doesn't make sense?
MrMind20

Isn't "just the right kind of obsession" a natural ability? It's not that you can orient your 'obsessions' at will...

2Matt Goldenberg
Gladwell also argues in Outliers that it's a function of your environment. For instance, you can never get a chance to be obsessed with computers if you're never exposed to computers.
1Pattern
Given your starting "obsessions", how do you pick the good ones to invest time in? And can you discover new ones? (Some methods for learning new subjects may work better than others.)
2Raemon
You can't orient them at will, but you can put yourself in situations which give you the opportunity to develop new obsessions, or cultivate particular ones over others. I think there are some lottery elements here, but it's at least under a bit of long-term control.
MrMind40

Two of my favorite categories show that they really are everywhere: the free category on any graph and the presheaves on gamma.

The first: take any directed graph, unfocus your eyes and instead of arrows consider paths. That is a category!
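A minimal Haskell sketch of the first construction (my own illustrative encoding): morphisms are paths, the identity is the empty path, and composition concatenates paths.

```haskell
{-# LANGUAGE GADTs #-}

-- The free category on a graph g: objects are vertices,
-- morphisms from a to c are paths of composable edges.
data Path g a c where
  Nil  :: Path g a a                         -- identity: the empty path
  Cons :: g a b -> Path g b c -> Path g a c  -- an edge, then the rest of the path

-- Composition is path concatenation; Nil is its left/right unit,
-- and concatenation is associative (the category laws).
compose :: Path g a b -> Path g b c -> Path g a c
compose Nil         ys = ys
compose (Cons e xs) ys = Cons e (compose xs ys)
```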

The second: take any finite graph. Take sets and functions that realize this graph. This is a category; moreover, you can make it dagger-compact, so you can do quantum mechanics with it. Take as the finite graph gamma, which is just two vertices with two arrows between them. Sets and functions that realize this graph are... any graph! So, CT allows you to do quantum mechanics with graphs.

Amazing!

MrMind30

Lambda calculus is, though, the internal language of a very common kind of category, so, in a sense, category theory allows lambda calculus to do computations not only with functions, but also with sets, topological spaces, manifolds, etc.

MrMind60

While I share your enthusiasm toward categories, I find the claim that CT is the correct framework from which to understand rationality suspicious. Around here, rationality is mainly equated with Bayesian probability, and the categorical grasp of probability or even measure theory is less than impressive. The most interesting fact I've been able to dig up is that the Giry monad is the codensity monad of the inclusion of convex spaces into measure spaces, hardly an illuminating fact (basically a convoluted way of saying that probabilities are the most general ways ... (read more)

2Polytopos
It seems odd to equate rationality with probabilistic reasoning. Philosophers have always distinguished between demonstrative (i.e., mathematical) reasoning and probabilistic (i.e., empirical) reasoning. To say that rationality is constituted only by the latter form of reasoning is very odd, especially considering that it is only through demonstrative knowledge that we can even formulate such things as Bayesian mathematics. Category theory is a meta-theory of demonstrative knowledge. It helps us understand how concepts relate to each other in a rigorous way. This helps with the theory side of science rather than the observation side of science (although applied category theorists are working to build unified formalisms for experiments-as-events and theories). I think it is accurate to say that, outside of computer science, applied category theory is a very young field (maybe 10-20 years old). It is not surprising that there haven't been major breakthroughs yet. Historically, fruitful applications of discoveries in pure math often take decades or even centuries to develop. The wave equation was discovered in the 1750s in a pure math context, but it wasn't until the 1860s that Maxwell used it to develop a theory of electromagnetism. Of course, this is not in itself an argument that CT will produce applied breakthroughs. However, we can draw a kind of meta-historical generalization that mathematical theories which are central/profound to pure mathematicians often turn out to be useful in describing the world (Ian Stewart sketches this argument in his Concepts of Modern Mathematics pp 6-7). CT is one of the key ideas in 20th century algebra/topology/logic which has allowed huge innovation in modern mathematics. What I find interesting in particular about CT is how it allows problems to be translated between universes of discourse. I think a lot of its promise in science may be in a similar vein. Imagine if scientists across different scientific disciplines had a way to use
MrMind20

The difference between the two is literally a single summation, so... yeah?

MrMind20

I'd like to point out a source of confusion around Occam's Razor that I see you're falling for; dispelling it will make things clearer: "you should not multiply entities without necessity!". This means that Occam's Razor helps decide between competing theories if and only if they have the same explanatory and predictive power. But in the history of science, it was almost never the case that competing theories had the same power. Maybe it happened a couple of times (epicycles, the Copenhagen interpretation), but in all ot... (read more)

MrMind60

I arrived at the same conclusion when I tried to make sense of the Metaethics Sequence. My summary of Eliezer's writings is: "morality is a bunch of mental computations shared between most human beings". Morality thus grew out of our evolutionary history, and it should not be surprising that in extreme situations it might be incoherent or maladaptive.

Only if you believe that morality should be systematic, universal, and coherent can you say that extreme examples are uncovering something interesting about people's morality.

Otherwise, extreme situations are as interesting as saying that people cannot mentally factor long numbers.

Answer by MrMind50

First of all, the community around LW2.0 can only be loosely associated with a movement: I don't think there's anyone who explicitly endorses *every* technique or theory that has appeared here. LW is not CFAR, is not the Alignment Forum, etc. So I would caution against enticing someone into LW by saying that the community supports this or that technique.

The main advantage of rationality, in its present stage, is defensive: if you're aspiring to be rational, you wouldn't waste time attending religious gatherings that you despise; you wouldn't... (read more)

MrMind20

In Foerster's paper, he links the increase in productivity linearly to the increase in population. But Scott has also proposed that the rate of innovation is slowing down, due to a logarithmic dependence of productivity on population. So maybe Foerster's model is still valid, and 1960 is only the year when we exhausted the almost-linear part of progress (the "low-hanging fruit").

Perhaps nowadays we combine the exponential growth of population with the logarithmic increase in productivity, to get the linear economic growth we see.

1Ege Erdil
This would still lead to something explosive if I understand it correctly, since dx/dt = x·log(x) is solved by x = exp(C·exp(t)). Double exponential growth doesn't diverge in finite time, but it's still very fast and inconsistent with the graph in the post.
Answer by MrMind240

Algebraic topology is the discipline that studies geometries by associating them with algebraic objects (usually, groups or vector spaces) and observing how changing the underlying space affects the related algebras. In 1941, two mathematicians working in that field sought to generalize a theorem that they discovered, and needed to show that their solution was still valid for a larger class of spaces, obtained by "natural" transformations. Natural, at that point, was a term lacking a precise definition, and only meant something like "avoidin... (read more)

MrMind20

Is it really that different, besides the halo effect? It strongly depends on the details, though: if the two say the exact same thing, how are things different?

1Pattern
1) The audience. 2) The presentation.
MrMind60

The concept of "fake framework", elucidated in the original post, to me it seems one of a model of reality that hides some complexity, sometimes even to the point of being very wrong, but that is nonetheless useful because it makes some other complex area manageable.

On the other hand, when I read the quotes you presented, I see a rich tapestry of metaphors and jargon, of which the proponent himself says that they can be wrong... but I fail completely to see what part of reality they make manageable. These frameworks seem to just add complexity t... (read more)

0Pattern
I think the frameworks build on earlier work, and this review is not intended as a basic introduction (which would include the motivation/benefit).
2Elo
Post-rational is a place of development, and it was named by various parties outside of lw terminology. Integral becomes an organising principle for other concepts to rest in.
2ChristianKl
It's quite different when CFAR tells you to listen to your emotions via Focusing when facing a tough decision than when a random celebrity tells a person to listen to their emotions when facing a tough decision. CFAR's position would be "post-rational" in Wilber's terminology, while the random celebrity's would be pre-rational (CFAR is a yellow place and not an orange one).
MrMind40

I'm sorry, but you cannot really learn anything from one example. I'm happy that your parents are faring well in their marriage, but if they weren't, would you have learned the same thing?

I've consulted a few statistics on arranged marriage, and they all are:

  • underpowered
  • showing no significant difference between autonomous and arranged marriages

The latter part is somewhat surprising for a Westerner, but given what you say, the same should be true for an Indian coming from your background.

The only conclusion I can draw fairly conclusively... (read more)

Dagon*190

I'd agree that the null hypothesis (most common mechanisms work equally well) probably applies in the marriage game. I don't think Squidious was making a claim that arranged marriages are better (and I note that Squidious isn't using their parents to arrange a mate), just a claim that it can work pretty well.

Also, a less-explicit claim that many western narratives about love and marriage are misleading, in that they focus too strongly on finding a perfect match, and not enough on creating and maintaining a bond with a good-enough match. I ... (read more)

MrMind*110

Are you familiar with the concept of fold/unfold? Folds are functions that consume structures and produce values, while unfolds do the opposite. The composition of an unfold plus a fold is called a hylomorphism, of which the factorial is a perfect example: the unfold creates a list from 1 to n, the fold multiplies the entire list together. Your section on "two-fold recursion" is a perfect description of a hylomorphism: you take a goal, unfold it into a plan composed of a list of micro-steps, then you fold it by executing each of the micro-steps in order.
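A quick Haskell sketch of the factorial-as-hylomorphism example (illustrative, using the standard unfoldr from Data.List):

```haskell
import Data.List (unfoldr)

-- Unfold (anamorphism): grow the list [n, n-1 .. 1] from the seed n.
countdown :: Integer -> [Integer]
countdown = unfoldr (\k -> if k < 1 then Nothing else Just (k, k - 1))

-- Fold (catamorphism): consume the list by multiplying it together.
-- The composition of the two is the hylomorphism.
factorial :: Integer -> Integer
factorial = product . countdown

main :: IO ()
main = print (factorial 5)  -- 120
```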

4Ruby
Wow, that's really cool to learn! I only have a beginner level knowledge of functional programming concepts and was not aware of hylomorphisms and unfolds (just basics like fold left, fold right). Thanks for bringing that to my attention, I might try to read that whole series.
MrMind50

Luke already wrote that there are at least four factors that feed motivation, and the expectancy of success is only one of them. No amount of expectancy can increase drive if the other factors are lacking, and as Eliezer notices, it's not sane to expect a single factor to be 10x the others so that it alone powers the engine.

What Eliezer is asking is basically whether anyone has solved the basic coordination problem of mankind, and I think he knows very well that the answer to his question is no. Also, because we are operating in a relatively small mindspace (h... (read more)

Venture capital seems to be quite successful at finding startups to fund where the founder of the company has a chance of success of less than 30% and the founder still puts in incredibly hard work.

Most people aren't startup founders, but there are many people who want to fund startups and are okay with success chances of less than 30%.

There are a lot of coordination problems where you need to get people to do things that are not in their own interest, which you could also call "the basic coordination problem of mankind".

3ExCeph
You raise a good point about the multiple factors that go into motivation and why it's important to address as many of them as possible. I'm having trouble interpreting your second paragraph, though. Do you mean that humanity has a coordination problem because there is a great deal of useful work that people are not incentivized to do? Or are you using "coordination problem" in another sense? I'm skeptical of the idea that a solution is unlikely just because people haven't found it yet. There are thousands of problems that were only solved in the past few decades when the necessary tools were developed. Even now, most of humanity doesn't have an understanding of whatever psychological or sociological knowledge may help with implementing a solution to this type of problem. Those who might have such an understanding aren't yet in a position to implement it. It may just be that no one has succeeded in Doing the Impossible yet. However, communities and community projects of varying types exist, and some have done so for millennia. That seems to me to serve as proof of concept on a smaller scale. Therefore, for some definitions of "coordinating mankind" I suspect the problem isn't quite as insurmountable as it may look at first. It seems worth some quality time to me.
MrMind20

Re: the third point, I think it's important to differentiate between a prediction p and p*, where p* is the true prediction, that is, what actually happens when an agent performs the action a.

p(a) is simply the outcome the agent is aiming at, while p*(a) is the outcome the agent eventually gets. So maybe a measure of similarity in B, with which you can compare the two, is more interesting.

MrMind30

Let's say that A is the set of available actions and B is the set of consequences. A → B is then the set of predictions, where a single prediction associates to every possible action a consequence. (A → B) → A is then a choice operator, which selects for each prediction an action to take.

What we have seen so far:

  • There's no 'general' or 'natural' choice operator, that is, every choice operator must be based on at least a partial knowledge of the domain or the codomain;
  • Unless the possible consequences are trivial, a choice operator wil
... (read more)
1Gurkenglas
* Knowledge that there is an action to select, in the form of having an action in hand, allows the implementation of exactly one chooser: the one that always selects that action. * [1] holds for any function k / partition k⁻¹ between any two sets. The proof you want may be that A→B is an exponential space and therefore usually larger than A. * interleave/sandwich should then take two predictions as parameters. This suggests that we could define a metric on the space of predictions, and then sandwich the chooser between two nearby predictions, to measure its response to inaccurate predictions.
MrMind20
> I wonder if there are any plausible examples of this type where the constraints don't look like ordering on B and search on A.

Yes, as I have shown in my post, such operators must know at least an element of one of the domains of the function. If the operator knows at least an element of A, a constant function on that element has the right type. Unfortunately, it's not very interesting.
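A Haskell sketch of the two cases (my own illustrative names): the constant chooser that only needs one element of A, and the "ordering on B plus search on A" chooser.

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- Knowing one element of A is enough for the (uninteresting)
-- constant chooser: ignore the prediction entirely.
constChooser :: a -> (a -> b) -> a
constChooser x _ = x

-- Knowing a non-empty search space on A and an ordering on B:
-- choose the action whose predicted consequence is best.
argmaxChooser :: Ord b => [a] -> (a -> b) -> a
argmaxChooser actions predict = maximumBy (comparing predict) actions
```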

MrMindΩ7160

It's interesting to notice that there's nothing with that type on hoogle (Haskell language search engine), so it's not the type of any common utility.

On the other hand, you can still say quite a bit on functions of that type, drawing from type and set theory.

First, let's name a generic function with that type k : (A → B) → A. It's possible to show that k cannot be parametric in both types. If it were, the type (0 → B) → 0 would be inhabited, which is absurd (0 → B has an element, so we could produce an element of the empty type!). It is also possible to show that if k is not parametric in one type, it... (read more)

1Gurkenglas
(A→0)→A usually has one element, so B need not have an element. That Hoogle doesn't list a result essentially follows from k not being parametric in all types. (Except that it lists unsafeCoerce :: A→B; they'd rather have the type system inconsistent than incomplete...) (A→B)→B stands for what the agent ends up making happen, and may be easier to implement, just like predicting that Kasparov will win a chess match without knowing how. An interesting (A→B)→A should tend to have the property that interleave turns it into a particular kind of (A→B)→B. Why would you call it interleave?
MrMind30
The difference would be that I'm doing it more for myself than for those out there, because I don't expect my YouTube videos to spread much.

I also don't know if I'll get any attention; I'm doing this entirely for myself: to leave a legacy, to look back and say that I too did something to raise the sanity waterline.

My biggest hurdle currently is video editing.

My motto: "think big, act small, move quickly". I know that my first videos will suck; I'm prepared to embrace suckiness and plunge forward anyway.

1CoolShirtMcPants
That was one of the reasons I wanted to make the YouTube videos as well: to leave a legacy. Hopefully I can motivate myself to keep working on it. Let me know if you're interested in sharing ideas or something. I don't think we will conflict too much, since my channel will ideally be all over the place, with talk about relationships and whatever other random things that interest me.
MrMind30
> Honestly, I'm not sure how explaining Bayesian thinking will help people with understanding media claims.

Sometimes important news stories are based entirely on the availability bias or the base rate fallacy: knowing these is important for cultivating a critical view of the media. To understand why they are wrong, you need probabilistic reasoning. But media awareness is just an excuse, a hook to introduce Bayesian thinking, which will allow me to also talk about how to construct a critical view of science.

MrMind20

These are all excellent tips, thank you!

MrMind10

A much, much easier thing that still works is P(sunrise) = 1, which I expect is how ancient astronomers felt about it.

MrMind20

That entirely depends on your cosmological model, and in all cosmological models I know, the sun is a definite and fixed object, so usually P(sunrise) = 1.

2zulupineapple
The premise seems to be that there is no model; you're seeing the sun for the first time. Presumably there are also no stars, planets, or moons in the sky, and no telescopes or other tools that would help you build a decent cosmological model. In that situation you may still realize that there is one thing rotating around another and deduce that P(sunrise) = 1 − P(apocalypse). Unless you happen to live in the Arctic, or your planet is rotating in some weird way, or it's moving in a weird orbit, or etc. My point is that estimating P(sunrise) is not trivial; the number can't just be pulled out of the air. I don't see anything better than the Laplace rule, at least initially. You said it doesn't work, so I'm asking you: what does work?
MrMind40

From what I've understood of the white paper, there's no transaction fee because, instead of rewarding active nodes as in a blockchain, the Tangle punishes inactive nodes. So when a node performs few transactions, other nodes tend to disconnect from it, and in the long run an inactive node will be dropped entirely.

On the other hand, a node has only a partial copy of the entire Tangle at any time, so it is possible to keep it small even when the total volume is large.

Economically, I don't know if switching from incentives to participate to punishments for leaving makes sense.

MrMind20

With the magic of probability theory, you can convert one into the other. By the way, you yourself should search for evidence that you're wrong, as any honest intellectual would do.

1Zane Scheepers
Oh, I am. But is there anything wrong with asking for help from people who might already know the answer?
MrMind40

This might be a minor or a major nitpick, depending on your point of view: the Laplace rule works only if the repeated trials are thought to be independent of one another. That is why you cannot use it to predict sunrise: even without an accurate cosmological model, it's quite clear that the ball of fire rising up in the sky every morning is always the same object. But what prior you use after that information is another story...
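(For reference, the rule in question is the rule of succession: after observing s successes in n trials assumed independent with a fixed unknown rate, and with a uniform prior on that rate, P(success on trial n+1) = (s+1)/(n+2).)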

1zulupineapple
How do you evaluate P(sun will rise tomorrow) then?
MrMind20

This is a standard prediction, since the unconscious was theorized more than a century ago, so unfortunately it's not good evidence that the model is correct. If what you've written is the only thing the list has to say, then I would say that no, this is not worth pursuing.

MrMind40

In a vein similar to Erfeyah's comment, I think that your model needs to be developed much more. For example, what predictions does it make that are notably different from those of other psychological models? As it stands, it's an explanation that feels too "overfitted".

1Zane Scheepers
The conscious mind can be excluded from thought processes, only becoming aware, post facto, of the reason an action was performed. I agree it needs work, but is it a line of thought worth pursuing?