All of Polytopos's Comments + Replies

Science and Sanity looks pretty interesting. In the book summary it says he stressed that strict logical identity doesn't hold in reality. Can you say more about how he builds up a logical system without using the law of identity? How does equational reasoning work for example?

5ChristianKl
Instead of speaking about identity, Korzybski advocates speaking about relations.  You can say "New York is bigger than Austin" without asking whether Austin is a big or small city or whether New York is a big or small city. If you have a good map, whether or not "New York is bigger than Austin" holds on the map corresponds to whether it holds for the territory.  That example is trivial and I doubt it's very insightful on its own. Science and Sanity is a very complex book. 
Answer by Polytopos*80

Great question.

My joke answer is: probably Hegel but I don't know for sure because he's too difficult for me to understand.

My serious answer is Graham Priest, a philosopher and logician who has written extensively on paradoxes, non-classical logics, metaphysics, and theories of intensionality. His books are extremely technically demanding, but he is an excellent writer. To the extent that I've managed to understand what he is saying it has improved my thinking a lot. He is one of those thinkers who is simultaneously extremely big picture and also being sup... (read more)

2TekhneMakre
Thank you! Will look at his stuff.

Various fisheries have become so depleted as to no longer be commercially viable. One of the obvious examples is the Canadian Maritime fisheries. Despite advance warning that overfishing was leading to a collapse in cod populations, they were fished to the point of commercial non-viability, resulting in a regional economic collapse that has depressed standards of living in the Maritime provinces to this day.

according to the story that your brain is telling, there is some phenomenology to it. But there isn't.

Doesn't this assume that we know what sort of thing phenomenal consciousness (qualia) is supposed to be, so that we can assert that the story the brain is telling us about qualia somehow fails to measure up to this independent standard of qualia-reality?

The trouble I have with this is that there is no such independent standard for what phenomenal blueness has to be in order to count as genuinely phenomenal. The only standard we have for identifying... (read more)

For an in-depth argument that could be taken to support this point, I highly recommend Humankind: A Hopeful History by Rutger Bregman.

it generalises. Logic and probability and interpretation and theorisation and all that, are also outputs of the squishy stuff in your head. So it seems that epistemology is not first philosophy, because it is downstream of neuroscience.

I find this claim interesting. I’m not entirely sure what you intend by the word “downstream” but I will interpret it as saying that logic and probability are epistemically justified by neuroscience. In particular, I understand this to include the claim that a priori intuition unverified by neuroscience is not sufficient to j... (read more)

This is a fascinating article about how the concept of originality differs in some Eastern cultures https://aeon.co/essays/why-in-china-and-japan-a-copy-is-just-as-good-as-an-original

5PeterMcCluskey
That might have some interesting implications for where mind uploading will initially become popular.

An interesting contribution to this topic is this book by Hofstadter and Sanders.

They explain thinking in terms of analogy, which as they use the term encompasses metaphor. This book is a mature, cognitive-sciencey articulation of many of the fun and loose ideas that Hofstadter first explored in G.E.B.

I'm curious how many people here think of rationalism as synonymous with something like Quinean Naturalism (or just naturalism/physicalism in general). It strikes me that naturalism/physicalism is a specific view one might come to hold on the basis of a rationalist approach to inquiry, but it should not be mistaken for rationalism itself. In particular, when it comes to investigating foundational issues in epistemology/ontology a rationalist should not simply take it as a dogma that naturalism answers all those questions. Quine's Epistemology Naturalized i... (read more)

2TAG
There's a way of doing rationality which is maximally open and undogmatic, but that isn't the Less Wrong way. There's a way of doing naturalism, where you first make sure that science has a firm epistemic foundation and only then accept its results, and that's not the Less Wrong way either. If you look at this passage: "it generalises. Logic and probability and interpretation and theorisation and all that, are also outputs of the squishy stuff in your head. So it seems that epistemology is not first philosophy, because it is downstream of neuroscience."

You might be interested to look at David Corfield's book Modal Homotopy Type Theory. In the chapter on modal logic, he shows how all the different variants of modal logic can be understood as monads/comonads. This allows us to understand modality in terms of "thinking in a context", where the context (possible worlds) can be given a rigorous meaning categorically and type theoretically (using slice categories).
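
To make the "thinking in a context" idea concrete, here is a toy sketch of plain Kripke-style possible-worlds semantics in Python (my own illustration, not Corfield's categorical formulation; the worlds and accessibility relation are made up):

```python
# Toy Kripke semantics: a proposition is the set of worlds where it holds.
# "Box p" (necessarily p) is true at w if p holds at every world accessible from w;
# "Diamond p" (possibly p) is true at w if p holds at some accessible world.
# The worlds and accessibility relation below are arbitrary examples.

worlds = {"w1", "w2", "w3"}
access = {                      # which worlds each world "can see"
    "w1": {"w1", "w2"},
    "w2": {"w2", "w3"},
    "w3": {"w3"},
}

def box(p):
    """Worlds where p holds at every accessible world."""
    return {w for w in worlds if access[w] <= p}

def diamond(p):
    """Worlds where p holds at at least one accessible world."""
    return {w for w in worlds if access[w] & p}

p = {"w2", "w3"}        # the proposition "p", true at w2 and w3
print(box(p))           # {'w2', 'w3'}          (set order may vary)
print(diamond(p))       # {'w1', 'w2', 'w3'}
```

The categorical treatment in the book replaces this set-level picture with slice categories and (co)monads, but the basic move of evaluating truth relative to a context of worlds is the same.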

I really enjoyed this post. It was fun to read and really drove home the point about starting with examples. I also thought it was helpful that it didn't just say, "teach by example". I feel that simplistic idea is all too common and often leads to bad teaching where example after example is given with no clear definitions or high level explanations. However, this article emphasized how one needs to build on the example to connect it with abstract ideas. This creates a bridge between what we already understand and what we are learning.

As I was thinking... (read more)

Excellent article, thank you. I particularly enjoyed your images and diagrams. To me concept diagrams are another superpower for explaining things. Have you written anything about that?

3Liron
Thanks :) Hmm I think all I can point you to is this tweet.

Personally, I thought "mind-hanger" was ok. I got an image of a coat-hanger for the mind. You could even include that image explicitly in your concept mapping pictures.

Some other ideas that stick with the coat-hanger variant would be "idea-hanger", "concept-hanger".

Another term you might consider is scaffolding. This also has a strong concrete image of construction scaffolding, but the metaphor lends itself to the idea of building on top of the skeletal example, just as we start a building project with a scaffold and then build the real building around it.... (read more)

I think I can fruitfully engage in truth evaluation of grue things without agreeing or supposing that grue is fitting.

As indicated in the post, fittingness is dependent on the domain D under study. If we take grue to be a term in the study of colour, it is profoundly ill-fitting. I think it is a fair assessment that no researcher who studies colour would find it fruitful or salient to evaluate the truth of propositions involving grue. The picture changes however if we let D be philosophy of science. In that case, grue is fitting, precisely because it il... (read more)

1Slider
In my head it is fuzzy whether fittingness is supposed to be the same as or different from some concepts being in fashion. I think it is possible to determine the truth of statements which use grue as a concept to deal with color. "Grue is a shade of red" is a statement in the domain of color and it can be determined to be false. That grue isn't very fitting for color is connected to phenomena like statements like "Is grue a shade of green?" which is not effectively or truthfully answered with a plain yes or no answer. But I think this has a correct answer and the truth can be evaluated. Grue can feel discontinuous if explained in terms of green and blue, but a representation of green can be given in the grue-style color understanding which would make green the "ill-fitting one" for that concept group. I feel like "fruitful" in this context has multiple possible meanings that might be relevant. One of them is "bears fruit", i.e., you can make something happen with it; it produces theorems or other result-type objects. Another would be fruitful in the sense of inciting excitement that pushes the field forward or getting the acceptance of the group. In this sense heliocentrism would be unfruitful in a world of strong Catholic geocentrism. In contrast, epicycles would be a symptom of fudging, making an ill-fitting theory make correct predictions. The whole motto of "shut up and calculate" seems to suggest a direction where constructing narratives is seen as anti-progress or just leading people astray. One could think that biology could suffer from the same kind of thing, where if people anthropomorphise and attribute human-like wants and needs to evolutionary pressures then incorrect results could be doled out. But it seems the concepts surrounding biology are able to push forward without undue drag from other concepts. However, the difference between "inclusive genetic fitness" and "every animal just tries really hard to survive and some succeed" seems like it needs to be done again and again in

The term I introduced is "fittingness" not fitness. Fittingness is meant to evoke both fit, as in whether a pair of shoes fit my feet, and also fitting, as in "that is a fitting word choice for this sentence". It is possible that there is another term which would be a better label for the underlying concept. If you have suggestions for alternatives I would love to hear them.

I think it's important that the word is specific, not general. As you point out, we could use a general term qualified with a lengthy phrase like: "success with respect to concept forma... (read more)

Answer by Polytopos30

I am a fan of johnswentworth's gears sequence. It would be fruitful to have this distilled.

It would be good to have some polling of which gears are tight and slack in the present state of the world for various big projects of interest to LessWrong members, e.g., for AGI research, for ethics, for various sciences, for societal progress, etc.

Thanks, you are correct. I have updated the post to reflect this.

1Slider
That is a weird way you think it reflects on the content. I think I can fruitfully engage in truth evaluation of grue things without agreeing or supposing that grue is fitting. Maybe a bigger example would be that we can do quantum mechanics without really understanding the mathematics. The field of interpretations of quantum mechanics is underdeveloped. Understanding there is not a prerequisite to build tech on it or to verify the outcomes of experiments. I think there was a development stage where we knew there were statistical regularities and the fittingness of the field went forward when the concept of "entanglement" was used to get a handle on it. Understanding helps in being and getting correct and it is the primary path and approach of some but it is not a prerequisite in the sense that its neglect would lead to failure.

Fittingness is not the same as telos/purpose/concern. It is a success concept for a specific telos/purpose/concern, namely that belonging to rational inquiry. In other words, it indicates that one has formed a concept or ontology which is successful for the purpose of rational inquiry. Of course, there might be other purposes governing concept formation which would have their own success concepts. For example, if one's purpose is to craft deceptive propaganda then the relevant success concept might be slipperiness or something.

2Gordon Seidoh Worley
Oh, but then why have a special word for success relative to the purpose of rational inquiry? To my ear "fitness" seems like something general we could say about anything, as in its "fitness for X", like "fitness for rational inquiry" or "fitness for convincing others".
Answer by Polytopos10

I think a big open question is how to think about rationality across paradigms or incompatible ontological schemas. In focusing only on belief evaluation, we miss that there is generally a tacit framework of background understanding which is required for the beliefs to be understood and evaluated.

What happens when people have vastly different background understandings? How does rationality operate in such contexts?

The author does a good job articulating his views on why Buddhist concentration and insight practices can lead to psychological benefits. As somebody who has spent years engaged in these practices and in various types of (Western) discourse about them, I find the author's psychological claims plausible up to a point. He does not offer a compelling mechanism for why introspective awareness of sankharas should lead to diminishing them. He also offers no account of why, if insight does dissolve psychological patterns, it would preferentially dissolve ne... (read more)

4romeostevensit
I appreciate the detailed feedback! Agree with most of what you said but think it applies much more to 3rd and 4th path than 1st. After 1st path there is experiential working with rebirth, but that's kinda irrelevant for the 99.9% who aren't there. In the discourses it is claimed that householders can achieve 3rd path, and the Buddha gives quite a bit of practical advice for a happy life, as mundane as things like appropriate savings rates.

A few points. First, I've heard several AI researchers say that GPT-3 is already close to the limit of all high-quality human-generated text data. While the amount of text on the internet will continue to grow, it might not grow fast enough for major continued improvement. Thus additional media might be necessary for training input.

Second, deafblind people still have multiple senses that allow them to build 3D sensory-motor models of reality (touch, smell, taste, proprioception, vestibular, sound vibr... (read more)

4Veedrac
I expect getting a dataset an order of magnitude larger than The Pile without significantly compromising on quality will be hard, but not impractical. Two orders of magnitude (~100 TB) would be extremely difficult, if even feasible. But it's not clear that this matters; per Scaling Laws, dataset requirements grow more slowly than model size, and a 10 TB dataset would already be past the compute-data intersection point they talk about. Note also that 10 TB of text is an exorbitant amount. Even if there were a model that would hit AGI with, say, a PB of text, but not with 10 TB of text, it would probably also hit AGI with 10 TB of text plus some fairly natural adjustments to its training regime to inhibit overfitting. I wouldn't argue this all the way down to human levels of data, since the human brain has much more embedded structure than we assume for ANNs, but certainly huge models like GPT-3 start to learn new concepts in only a handful of updates, and I expect that trend of greater learning efficiency to continue. I'm also skeptical that images, video, and such would substantially change the picture. Images are very information sparse. Consider the amount you can learn from 1MB of text, versus 1MB of pixels. Correlation is not causation ;). I think it's plausible that agenthood would help progress towards some of those ideas, but that doesn't much argue for multiple distinct senses. You can find mere correlations just fine with only one. It's true that even a deafblind person will have mental structures that evolved for sight and hearing, but that's not much of an argument that it's needed for intelligence, and given the evidence (lack of mental impairment in deafblind people), a strong argument seems necessary. For sure I'll accept that you'll want to train multimodal agents anyway, to round out their capabilities. A deafblind person might still be intellectually capable, but it doesn't mean they can paint.
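
For a rough sense of scale, here is a back-of-the-envelope conversion from dataset size to token count. The ~4 bytes of English text per token is my assumption (the real ratio depends on the tokenizer and the corpus), and The Pile's size is approximate:

```python
# Rough conversion: how many tokens does N terabytes of text contain,
# assuming ~4 bytes of English text per BPE-style token? (Assumption;
# the actual ratio varies with tokenizer and corpus.)
BYTES_PER_TOKEN = 4

for label, terabytes in [("~The Pile", 0.8), ("10 TB", 10), ("100 TB", 100)]:
    tokens = terabytes * 1e12 / BYTES_PER_TOKEN
    print(f"{label:>10}: ~{tokens:.1e} tokens")

#  ~The Pile: ~2.0e+11 tokens
#      10 TB: ~2.5e+12 tokens
#     100 TB: ~2.5e+13 tokens
```

So even the "merely hard" 10 TB regime already corresponds to trillions of tokens.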

What happens when OpenAI simply expands this method of token prediction to train with every kind of correlated multi-media on the internet?  Audio,  video,  text,  images,  semantic web ontologies, and scientific data.  If they also increase the buffer size and token complexity,  how close does this get us to AGI?

Veedrac*180

Audio,  video,  text,  images

While other media would undoubtedly improve the model's understanding of concepts hard to express through text, I've never bought the idea that it would do much for AGI. Text has more than enough in it to capture intelligent thought; it is the relations and structure that matter, above all else. If this weren't true, one wouldn't expect there to be competent deafblind people, but there are. Their successes are even in spite of an evolutionary history with practically no surviving deafblind ancestors! Clearly the modules that make humans intelligent, in a way that other animals and things are not, are not dependent on multisensory data.

Ah, this post brings back so many memories of studying philosophy of science in grad school. Great job summarizing Structure.

One book that I found very helpful in understanding Kuhn's views in relation to philosophical questions like the objectivity vs mind-dependence of reality is Dynamics of Reason by Michael Friedman. Here Friedman relates Kuhn's ideas both to Kant's notion of categories of the understanding and to Rudolf Carnap's ontological pragmatism.

The upshot of Friedman's book is the idea of the constitutive a priori which roughly is ... (read more)

Thanks, I just watched Victor's Seeing Spaces talk and it is really cool. 

Wow, there is a lot to dig into here. Thanks for this.

1Vaughn Papenhausen
No problem. I'll be interested to see what you come up with.

The trouble is that these antibodies are not logical.  On the contrary; these antibodies are often highly illogical. They are the blind spots that let us live with a dangerous meme without being impelled to action by it.

That is a brilliant point. I also loved your description of the Buddhist monk taking questions from a Western audience. The image of incompatible knowledge blocks is a great one that actually makes a lot of sense of how various ideologically conditioned people are able to functionally operate.

The example that comes up for me is a... (read more)

There is a wonderful scene in the new Pixar film Soul where they show a "lost soul" who turns out to be a hedge fund trader who just keeps saying, "gotta make the trade". Your description of your high income clients reminded me of that. 

Answer by Polytopos30

I can't say anything on this subject that Derek Parfit didn't say better in Reasons and Persons. To my mind, this book is the starting point for all such discussions. Without awareness of it, we are just reinventing the wheel over and over again.

I find it hard to believe your prediction that this breakthrough will be insignificant, given what I've read in other reputable sources. I give a pretty high initial credence to the scientific claims of publications like Nature, which had this to say in its article on AlphaFold2:

The ability to accurately predict protein structures from their amino-acid sequence would be a huge boon to life sciences and medicine. It would vastly accelerate efforts to understand the building blocks of cells and enable quicker and more advanced drug discovery.

reference

Agreed. OpenAI did a study on trends in algorithmic efficiency. They found a 44x improvement in training efficiency on ImageNet over 7 years.

https://openai.com/blog/ai-and-efficiency/
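
As a quick back-of-the-envelope check on what the 44x-over-7-years figure implies (my own arithmetic, not from the OpenAI post):

```python
import math

# The figure cited above: a 44x training-efficiency improvement on ImageNet
# over roughly 7 years. What annual rate and doubling time does that imply?
improvement = 44
years = 7

annual_factor = improvement ** (1 / years)
doubling_months = 12 * math.log(2) / math.log(annual_factor)

print(f"~{annual_factor:.2f}x per year")                # ~1.72x per year
print(f"doubling every ~{doubling_months:.0f} months")  # ~15 months
```

In other words, algorithmic efficiency on that benchmark roughly doubled every 15-16 months over the period, a shorter doubling time than the roughly two years usually quoted for Moore's law.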

I find reading this post and the ensuing discussion quite interesting because I studied academic philosophy (both analytic and continental) for about 12 years at university. Then I changed course and moved into programming and math, and developed a strong interest in thinking about AI safety.

I find this debate a bit strange. Academic philosophy has its problems, but it's also a massive treasure trove of interesting ideas and rigorous arguments. I can understand the feeling of not wanting to get bogged down in the endless minutiae of academic philosophizing in ... (read more)

Ah, thanks for clarifying. So the key issue is really the adjusted-for-inflation/deflation part. You are saying that even if previously expensive goods become very cheap due to automation, they will still be valued the same in "real dollars" for the productivity calculation.

Does this mean that a lot rides on how economists determine comparable baskets of goods at different times and also on how far back they look for a historical reference frame?

2Phil
I'm saying that if previously expensive goods become very cheap due to automation, the total for all goods will be valued higher in "real dollars".  For that one good, the total dollar value could indeed be lower, even after overall inflation (such as, for instance, if the price drops by a factor of 20, but only 10 times as many items are produced). But for the economy as a whole, the value in "real dollars" will always at least stay the same after productivity improvements that lower some prices relative to the status quo.  That's because even though that one good may be lower in value even after adjusting for deflation caused by the lower price, the other goods in the economy will make up the difference and more by being higher in value after adjusting for deflation.  

Thanks for your comment Phil. That's helpful, I hadn't considered the question of where labour shifts after less of it is needed to produce an existing good. 

I understand you as saying that as productivity increases in a field and market demand becomes saturated, the workers move elsewhere. This shift of labour to new sectors could (and historically did) lead to more overall productivity, but I think this trend may not continue with the current waves of automation. It seems possible that now the areas of the economy where workers move to are those les... (read more)

2Phil
But even if workers move to less productive industries, productivity must still go up, adjusted for inflation. Suppose 5 workers lose their jobs because it takes 5 fewer workers than before to make 10 widgets.  The country is now making the same as before, but with 5 fewer workers.  So productivity is higher than before, if the 5 workers remain unemployed.  (Same output, less labor). If the 5 workers get jobs elsewhere, even if they are almost completely unproductive and make only 1 grommet combined, the country is still more productive than before -- more output (1 extra grommet), same labor. If productivity is output/labor, it must always be true, mathematically, that even if the (now) surplus labor is even minimally productive, average productivity rises.   For the case where the workers stay put making widgets and it's just that more widgets get made, that's just a special case where the surplus labor stays in the same industry, and the "proof" is the same as before.
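
A toy numerical version of the same argument, with made-up numbers just to make the output/labor ratio explicit:

```python
# Productivity = output / labor. Hypothetical numbers for the widget/grommet
# story above: 10 workers make 10 widgets; after automation only 5 are needed;
# the displaced 5 then produce just 1 grommet between them. (Assume 1 widget
# and 1 grommet are each worth 1 unit of real output.)

def productivity(output, workers):
    return output / workers

before = productivity(10, 10)             # 1.0
if_unemployed = productivity(10, 5)       # 2.0  (same output, less labor)
if_reemployed = productivity(10 + 1, 10)  # 1.1  (more output, same labor)

print(before, if_unemployed, if_reemployed)  # 1.0 2.0 1.1
```

Any positive value for the grommet keeps the ratio above the original 1.0, which is the point: average productivity rises even when the displaced workers land in a much less productive industry.
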
Answer by Polytopos20

Digital knowledge management tools envisioned in the 1950s and 60s, such as Douglas Engelbart's hyperdocument system, have not been fully implemented (to my knowledge) and certainly not widely embraced. The World Wide Web failed to implement key features from Engelbart's proposal, such as the ability to directly address arbitrary sub-documents or the ability to live-embed a sub-document inside another document.

Similarly, both Engelbart and Ted Nelson emphasized the importance of hyperlinks being two-directional so that the link is browsable from both the... (read more)
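
To illustrate what two-directional links buy you over ordinary web links, here is a minimal toy sketch of a link store that indexes both directions (my own illustration with made-up document names, not Engelbart's or Nelson's actual designs):

```python
from collections import defaultdict

# A toy bidirectional link store: every link is browsable from either end,
# unlike ordinary web hyperlinks, which only the source document records.
class LinkStore:
    def __init__(self):
        self.outgoing = defaultdict(set)
        self.incoming = defaultdict(set)

    def link(self, source, target):
        self.outgoing[source].add(target)
        self.incoming[target].add(source)   # the reverse index the web never kept

    def links_from(self, doc):
        return self.outgoing[doc]

    def links_to(self, doc):                # "what links here?" comes for free
        return self.incoming[doc]

store = LinkStore()
store.link("essay-on-hypertext", "augmenting-human-intellect")
store.link("reading-notes", "augmenting-human-intellect")
print(store.links_to("augmenting-human-intellect"))
# {'essay-on-hypertext', 'reading-notes'}   (set order may vary)
```

On the web this reverse index has to be reconstructed after the fact by search engines and backlink crawlers rather than being part of the link model itself.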

I second this book recommendation. I just finished reading it and it is well written and well argued. Bregman explicitly contrasts Hobbes' pessimistic view of human nature with Rousseau's positive view. According to the most recent evidence Rousseau was correct.

His evolutionary argument is that social learning was the overwhelming fitness-inducing ability that drove human evolution. As a result, we evolved for friendliness and cooperation as a byproduct of selection for social learning.

I don't know enough math to understand your response. However, from the bits I can understand, it seems to leave open the epistemic issue of needing an account of demonstrative knowledge that is not dependent on Bayesian probability.

Interesting. This might be somewhat off topic, but I'm curious how such a Bayesian analysis of mathematical knowledge would explain the fact that it is provable that any number of randomly selected real numbers are non-computable with probability 1, yet this is not equivalent to a proof that all real numbers are non-computable. The real numbers 1, 1.4, the square root of 2, pi, etc. are all computable numbers, although the probability of such numbers occurring in an empirical sample of the domain is zero.
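
For concreteness, the standard argument behind that "probability 1" fact is just a measure-zero argument. There are only countably many computable reals, since each is specified by some finite program and there are only countably many finite programs. Any countable set $C = \{x_1, x_2, x_3, \dots\}$ has Lebesgue measure zero: cover each $x_n$ by an interval of length $\varepsilon / 2^n$, so that

$$\mu(C) \;\le\; \sum_{n=1}^{\infty} \frac{\varepsilon}{2^n} \;=\; \varepsilon \quad \text{for every } \varepsilon > 0.$$

So under any continuous distribution, $P(X \text{ is computable}) = 0$, even though computable reals like $1$, $\sqrt{2}$, and $\pi$ certainly exist. "Probability zero" is not "impossible", which is exactly the gap I'm asking how a Bayesian account of mathematical knowledge handles.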

2MrMind
So far, Bayesian probability has been extended to infinite sets only as a limit of continuous transfinite functions. So I'm not quite sure of the official answer to that question. On the other hand, what I know is that even common measure theory cannot talk about the probability of a singleton if the support is continuous: no sigma-algebra on 2^ℵ₀ supports the atomic elements. And if you're willing to bite the bullet and define such an algebra through the use of a measurable cardinal, you end up with an ultrafilter that allows you to define infinitesimal quantities.

I was excited by the initial direction of the article, but somewhat disappointed with how it unfolded.

In terms of Leibniz's hope for a universal formal language, we may be closer to that. The new book Modal Homotopy Type Theory (2020, by David Corfield) argues that much of the disappointment with formal languages among philosophers and linguists stems from the fact that through the 20th century most attempts to formalize natural language did so with first-order predicate logic or other logics that lacked dependent types. Yet, dependent types are natura... (read more)

I disagree with the idea that one doesn't have intuitions about generalization if one hasn't studied mathematics. One thing that I find so interesting about CT is that it is so general it applies as much to everyday common sense concepts as it does to mathematical ones. David Spivak's ontology logs are a great illustration of this.

I do agree that there isn't a really good beginner's book that covers category theory in a general way. But there are some amazing YouTube lectures. I got started on CT with this series, Category Theory for Be... (read more)

It seems odd to equate rationality with probabilistic reasoning. Philosophers have always distinguished between demonstrative (i.e., mathematical) reasoning and probabilistic (i.e., empirical) reasoning. To say that rationality is constituted only by the latter form of reasoning is very odd, especially considering that it is only through demonstrative knowledge that we can even formulate such things as Bayesian mathematics.

Category theory is a meta-theory of demonstrative knowledge. It helps us understand how concepts relate to each other in a rigorous way. ... (read more)

2MrMind
Under the paradigm of probability as extended logic, it is wrong to distinguish between empirical and demonstrative reasoning, since classical logic is just the limit of Bayesian probability with probabilities 0 and 1. Besides that, category theory was born more than 70 years ago! Sure, very young compared to other disciplines, but not *so* young. Also, the work of Lawvere (the first to connect categories and logic) began in the 70's, so it dates at least forty years back. That said, I'm not saying that category theory cannot in principle be used to reason about reasoning (the effective topos is a wonderful piece of machinery), it just cannot say that much right now about Bayesian reasoning
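
One standard way to make the "limit of Bayesian probability" point precise (a textbook observation, not anything specific to this discussion) is that modus ponens drops out of the law of total probability when the relevant probabilities are pushed to the extremes:

$$P(B) \;=\; P(B \mid A)\,P(A) + P(B \mid \neg A)\,P(\neg A) \;\ge\; P(B \mid A)\,P(A).$$

If $P(A) = 1$ and $P(B \mid A) = 1$, then $P(B) = 1$: certain premises plus a certain conditional yield a certain conclusion, i.e., classical inference recovered at the 0/1 endpoints.
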
Answer by Polytopos10

I am not a mathematician but I've been studying category theory for about a year now. From what I've learned so far, it seems that its main benefit within pure mathematics is that it gives a way of translating between different domains of mathematical discourse. On the face of it, even if you've provided a common set-theoretic foundation for all areas of math, it isn't obvious how higher level constructions in, say, geometry, can be translated into the language of algebra or topology, or vice versa. So category theory was invented t... (read more)

In his book Category Theory for the Sciences, David Spivak offers an account of categories as database schemas with path equivalences that is similar to the account you've given here. He still presents the traditional definitions, giving examples mainly from the category of sets and functions. I also didn't find his presentation of database schema definitions especially easy to understand, but it is very useful when you realize that a functor is a systematic migration of data between schemas.
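
As a toy illustration of "a functor is a data migration" (my own made-up schemas and data, not Spivak's notation): think of a schema as a graph of types and arrows, an instance as tables filling it in, and a mapping between schemas as a recipe for pulling data back along it.

```python
# Source schema S:  Emp --worksIn--> Dept --locatedIn--> City
# An instance of S assigns a set to each node and a function to each arrow.
S_instance = {
    "Emp":  {"alice", "bob", "carol"},
    "Dept": {"sales", "eng"},
    "City": {"nyc", "austin"},
    "worksIn":   {"alice": "sales", "bob": "eng", "carol": "eng"},
    "locatedIn": {"sales": "nyc", "eng": "austin"},
}

# Target schema T:  Person --basedIn--> Place
# Schema mapping F: Person |-> Emp, Place |-> City,
#                   basedIn |-> the composite path worksIn ; locatedIn.
def migrate(inst):
    """Pull an S-instance back along F to get a T-instance."""
    return {
        "Person": inst["Emp"],
        "Place":  inst["City"],
        "basedIn": {e: inst["locatedIn"][inst["worksIn"][e]]
                    for e in inst["Emp"]},
    }

print(migrate(S_instance)["basedIn"])
# {'alice': 'nyc', 'bob': 'austin', 'carol': 'austin'}   (dict order may vary)
```

The path-equivalence constraints in Spivak's setup are what keep different composite paths that are declared equal consistent across any instance.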

Thanks for your comment. My replies are below.


"so Gisin's musings... are guaranteed to be not a step in any progress of the understanding of physics."

What is your epistemic justification for asserting such a guarantee of failure? Of course, any new speculative idea in theoretical physics is far from likely to be adopted as part of the core theory, but you are making a much stronger claim by saying that it will not even be "a step in any progress of the understanding of physics". Even ideas that are eventually rejected as false, ar... (read more)

4Pattern
I think this is better explained as: We try to do math, but we can make mistakes.* If two people evaluate an arithmetic expression the same way, but one makes a mistake, then they might get different answers. *Other examples: 1. You can try to create a mathematical proof. But if you make a mistake, it might be wrong (even if the premises are right). 2. An incorrect proof, a typo, or something on your computer screen? A proof might have a mistake in it and thus "be invalid". But it could also have a typo, which if corrected yields a "valid proof". Or, the proof might not have a mistake in it - you could have misread it, and what it says is different from what you saw. (Someone can also summarize a proof badly.) If the copy of the proof you have is different from the original, errors (or changes) could have been introduced along the way.
2Shmi
Let me reply to the last one first :) The Einstein equation was singled out in the Quanta magazine article. I respect the author, she wrote a lot of good articles for Quanta, but this was quite misleading. I don't understand your second-to-last point. Are you talking about a mathematical algorithm or about a physical measurement? "no matter how many digits we observe following the law-like pattern, the future digits may still deviate from that pattern" -- what pattern? No, we don't. And yes, they are. We start with some innate abilities of the brain, add the culture we are brought up in, then develop models of empirical observations, whatever they are. 1+1=2 is an abstraction of various empirical observations, be it counting sheep or mathematical proofs. Logic and math co-develop with increasingly complex models and increasingly non-trivial observations; there is no "we need logic and math to evaluate evidence". If you look through the history of science, math was being developed alongside physics, as one of the tools. In that sense the Noether theorem, for example, is akin to, say, a new kind of telescope. Because they are of the type that is "not even wrong". The standard math works just fine for both GR and QM; the two main issues are conceptual, not mathematical: How does the (nonlinear) projection postulate emerge from the linear evolution (and no, MWI is not a useful "answer", it has zero predictive power), and how do QM and GR mesh at the mesoscopic scale (i.e. what are the gravitational effects of a spatially separated entangled state?).

I agree that the term mindfulness can be vague and that it is a recent construction of Western culture. However, that doesn't mean it lacks any content or that we can't make accurate generalizations about it.

To be precise, when I say "mindfulness meditation" I have in mind a family of meditation techniques adapted from Theravada and Zen Buddhism for secular Western audiences originally by Jon Kabat-Zinn. These techniques attempt to train the mind to adopt a focused, non-judgemental, observational stance. Such a stance is very useful for ... (read more)

I agree about mindfulness meditation. It is presented as a one-size-fits-all solution, but actually mindfulness meditation is just a knob that emphasizes certain neural pathways at the expense of others. In general, as you say, I've found that mindfulness de-emphasizes agential and narrative modes of understanding. Tulpa work, spirit summoning, shamanism, etc. all move the brain in the opposite direction, activating strongly the narrative/agential/relational faculties. I experienced a traumatic dissociative state after too much vipassana meditation on retreat, and I found that working with imaginal entities really helped bring my system back into balance.

2Gordon Seidoh Worley
"Mindfulness meditation" is a rather vague category anyway, with different teachers teaching different things as if it were all the same thing. This might sometimes be true, but I think of mindfulness meditation as an artificial category recently made up that doesn't neatly, as used by the people who teach it, divide the space of meditation techniques, even if a particular teacher does use it in a precise way that does divide the space in a natural way. None of this is to say you shouldn't avoid it if you think it doesn't work for you. Meditation is definitely potentially dangerous, and particular techniques can be more dangerous than others to particular individuals depending on what else is going on in their lives, so I think this is a useful intuition to have that some meditation technique is not a one-size-fits-all solution that will work for everyone, especially those who have not already done a lot of work and experienced a significant amount of what we might call, for lack of a better term, awakening.

I have often thought that the greatest problem with the tulpa discourse is the tendency there to insist on the tulpa's sharp boundaries and literal agenthood. I find it's much more helpful to think of such things in terms of a broader class of imaginal entities which are semi-agential and which often have fuzzy boundaries. The concept of a "spirit" in Western magick is a lot more flexible and in many ways more helpful. Of course, this can be taken in an overly literal or implausibly supernaturalistic direction, but if we guard against such interpretations,

... (read more)
4Ann
My (initial) tulpa strongly agrees with this assessment of the problem with tulpa discourse; he made a point to push back on parts of the narrative about as soon as he started acquiring any, because 'taking it too seriously' seemed like the greatest risk of this meditation for me simply because it was implied in the instruction set. He was in a better position to provide reassurance that I didn't have to once we were actually experiencing some independence. In other cases of mind-affecting substances and practices like antidepressants and (other forms of) meditation, I've been willing to try it and taper off if I don't like what it seems to be doing to me/my brain. Now in the case of tulpamancy, I generally like what it does to my brain; it practices skills I might have a relative disadvantage in, or benefit from in my work and other hobbies, and empowers me to practice compassion for myself in a way I wasn't previously able to. (In contrast to the poster previously, I have reason to suspect my cognitive empathy is/was lacking in something even for myself.) However, it makes sense to approach it with the same caution as trying a new meditation, drug, or therapy in general - it really is a form of meditation, some of these can have severe downsides for part of the population they could also potentially benefit, and you should feel comfortable winding down the focus on it if you want to or have other priorities. For one contrast, I don't like mindfulness meditation, it pulls me towards sensory overload - I already have too much 'awareness'. Maybe for someone less autistic, mindfulness meditation is the way to go to strengthen a skill they'd benefit from having more of, and modeling other agents is redundant. If having dialogues with yourself is the goal, there are other approaches that might work better for a particular person. I'd say 'know yourself', but I know how tricky that is, so instead I'll say, pay attention to what works for you.
Answer by Polytopos10

Thanks to the comments and discussion, I was motivated to do more research into my own question. What I've found is that there have been some attempts to use semantic technologies for personal knowledge management (PKM).

I have not found evidence one way or the other as to whether these tools have been helpful for knowledge discovery, but they seem promising.

The main tool that would be accessible to the average user is Semantic MediaWiki, an extension to Wikipedia's popular MediaWiki software that adds KR functionality based on semantic ... (read more)

Interesting, can you give some examples to illustrate how causal/Bayes nets are used to aid reasoning / discovery?

I see merit in the idea that semantic networks may focus too much on the structure of language, and not enough on the structure of the underlying domain being modelled. As active thinkers, we are looking to build an understanding of the domain, not an understanding of how we talked about that domain.

Attention to issues of language use, such as avoiding ambiguity, could sometimes be useful, especially in more abstract argumentation, but more important is being able to track all of the relationships among the domain-specific entities and to organize lines of evidence.

Good hypothesis, here is why I don't think it's likely to be true.

It seems to me that when humans make explicit arguments with written language, we are doing a natural language form of knowledge representation. In science and philosophy the process of making conceptual models explicit is very useful for theory formulation and evaluation. That is, in conceptual domains human thinkers don't learn like today's neural nets; we don't just immerse ourselves in a sea of raw numbers and absorb the correlations. We might do something like th... (read more)
