CEO & Founder at White Rabbit Express and Blackship.com
> ...provides no more benefit than any other religious program would provide
What makes you call meditation a religious program?
Where is the religion in this practice:
===
Clearly you have some interest since you're here reading and responding to a rather long series of articles on meditation. But it seems you may also harbor a lot of misunderstandings. What's meditation in your mind? And why are you convinced it's "a waste of time" when hundreds of millions of people are doing it?
Out of curiosity, is there anything that might change your mind? Scientific papers? Meta-analysis studies? Perhaps testimonials of people's positive experience? Hundreds of testimonials? Thousands? Tens of thousands? Or is your decision based on some kind of dogma, moral principle, or fear?
I'm not particularly trying to change your mind, I'm just wondering how someone here on a "rationalist"-themed site ended up so blinded by their biases.
I wrote up something on a meditation technique I used as a freediver.
https://www.lesswrong.com/posts/ieMQHkLuYXND8Yohn/meditation-skill-surfing-the-urge
Maybe it'll give you a new perspective; if not, I'd be happy to understand what makes this a "religious program."
Many people are in fact choosing to not have sex with humans, instead simulating interaction with a human while self-stimulating. If your criticism here is based on an assumption that such choice is somehow invalid or worse, it would be great if you could support that.
I thought that was probably not a choice for most people. Perhaps it's a result of society getting so obese that people no longer find each other attractive? For me, it's like the difference between riding (preferably racing) a motorcycle vs. playing a motorcycle video game. I can't imagine why anyone who has experienced the former would prefer the latter.
The OP conceded my points were valid btw, but thanks for weighing in with your profound personal insights!
> I believe this is mostly a waste of time.
oh well!
> there are a lot of people on Less Wrong in particular who are - for good reason - skeptical about whether or not there is actually anything worthwhile going on in this space.
If the goal is, in part, to get more people to try meditation, you could also 1) cite the scientific literature on the benefits, 2) maybe encourage them to try it (even if only for 10 minutes a day, though for at least 6-8 weeks in my opinion), 3) compile personal testimonials about the benefits (and perhaps your own story).
A lot has been written about the basic idea. I imagine the type of people who are most interested in a more academic "model" are probably the type who would be more inclined to debate the "ontological problems" with your "pedagogical assumptions" and your "lexical fallacies", blah blah, lol. You know the types. Arguments, I've found, rarely shift intuitions.
I think some simple metaphors are probably even more effective than a complex model of the mind that people are going to have many reasons to disagree with. This 60-second video gets the point across without so many fancy words:
https://www.youtube.com/watch?v=qxyVCjp48S4
My multiagent model of mind
> I have been calling my interpretation of those models a “multiagent model of mind”.
No credit to Marvin Minsky for your model? He pioneered the multi-agent model in his 1986 book "Society of Mind."
http://aurellem.org/society-of-mind/som-1.html
> The global workspace can only hold a single piece of information at a time. At any given time, multiple different subsystems are trying to send information into the workspace, or otherwise modify its contents.
> The exact process by which this happens is not completely understood,
That sounds lifted wholly from Dennett's work. The similarities are striking:
According to the Multiple Drafts model, perception is accomplished in the brain by parallel, multi-track processes of interpretation and elaboration of sensory inputs. These content discriminations produce something like a narrative stream. Probing this stream at different places and times produces different effects and precipitates different narratives. There are many small agents screaming for attention. What we experience is a product of many processes of interpretation.
Frustratingly, Dennett has very little to say about how these content discriminations work and it is unclear what governs the modules.
Basically you've constructed a dumbed-down version with a Cartesian Theater. One of Dennett's aims is to get rid of this notion of a centralized place of processing in the brain in order to escape Cartesian materialism. For him, there is no single brain area in which it all comes together. With this decentralized notion of consciousness, there is no need for a Theater and no need for a homunculus to live inside our brains. Dennett's Multiple Drafts model of consciousness must first be understood as an alternative to Cartesian materialism.
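To make the structural difference concrete, here's a toy sketch (purely illustrative; the class and subsystem names are made up and come from neither Dennett's text nor the post): a single-slot workspace that one subsystem "wins", versus parallel drafts that are only sampled when you probe them.

```python
import random

# --- a single-slot "global workspace": one winner at a time ---
class Workspace:
    """Whichever subsystem bids highest overwrites the single slot."""
    def __init__(self):
        self.content = None

    def compete(self, bids):
        # bids: list of (salience, subsystem, message) tuples
        salience, subsystem, message = max(bids)
        self.content = (subsystem, message)
        return self.content

# --- a "multiple drafts" picture: parallel interpretations, no single slot ---
class Drafts:
    """All partial interpretations coexist; a probe just samples one of them."""
    def __init__(self):
        self.drafts = []

    def add(self, subsystem, message):
        self.drafts.append((subsystem, message))

    def probe(self):
        # what gets "reported" depends on when and where you probe
        return random.choice(self.drafts) if self.drafts else None

bids = [(0.9, "threat-detector", "loud noise!"),
        (0.4, "hunger", "lunch soon"),
        (0.7, "planner", "finish the report")]

ws = Workspace()
print("workspace holds:", ws.compete(bids))  # exactly one winner

dr = Drafts()
for salience, subsystem, message in bids:
    dr.add(subsystem, message)
print("a probe returns:", dr.probe())        # no single privileged content
```

The point is only structural: in the first picture something always has to end up in the one slot, which is exactly the theater-like move Dennett is trying to eliminate; in the second there is no privileged slot to fill.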
So far it seems highly problematic and appallingly inauthentic.
I recommend Minsky's "The Emotion Machine." He offers a much more compelling notion of how things like recent memories, serial processes, symbolic descriptions, and self-models conspire to create an illusion of immanence.
But nothing beats the Monkey Mind analogy in terms of bang for your buck!
Thanks! Was there any requirement that it needed to be a physical set? I assumed the AI would probably be interested in a digital environment.
The set could have a bunch of "cards" to start; or maybe the whole thing is open-sourced if you're philosophically opposed to the idea of people making their own decisions about trading money for things they find valuable. But those issues seem rather secondary to the spirit of the challenge here.
I'm not sure exactly what you disagree about, but thanks for the comparisons.
Here's a nice comparison on Quora from someone "Practicing Yoga & Meditation since 2001"
Zen is a school of "sudden enlightenment". You "just sit" on the cushion for a million years and with sheer mind force destroy your ego, and then you suddenly "get" it. Or (in the Rinzai school) you are given an absurd puzzle called a Koan to solve. It throws your ego off its normal course so that you reach Satori. Hence all the strange and crazy stories of Zen masters.
Vipassana is a school of "gradual enlightenment". First you learn how to focus on a single object or awareness for an extended period of time. Then with non-judgmental awareness you observe. With long enough practice your mental obstructions, or "fetters" as Buddhism calls them, are broken - one by one. When all ten fetters are broken, you have reached.
I have tried Zen in a monastery setting and quickly found that it's not my cup of tea. People's temperaments are different. Some people may find Zen to be more appealing than the traditional Vipassana.
I would suggest you try out both and see which works for you. It's one thing to intellectually understand meditation and a totally different thing to sit on a cushion for 8 hours and watch your breath hit the tip of your nostrils.
The datasets it was trained on include Wikipedia (English), Common Crawl (basically a subset of the Internet), and GitHub, among others.
A team of researchers from OpenAI recently published a paper describing GPT-3, a deep-learning model for natural-language processing with 175 billion parameters, 100x more than the previous version, GPT-2. The model is pre-trained on nearly half a trillion words and achieves state-of-the-art performance on several NLP benchmarks without fine-tuning.
In a paper published on arXiv, a team of over 30 co-authors described the model and several experiments. The researchers' goal was to produce an NLP system that performs well on a variety of tasks with little or no fine-tuning, and previous work had indicated that larger models might be the solution. To test that hypothesis, the team increased the size of their previous model, GPT-2, from 1.5 billion parameters to 175 billion. For training, the team collected several datasets, including the Common Crawl dataset and the English-language Wikipedia. The model was evaluated against several NLP benchmarks, matching state-of-the-art performance on "closed-book" question-answering tasks and setting a new record for the LAMBADA language modeling task.
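If you want a feel for what "little or no fine-tuning" means in practice, here's a minimal sketch of the few-shot prompting format the paper evaluates, in the spirit of the translation examples in the paper. GPT-3 itself is only reachable through OpenAI's API, so this sketch uses the much smaller, publicly available GPT-2 from Hugging Face as a stand-in; the prompt format is the part that carries over, and GPT-2 will complete it far less reliably.

```python
# Sketch only: GPT-3 is API-only, so the small public GPT-2 stands in here.
# The idea being illustrated is in-context ("few-shot") learning: the task is
# specified entirely in the prompt, with no gradient updates or fine-tuning.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "peppermint => menthe poivrée\n"
    "cheese =>"
)

result = generator(prompt, max_new_tokens=5, do_sample=False)
print(result[0]["generated_text"])
```

The paper's claim is that at 175 billion parameters this kind of in-context pattern completion starts to rival fine-tuned systems on many benchmarks; a model the size of GPT-2 mostly just hints at the format.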