All of haig's Comments + Replies

haig20

From what I've read of the source writings within the contemplative traditions, modern neuroscience studies and theories on meditation, and my own experiences and thoughts on the subject as well, I've come to view the practice of meditation as serving 3 different but interconnected purposes: 1.) ego loss, 2.) cultivation of compassion, and 3.) experience of non-dual reality.

Ego loss means inhibiting or eliminating the internal self-critic by changing the way you perceive the target of that critic, namely the concept of a stable 'self' that you identify w... (read more)

haig00

Wanted to add the insights of Neil Gershenfeld which I think is how we should frame these problems:

We've already had a digital revolution; we don't need to keep having it. The next big thing in computers will be literally outside the box, as we bring the programmability of the digital world to the rest of the world.

He was talking about personal fabrication in this context, but the 'digitization' of the physical world is applicable to the sustainability goals I mentioned. Using operations research, loosely-coupled distributed architectures, nature-inspired routing algorithms, and other tricks of the IT trade applied to natural resources, we can finally transition to a sustainable world.

haig20

Surprised no one has mentioned anything involving sustainable/clean tech (energy, food, water, materials). This site does stress existential threats, and given that many (most?) societal collapses in the past were precipitated, at least partly, by resource collapse, I'd want to concentrate much of the startup activity around trying to disrupt our short-term wasteful systems. Large pushes to innovate and disrupt the big four (energy, food, water, materials) would do more than anything I can think of to improve the condition of our world and min... (read more)

haig00

Ok, so we can with confidence say that humans and other organisms with developed neural systems experience the world subjectively, maybe not exactly in similar ways, but conscious experience seems likely for these systems unless you are a radical skeptic or solipsist. Based on our current physical and mathematical laws, we can reductively analyse these systems and see how each subsystem functions, and, eventually, with sufficient technology we'll be able to have a map of the neural correlates that are active in certain environments and which produce certa... (read more)

1David_Gerard
It's not yet clear to me that we're talking about anything that's anything. I suppose I'm asking for something that does make that a bit clearer.
haig20

To summarize (mostly for my sake so I know I haven't misunderstood the OP):

  • 1.) Subjective conscious experience or qualia play a non-negligible role in how we behave and how we form our beliefs, especially of the mushy (technical term) variety that ethical reasoning is so bound up in.
  • 2.) The current popular computational flavor of philosophy of mind has inadequately addressed qualia in your eyes because the universality of the extended Church-Turing thesis, though satisfactorily covering the mechanistic descriptions of matter in a way that provides for
... (read more)
haig10

We have evolved moral intuitions such as empathy and compassion that underlie what we consider to be right or wrong. These intuitions only work because we consciously internalize another agent's subjective experience and identify with it. In other words, without the various qualia that we experience we would have no foundation to act ethically. An unconscious AI that does not experience these qualia could, in theory, act the way we think it should act by mimicking behaviors from a repertoire of rules (and ways to create further rules) that we give it, but that is a very brittle and complicated route, and is the route the SIAI has been taking because they have discounted qualia, which is what this post is really all about.

haig40

"How an algorithm feels from inside" discusses a particular quale, that of the intuitive feeling of holding a correct answer from inside the cognizing agent. It does not touch upon what types of physically realizable systems can have qualia.

1David_Gerard
Um, OK. What types of physically realizable systems can have qualia? Evidently I'm unclear on the concept.
haig00

"If everything real is made of physics, you still must either explain how certain patterns of neuronal excitations are actually green, or you must assert that nothing is actually green at any level of reality."

This is a 'why' question, not a 'how' question, and though some 'why' questions may not be amenable to deeper explanations, 'how' questions are always solvable by science. Explaining how neuronal patterns generate systems with subjective experiences of green is a straightforward, though complex, scientific problem. One day we may unders... (read more)

haig00

In my opinion, the most relevant article was from Drew McDermott, and I'm surprised that such an emphasis on analyzing the computational complexity of approaches to 'friendliness' and self-improving AI has not been more common. For that matter, I think computational complexity has more to tell us about cognition, intelligence, and friendliness in general, not just in the special case of self-improving optimization/learning algorithms, and could completely modify the foundational assumptions underlying ideas about intelligence/cognition and the singulari... (read more)

haig00

I think 'extra properties outside of physics' conveys a stronger notion than what this view actually tries to explain. Property dualism, such as emergent materialism or epiphenomenalism, doesn't really posit any extra properties other than the standard physical ones; it is just that when those physical properties are arranged and interact in a certain way, they manifest what we experience as subjective experience and qualia, and those phenomena aren't further reducible in an explanatory sense, even though they are reducible in the standard sense of ... (read more)

haig00

Reading his essay here: http://edge.org/conversation/the-argumentative-theory it appears that he does indeed come off as pessimistic with regard to raising the sanity line for individuals (ie teaching individuals to reason better and become more rational on their own). However, he does also offer a way forward by emphasizing group reasoning such as what the entire enterprise of science (peer review, etc.) encourages and is structured for. I suspect he thinks that even though most people might be able to understand that their reasoning is flawed and that... (read more)

haig30

An alternative to making things fun is to make things unconscious and/or automatic. No healthy individual complains about insulin production because their pancreas does it for them unconsciously, but diabetic patients must actively intervene with unpleasant, routine injections. One option would be to make the injections less unpleasant (make the process fun and/or less painful), but a better option would be to bring them in line with non-diabetic people and make the process unconscious and automatic again.

haig30

The location in space was fine; the location in time, however, was problematic. Friday afternoon, especially in that area, has probably the most congested traffic anywhere on earth. By the time I finally got there I was so frustrated that I ended up parking in a structure that cost me $16 for two hours. Maybe the next meetup can happen at a later time (after 6pm) on a weekday other than Friday.

Also, a little more structure would have been nice in order to massage the strained conversations into a more productive path. For the next meetup it might be interesting to ask prospective attendees to suggest a list of topics of discussion which we could vote on.

Other than that, nice meeting you all!

0tonsure
next time try the highway and zip up the 405
haig90

In my experience, the inability to be satisfied with a materialistic world-view comes down to simple ego preservation, meaning, fear of death and the annihilation of our selves. The idea that everything we are and have ever known will be wiped out without a trace is literally inconceivable to many. The one common factor in all religions or spiritual ideologies is some sort of preservation of 'soul', whether it be a fully platonic heaven like the Christian belief, a more material resurrection like the Jewish idea, or more abstract ideas found in Eastern a... (read more)

haig60

I may be overlooking something, but I'd certainly consider Robin's estimate of 1-2 week doublings a FOOM. Is that really a big difference compared with Eliezer's estimates? Maybe the point in contention is not the time it takes for super-intelligence to surpass human ability, but the local vs. global nature of the singularity event; the local event taking place in some lab, and the global event taking place in a distributed fashion among different corporations, hobbyists, and/or governments through market mediated participation. Even this difference isn... (read more)

6wedrifid
I think Eliezer estimates 1-2 weeks until game over: an intelligence that has undeniable, unassailable dominance over the planet. This makes economic measures of output almost meaningless. I think you're right on the mark with this one. My thinking diverges from yours here. The global scenario gives a fundamentally different outcome than a local event. If participation is market mediated then the influence is determined by typical competitive forces. Whereas a local foom gives a singularity and full control to whatever effective utility function is embedded in the machine, as opposed to a rapid degeneration into a hardscrapple hell. More directly, in the local scenario that Eliezer predicts, outside contributions stop once 'foom' starts. Nobody else's help is needed. Except, of course, as cat's paws while bootstrapping.
haig10

There is a web-based tool being worked on at MIT's collective intelligence lab. Couldn't find the direct link to the project, but here's a video overview: Deliberatorium

haig260

Is your pursuit of a theory of FAI similar to, say, Hutter's AIXI, which is intractable in practice but offers an interesting intuition pump for the implementers of AGI systems? Or do you intend on arriving at the actual blueprints for constructing such systems? I'm still not 100% certain of your goals at SIAI.

1timtyler
See Eli on video, 50 seconds in: http://www.youtube.com/watch?v=0A9pGhwQbS0
haig30

"People are not base animals, but people, about 90% animal and 10% something new and different. Religion can be looked on as an act of rebellion by the 90% animal against the 10% new and different (most often within the same person)."

wedrifid120

That way of looking at it is attractive, but I don't think it is accurate. Most of religion is the outcome of that extra 10% and definitely part of what we identify as 'person'. Rejecting religion and other equivalent institutions is an act of rebellion of 2% against the other 8%.

haig10

The wikipedia article for Abductive Reasoning claims this sort of privileging the hypothesis can be seen as an instance of affirming the consequent.
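The claim can be made concrete with a brute-force truth-table check (a minimal sketch, not from the original comment): affirming the consequent infers P from "P implies Q" and Q, and enumeration finds the assignment where both premises hold but the conclusion fails.

```python
# Affirming the consequent: from (P -> Q) and Q, conclude P.
# Enumerate all truth assignments and collect the counterexamples,
# i.e. cases where both premises are true but the conclusion P is false.
def implies(p, q):
    return (not p) or q

counterexamples = [(p, q)
                   for p in (True, False)
                   for q in (True, False)
                   if implies(p, q) and q and not p]
print(counterexamples)  # [(False, True)]
```

The single counterexample (P false, Q true) is exactly the situation abduction lives in: the hypothesis P is merely one possible explanation of the observed Q, not a deductive consequence.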

haig30

Aside from learning as a way to acquire useful skills, there are certain things I learn in order to change the way I think. Echoing similar comments, programming seems to have altered my perspective as a kid and continues to do so. One example is learning Lisp. It's become popular to learn Lisp not because it is practically useful in day-to-day coding (though it can be), but because it changes the way you think about how to program.

Similarly, studying abstract algebra might be a waste of my time (though I'll understand Lie groups and hence theoretica... (read more)

haig50

"It is useless to attempt to reason a man out of a thing he was never reasoned into." (Jonathan Swift )

haig10

I agree, and admit laziness on my part for hoping someone else would insightfully reflect on my problem instead of my offering at least a minimum of a solution to start things off. Ironically, I can't seem to make time to analyze how I can make more time!

haig00

What you describe is what Tim Gallwey calls the 'inner game'. It is, to simplify a bit, training your intuitive subconscious without letting your conscious awareness interfere. Here is a video of him coaching a woman who has never touched a tennis racket to serve using the technique.

Another similar technique is drawing on the right side of the brain.

haig50

The post was supposed to be in the spirit of many of the self-improvement posts regarding akrasia, rationality, etc. It seemed logical that managing your information is an important component alongside the rest of the mental hygiene practices discussed here. If I was mistaken, I apologize.

0[anonymous]
I think the original question is valid as such, but there are tons of valid questions that could be asked in a similar manner. What I think the article lacks is some insight, or just some effort in trying to understand the problem more deeply. Insights don't have to be ground-breaking, but I think articles around here should provide some value to the reader. Now it seems more like a "hey guys, what do you think of free will?" type of query. I suspect that if you spent some time and effort trying to pin-point the exact problem, or perhaps to generalize the problem (or whatever), it might lead you to interesting insights. Let's say through this process you come up with a heuristic or principle for this problem. If the article provided that, it might have some value to the reader and, by virtue of being more specific, it could also spark up interesting discussion. Now it just seems like way too open-ended a question. Not that it cannot be answered, but it doesn't inspire commentary. (As an example, perhaps you could have expanded on the opening metaphor. I don't know if it would have led anywhere interesting, but one never knows.)
3gjm
There's nothing wrong with the topic. Whether it turns out to be a good LW post probably depends on whether anyone contributes any substantially non-obvious advice.
haig00

You might have misunderstood me. I did not limit akrasia to only things we enjoy. I said actually getting going on the task, whether inherently enjoyable or not, is what 'feels wonderful'. I hate going to the dentist, but actually engaging in the process of going to the office and getting it over with feels pretty good as an accomplishment.

And forming the habit of not procrastinating is a very big part of it, IMO. To stop putting things off and automatically jump into a task is a positive habit that does a great deal against akrasia. Why do you think juvenile delinquents get sent off to boot camp or some other long period of regimented experience? To form those habits which will mold their character accordingly.

7gwern
Would it be evidence against your theory that the benefits of boot camp are not clear for juvenile delinquents? http://en.wikipedia.org/wiki/Boot_camp_%28correctional%29#Criticisms
haig50

The Cynic's Theory may in fact describe a true state of mind, but it is not describing akrasia. The Cynic's Theory might better describe those minds whose preferences have been implanted by exterior influences and conflict with their internal, consciously hidden preferences. An example may be someone who always thought they wanted to be a doctor but deep down knew they wanted to be an artist.

However, when I think of Akrasia, I don't think of incompatible goals or hidden preferences, I think of compatible goals but an inability to consciously exert control of y... (read more)

7gwern
OK, so your major piece of evidence arguing against the 'conflicting-minds' paradigm is that once we conquer some akrasia and get started, we 'feel wonderful'? I don't think that works. Akrasia is about things we do enjoy, and also about things we don't enjoy. I have akrasia about going to my Taekwondo classes, even though I know perfectly well that I'll enjoy them once I'm there. But I also have akrasia about things I don't enjoy doing (like working through homework problems) - and this latter case is by far the majority of akrasia instances. The former is easily explained by different time-preferences - one part of me prefers the here and now, while another part recognizes that stopping whatever I'm doing, getting ready, and going to class will lead to a more enjoyable and healthy hour than my current activity. And the latter is easily explained the same way by multiple factions as well, as simply one faction valuing the abstract utility or long-term consequences over avoiding the short-term disutility. Forming/eliminating habits has nothing to do with it, except as a tactic to support one side over the other. ('I don't want to go to Taekwondo!' 'But this is what we usually do at this time, and someone's waiting - come along already.') And this insight - that there are multiple factions - is the contribution of the naive/cynical theory. Once we know that, we can figure out how to exploit the stupidity or greed of the disfavored faction.
haig30

Yes, a big problem is the human tendency to associate strongly with beliefs so that they become a part of your identity. When I once got into an argument with a particularly stubborn friend regarding religion, I tried to depersonalize the arguments as much as possible by writing them down and having an impartial 3rd party check for inconsistencies and biases blindly in a type of scoring system. How'd it turn out? He gave up, all right, but still retained his beliefs!

0lavalamp
This is true. Another thing that makes such arguments difficult/pointless is that the majority of people seem to give rationalizations for what they believe instead of giving the reasons for which they actually obtained those beliefs. This is understandable, as people often don't know (or won't admit to) the way they obtained their beliefs. If one doesn't know/won't admit/won't say why they really think what they do, there's no possible way to present a counter-argument against it. I think there's a post saying pretty much this around here somewhere.
haig20

The important thing to point out is that the information signal we experience as pain is an instructive signal more than just an indicative signal. I mean to say that pain's purpose is to make the organism react against whatever is hurting it, not just become aware of it. Since conscious decision making in humans is delayed at least 500ms (and sometimes up to 10 seconds!), signals such as pain have to be a result of low-level cybernetic reactions in the nervous system and not just a conscious experience after the fact. I'm sure if an intelligent designer... (read more)

haig80

I voted up robin hanson, but I would love either Cory Doctorow or Bruce Sterling because they are both smart scifi authors who are vocally skeptical of something like the singularity happening.

Whoever it is, in my opinion the best discussions would consist of people who share very similar worldviews yet strongly differ on some critical ideas. We don't need to see another religion debate, that is for sure.

haig40

Warren Buffett seems to fit all the criteria of the counterexample Eliezer asked for. And if you doubt the fanaticism of his fandom, just look over some videos of his annual shareholders' meeting/convention.

2dclayh
Agreed: my father owns about five different books on the theme of "how to be like Buffett".
haig00

Shouldn't this be in the domain of psychological research? The positive psychology movement seems to have a large momentum and many young researchers are pursuing a lot of lines of questioning in these areas. If you really want rigorous, empirically verified, general purpose theory, that seems to be the best bet.

haig70

'cleaning my room' is still abstract. If you decompose that into 'pick up clothes off floor, then make my bed, then vacuum the carpet, .....', then those are concrete tasks.

3pwno
You can decompose all those things into smaller steps too. You cannot determine whether a task is concrete or abstract without considering the person's perception of the task. Making the bed or picking up clothes can be an abstract task for some. I consider cleaning my room a pretty concrete task (same with washing the dishes, another task I procrastinate about) so the theory can't explain my procrastination.
haig90

Well, from reading the comments it seems the most popular type of akrasia that hinders this group is procrastination. I'm sure other weaknesses of will are common, but procrastination seems to be an overwhelmingly common nuisance. This paper http://www.uni-konstanz.de/FuF/SozWiss/fg-psy/gollwitzer/PUBLICATIONS/McCreaetal.PsychSci09.pdf might hint at why this is so. The gist is that the more abstract the tasks/projects/goals are, the more you will procrastinate. As the tasks become more concrete, the procrastination is eliminated. An example is the abs... (read more)

1pwno
What about the concrete task of cleaning my room, which I always procrastinate about?
haig00

Good post and important issues: How similar are other human minds to my own? How can I discuss academically what other minds should/should not do/believe if they are so different from my own? It is much like trying to argue over aesthetics of a colored art piece to a person born blind.

It would be constructive to be able to deduce which attributes a person has and which they're lacking, and in what proportions.

haig40

In group #2, where everybody at all levels understand all tactics and strategy, they would all understand the need for a coordinated, galvanized front, and so would figure out a way to designate who takes orders and who does the ordering because that is the rational response. The maximally optimal rational response might be a self-organized system where the lines are blurred between those who do the ordering and those who follow the orders, and may alternate in round-robin fashion or some other protocol. That boils down to a technical problem in operatio... (read more)

haig20

My issue isn't with cryonics, it is with the whole notion of self identity and post-singularity personhood. I guess this ties into EY's 'fun theory' and underscores the importance of a working positive theory of 'fun' as a prerequisite for immortality as we currently define it.

Assume cryonics works, further more assume that your brain is scanned with enough resolution to capture all salient features of what you consider to be your mind. You are now an uploaded entity, and your mind is as malleable as any other piece of software. There are only so many... (read more)

1MrHen
Your view of cryonics seems similar to the concept of reincarnation. Also, some would find personal value in setting up another entity for success. Namely, anyone who has children, but also anyone who argues for the good of the next generation. Essentially, it seems as if you are arguing against giving birth to a new you. While that makes sense in terms of prolonging your own existence, it certainly seems to land on the "evil" side of selfish. Not that that is a problem... just interesting.
1infotropism
So you have one precise reason not to want to live on, and it hinges on quite a few assumptions, right? Adding details to a story makes it less probable. Could you imagine a few other scenarios where things go right instead? So long as you're alive, there's always at least the unexpected at any rate. You can't predict what your future will be, especially post-singularity. Even if you can't imagine how your life could be pleasant or how to make it turn out right now, since you aren't expected to outsmart your future self, why wouldn't it find that solution? I'll list the most obvious thing to me here, if that can help: I don't see how expanding yourself equates with merging with other minds and losing your individuality. If there were no way to get individual minds bigger than ours without a loss of individuality, then consider how individual and complex humans are compared to previous lifeforms upstream the tree of life.
haig10

Let's use one of Polya's 'how to solve it' strategies and see if the inverse helps: Irrationalists should lose. Irrationality is systematized losing.

On another note, rationality can refer to either beliefs or behaviors. Does being a rationalist mean your beliefs are rational, your behaviors are rational, or both? I think behaving rationally, even with high probability priors, is still very hard for us humans in a lot of circumstances. Until we have full control of our subconscious minds and can reprogram our cognitive systems, it is a struggle to wi... (read more)

haig10

I'm in pasadena

haig00

Los Angeles, Ca

1MBlume
hey, just so you know, there's some other southern californians hanging out down-thread
haig150

Kiva.org has the distinct honor of being the only charity that has ensured me maximum utilons for my money with an unexpected bonus of most fuzzies experienced ever. Seeing my money being repaid and knowing that it was possible only because my charity dollars worked, that the recipient of my funds actually put the dollars to effective use enough to thrive and pay back my money, well, goddamn it felt good.

MBlume120

Kiva feels suspiciously well-optimized on three counts -- there's the utilons (which, given that you're incentivizing industry and entrepreneurship, are pretty darn good), the warm fuzzies you mentioned, and the fact that it seems it could also help me overcome some akrasia with regards to savings. If I loan money out of my paycheck to Kiva each month, and reinvest all money repaid, then (assuming a decent repayment rate) the money cycling should tend to increase, meaning that if I need to, say, put a down payment on a house one day, I can take some out,... (read more)
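The "money cycling" intuition above is easy to sanity-check with a toy simulation (a hypothetical sketch, not from the original comment; the contribution amount, repayment rate, and loan term are made-up numbers, and real microloans don't amortize this neatly):

```python
# Toy model of reinvested microloans: contribute a fixed amount each month,
# re-lend everything repaid. Each month 1/term_months of the outstanding
# principal comes due; repayment_rate of that is repaid, the rest defaults.
def loan_pool(monthly_contribution=50.0, repayment_rate=0.97,
              term_months=12, months=60):
    """Return outstanding principal after `months` of cycling."""
    outstanding = 0.0
    for _ in range(months):
        due = outstanding / term_months
        repaid = due * repayment_rate          # the rest of `due` defaults
        outstanding -= due                     # matured principal leaves the pool
        outstanding += repaid + monthly_contribution  # re-lend + new money
    return outstanding
```

Under these assumptions the pool grows roughly by the monthly contribution at first, then levels off toward a steady state of `monthly_contribution * term_months / (1 - repayment_rate)`, which is the sense in which "the money cycling should tend to increase" while defaults put a ceiling on it.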

8Eliezer Yudkowsky
But - if you were optimizing strictly for fuzzies - could you have gotten even more fuzzies by giving less money to one recipient in person and tracking their outcome in person?
haig40

We don't want to create a new religion, but whatever we create to take the place of it needs to offer at least as much as that which it replaces, so we might end up actually needing a new 'religion' whether we like it or not. If indeed there is a biological predisposition for humans to want to engage in 'worship', then we might as well worship rationally. I hesitate to call this new organization a religion or the practice worship, those are the things they are replacing, but those words get my idea across.

How about we create a church-like organization t... (read more)

0Ford
A "church-like organization that has local congregations and meets weekly to listen to talks on rationality, the latest scientific discoveries, lectures on philosophy, the state of the world, etc."? Sounds like a Unitarian fellowship, at least the ones I know. Some may be closer to their Protestant roots, though. Of course, they also have talks on irrationality ("spirituality") and, while atheists and other rationalists are certainly welcome, aggressive promotion of any particular world-view is discouraged.
haig40

I like EY's writings, but don't hold them up as gospel. For instance, I think this guy's summary of Bayes Theorem (http://betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem) is much more readable and succinct than EY's much longer (http://yudkowsky.net/rational/bayes) essay.
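For what both essays are explaining, the arithmetic itself fits in a few lines. A minimal sketch using the standard mammography numbers (1% base rate, 80% sensitivity, 9.6% false-positive rate) that these introductions typically work through:

```python
# Bayes' theorem on the stock mammography example:
# P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
p_cancer = 0.01            # prior: 1% of women in the screened group
p_pos_given_cancer = 0.80  # sensitivity: test catches 80% of cancers
p_pos_given_healthy = 0.096  # false-positive rate among the healthy

p_pos = (p_pos_given_cancer * p_cancer
         + p_pos_given_healthy * (1 - p_cancer))
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(round(p_cancer_given_pos, 3))  # prints 0.078
```

The counterintuitive punchline of both essays drops out directly: even after a positive test, the probability of cancer is under 8%, because the false positives from the large healthy population swamp the true positives.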

haig60

There is a recent trend of 'serious games' which use video games to teach and train people in various capacities, including military, health care, management, as well as the traditional schooling. I see no reason why this couldn't be applied to rationality training.

I always liked adventure style games as a kid, such as King's Quest or Myst, and wondered why they aren't around any more. They seemed to be testing rationality in that you would need to guide the character through many interconnected puzzles while figuring out the model of the world and how b... (read more)

2rysade
I just finished playing a side-scrolling game called Closure (http://www.closuregame.com) that has some qualities of Myst, et al. I think that you've got a good idea here, but a problem could arise from the 'death penalty' that most games impose. Typically, you just restart the 'mission.' Games that operate like that don't provide quite enough incentive to pull out your whole intellect. If the player knew ahead of time that a single failure meant permanent loss, they would be more apt to give the game effort enough to have their rationality tested accurately.
1[anonymous]
Good idea. What details would you be able to convey?
3steven0461
Google "interactive fiction".
haig20

Isn't this a description of what a liberal arts education is supposed to provide? The skills of 'how to think' not 'what to think'? I'm not too familiar with the curriculum since I did not attend a liberal arts college, instead I was conned into an overpriced private university, but if anyone has more info please chime in.

4David_Gerard
That's what a liberal arts curriculum was originally intended to teach, yes - it's just a bit out of date. An updated version would be worth working out and popularising.
haig140

I just read a nice blog post at neurowhoa.blogspot.com/2009/03/believer-brains-different-from-non.html, covering research on brain differences of believers vs. non-believers. The take away from the recent study was "religious conviction is associated with reduced neural responsivity to uncertainty and error". I'm hesitant to read too much into this particular study, but if there is something to this then the best way to spread rational thought would be to try to correct for this deficiency. Practicing not to let uncertainty or errors slide by, no matter how small, would result in a positive habit and develop their rationality skills.

1[anonymous]
brilliant
haig30

Some commenters said that in fact theory revision sessions such as brainstorming, etc. were actually pleasant to most rationalists and don't necessarily induce sadness. Indeed, I really enjoy arguing and learning new things, or else I wouldn't continue to do them. However, there is a difference between the loose juggling of ideas that we aren't very attached to and the type of continual self-checking of core beliefs that strict rationalists try to do. In order to operate effectively in the world and achieve goals, we need a solid belief foundation to p... (read more)

5AnnaSalamon
Haig, there's a difference between: (1) Updating your beliefs and action-patterns as new evidence comes in; deciding to earn money by the method that is actually most likely to be effective (according to the best evidence you know) instead of by your last year's guess as to how to most effectively earn money, for example. and (2) Having your beliefs and action-patterns be in a constant or immobilizing state of flux. If you're a good rationalist, and you carefully research topics that matter for your goals, after a while you should in most cases have a fairly stable probability distribution as to what the world is like and how best to achieve your goals. (If you find your model flops first one way, and then another, and then another... your models are overconfident and are based too much on recent data, so you should replace them with a more spread-out probability distribution over how things might be.) This way, you get a stable model you can use to e.g. actually earn more money, and not just go through motions that you at one point thought would earn you more money. I've been listening to high-quality entrepreneurship seminars as I exercise (audiobooks are a great way to get free bonus time), and many of them recommend rationality techniques like making your hypotheses explicit and actively searching for dis-confirming evidence. These are seminars made by and for stereotypical go-getters.
haig10

practice, practice, practice

haig90

I think this post and the related ones are really hitting home why it is hard for our minds to function fully rationally at all times. Like Jon Haidt's metaphor that our conscious awareness is akin to a person riding on top of an elephant, our conscious attempt at rational behavior is trying to tame this bundle of evolved mechanisms lying below 'us'. Just think of the preposterous notion of 'telling yourself' to believe or not believe in something. Who are you telling this to? How is cognitive dissonance even possible?

I remember the point when I fina... (read more)
