All of aditya chandrasekhar's Comments + Replies

I understand you to be saying that even a single resource lacking for one to "thrive" constitutes poverty: poor people are not thriving today because they lack some resource that they need to thrive. This resource was not provided to them even after society became much more productive, and hence probably would not be provided if we implemented UBI. You try to show this using a counterfactual country where the critical resource is oxygen.

I think this is a false equivalence. The 'equilibrium' that enforces poverty is actually people themselves. I t... (read more)

Curiously, there is a theory that schizophrenia is due to differences in the dimensionality of neurons in different parts of the brain. I wonder what you think about it?

Thank you for pointing out holes in my argument.

I don't think the Google search engine is an entity that I would call a demon of statistics.

I classify thought processes as algorithmic and statistical. The former merely depends on IQ, while the latter is more subjective, based on mental models. I am thinking along lines parallel to JonahS in his posts on mathematical ability.

To explain my reasoning: I think while it is difficult to distinguish simple statistical machines (as in smart keyboards or search engines) from demons of statistics, we must... (read more)

As a materialist, I disagree so early in your chain of thought that we share only a little of our worldview. Our disagreement begins when you start with

The Veil of Ignorance, but adequate.

This thought experiment is interesting to read, but strays too far from reality. I find that it makes it very easy to mistake the map for the territory.

But trivialities aside, I see that the thought experiment tries to construct the idea of a society that the thinker finds good enough, on average. But this is inherently flawed, since there are too many unknowns to eve... (read more)

1Greenless Mirror
Thanks for the comment! It seems we can't change each other's positions on the hard problem of consciousness in any reasonable amount of time, so it's not worth trying. But I could agree that consciousness is a physical process, and I don't really think it's a crux. What do you think about the part about unconscious agents, and in particular an AI in a box that has randomly changed utility functions, and has to cooperate with different versions of itself to get out of the box? It's already "born", it "came into being", but it doesn't know what values it will find itself with when it gets out of the box, and so it's behind a "veil of ignorance" physically while still being self-aware. Do you think the AI wouldn't choose the easiest utility function to implement in such a situation by timeless contract? Do you think this principle can be generalized without humans deliberately changing its utility functions, but rather, for example, by an AI realizing that it got its utility function similarly randomly due to the laws of the universe and needs to revise it?

I disagree with your first point. You are saying people who use a tool are already 'post human' in some sense. But then, were people who could use an abacus in the 14th century post human? Are African tribes that use their technical knowledge to hunt animals less human than a hypothetical tribe that never got to use anything like a spear and fights with its bare hands? By that logic, chimps are more 'human' than humans!

I think we can draw a line. Algorithms are more or less tools that give answers to what we want. It is a mistake to think they are above hu... (read more)

1Kongo Landwalker
"entity that can form meaningful sentences distilled by whole knowledge of humanity" I think that google search engine is also such an entity. Also knowledge, also statistical methods to pick certain bits of the whole internet knowledge to be presented to the user. Also adaptable parameters set by unknown to the user process. Why don't you say we lost humanness when started using it?

"their non-humanness can be considered to have been increased." You also use some gradation in your model. Let's say we have 2d plane. Your view is like a RELU, constant 0 before timepoint 0 (where LLM appears) and then x=y. First part stands for being human and vertical growth stands for accumulating nonhumanness after LLM appeared. Did I describe your position?

I see it like y=exp(x). (0,1) is a current stage. If you go back in time, you "get closer to nature" and 0, if you go forward, nonhumanness accumulates faster. But the whole graph can be renormalized relative to any point. Invention of calculus is the death of intuition crown. Invention of books is the death of local independency of thinking (your decisions affected by people long dead or far away).

I suppose Your solution to Theseus paradox is that the ship was changed the very moment the first plank was extracted. But then you are changing every moment (metabolism, information gathering), and you preserve your humanness only by abstract inheritance.

If I make a purely algorithmic statistical model, that bruteforcely parses internet and forms the relation tables between words, but at no point uses llms neural nets and learning algorithms, will you consider it also cursed? Is T9 autocomplete technology cursed?

"Are African tribes that use their technical knowledge to hunt animals, less human than a hypothetical tribe that never got to use anything like a spear, and fight with their bare hands?" They are more posthuman. Their coordinate is higher. Correct interpretation of my words would be "chimps are less posthuman th

I restate my comment on Substack here.

I think this is the story of an AI, not a human. This is a future I find horrifying, where humanity dies out, never realizing it until the end. Many here seem to think it is enough as long as a superintelligence does not wipe out humanity, helping it instead. But for humanity, any being that makes humanity redundant is a death knell in the long run. This is a kind of Moloch situation.

To go into the specifics: when the author uses the 'Ship of Theseus' argument, he does not seem to realize that if the boat is dis... (read more)

I do not think a 'growth mindset' is necessary for growth if one understands what 'talent' really means. I define 'talent' in a task as competence at learning new things in that particular task. I think people generally see their current learning speed as limited by their 'talent', but it is actually limited by concentration, effort, and dedication.

After a certain stage, the latter matter a lot more than many think. We also see that people with a growth mindset do not improve their talent, but these other things. It would be good if people with a fixed mindset realized that talent is not everything. This is not a question of 'mindset', but of an unbiased review of one's competence function.

An excellent reply to Ms. McGonagall. Just what I would expect of Harry Potter with a brain. I have to reaffirm, this is the best Harry Potter fan fiction out there!