Robbo

I'm Rob Long. I work on AI consciousness and related issues. 

http://robertlong.online/

https://experiencemachines.substack.com/

Comments
Robbo · 110

That poem was not written by Hitler.

According to this website and other reputable-seeming sources, the German poet Georg Runsky published that poem, "Habe Geduld", around 1906.

On 14 May 1938 a copy of this poem was printed in the Austrian weekly Agrarische Post, under the title 'Denke es'. It was then falsely attributed to Adolf Hitler.

In John Toland's 1976 Hitler biography, it appeared for the first time in English translation. Toland made the mistake of identifying it as a genuine Hitler poem, supposedly written in 1923.

Robbo · 20

There was a rush to deontology that died away quickly, mostly retreating back into its special enclave of veganism.

Can you explain what you mean by the second half of that sentence?

Robbo · 31

To clarify: which question did you have in mind that this one is more interesting than? I see it as one of the questions raised in the post. But perhaps you are contrasting "realize it is conscious by itself" with the methods discussed in "Could we build language models whose reports about sentience we can trust?"

Robbo · 10

I think I'd need to hear more about what you mean by sapience (the link didn't make it entirely clear to me) and why it would ground moral patienthood. In my opinion, there are other plausible grounds for moral patienthood besides sentience (which, its ambiguity notwithstanding, I think can be used about as precisely as "sapience"; see my note on usage), most notably desires, preferences, and goals. Perhaps those are part of what you mean by 'sapience'?

Robbo · 10

Great, thanks for the explanation. Just curious to hear your framework, no need to reply:

- If you do have some notion of moral patienthood, what properties do you think are important for moral patienthood? Do you think we face uncertainty about whether animals or AIs have these properties?
- If you don't, are there questions in the vicinity of "which systems are moral patients?" that you do recognize as meaningful?

Robbo · 10

Very interesting! Thanks for your reply, and I like your distinction between questions:

Positive valence involves attention concentration whereas negative valence involves diffusion of attention / searching for ways to end this experience.

Can you elaborate on this? What do attention concentration vs. diffusion mean? Pain seems to draw attention to itself (and to motivate action to alleviate it), so on my normal understanding of "concentration", pain involves concentration. But I think I'm just unfamiliar with how you / 'the literature' use these terms.

Robbo · 10

I'm trying to get a better idea of your position. Suppose that, as TAG also replied, "realism about phenomenal consciousness" does not imply that consciousness is somehow fundamentally different from other forms of organization of matter. Suppose I'm a physicalist and a functionalist, so I think phenomenal consciousness just is a certain organization of matter. Do we still need, then, to replace "theory" with "ideology", etc.?

Robbo · 10

to say that [consciousness] is the only way to process information

I don't think anyone was claiming that; my post certainly doesn't. If consciousness were the only way to process information, wouldn't there be no open question at all about which (if any) information-processing systems can be conscious?

Robbo · 10

A few questions:

1. Can you elaborate on this?

Suffering seems to need a lot of complexity and also seems deeply connected to biological systems.

I think I agree. Of course, all of the suffering that we know about so far is instantiated in biological systems. But it depends on what you mean by "deeply connected." Do you mean that the biological substrate is necessary, i.e. do you have a biological theory of consciousness?

AI/computers are just a "picture" of these biological systems.

2. What does this mean?

Now, we could someday crack consciousness in electronic systems, but I think it would be winning the lottery to get there not on purpose.

3. Can you elaborate? Are you saying that, unless we deliberately try to build in some complex stuff that is necessary for suffering, AI systems won't 'naturally' have the capacity for suffering? (i.e. you've ruled out the possibility that Steven Byrnes raised in his comment)
