Carl Feynman

I was born in 1962 (so I’m in my 60s).  I was raised rationalist, more or less, before we had a name for it.  I went to MIT, and have a bachelor’s degree in philosophy and linguistics, and a master’s degree in electrical engineering and computer science.  I got married in 1991, and have two kids.  I live in the Boston area.  I’ve worked as various kinds of engineer: electronics, computer architecture, optics, robotics, software.

Around 1992, I was delighted to discover the Extropians.  I’ve enjoyed being in those kinds of circles since then.  My experience with the Less Wrong community has been “I was just standing here, and a bunch of people gathered, and now I’m in the middle of a crowd.”  A very delightful and wonderful crowd, just to be clear.  

I’m signed up for cryonics.  I think it has a 5% chance of working, which is either very small or very large, depending on how you think about it.

I may or may not have qualia, depending on your definition.  I think that philosophical zombies are possible, and I am one.  This is a very unimportant fact about me, but seems to incite a lot of conversation with people who care.

I am reflectively consistent, in the sense that I can examine my behavior and desires, and understand what gives rise to them, and there are no contradictions I’m aware of.  I’ve been that way since about 2015.  It took decades of work and I’m not sure if that work was worth it.

Well, let me quote Wikipedia:

Much of the debate over the importance of qualia hinges on the definition of the term, and various philosophers emphasize or deny the existence of certain features of qualia. Some philosophers of mind, like Daniel Dennett, argue that qualia do not exist. Other philosophers, as well as neuroscientists and neurologists, believe qualia exist and that the desire by some philosophers to disregard qualia is based on an erroneous interpretation of what constitutes science.

If it were that easy to understand, we wouldn't be here arguing about it.  My claim is that arguments about qualia are (partially) caused by people actually having different cognitive mechanisms that produce different intuitions about how experience works.

Humans continue to get very offended if they find out they are talking to an AI

In my limited experience of phone contact with AIs, this is only true for distinctly subhuman AIs.  Then I emotionally react like I am talking to someone who is being deliberately obtuse, and become enraged.  I'm not entirely clear on why I have this emotional reaction, but it's very strong.  Perhaps it is related to the Uncanny Valley effect.  On the other hand, I've dealt with phone AIs that (acted like they) understood me, and we've concluded a pleasant and businesslike interaction.  I may be typical-minding here, but I suspect that most people will only take offense if they run into the first kind of AI.

Perhaps this is related: I felt a visceral uneasiness dealing with chat-mode LLMs, until I tried Claude, which I found agreeable and helpful.  Now I have a claude.ai subscription.  Once again, I don't understand the emotional difference.

I'm 62 years old, which may have something to do with it.  I can feel myself being less mentally flexible than I was decades ago, and I notice myself slipping into crotchety-old-man mode more often.  It's a problem that requires deliberate effort to overcome.

Has anybody tried actual humans or smart LLMs?  It would be interesting to know what methods people actually use.

If we're being pragmatic, the priors we had at birth almost don't matter.  A few observations will overwhelm any reasonable prior.  As long as we don't assign zero probability to anything that can actually happen, the shape of the prior makes no practical difference.
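A minimal sketch of the point (my own illustration; the priors and data are made up): two agents with strongly opposed Beta priors over a coin's bias converge once the evidence accumulates, because the conjugate update Beta(a, b) → Beta(a + heads, b + tails) lets the counts swamp the prior pseudo-counts.

```python
def posterior_mean(a, b, heads, tails):
    """Posterior mean of a coin's bias under a Beta(a, b) prior,
    after observing the given heads/tails counts (conjugate update:
    Beta(a, b) -> Beta(a + heads, b + tails))."""
    return (a + heads) / (a + b + heads + tails)

# Optimist starts at Beta(9, 1) (prior mean 0.9); skeptic at Beta(1, 9)
# (prior mean 0.1).  Feed both the same data from a coin that lands
# heads 70% of the time.
for heads, tails in [(0, 0), (7, 3), (70, 30), (700, 300)]:
    optimist = posterior_mean(9, 1, heads, tails)
    skeptic = posterior_mean(1, 9, heads, tails)
    print(heads + tails, round(optimist, 3), round(skeptic, 3))
# After 1000 flips the two posterior means differ by less than 0.01,
# despite priors that started 0.8 apart.
```

Had either agent assigned probability zero to some bias (say a Beta parameter of zero mass on part of [0, 1]), no amount of data could move them there, which is the one caveat in the comment above.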

including probably reworking some of my blog post ideas into a peer-reviewed paper for a neuroscience journal this spring.

I think this is a great idea.  It will broadcast your ideas to an audience prepared to receive them.  You can leave out the "friendly AI" motivation and your ideas will stand on their own as a theory of (some of) cognition.

Do we have a sense for how much of the orca brain is specialized for sonar?  About a third of the human brain is specialized for visual perception.  If sonar is harder than vision, evolution might have dedicated more of the orca brain to it.  On the other hand, orcas don't need a bunch of brain for manual dexterity, as we do.

In humans, the prefrontal cortex is dedicated to "higher" forms of thinking.  But evolution slides functions around on the cortical surface, and (Claude tells me) association areas like the prefrontal cortex are particularly prone to this.  Just looking for the volume of the prefrontal cortex won't tell you how much actual thought goes on there.

All the pictures are missing for me.

Is this the consensus view? I think it’s generally agreed that software development has been sped up. A factor of two is ambitious, but that’s how it seems to me: I’ve measured three examples of computer vision programming, each taking an hour or two, by doing them by hand and then with machine assistance. The machines are dumb and produce results that require rewriting, but my code is also inaccurate on a first try. I don’t have any references where people agree with me, and this may not apply to AI programming in general.

You ask about “anonymous reports of diminishing returns to scaling.” I have also heard these reports, direct from a friend who is a researcher inside a major lab. But note that this does not imply a diminished rate of progress, since there are other ways to advance besides making LLMs bigger. o1 and o3 indicate the payoffs to be had by doing things other than pure scaling. If forms of progress are available through cleverness, then the speed of advance need not depend on scaling.

One argument against it is that I think it’s coming soon, and I have a 40-year history of frothing technological enthusiasm, often predicting that things will arrive decades before they actually do. 😀

These criticisms are often made of “market dominant minorities”, to use a sociologist’s term for what American Jews and Indian-Americans have in common. Here’s a good short article on the topic: https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=5582&context=faculty_scholarship