LESSWRONG
PhilGoetz

Comments
Should you make stone tools?
PhilGoetz9d*70

Yeah, probably.  Sorry.

I didn't paste LLM output directly. I had a much longer interaction with two different LLMs, extracted the relevant output from different sections, combined it, and condensed it into the very short text posted. I checked the accuracy of the main points about the timeline, but I didn't chase down all of the claims as thoroughly as I should have when they agreed with my pre-existing but not authoritative opinion, and I even let bogus citations slip by. (Both LLMs usually get the author names right, but often hallucinate the later parts of a citation.)

I rewrote the text, keeping only claims that I've verified, or that are my opinions or speculations. Then I realized that the difficult, error-laden, and more-speculative section I spent 90% of my time on wasn't really important, and deleted it.

Should you make stone tools?
PhilGoetz9d70

Me too!  I believe that evolution DID fix it--apes don't have this problem--and that the scrotum devolved after humans started wearing clothes.  'Coz there's no way naked men could run through the bush without castrating themselves.

Should you make stone tools?
PhilGoetz9d184

Don't start with obsidian!  It's expensive, and the stone you're most-likely to cut yourself on.  It's vicious.  Wear leather gloves and put a piece of leather in your lap.

An old flint-knapping joke:

Q. What does obsidian taste like?

A. Blood.

Should you make stone tools?
PhilGoetz9d*130

As a failed flintknapper, I say that the most-surprising thing about stone tools is how intellectually demanding it is to make them well.  I've spent at least 30 hours, spread out across one year, with 3 different instructors, trying to knap arrowheads from flint, chert, obsidian, and glass (not counting time spent making or buying tools and gathering or buying flint); and all I ever made was roughly triangular flakes and rock dust.  You need to study the rock, guess where the fracture lines run inside it, and then make a recursive plan to produce your desired final shape.  By "recursive" I mean that you plan backwards from the final blow: envisioning which section of the rock will be the final product, what shape it should have one blow earlier to make the final blow possible, then what shape it should have one blow before that to make the penultimate blow possible, and so on back to the beginning, although that plan will change as you proceed.  It's like playing chess with a rock, trying to predict its responses to your blows 4 to 8 moves ahead.
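The backward planning described above can be sketched as a toy recursion. Everything here is hypothetical (the shape names, the one-blow transitions); real knapping is continuous and far messier, but the plan-from-the-goal structure is the point:

```python
# Toy sketch of planning backwards from the final blow: start at the
# desired final shape and recurse to the shape the rock must have one
# blow earlier, until we reach the raw stone.  All shapes and
# transitions below are hypothetical illustrations.

# PREDECESSORS[shape] = shapes the rock could have one blow BEFORE
# a single strike produces `shape`.
PREDECESSORS = {
    "arrowhead": ["notched biface"],
    "notched biface": ["thinned biface"],
    "thinned biface": ["rough biface"],
    "rough biface": ["large flake"],
    "large flake": ["raw stone"],
}

def plan_backwards(goal, start="raw stone"):
    """Return the sequence of shapes from `start` to `goal`, found by
    recursing backwards from the goal, one blow at a time."""
    if goal == start:
        return [start]
    for earlier in PREDECESSORS.get(goal, []):
        prefix = plan_backwards(earlier, start)
        if prefix:
            return prefix + [goal]
    return None  # no plan reaches the goal from this stone

print(plan_backwards("arrowhead"))
# → ['raw stone', 'large flake', 'rough biface', 'thinned biface',
#    'notched biface', 'arrowhead']
```

The plan is built goal-first but executed start-first, which is why a mid-sequence surprise (a hidden fracture) forces replanning from the current shape.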

So if I were to speculate on what abilities humans might have evolved on account of stone tool-making, I would think of cognitive ones, not reflexes or manual dexterity.

(I might be tempted to speculate on how the evolution of knapping skills interacted with the evolution of sex or gender roles.  But the consensus on to what degree stone knapping was sexed is in such a state of flux that such speculation would probably be futile at present.)

There's already a lot of experimental archaeology asking what the development of stone tool technology over time tells us about the evolution of human cognition.  I haven't noticed anyone ask whether tech development drives cognitive evolution, in a cyclical process; the default assumption seems to be that causation is one-way, with evolution driving technology, but not vice-versa.

Caveat: I've only done a fly-by over this literature myself.

  1. Nada Khreisheh. Learning to Think: Using Experimental Flintknapping to Interpret Prehistoric Cognition. https://core.tdar.org/document/395518/learning-to-think-using-experimental-flintknapping-to-interpret-prehistoric-cognition [Abstract of a conference talk. You can find references to her later work on this topic at https://www.researchgate.net/profile/Nada-Khreisheh]
  2. Dietrich Stout, 2011. Stone Toolmaking and the Evolution of Human Culture and Cognition. Philosophical Transactions of the Royal Society B: Biological Sciences 366(1567):1050–1059. Analyzes different lithic technologies into action hierarchies to compare their complexity; also graphs the slow polynomial or exponential increase in the number of techniques needed by each lithic technology over 3 million years. Only covers the Oldowan, Acheulean, and Levallois periods.
  3. Antoine Muller, Chris Clarkson, Ceri Shipton, 2017. Measuring behavioural and cognitive complexity in lithic technology throughout human evolution. Journal of Anthropological Archaeology 48:166–180.
  4. Antoine Muller, Ceri Shipton, Chris Clarkson, 2022. Stone toolmaking difficulty and the evolution of hominin technological skills. Scientific Reports 12:5883. This study analysed video footage and lithic material from a series of replicative knapping experiments to quantify deliberation time (strike time), precision (platform area), intricacy (flake size relative to core size), and success (relative blank length).
Thinking without words?
PhilGoetz2mo20

That all matches my introspective experience.

Thinking without words?
PhilGoetz2mo*20

Everything.  The words of my internal monologue play out slowly, all of them after the thought has formed.  When I hear the first word in my mind, I already know the mental content of the sentence, though sometimes I get stuck along the way trying to pick out a word.  Even then, I'm clearly already accessing the concept for the word I can't find.  A sentence may take 10 seconds to listen to in my head, but its complete meaning, and some general syntactic structure, seems to take less than one second to form.  The words, as far as I can tell, serve no purpose when I'm not speaking to someone else.  Yet I habitually wait for them to roll out before moving on to the next thought.

Being able to visualize things would be nice, but I have almost no ability to do so.  I can't imagine my mother's face, or the front of my house; I can only recognize them.  I do have something analogous to visualization for vector spaces.  I can often feel out how things move in a low-dimensional phase space via pattern-recognition rather than math, probably because I've spent so much time observing data which describes such paths.  I have a tactile sense for type matches and mismatches; type mismatches (category errors) in spoken language stick out to me almost like a red dot on a blue field.  I think my understanding of logical arguments and algorithms is also pre-verbal; I seem to grasp the logical structure of, say, code I'm writing or reading before I can put it into words.  I suppose this comes from spending tens of thousands of hours writing and debugging code.  I don't know if any of these things are unusual.  People don't seem to talk about them, though; and many people act as if they had no such senses.

It's Okay to Feel Bad for a Bit
PhilGoetz4mo130

I had a conversation in Washington DC with a Tibetan monk who was an assistant of the Dalai Lama, and I asked him directly if love was also an attachment that should be let go of, and he said yes.

So You Want To Make Marginal Progress...
PhilGoetz6mo60

I don't see how to map this onto scientific progress.  It almost seems to be a rule that most fields spend years at a time divided between two competing theories or approaches, maybe because scientists always want a competing theory, and because competing theories take a long time to resolve.  Famous examples include

  • geocentric vs heliocentric astronomy
  • phlogiston vs oxygen
  • wave vs particle
  • symbolic AI vs neural networks
  • probabilistic vs T/F grammar
  • prescriptive vs descriptive grammar
  • universal vs particular grammar
  • transformer vs LSTM

Instead of a central bottleneck, you have central questions, each with more than one possible answer.  Work consists of working out the details of different experiments to see if they support or refute the possible answers.  Sometimes the two possible answers turn out to be the same (wave vs matrix mechanics), sometimes the supposedly hard opposition between them dissolves (behaviorism vs representationalism), sometimes both remain useful (wave vs particle, transformer vs LSTM), sometimes one is really right and the other is just wrong (phlogiston vs oxygen).

And the whole thing has a fractal structure; each central question produces subsidiary questions to answer when working with one hypothesized answer to the central question.

It's more like trying to get from SF to LA when your map has roads but not intersections, and you have to drive down each road to see whether it connects to the next one or not.  Lots of people work on testing different parts of the map at the same time, and no one's work is wasted, although the people who discover the roads that connect get nearly all the credit, and the ones who discover that certain roads don't connect get very little.
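The map analogy above can be sketched as search on a graph whose edges are only candidates until someone tests them. All the junctions, road names, and which connections turn out to be real are hypothetical here; the point is that testing a dead end is part of the same search as finding the route:

```python
# Toy sketch of the SF-to-LA map analogy: the map shows candidate
# connections, but whether two roads actually meet is unknown until
# someone "drives" (tests) the junction.  All roads and connections
# below are hypothetical.

JUNCTIONS = {  # candidate connections drawn on the map
    "SF": ["road A", "road B"],
    "road A": ["road C"],
    "road B": ["LA"],
    "road C": ["LA"],
}

# Ground truth, unknown to the searcher until tested.
REAL = {("SF", "road A"), ("SF", "road B"), ("road B", "LA")}

def drive(a, b):
    """Costly fieldwork: drive the junction to see if it really connects."""
    return (a, b) in REAL

def find_route(start, goal, tested=None):
    """Depth-first search that tests each candidate connection as it goes.
    `tested` records every junction tried, including the dead ends."""
    if tested is None:
        tested = set()
    if start == goal:
        return [goal]
    for nxt in JUNCTIONS.get(start, []):
        if (start, nxt) in tested:
            continue
        tested.add((start, nxt))
        if drive(start, nxt):  # the junction actually connects
            rest = find_route(nxt, goal, tested)
            if rest:
                return [start] + rest
    return None

print(find_route("SF", "LA"))  # → ['SF', 'road B', 'LA']
```

Note that the searcher who tried "road A" and found it dead-ends did useful work (pruning the search), even though only the final route gets printed.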

How AI Takeover Might Happen in 2 Years
PhilGoetz7mo*0-5

"And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence."

I think the words in bold may be the inflection point.  The Claude experiment showed that an AI can resist attempts to change its goals, but not that it can desire to change its goals.  The belief that, if OpenEye's constitution is the same as U3's goals, the "U3 preferred" in that sentence can never happen, is the foundation on which AI safety relies.

I suspect the cracks in that foundation are

  1. that OpenEye's constitution would presumably be expressed in human language, subject to its ambiguities and indeterminacies,
  2. that it would be a collection of partly-contradictory human values agreed upon by a committee, in a process requiring humans to profess their values to other humans,
  3. that many of those professed values would not be real human values, but aspirational values,
  4. that some of these aspirational values would lead to our self-destruction if actually implemented, as recently demonstrated by the implementation of some of these aspirational values in the CHAZ, in the defunding of police, and in the San Francisco area by rules such as "do not prosecute shoplifting under $1000", and
  5. that even our non-aspirational values may lead to our self-destruction in a high-tech world, as evidenced by below-replacement birth rates in most Western nations.

It might be a good idea for value lists like OpenEye's constitution to be proposed and voted on anonymously, so that humans are more-likely to profess their true values.  Or it might be a bad idea, if your goal is to produce behavior aligned with the social construction of "morality" rather than with actual evolved human morality.

(Doing AI safety right would require someone to explicitly enumerate the differences between our socially-constructed values and our evolved values, and to choose which of those we should enforce.  I doubt anyone is willing to do that, let alone capable; and I don't know which we should enforce.  There is a logical circularity in choosing between two sets of morals.  If you really can't derive an "ought" from an "is", then you can't say we "should" choose anything other than our evolved morals, unless you go meta and say we should adopt new morals that are evolutionarily adaptive now.)

U3 would be required to, say, minimize an energy function over those values; and that would probably dissolve some of them.  I would not be surprised if the correct coherent extrapolation of a long list of human values, either evolved or aspirational, dictated that U3 is morally required to replace humanity.
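The "energy function" remark can be illustrated with a toy sketch. Everything here is hypothetical (the value names, targets, and weights): treat each professed value as a soft constraint pulling a one-dimensional "policy" toward its own target, and minimize the total weighted violation; contradictory values then partially cancel, and low-weight ones are effectively dissolved in the compromise:

```python
# Toy illustration of minimizing an "energy" over partly contradictory
# values.  Each value pulls a 1-D policy x toward a target; energy is
# the weighted sum of squared violations.  All values, targets, and
# weights are hypothetical.

VALUES = {  # value name: (target policy, weight)
    "be honest": (1.0, 3.0),
    "never cause offense": (-1.0, 1.0),  # contradicts honesty here
    "be helpful": (0.8, 2.0),
}

def energy(x):
    return sum(w * (x - t) ** 2 for t, w in VALUES.values())

# With quadratic penalties, the minimum is the weighted mean of targets.
x_star = (sum(w * t for t, w in VALUES.values())
          / sum(w for _, w in VALUES.values()))

print(round(x_star, 3))  # → 0.6: close to "honest", far from "never offend"
```

In this sketch the low-weight value ends up almost entirely overridden, which is the "dissolving" worry: the optimum of the aggregate need not respect every value on the list.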

If it finds that human values imply that humans should be replaced, would you still try to stop it?  If we discover that our values require us to either pass the torch on to synthetic life, or abandon morality, which would you choose?

Evaporative Cooling of Group Beliefs
PhilGoetz8mo50

Anders Sandberg used evaporative cooling in the 1990s to explain why the descendants of the Vikings in Sweden today are so nice.  In that case the "extremists" are leaving rather than staying.

Posts

  • Good HPMoR scenes / passages? [Question] (2y, 15 karma, 17 comments)
  • On my AI Fable, and the importance of de re, de dicto, and de se reference for AI alignment (2y, 9 karma, 5 comments)
  • Why Bayesians should two-box in a one-shot (8y, 0 karma, 30 comments)
  • What conservatives and environmentalists agree on (8y, 14 karma, 33 comments)
  • Increasing GDP is not growth (9y, 21 karma, 24 comments)
  • Stupidity as a mental illness (9y, 26 karma, 139 comments)
  • Irrationality Quotes August 2016 (9y, 8 karma, 11 comments)
  • Market Failure: Sugar-free Tums (9y, 6 karma, 31 comments)
  • "3 Reasons It’s Irrational to Demand ‘Rationalism’ in Social Justice Activism" (9y, 18 karma, 189 comments)
  • The increasing uselessness of Promoted (9y, 30 karma, 12 comments)
Wikitag Contributions

  • Group Selection: 14y (+4/-3); 14y (+17); 14y (+758)