Donald Hobson

MMath Cambridge. Currently studying postgrad at Edinburgh.

Comments

True. But for that you need there to exist another mind almost identical to yours except for that one thing. 

In the question "how much of my memories can I delete while retaining my thread of subjective experience?" I don't expect there to be an objective answer. 

The point is, if all the robots are a true blank slate, then none of them is you, because your entire personality has just been forgotten.

Who knows what "meditation" is really doing under the hood.

Let's set up a clearer example. 

Suppose you are an uploaded mind, running on a damaged robot body. 

You write a script that deletes your mind, runs a bunch of no-ops, and then reboots a fresh blank baby mind with no knowledge of the world. 

You run the script, and then you die. That's it. The computer running no-ops "merges" with all the other computers running no-ops. If the baby mind learns enough to answer the question before checking whether its hardware is broken, then it assigns a small probability to its hardware being broken. And then it learns the bad news. 

 

Basically, I think forgetting like that without just deleting your mind isn't something that really happens. I also feel like, when arbitrary mind modifications are on the table, "what will I experience in the future" returns Undefined. 

Toy example. Imagine creating loads of near-copies of yourself, with various changes to memories and personality. Which copy do you expect to wake up as? Equally likely to be any of them? Well, just make the changes larger and larger, until some of them delete your mind entirely and replace it with something else. 

Because the way you have set it up, it sounds like it would be possible to move your thread of subjective experience into any arbitrary program. 

In many important tasks in the modern economy, it isn't possible to replace one expert with any number of average humans. A large fraction of average humans aren't experts. 

A large fraction of human brains are stacking shelves or driving cars or playing computer games or relaxing etc. Given a list of important tasks in the computer supply chain, most humans, most of the time, are simply not making any attempt at all to solve them. 

And of course a few percent of the modern economy is actively trying to blow each other up. 

You can play the same game in the other direction. Given a cold source, you can run your chips hot, and use a steam engine to recapture some of the heat. 

The Landauer limit still applies. 
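
A quick sketch of why, assuming the chip dumps its heat at temperature T_hot and an ideal Carnot engine runs between the chip and a cold source at T_cold:

```latex
% Erasing one bit on a chip at temperature T_hot dissipates at least (Landauer):
\[ Q \ge k_B \, T_{\text{hot}} \ln 2 \]
% An ideal engine between T_hot and T_cold recovers at most the Carnot fraction:
\[ W \le Q \left( 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}} \right) \]
% So the net energy cost per erased bit is still bounded by the cold-source temperature:
\[ Q - W \ge k_B \, T_{\text{cold}} \ln 2 \]
```

Recapturing the waste heat only moves the bound down to the temperature of the cold source; you never get below k_B T ln 2 per erased bit at whatever temperature you ultimately dump entropy into.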

>But GPT4 isn't good at explicit matrix multiplication either.

So it is also very inefficient. 

Probably a software problem. 

Humans suck at arithmetic. Really suck. From a comparison of current GPUs to a human trying and failing to multiply 10-digit numbers in their head, we can conclude that something about humans, hardware or software, is incredibly inefficient. 
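
As a very rough order-of-magnitude sketch of that gap (every number below is an assumption for illustration, not a measurement of any particular GPU or person):

```python
# Back-of-envelope comparison of arithmetic throughput per joule.
# All numbers are rough assumptions for illustration only.

gpu_flops_per_sec = 1e14          # order of magnitude for a modern GPU
gpu_watts = 300.0                 # typical board power

human_seconds_per_multiply = 600  # ~10 minutes to multiply two 10-digit numbers in your head
human_ops_per_multiply = 100      # ~100 digit-level multiply/adds in the schoolbook method
human_watts = 20.0                # rough power draw of a brain

human_ops_per_sec = human_ops_per_multiply / human_seconds_per_multiply

print(f"GPU:   {gpu_flops_per_sec / gpu_watts:.1e} ops per joule")
print(f"Human: {human_ops_per_sec / human_watts:.1e} ops per joule")
print(f"Ratio: {(gpu_flops_per_sec / gpu_watts) / (human_ops_per_sec / human_watts):.1e}x")
```

Even with generous assumptions for the human, the gap at explicit arithmetic comes out at well over ten orders of magnitude.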

Almost all humans have roughly the same-sized brain. 

So even if Einstein's brain was operating at 100% efficiency, the brain of the average human is operating at a lot less than that.

>ie intelligence is easy - it just takes enormous amounts of compute for training.

Making a technology work at all is generally easier than making it efficient. 

Current scaling laws seem entirely consistent with us having found an inefficient algorithm that works at all. 

ChatGPT, for example, uses billions of floating point operations to do basic arithmetic mostly correctly. So it's clear that the likes of chatGPT are also inefficient. 
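
A rough sense of where "billions of floating point operations" comes from, sketched with assumed numbers (a GPT-3-scale parameter count and a short answer length, neither taken from any published figure):

```python
# Rough cost of answering "what is 7 * 8?" with a large dense transformer.
# A dense forward pass costs roughly 2 * n_parameters FLOPs per generated token.

n_parameters = 175e9              # assumed GPT-3-scale model
flops_per_token = 2 * n_parameters
tokens_generated = 5              # a short answer like "7 * 8 = 56"

total_flops = flops_per_token * tokens_generated
print(f"~{total_flops:.1e} FLOPs to produce the answer")  # ~1.8e+12
print("vs ~1 multiply if you just computed 7 * 8 directly")
```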

You could claim that chatGPT and humans are mostly efficient, and just happen to drop 10 orders of magnitude the moment they are asked to multiply - that they are pushing right up against the fundamental limits for everything except the most basic computational operations. That isn't plausible. 

>mirroring much of our seemingly idiosyncratic cognitive biases, quirks, and limitations.

True. 

They also have a big pile of their own new idiosyncratic quirks. 

https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation

These are bizarre behaviour patterns that don't resemble any humans. 

This looks less like a human, and more like a very realistic painted statue. It looks like a human, complete with painted-on warts, but scratch the paint, and the inhuman nature shows through. 

>The width of mindspace is completely irrelevant.

The width of mindspace is somewhat relevant. 

At best, we have found a recipe, such that if we stick precisely to it, we can produce human-like minds. Start making arbitrary edits to the code, and we wander away from humanity. 

At best we have found a small safe island in a vast and stormy ocean. 
 

The likes of chatGPT are trained with RLHF. Humans don't usually say "as a large language model, I am unable to ..." so we are already wandering somewhat from the human. 

>Biological cells operate directly at thermodynamic efficiency limits:

Well, muscles are less efficient than steam engines, which is why hamster-wheel electricity is a dumb idea: burning the hamster food in a steam engine is more efficient. 
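
As a toy calculation, with efficiency figures that are just rough assumptions (muscle food-to-work around 20%, a decent heat engine around 35%):

```python
# Toy comparison: electricity per joule of hamster food, via two routes.
# Efficiency numbers are rough assumptions for illustration only.

food_joules = 1.0

muscle_efficiency = 0.20         # food energy -> mechanical work in muscle
generator_efficiency = 0.90      # wheel -> electricity
heat_engine_efficiency = 0.35    # burning the food in a heat engine -> electricity

hamster_route = food_joules * muscle_efficiency * generator_efficiency  # ~0.18 J
burn_route = food_joules * heat_engine_efficiency                       # ~0.35 J

print(f"Hamster wheel: {hamster_route:.2f} J of electricity per J of food")
print(f"Heat engine:   {burn_route:.2f} J of electricity per J of food")
```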

>then we should not expect moore's law to end with brains still having a non-trivial thermodynamic efficiency advantage over digital computers. Except that is exactly what is happening. TSMC is approaching the limits of circuit miniaturization, and it is increasingly obvious that fully closing the (now not so large) gap with the brain will require more directly mimicking it through neuromorphic computing[2].

 

This is a clear error. 

There is no particular reason to expect TSMC to taper off at a point anywhere near the theoretical limits.

A closely analogous situation: the speed of passenger planes has tapered off, and the theoretical limit (ignoring exotic warp drives) is light speed. 

But in practice, planes are limited by the energy density of jet fuel, economics, regulations against flying nuclear reactors, atmospheric drag etc. 

This isn't to say that no spaceship could ever go at 90% light speed. Just that we would need a radically different approach to do that, and we don't yet have that tech.

So yes, TSMC could be running out of steam. Or not. The death of Moore's law has been proclaimed on a regular basis since it existed. 

"Taiwanese engineers don't yet have the tech to do X" doesn't imply that X is physically impossible.
