Donald Hobson

MMath Cambridge. Currently studying postgrad at Edinburgh.

Sequences

Neural Networks, More than you wanted to Show
Logical Counterfactuals and Proposition graphs
Assorted Maths

Comments

> advanced ancient technology is such a popular theme

Well, one reason is that it's a good way to produce plot-relevant artefacts. It's hard to have dramatic battles over some object when a factory is churning out more of them.

True. But for that you need there to exist another mind almost identical to yours except for that one thing. 

In the question "how much of my memories can I delete while retaining my thread of subjective experience?" I don't expect there to be an objective answer. 

The point is, if all the robots are a true blank slate, then none of them is you, because your entire personality has just been forgotten.

Who knows what "meditation" is really doing under the hood.

Let's set up a clearer example.

Suppose you are an uploaded mind, running on a damaged robot body. 

You write a script that deletes your mind, runs a bunch of no-ops, and then reboots into a fresh, blank baby mind with no knowledge of the world.

You run the script, and then you die. That's it. The computer running no-ops "merges" with all the other computers running no-ops. If the baby mind learns enough to answer the question before checking whether its hardware is broken, then it considers itself to have a small probability of the hardware being broken. And then it learns the bad news.

Basically, I think forgetting like that, without just deleting your mind, isn't something that really happens. I also feel like, when arbitrary mind modifications are on the table, "what will I experience in the future?" returns Undefined.

Toy example: imagine creating loads of near-copies of yourself, with various changes to memories and personality. Which copy do you expect to wake up as? Equally likely to be any of them? Well, just make some of the changes larger and larger, until some of the changes delete your mind entirely and replace it with something else.

The way you have set it up, it sounds like it would be possible to move your thread of subjective experience into any arbitrary program.

In many important tasks in the modern economy, it isn't possible to replace one expert with any number of average humans. A large group of average humans still isn't an expert.

A large fraction of human brains are stacking shelves, driving cars, playing computer games, relaxing, etc. Given a list of important tasks in the computer supply chain, most humans, most of the time, are simply not making any attempt at all to solve them.

And of course a few percent of the modern economy is actively trying to blow each other up. 

You can play the same game in the other direction. Given a cold source, you can run your chips hot, and use a steam engine to recapture some of the heat. 

The Landauer limit still applies. 
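
A sketch of the bookkeeping, using only the standard Landauer and Carnot bounds (here \(T_h\) is the chip temperature, \(T_c\) the cold reservoir, and \(k\) Boltzmann's constant):

```latex
\begin{align*}
E_{\text{erase}} &= k T_h \ln 2
  && \text{(Landauer cost of erasing a bit at chip temperature } T_h\text{)} \\
E_{\text{rec}} &\le E_{\text{erase}}\left(1 - \frac{T_c}{T_h}\right) = k (T_h - T_c) \ln 2
  && \text{(Carnot-limited heat recovery)} \\
E_{\text{net}} &= E_{\text{erase}} - E_{\text{rec}} \ge k T_c \ln 2
  && \text{(net cost per erased bit)}
\end{align*}
```

So running the chips hot and recapturing the waste heat doesn't beat the bound: either way, the cold reservoir's temperature sets the floor.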

> But GPT4 isn't good at explicit matrix multiplication either.

So it is also very inefficient. 

Probably a software problem. 

Humans suck at arithmetic. Really suck. From a comparison of current GPUs to a human trying and failing to multiply 10-digit numbers in their head, we can conclude that something about humans, hardware or software, is incredibly inefficient.
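
For a rough sense of the gap, here is a back-of-envelope sketch; every figure in it is an illustrative assumption, not a measurement:

```python
# Back-of-envelope: elementary arithmetic operations per second for a GPU
# versus a human doing mental long multiplication.
# All figures are rough assumptions for illustration only.

gpu_ops_per_sec = 1e14             # ballpark throughput of a modern GPU

digits = 10
schoolbook_ops = 2 * digits ** 2   # digit-by-digit multiply-adds: ~200 ops
human_seconds = 60                 # optimistic time, and humans still often err

human_ops_per_sec = schoolbook_ops / human_seconds   # ~3 ops/s
ratio = gpu_ops_per_sec / human_ops_per_sec

print(f"human: ~{human_ops_per_sec:.0f} ops/s; GPU/human ratio: ~{ratio:.0e}")
```

On these assumptions the gap is around thirteen orders of magnitude; even if the numbers are off by a lot, the conclusion survives.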

Almost all humans have roughly the same-sized brain.

So even if Einstein's brain was operating at 100% efficiency, the brain of the average human is operating at a lot less.

I.e. intelligence is easy - it just takes enormous amounts of compute for training.

Making a technology work at all is generally easier than making it efficient. 

Current scaling laws seem entirely consistent with us having found an inefficient algorithm that works at all. 

For example, ChatGPT uses billions of floating-point operations to do basic arithmetic mostly correctly. So it's clear that the likes of ChatGPT are also inefficient.
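
A minimal sketch of that claim, using the standard ~2 × parameters FLOPs-per-token forward-pass estimate; the model size and token count are assumptions for illustration:

```python
# Rough FLOP cost for an LLM to answer one small arithmetic question.
# Model size and token count are illustrative assumptions.

params = 1e11            # assume a ~100B-parameter model
flops_per_token = 2 * params
tokens = 20              # prompt plus answer, ballpark

llm_flops = flops_per_token * tokens   # ~4e12 FLOPs for one sum
direct_ops = 1                         # a single native add/multiply instruction

print(f"LLM: ~{llm_flops:.0e} FLOPs vs {direct_ops} native op "
      f"(~{llm_flops / direct_ops:.0e}x overhead)")
```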

Now you can claim that ChatGPT and humans are mostly efficient, but suddenly drop 10 orders of magnitude when confronted with a multiplication: that they are pushing right up against the fundamental limits for everything except the most basic computational operations. That seems unlikely.

> mirroring much of our seemingly idiosyncratic cognitive biases, quirks, and limitations.

True. 

They also have a big pile of their own new idiosyncratic quirks. 

https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation

These are bizarre behaviour patterns that don't resemble those of any human.

This looks less like a human and more like a very realistic painted statue. It looks like a human, complete with painted-on warts, but scratch the paint and the inhuman nature shows through.

> The width of mindspace is completely irrelevant.

The width of mindspace is somewhat relevant. 

At best, we have found a recipe, such that if we stick precisely to it, we can produce human-like minds. Start making arbitrary edits to the code, and we wander away from humanity. 

At best we have found a small safe island in a vast and stormy ocean.

The likes of ChatGPT are trained with RLHF. Humans don't usually say "as a large language model, I am unable to ...", so we are already wandering somewhat away from the human.

> Biological cells operate directly at thermodynamic efficiency limits:

Well, muscles are less efficient than steam engines, which is why hamster-wheel electricity is a dumb idea: burning the hamster food in a steam engine is more efficient.
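
Rough numbers, with textbook-style efficiency figures assumed for illustration (digestion losses and other overheads are ignored):

```python
# Electricity extracted from the same food energy: hamster wheel
# versus burning the food in a steam plant.
# Efficiency figures are rough textbook-style assumptions.

food_joules = 1e6

muscle_eff = 0.2         # skeletal muscle, chemical -> mechanical work
generator_eff = 0.9      # wheel-driven generator
steam_plant_eff = 0.35   # steam cycle, heat -> electricity

via_hamster = food_joules * muscle_eff * generator_eff   # ~1.8e5 J
via_steam = food_joules * steam_plant_eff                # ~3.5e5 J

print(f"hamster wheel: ~{via_hamster:.1e} J, steam engine: ~{via_steam:.1e} J")
```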
