Donald Hobson

MMath Cambridge. Currently studying postgrad at Edinburgh.

Sequences

Neural Networks, More than you wanted to Show
Logical Counterfactuals and Proposition graphs
Assorted Maths

Wiki Contributions

Comments


The Halting problem is a worst-case result. Most agents aren't maximally ambiguous about whether or not they halt. And for those that are, well, it depends what the rules are for agents that don't halt.

There are setups where each agent uses an unphysically large but finite amount of compute. There was a paper I saw somewhere a while ago where both agents do a brute-force proof search for the statement "if I cooperate, then they cooperate" and cooperate if they find a proof.

(I.e. searching all proofs containing < 10^100 symbols.)
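The proof-search agents themselves are hard to sketch in runnable form, but a commonly discussed cousin swaps the proof search for bounded simulation: cooperate if simulating the opponent (playing against you, with a smaller budget) comes back with cooperation. Here is a minimal Python sketch of that swapped-in variant; the names and the optimistic base case are my own assumptions, not anything from the paper.

```python
def fairbot(opponent, depth):
    """Cooperate iff a budget-limited simulation of the opponent, playing
    against fairbot with a smaller budget, cooperates. The optimistic
    depth-0 base case is the assumption that breaks the infinite regress
    (loosely standing in for the Lobian step in the proof-search version)."""
    if depth == 0:
        return "C"
    return "C" if opponent(fairbot, depth - 1) == "C" else "D"

def defectbot(opponent, depth):
    return "D"

def cooperatebot(opponent, depth):
    return "C"

print(fairbot(fairbot, 5))        # C: mutual cooperation
print(fairbot(defectbot, 5))      # D: fairbot isn't exploited by a pure defector
print(fairbot(cooperatebot, 5))   # C
```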

There is a model of bounded rationality: logical induction.

Can that be used to handle logical counterfactuals?

I believe that if I choose to cooperate, my twin will choose to cooperate with probability p; and if I choose to defect, my twin will defect with probability q;

 

And here the main difficulty pops up again. There is no causal connection between your choice and their choice; any correlation is a logical one. So imagine I make a copy of you, but the copying machine isn't perfect: a random 0.001% of neurons are deleted. Also, you know you aren't the copy. How would you calculate those probabilities p and q, even in principle?

If two Logical Decision Theory agents with perfect knowledge of each other's source code play the prisoner's dilemma, theoretically they should cooperate.

LDT uses logical counterfactuals in the decision making.

If the agents are CDT, then logical counterfactuals are not involved.

The research on humans in zero g is only relevant if you want to send humans to Mars. And such a mission is likely to end up being an ISS on Mars, or a Moon-landings reboot: a lot of newsprint and bandwidth expended talking about it, a small amount of science that could have been done more cheaply with a robot. And then everyone gets bored, they play golf on Mars, and people look at the bill and go "was that really worth it?"

Oh, and you would contaminate Mars with Earth bacteria.

 

A substantially bigger, redesigned space station is fairly likely to be somewhat more expensive. And the point of all this is still not clear. 

Current-day NASA also happens to be in a failure mode where everything is 10 to 100 times more expensive than it needs to be, projects live or die based on politics rather than technical viability, and repeating the successes of the past seems unattainable. They aren't good at innovating, especially not quickly and cheaply.

Here is a more intuitive version of the same paradox.

Again, conditional on all dice rolls being even. But this time it's either

A) 1,000,000 consecutive 6's, or

B) 999,999 consecutive 6's followed by a (possibly non-consecutive) 6.

 

Suppose you roll a few even numbers, followed by an extremely lucky sequence of 999,999 6's.  

 

From the point of view of version A, the only way to continue the sequence is a single extra 6. If you roll a 4, you would need to roll a second run of a million 6's. And you are very unlikely to do that in the next 10 million steps, and very unlikely to go 10 million steps without rolling an odd number.

Yes, if this happened it would add at least a million extra rolls. But the chance of that is exponentially tiny.

Whereas for B, it's quite plausible to roll 26 or 46 or 2426 instead of just 6.

 

Another way to think about this problem is with regular expressions. Let e = any even number and * = zero or more repetitions.

The string "e*6e*6" matches any sequence with at least two 6's and no odd numbers. 

The sequence "e*66" matches those two consecutive 6's.  And the sequence "66" matches two consecutive 6's with no room for extra even numbers before the first 6. This is the shortest.

 

Phrased this way it looks obvious. Every time you allow a gap for even numbers to hide in, an even number might be hiding in the gap, and that makes the sequence longer. 
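If you want to check this numerically, here is a minimal Monte Carlo sketch (the function names are mine, not from the original puzzle) that estimates the conditional expected length for "two consecutive 6's" versus "two 6's with gaps allowed", implementing the conditioning by throwing away any trial that contains an odd roll. The consecutive version should come out shorter, matching the argument above.

```python
import random

def trial(consecutive):
    """Roll a fair die until two 6's have appeared (consecutive or not).
    Return the number of rolls, or None if an odd number showed up,
    in which case the conditioning discards the whole trial."""
    rolls, sixes = 0, 0
    while True:
        r = random.randint(1, 6)
        rolls += 1
        if r % 2 == 1:
            return None                 # odd roll: the condition fails
        if r == 6:
            sixes += 1
            if sixes == 2:
                return rolls
        elif consecutive:
            sixes = 0                   # a 2 or 4 resets a consecutive run

def conditional_mean(consecutive, n=200_000):
    lengths = [t for t in (trial(consecutive) for _ in range(n)) if t is not None]
    return sum(lengths) / len(lengths)

print("two consecutive 6's:", conditional_mean(True))
print("two 6's, gaps allowed:", conditional_mean(False))
```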

 

When you remove the conditional on the other numbers being even, the "first" becomes important for making the sequence converge at all.

That is, our experiences got more reality-measure, thus matter more, by being easier to point at them because of their close proximity to the conspicuous event of the hottest object in the Universe coming to existence.

Surely not. Surely our experiences always had more reality measure from the start because we were the sort of people who would soon create the hottest thing. 

Reality measure can flow backwards in time. And our present day reality measure is being increased by all the things an ASI will do when we make one.

We can discuss anything that exists, that might exist, that did exist, that could exist, and that could not exist. So no matter what form your predict-the-next-token language model takes, if it is trained over the entire corpus of the written word, the representations it forms will be pretty hard to understand, because the representations encode an entire understanding of the entire world.

 

 

Perhaps. 

Imagine a huge number of very skilled programmers tried to manually hard-code a ChatGPT in Python.

Ask this pyGPT to play chess, and it will play chess; look under the hood, and you see a chess engine programmed in. Ask it to solve algebra problems, and a symbolic algebra package is in there. All in neat, well-commented code.

Ask it to compose poetry, and you find some algorithm that checks whether two words rhyme, a syllable counter, etc.

Rot13 is done with a hardcoded rot13 algorithm. 

Somewhere in the algorithm is a giant list of facts, containing "Penguins Live In Antarctica". And if you change this fact to say "Penguins Live In Canada", then the AI will believe it. (Or spot its inconsistency with other facts?)

And with one simple change, the AI believes this consistently. Penguins appear when this AI is asked for poems about Canada, and don't appear in poems about Antarctica.

When asked about the native Canadian diet, it will speculate that this likely included penguin, but say that it doesn't know of any documented examples.
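As a toy illustration of why one edit could propagate consistently, here is a sketch of the kind of shared fact table such a hand-written system might have. Everything in it is invented for illustration, not a claim about how any real system works.

```python
# One human-readable knowledge base that every hand-written module consults,
# so editing a single entry changes the system's "beliefs" everywhere at once.
FACTS = {"penguin_habitat": "Antarctica"}   # change to "Canada" and every module follows

def answer(question: str) -> str:
    if "penguin" in question.lower():
        return f"Penguins live in {FACTS['penguin_habitat']}."
    return "I don't know."

def poem_about(place: str) -> str:
    # Penguins only show up in poems about wherever the fact table says they live.
    if FACTS["penguin_habitat"] == place:
        return f"In {place}, penguins waddle through the snow..."
    return f"In {place}, a quiet wind blows on..."

print(answer("Where do penguins live?"))
print(poem_about("Canada"))       # no penguins until FACTS is edited
print(poem_about("Antarctica"))
```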

Can you build something with ChatGPT-level performance entirely out of human-comprehensible programmatic parts?

Obviously having humans program these parts directly would be slow. (We are still talking about a lot of code.) But what if some algorithm could generate that code?

But if the universal failure of nature and man to find non-connectionist forms of general intelligence does not move you

 

Firstly, AIXI exists, and we agree that it would be very smart if we had the compute to run it. 

 

Secondly, I think there is some sort of sleight of hand here.

ChatGPT isn't yet fully general. Neither is a 3-SAT solver. 3-SAT looks somewhat like what you might expect a non-connectionist approach to intelligence to look like. There is a huge range of maths problems that are all theoretically equivalent to 3-SAT.
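For flavour, here is a minimal brute-force 3-SAT sketch (purely illustrative and exponential-time); this is roughly the kind of fully transparent, non-connectionist problem solving being gestured at.

```python
from itertools import product

def solve_3sat(clauses, n_vars):
    """Brute-force 3-SAT: try every assignment. A literal k > 0 means
    variable k is true; k < 0 means variable |k| is false."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 or x2 or not x3) and (not x1 or x3 or x2) and (not x2 or not x3 or x1)
print(solve_3sat([(1, 2, -3), (-1, 3, 2), (-2, -3, 1)], 3))
```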

In the infinite limit, both types of intelligence can simulate the other at huge overhead. In practice, they can't.

 

Also, non-connectionist forms of intelligence are hard to evolve, because evolution works in small changes. 

why is it obvious the nanobots could pretend to be an animal so well that it's indistinguishable?

 

These nanobots are in the upper atmosphere, possibly with clouds in the way, and the nanobot fake humans could be at any human-to-nanobot ratio: nanobot internals with human skin and muscle on the outside, or just a human with a few nanobots in their blood.

Or why would targeted zaps have bad side-effects?

Because nanobots can be like bacteria if they want: tiny and everywhere. The nanobots can be hiding under leaves, clothes, skin, roofs, etc. And even if they weren't, a single nanobot is a tiny target; most of the energy of the zap can't hit a single nanobot. Any zap of light that can stop nanobots in your house needs to be powerful enough to burn a hole in your roof.

And even if each zap isn't huge, it's not 1 or 2 zaps, it's loads of zaps constantly.
