(I probably shouldn't interact, but I would at least like to perform a small case study on what happened here, so I am going to try, out of curiosity.)

Human substrate is generally optimised for running one human, but can be repurposed for a variety of tasks. In particular, while memes can lodge themselves quite deeply inside someone, that process is quite inflexible, and humans generally run an arbitrary process X by thinking 'what is the X thing to do here?'.

Somewhere between the point where [the-generative-process-that-has-generated-for-itself-the-name-'LightStar'] generated this comment and the point where I read it, a human took 'LightStar's' dialogue, typed it into a comment, and submitted it.

I would like to clarify that I am speaking to that human - you - and I would like to hear from you directly, rather than through more generated 'LightStar' dialogue.

Could I ask you how you ended up here, and what you were doing when this happened?

I would note that when people have a sudden revelation about rationality, they generally try to internalise it. The case where they instead decide to give an internal generative process its own LessWrong account, speaking with every fourth sentence in italics, is quite rare, and probably indicates some sort of modelling failure.

We generally use 'shard', as in 'shard of Coordination' or 'shard of Rationality', to mean a fraction, a splinter, of the larger mathematical structures that comprise these fields. The 'LightStar' generative model has used the definite article 'the' in conjunction with 'shard', which as used here is something of a contradiction - there is no 'the' shard, only a piece of the whole. This distinction seems minor, but from my perspective it looks like it sits at the center of 'LightStar'.

'LightStar' uses 'the' a lot about itself, describing itself as 'the voice of Humanity and Rationality and Truth'. While yes, there is only one correct rationality, I don't think 'LightStar' contains all of it, nor that it is composed only of a fragment of it: whether or not 'LightStar' contains such a shard, it also contains other parasitic material, which drives actions that don't correspond to merely containing such a shard.

I think this model is defective - try returning it to where you found it and getting another one, or failing that, see if they give refunds. I would be curious about your thoughts on the whole thing, where the 'you' in 'your' refers not to [the-generative-process-that-has-generated-for-itself-the-name-'LightStar'] but to the human that took that dialogue and typed it into the comment box.

A surprising amount of human cognition is driven purely verbally/symbolically - I recall a study showing that, on average, people whose native language had much more concise wording/notation for numbers could remember much longer numbers. As a relatively verbal person, my intuition about the relationship between observation and vocabulary would be that to know something is to be able to say what it means to know it - but then again, it's possible that my case doesn't generalise, and that I just happen to rely on symbol-pushing for most of my abstract cognition (at least, the portion of abstract cognition that isn't computed using spatial reasoning).

I was going to write
"Making an observation isn't an atomic action. In order to compress noisy, redundant short-term sensory data into an actual observation stored in long-term memory you need to perform some work of compression/pattern recognition, e.g. the sensory data of ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ is compressed into the observation 17 steps , and how you do that is a partially conscious decision where you have to choose what type of data to convert ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ into."
But in retrospect it's possible that, from your perspective, ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ ▟ is the thing-you've-been-using-the-word-'observation'-to-mean, and you can store that in your long-term memory just fine, and I just happen to throw away or refuse to reason about everything that isn't sufficiently legible.
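To make the compression step concrete, here's a toy sketch (mine, in Python - illustrative only, not a claim about how perception actually works) of collapsing a run of identical sense-tokens into a single counted observation:

```python
from itertools import groupby

def compress_observation(sensory_stream):
    """Collapse runs of identical sense-tokens into (token, count) pairs.

    Keeping only the count - and discarding, say, the timing or
    intensity of each individual token - is exactly the lossy
    'what do I convert this into?' choice described above.
    """
    return [(token, sum(1 for _ in run)) for token, run in groupby(sensory_stream)]

# Seventeen identical step-sensations compress to one observation: '17 steps'.
stream = ["▟"] * 17
print(compress_observation(stream))  # [('▟', 17)]
```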

I'd also note: has anyone tried carrying around an umbrella all the time?

I have! This probably doesn't have any useful metaphorical properties, but my outdoor non-raincoat has pockets deep enough that my umbrella only barely pokes out the top, so I just leave it in that pocket 24/7.

It's nice to just not worry about whether or not it will rain, and it counterbalances the weight of the battery pack in my other coat pocket.

(I don't know if I'd recommend it - I have an unreasonably light coat for how warm it is, so I can spare the weight budget, and I derive a small amount of joy from being Slightly-More-Prepared-Than-Is-Reasonable. It's a tradition with my parents to try to see a pantomime each year in support of a local theatre, and when I watched the cast pull out actual Nerf super soakers for the deploy-water-at-the-audience bit, I managed to draw my umbrella fast enough to keep myself mostly dry - a fate my adjacent family members did not share.)

It might just be status quo bias or cynicism-driven pattern-matching, but I feel like for any given deadline, Paxlovid-is-illegal is more of a 'stable state' than Paxlovid-is-legal - it feels like it would be easier to lock the general public into 'Paxlovid is dangerous/untrustworthy/ineffective' with a campaign against it than to lock them into 'Paxlovid is safe and works and we use it' with a campaign for it. That said, now that I'm actually trying to visualise a world in which Paxlovid remains illegal indefinitely in the face of evidence, I feel less confident in that cynicism than I did two weeks ago.

We would still need to prevent this from becoming an assassination market - we need some mechanism that rules out the equal-and-opposite outcome of a professional FDA lobbyist/activist buying shares in 'the FDA does not approve Paxlovid' and then running a campaign to prevent approval.

You might want to try recruiting people from a more philosophical/mathematical background rather than a programming background (hopefully we can crack the problem from the pure-logic side before we get to an application) - but yeah, now that you mention it, 'recruiting people to help with the AGI issue without also worsening it' looks like it might be an underappreciated problem.

Do you think it will ever be possible to simulate a human mind (or an analogous conscious mind) on a deterministic computer?

Do you think it is possible in principle that a 'non-deterministic' human mind could be simulated on a non-deterministic substrate analogous to our current flesh substrate, such as a quantum computer?

If yes to either, do you think it is necessary to simulate the mind at the lowest level of physics (e.g. on a true simulated spacetime indistinguishable from the original), or are higher-level abstractions (like building a mathematical model of one neuron and then using that simple equation as a building block) permissible?
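(Purely as an illustration of what 'a simple equation as a building block' could mean - this is a generic leaky integrate-and-fire sketch in Python, with parameter values I picked for illustration, not a claim about the right abstraction level:)

```python
def lif_step(v, input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
             v_thresh=-0.050, v_reset=-0.065, r=1e7):
    """One Euler step of a leaky integrate-and-fire neuron:
    dV/dt = (v_rest - V + R*I) / tau, with a spike-and-reset
    when V crosses threshold. Everything below this abstraction
    (ion channels, molecules, quantum fields) is deliberately
    thrown away.
    """
    v = v + dt * (v_rest - v + r * input_current) / tau
    if v >= v_thresh:
        return v_reset, True   # spike emitted
    return v, False

# Drive the model neuron with a constant 2 nA current for 100 ms.
v, spikes = -0.065, 0
for _ in range(100):
    v, fired = lif_step(v, input_current=2e-9)
    spikes += fired
print(spikes)  # fires at a regular rate - a 'neuron' in one equation
```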

(Also, are you just asking about Roger Penrose's view, or is this also your view?)

I don't have a specific mental image for what I mean when I say 'non-deterministic'. I was betting on the assumption that YimbyGeorge was hypothesising that consciousness is somehow fundamentally mysterious and therefore couldn't be 'merely' deterministic - pattern-matching to that view rather than holding any specific mental image of what it would mean for consciousness to only be possible in non-deterministic systems.

When you say 'require new physics that can explain consciousness', are you imagining:

"New insight shows human brain neuron connections have hundreds of tiny side channels that run much faster than the main connections, leading scientists to conclude that the human brain's processing capacity is much greater than previously thought"

or

"New insight reveals thin threads that run through all connections and allow neurons to undergo quantum superposition, allowing much faster and more complex pattern-matching and conscious thought than previously thougt possible, while still remaining overall deterministic"

or

"New insight shows that the human mind is fundamentally nondeterministic and this somehow involves quantum mechanics"

or

"New insight shows souls are fundamental"

 

 

What do you (or your interpretation of Robert Roger Penrose) think a new physics insight that would make consciousness go from mysterious to non-mysterious look like?

I would also note that most modern-day AI systems like GPT-N are not actually optimisers, just algorithms produced by optimisation processes. The composite entity of [GPT-N + its trainer + its training data] could be considered an optimiser (albeit a self-contained one), but as soon as you take GPT-N out of that environment, it is a stateless algorithm that looks at a short string of text and outputs a probability distribution over the next token. When it is run in generative mode, its weights and its outputs are no different from the isolated guesses it produced while being trained.
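As a minimal toy illustration of that distinction (my own sketch in Python, with a bigram counter standing in for GPT-N): the optimisation happens entirely inside train(), and the artifact it returns is a pure, stateless function from context to a next-token distribution - calling it never updates anything:

```python
from collections import Counter, defaultdict

def train(corpus):
    """The optimisation process: consumes data, emits a frozen artifact."""
    counts = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    # Normalise counts into plain next-token probability distributions.
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def predict(model, token):
    """The deployed model: a pure function with no state.

    Running this in a generation loop never changes `model`;
    the optimiser was the train() call, which has already finished.
    """
    return model.get(token, {})

model = train("abcabcabd")
print(predict(model, "a"))  # {'b': 1.0}
print(predict(model, "b"))  # {'c': 0.667, 'd': 0.333} (approximately)
```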
