Note: I wrote this comment as notes while reading, to record what I thought of your arguments, rather than as a polished piece.
I think your calibration on the 'slow scenario' is off. What you claim is the slowest plausible scenario is fairly clearly the median scenario, given that it pretty much just follows current trends, and slower than the present trend is clearly plausible. Things have already slowed way down, with advancement in very narrow areas being the only real change. There is a reason that OpenAI hasn't dared even name something GPT 5, for instanc...
Apparently the very subject coming up led me to write a few paragraphs about the problems of a land value tax before I even started reading it. (A fraction of the things in parentheses were put in later to elaborate a point.)
There's nothing wrong with replacing current property taxes with equivalent (dollar value) taxes that apply only to the value of the land itself (this would be good for avoiding a penalty on improving your own land), but the land value tax (aka Georgism) is awful because of what its proponents want to do with it. Effectively, they want to...
Math is definitely just a language. It is a combination of symbols and a grammar for how they go together. It's what you come up with when you maximally abstract away the real world, and the part about not needing any grounding was specifically about abstract math, where there is no real world.
Verifiability is obviously important for training (since it lets us generate effectively infinite training data), but the reason math is so easily verifiable is that it doesn't rely on the world. Also, note that programming languages are also just languages (and quite simple ones), but abstract math is even less dependent on the real world than programming.
Math is just a language (a very simple one, in fact). Thus, abstract math is right in the wheelhouse of something made for language. Large Language Models are called that for a reason, and abstract math doesn't rely on the world itself, just on the language of math. LLMs lack grounding, but abstract math doesn't require it at all. It seems more surprising how badly LLMs used to do math than that they made progress. (Admittedly, if you actually mean ten years ago, that's before LLMs were really a thing. The attention mechanism that distinguishes the transformer had only barely been invented then.)
For something to be a betrayal does not require knowing the intent of the person doing it, and knowing the intent does not necessarily change it. I already brought up the fact that it would have been perfectly fine if they had asked permission; the betrayal comes in not asking permission to alter the agreed-upon course. Saying 'I will do x' is not implicitly asking for permission at all; it is a statement of intent that entirely disregards that there was even an agreement.
'What made A experience this as a betrayal' is the fact that it was one. It really is that simple. You could perhaps object that it is strange to experience vicarious betrayal, but since it sounds like the four of you were a team, it isn't even that. This is a very minor betrayal, but if someone were to even minorly betray my family, for instance, I would automatically feel betrayed myself, and would not trust that person anymore even if the family member doesn't actually mind what they did.
Analogy time (well, another one), 'what makes me experience being cold...
Obviously, translating between different perspectives is often a very valuable thing to do. While a lot of disagreements are values-based, very often people are okay with the other party holding different values as long as they are still a good partner, and failure to communicate really is just failure to communicate.
I dislike the assumption that 'B' was reacting that way due to past betrayal. Maybe they were, maybe they weren't (I do see that 'B' confirmed it for you in a reaction to another comment, but making such assumptions is still a b...
You might believe that the distinctions I make are idiosyncratic, though the meanings are in fact clearly distinct in ordinary usage, but I clearly do not agree with your misleading use of what people would be led to think are my words, and you should take care not to conflate things. You want people to precisely match your own qualifiers in cases where that causes no difference in the meaning of what is said (which makes enough sense), but you will directly object to people pointing out a clear miscommunication of yours because you do not care about a differe...
And here you are being pedantic about language in ways that directly contradict other things you've said in speaking to me. In this case, everything I said holds whether we use 'not different' or 'not that different' (while you actually misquote yourself as 'not very different'). That said, I should have included the extra word when quoting you.
Your point is not very convincing. Yes, people disagree if they disagree. I do not draw the lines in specific spots, as you should know based on what I've written, but you find it convenient to assume I do.
Do you hold panpsychism as a likely candidate? If not, then you most likely believe the vast majority of things are not conscious. We have a lot of evidence that the way it operates does not differ from other objects in ways we don't understand. Thus, almost the entire reference class would be things that are not conscious. If you do believe in panpsychism, then obviously AIs would be conscious too, but it wouldn't be an especially meaningful statement.
You could choose computer programs as the reference class, but most people are quite sure those aren'...
This statement is obviously incorrect. I have a vague concept of 'red', but I can tell you straight out that 'green' is not it, and I am utterly correct. Now, where does it go from 'red' to 'orange'? We could have a legitimate disagreement about that. Anyone who uses 'red' to mean 'green' is just purely wrong.
That said, it wouldn't even apply to me if your (incorrect) claim that a single definition is no different from an extremely confident vague definition were right. I don't have 'extreme confidence' about consciousness even as a vague concept. I am...
Pedantically, 'self-evident' and 'clear' are different words/phrases, and you should not have emphasized 'self-evident' in a way that makes it seem like I used it, regardless of whether you personally care about or make that distinction. I then explained why a lack of evidence should be read against the idea that a modern AI is conscious (basically, the prior probability is quite low).
Your comment is not really a response to the comment I made. I am not missing the point at all; if you think I am, I suspect you missed my point very badly (and are yourself extremely overconfident about it). I have explicitly talked about there being a number of possible definitions of consciousness multiple times, and I never explicitly favored one of them. I repeat: I never assumed a specific definition of consciousness, since I don't have a specific one I assume at all, and I am completely open to talking about a number of possibilities. I simply p...
I agree that people use consciousness to mean different things, but some definitions need to be ignored as clearly incorrect. If someone wants to use a definition of 'red' that includes large amounts of 'green', we should ignore them. Words mean something, and can't be stretched to include whatever the speaker wants them to if we are to speak the same language (so leaving aside things like how 'no' means 'of' in Japanese). Things like purposefulness are their own separate thing, and have a number of terms meant to be used with them, that we can meaningfull...
I did not use the term 'self-evident', and I do not necessarily believe it is self-evident, because theoretically we can't prove anything isn't conscious. My more limited claim is not that it is self-evident that LLMs are not conscious; it's that they just clearly aren't conscious. 'Almost no reliable evidence' in favor of consciousness is coupled with the fact that we know how LLMs work (and the details we don't know are probably not important to this matter), and how they work is no more related to consciousness than an ordinary computer program is. It ...
As a (severe) skeptic of all the AI doom stuff, and a moderate/centrist who has been voting for conservatives, I decided my perspective might be useful here, on a site that obviously skews heavily left. (While my response is in order, the numbers are there to separate my points, not to indicate which paragraph I am responding to.)
"AI-not-disempowering-humanity is conservative in the most fundamental sense"
1. Well, obviously this title section is completely true. If conservative means anything, it means being against destroying the lives o...
1. Kamala Harris did run a bad campaign. She was 'super popular' at the start of the campaign (assuming you can trust the polls, though you mostly can't) and 'super unpopular', losing definitively, by the end of it. On September 17th she was ahead by 2 points in the polls, and in a little more than a month and a half she was down by that much in the vote. She lost that much ground. She had no good ads, no good policy positions, and was completely unconvincing to people who weren't guaranteed to vote for her from the start. She had tons of money to get out all of...
Some people went into the 2024 election fearing that pollsters had not adequately corrected for the sources of bias that had plagued them in 2016 and 2020.
I mostly heard the opposite, that they had overcorrected.
As it often does when I write, this ended up being pretty long (and not especially well written by the standards I wish I lived up to).
I'm sure I did misunderstand part of what you are saying (that we misunderstand each other easily was the biggest thing we appear to agree on), but my disagreements aren't necessarily with things you don't actually mention yourself. I think we disagree mostly on what outcomes the advice will produce if adopted overly eagerly, because I see the bad way of implementing it as the natural outcome. Again, I think you...
I have a lot of disagreements with this piece, and I just wrote these notes as I read it. I don't know if this will even be a useful comment; I didn't write it with a through line. 'You' and 'your' are often used nonspecifically, about people in general.
The usefulness of things like real world examples seems to vary wildly.
Rephrasing is often terrible; done carelessly, it often amounts to lying about what your conversation partner is saying, especially since many people will double down on the rephrasing when told that they are wrong, whi...
To be pedantic, my model is pretty obvious and clearly gives this prediction, so you can't really say that you don't see a model here; you just don't believe the model. Your model with extra assumptions doesn't give this prediction, but the one I gave clearly does.
You can't find a person this can't be done to, because there is something obviously wrong with everyone. Things can be twisted easily enough (offense is stronger than defense here). If you didn't find it, you just didn't look hard/creatively enough. Our intuitions against people tricking u...
It does of course raise the difficulty level for the political maneuvering, but it would make things far more credible, which means that people could actually rely on it. It really is quite difficult to precommit to things you might not like, so structures that make that work seem interesting to me.
I think it would be a bad idea to actually do (there are so many problems with it in practice), but it is a bit of an interesting thing to note how being a swing state helps convince everyone to try to cater to you, and not just a little. This would be the swing state to end all swing states, I suppose.
The way to get this done that might actually work is probably to make it an amendment to each state's constitution that can only be repealed for future elections, not for the election in which the repeal itself is voted on. (If necessary, you can always amend how the state constitution is amended to make this doable.)
I should perhaps have added something I thought of slightly later that isn't really part of my original model: an intentional blind spot can be a sign of loyalty in certain cases.
The good thing about existence proofs is that you really just have to find an example. Sometimes, I can do that.
It seems I was not clear enough, but this is not my model. (I explained it to the person who asked, if you want to see what I meant; I was talking about parties turning their opponents into scissors statements.)
That said, I do believe that a possible partial explanation is that sometimes having an intentional blind spot can be seen as a sign of loyalty by the party structure.
So, my model isn't about them making their own candidate that way; it is the much more obvious political move: make your opponent as controversial as possible. There is something weird/off/wrong about your opponent's candidate, so find the things that could plausibly make the electorate think that, and push as hard as possible. I think they're good enough at it. Or, in other words, try to find the best scissors statements about your opponent, where 'best' is determined both in terms of not losing your own supporters, and in terms of losing your opponent ...
While there are legitimate differences between the sides that matter quite a bit, I believe a lot of the reason candidates are like 'scissors statements' is that the median voter theorem actually kind of works: the parties see the need to move their candidates pretty far toward the current center, but they also know they will lose the extremists to not voting or voting third party if they don't give them something to focus on. So both sides are literally optimizing for the scissors effect to keep their extremists engaged.
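To illustrate the centering pressure (a minimal sketch of the standard one-dimensional spatial-voting model, with made-up numbers; it deliberately omits the abstention term the parties are balancing against):

```python
import random

# Toy 1-D spatial voting model: each voter supports the nearest candidate,
# so moving toward the median voter wins votes away from the rival.
random.seed(0)
voters = [random.gauss(0.0, 1.0) for _ in range(100_001)]
median = sorted(voters)[len(voters) // 2]

def vote_share(a: float, b: float) -> float:
    """Fraction of voters strictly closer to position a than to position b."""
    return sum(abs(v - a) < abs(v - b) for v in voters) / len(voters)

# A candidate drifting from an extreme toward the median gains share.
for position in (2.0, 1.0, 0.5, median):
    print(f"candidate at {position:+.2f} vs rival at -0.25: "
          f"{vote_share(position, -0.25):.1%}")
```

The tension I'm describing is exactly what the sketch leaves out: here the extremists still vote for the nearest candidate, while in reality they may stay home unless given something to focus on.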
When reading the piece, it seemed to assume far too much (and many of the assumptions are ones I obviously disagree with). I would call many of the assumptions a relative of the false dichotomy (though I don't know what it is called when you present more than two possibilities as exhaustive when they really aren't). If you had been more open in your writing to the idea that you don't necessarily know what the believers in natural abstractions mean, and that the possibilities mentioned were not exhaustive, I probably would have had a less negative rea...
Honestly, this post seems very confused to me. You are clearly thinking about this in an unproductive manner. (Also a bit overtly hostile.)
The idea that there are no natural abstractions is deeply silly. To gesture at a brief proof: the counting numbers '1', '2', '3', '4', etc., as applied to objects. There is no doubt that these are natural abstractions. See also 'on land', 'underwater', 'in the sky', etc. Others include 'empty' vs 'full' vs 'partially full and partially empty', as well as 'bigger', 'smaller', 'lighter', 'heavier', etc.
The utility functions...
It obviously has 'any' validity. If an instance of 'ancient wisdom' killed off or weakened its followers enough, it wouldn't still be around. Also, such a thing has been optimized over a lot of time by a lot of people; the version we receive probably isn't the best, but it is still one of the better versions.
While some will weaken the people a bit and stick around for sounding good, they generally are just ideas that worked well enough. The best argument for 'ancient wisdom' is that you can actually just check how it has affected the people using it. If it has good e...
I definitely agree. No matter how useful something will end up being, or how simple it seems the transition will be, it always takes a long time because there is always some reason it wasn't already being used, and because everyone has to figure out how to use it even after that.
For instance, maybe it will become a trend to replace dialogue in videogames with specially trained LLMs (on a per character basis, or just trained to keep the characters properly separate). We could obviously do it right now, but what is the likelihood of any major trend toward th...
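The simplest version of this probably wouldn't even need per-character training. A minimal sketch, assuming an OpenAI-style chat client; the `client` object, model name, and characters here are hypothetical placeholders:

```python
# Per-character game dialogue via system prompts rather than per-character
# training. `client` is any OpenAI-style chat client; the model name is a
# placeholder, not a recommendation.
CHARACTERS = {
    "blacksmith": "You are Brunhild, a gruff blacksmith. Never break character. "
                  "You know nothing of the world beyond your forge and your town.",
    "innkeeper": "You are Tomas, a cheerful innkeeper who gossips constantly.",
}

def character_line(client, character: str, player_input: str) -> str:
    """Generate one line of dialogue for the named character."""
    response = client.chat.completions.create(
        model="some-chat-model",  # placeholder
        messages=[
            {"role": "system", "content": CHARACTERS[character]},
            {"role": "user", "content": player_input},
        ],
    )
    return response.choices[0].message.content
```

Per-character training would presumably only matter once prompting like this fails to keep the characters properly separate.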
No problem with the failure to respond. I appreciate that this way of communicating is asynchronous (and I don't necessarily reply to things promptly either). And I think it would be reasonable to drop it at any point if it didn't seem valuable.
Also, you're welcome.
Sorry, I don't have a link for using actual compression algorithms; it was a while ago, and I didn't think it would come up, so I didn't note anything down. My recent spate of commenting is unusual for me (and I don't actually keep many notes on AI-related subjects).
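For what it's worth, the flavor of the thing is easy to reconstruct even without the link. A minimal sketch of compression-based similarity (normalized compression distance, using gzip); this is my reconstruction of the general technique, not necessarily what the piece I read did:

```python
import gzip

def c(data: str) -> int:
    """Compressed size in bytes, a rough stand-in for information content."""
    return len(gzip.compress(data.encode()))

def ncd(x: str, y: str) -> float:
    """Normalized compression distance: near 0 for redundant pairs,
    near 1 for pairs sharing little structure."""
    return (c(x + y) - min(c(x), c(y))) / max(c(x), c(y))

a = "The quick brown fox jumps over the lazy dog. " * 20
b = "A quick brown fox leaps over a lazy dog. " * 20
d = "Colorless green ideas sleep furiously in the tall grass. " * 20
print(ncd(a, b))  # low: the texts share most of their structure
print(ncd(a, d))  # higher: much less shared structure
```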
I definitely agree that 'more novel and more requiring of intelligence' is 'hard to judge'. It is, after all, a major thing we don't even know how to cleanly solve when evaluating other humans (so we use tricks that often rely on other things, and these tricks likely do not generalize to other poss...
I obviously tend to go on at length about things when I analyze them. I'm glad when that's useful.
I had heard that OpenAI models aren't deterministic even at the lowest randomness setting, which I believe is probably due to optimizations for speed, like how image generation models (which I am more familiar with) use optimizers like xformers that throw away a little correctness and determinism for significant improvements in resource usage. I don't know what OpenAI uses to run these models (I assume they have their own custom hardware?), but I'm pretty sure th...
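The claim is cheap to check, for what it's worth. A minimal sketch using the OpenAI Python client (the model name is a placeholder, and note that even the `seed` parameter is documented as best-effort only):

```python
# Check for nondeterminism at temperature 0. OpenAI documents
# reproducibility as best-effort even with a fixed seed, so repeated
# identical calls may still produce different outputs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def sample(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
        seed=42,  # best-effort reproducibility only
    )
    return response.choices[0].message.content

outputs = {sample("Write one sentence about determinism.") for _ in range(5)}
print(f"{len(outputs)} distinct output(s) across 5 identical calls")
```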
It seems like you are failing to get my points at all. First, I am defending the point that blue LEDs are unworthy because the blue LED is not worthy of the award, but I corrected your claim that it was my example. Second, you are the only one making this about snubbing at all. I explicitly told you that I don't care about snubbing arguments; comparisons are used for reasons other than snubbing. Third, since this isn't about snubbing, it doesn't matter at all whether or not the LED could have been given the award.
The point is that the 'Blue LED' is not a sufficient advancement over the 'LED', not that it is a snub. I don't care about whether or not it is a snub; that's just not how I think about things like this. Also, note that the 'Blue LED' was not originally my example at all; someone else brought it up as an example.
I talked about 'inventing LEDs at all' since that is the minimal related thing that might actually have been enough of a breakthrough in physics to matter. Blue LEDs are simply not a significant enough change from what we already had. Even just ...
I find the idea of determining the level of 'introspection' an AI can manage to be an intriguing one; introspection seems likely to be very important to generalizing intelligent behavior, and knowing what is going on inside the AI is obviously interesting for the interpretability reasons mentioned. Yet this seems oversold (to me). The actual success rate of self-prediction seems incredibly low considering that the trivial/dominant strategy of 'just run the query' (which you do briefly mention) should be easy for the machine to discover during traini...
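To spell out why 'just run the query' is the trivial/dominant strategy: for a deterministic model, predicting your own output by simply producing it scores perfectly by construction. A toy sketch (the stand-in model here is hypothetical):

```python
# The trivial self-prediction strategy: answer "what would you output
# for X?" by actually computing your output for X. For a deterministic
# model this is exact by construction.
from typing import Callable

Model = Callable[[str], str]

def self_predict(model: Model, query: str) -> str:
    """Predict the model's own output by simply producing it."""
    return model(query)

def self_prediction_accuracy(model: Model, queries: list[str]) -> float:
    hits = sum(self_predict(model, q) == model(q) for q in queries)
    return hits / len(queries)

toy_model: Model = str.upper  # stand-in for any deterministic model
print(self_prediction_accuracy(toy_model, ["hello", "world"]))  # 1.0
```

Against that baseline, a low measured self-prediction rate seems like the headline fact.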
'Substantial technical accomplishment', sure, but minor impact compared to the actual invention of LEDs. Awarding the 'blue LED' rather than the 'LED' is like saying the invention of the jet engine is more important than the invention of the engine at all, or that the invention of 'C' is more important than the invention of 'not machine code'.
One of the problems with the Nobel Prize as a measurement or criterion is that it is not really suited for that by nature, especially given rules like no posthumous awards. This means that it is easy to critique the awarding of a Nobel Prize, but harder to critique the failure to award one. You can't give a Nobel Prize to the inventor of the engine, because they probably died a long time ago; you could have for a recent kind of engine. Similarly, you could give a Turing Award to the inventors of C (and they probably did), but the first person who created a mnemoni...
Note that I am, in general, reluctant to claim to know how I will react to evidence in the future. There are things so far out there that I do know how I would react, but I like to allow myself to use all the evidence I have at that point, and not what I thought beforehand. I do not currently know enough about what would convince me of intelligence in an AI to say for sure. (In part because many people before me have been so obviously wrong.)
I wouldn't say I see intelligence as a boolean, but as many-valued... but those values include a level below which t...
Huh, they really gave a Nobel in Physics specifically for the blue LED? It would have made sense for LED's at all, but specifically for blue? That really is ridiculous.
I should be clearer: AlphaFold seems like something that could be a chemistry breakthrough sufficient for a prize. I'd even heard about how difficult the problem was in other contexts, and it was hailed as a breakthrough at the time in what seemed like a genuine way. But I can't evaluate its importance to the field as an outsider, and the terrible physics prize leads me to suspect that the evaluation behind the chemistry prize might be flawed due to whatever pressures led to the selection of the physics prize.
I think the fact that they are technically separate people just makes it more likely for this to come into play. If it were all the same people, they could simply choose the best AI contribution and be done with it; instead, the committees have the same setup, pressures, and general job, but have not themselves honored AI yet... and each wants to make its own mark.
I do think this is much more likely the reason the physics one was chosen than the chemistry one, but it does show that the existing pressures are to honor AI even when it doesn't make sense.
I do think ...
What would be a minimal-ish definitive test for LLM style AI? I don't really know. I could come up with tests for it most likely, but I don't really know how to make them fairly minimal. I can tell you that current AI isn't intelligent, but as for what would prove intelligence, I've been thinking about it for a while and I really don't have much. I wish I could be more helpful.
I do think your test of whether an AI can follow the scientific method in a novel area is intriguing.
Historically, a lot of people have come up with (in retrospect) really dumb tests...
To the best of my ability to recall, I never recognize which is which except by context, which makes it needlessly difficult sometimes. Personally I would go for 'subconscious' vs 'conscious' or 'associative' vs 'deliberative' (the latter pair due to how I think the subconscious works), but 'intuition' vs 'reason' makes sense too. In general, I believe far too many things are given unhelpful names.
I get it. I like to poke at things too. I think it did help me figure out a few things about why I think what I do about the subject; I just lose energy for this kind of thing easily. And I have; I honestly wasn't going to answer more questions. I think understanding in politics is good, even though people rarely change positions due to the arguments, so I'm glad it was helpful.
I do agree that many Trump supporters have weird beliefs (I think they're endemic in politics, on all sides, which includes centrists). I don't like what politics does to people's th...
Your interpretation of Trump's words and actions implies he is in favor of circumventing the system of laws and the constitution, while another interpretation (that I and many others hold) is that his words and actions mean he thinks the system was not followed, when it should have been.
Separately, a significant fraction of the American populace also believes it really was not properly followed. (I believe this, though not to the extent that I think it changed the outcome.) Many who believe that are Trump supporters of course, but it is not such a s...
I don't pay attention to what gets people the Nobel Prize in physics, but this seems obviously illegitimate. AI and physics are pretty unrelated, and they aren't getting it for an AI that has done anything to solve physics. I'm pretty sure they didn't get it for merit, but because AI is hyped. The AI chemistry one makes some sense, as it is actually making attempts to solve a chemistry issue, but I doubt its importance since they also felt the need to award AI in a way that makes no sense with the other award.
We seem to be retreading ground.
"It doesn't matter if the election was stolen if it can't be shown to be true through our justice system". That is an absurd standard for whether or not someone should 'try' to use the legal system (which is what Trump did). You are trying to disqualify someone regardless of the truth of the matter based on what the legal system decided to do later. And Trump DID just take the loss (after exhausting the legal avenues), and is now going through the election system as normal in an attempt to win a new election.
I also find your...
This treatment of the idea of complexity is clearly incorrect for the simplest possible reason: we have no idea what the Kolmogorov complexity of these objects is relative to each other, since the lower bounds are exactly identical! (Said bounds are just a hair above zero, because we can be relatively sure that their existence is not absolutely required by the laws of the universe, but little more than that.) The upper bounds are different, but not in an illuminating manner.
Thus, we have to use other things to determine complexity, and the brain is clearly fa...
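To make the asymmetry concrete: any real compressor gives a computable upper bound on Kolmogorov complexity (up to the constant size of the decompressor), while no nontrivial lower bound is computable at all. A minimal sketch of the upper-bound side:

```python
import bz2, gzip, lzma, random

def k_upper_bound(data: bytes) -> int:
    """A computable UPPER bound on Kolmogorov complexity: the best of a
    few real compressors (ignoring the constant-size decompressor).
    No analogous nontrivial lower bound can be computed."""
    return min(len(gzip.compress(data)),
               len(bz2.compress(data)),
               len(lzma.compress(data)))

random.seed(0)
structured = b"abc" * 10_000                                 # 30,000 bytes
noise = bytes(random.randrange(256) for _ in range(30_000))  # 30,000 bytes
print(k_upper_bound(structured))  # tiny: the structure is exploitable
print(k_upper_bound(noise))       # near 30,000: incompressible to these tools
```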