I guess! I remember he was always into theoretical QM and "Quantum Foundations", so this is not a surprise. It's not a particularly big field either; most researchers prefer focusing on less philosophical aspects of the theory.
Note that it only stands if the AI is sufficiently aligned that it cares that much about obeying orders and not rocking the boat. Which I don't think is very realistic if we're talking about that kind of crazy intelligence-explosion super-AI stuff. I guess the question is whether you can have "replace humans"-good AI without almost immediately having "wipes out humans, takes over the universe"-good AI.
That sounds interesting! I'll give the paper a read and try to suss out what it means - it seems at least a serious enough effort. Here's the reference for anyone else who doesn't want to go through the intermediate news site:
https://arxiv.org/pdf/2012.06580
(also: professor D'Ariano authored this? I used to work in the same department!)
This feels like a classic case of overthinking. Suggestion: maybe twin sisters care more about their own children than their nieces because their own children are the ones they carried in the womb and then nurtured and actually raised. Genetics inform our behaviour, but ultimately what they align us to is something like "you shall be attached to cute little baby-like things you spend a lot of time raising". That holds for our babies, it holds for babies born from other people's sperm/eggs, it holds for adopted babies, heck, it even transfer...
I mean, I guess it's technically coherent, but it also sounds kind of insane. That way Dormammu lies.
Why would one even care about their future self if they're so unconcerned about that self's preferences?
I just think any such people lack imagination. I am 100% confident there exists an amount of suffering that would have them wish for death instead; they simply can't conceive of it.
Or, for that matter, to abstain from burning infinite fossil fuels. We happen to not live on a planet with enough carbon to trigger a Venus-like cascade, but if that weren't the case I don't know if we could stop ourselves from doing that either.
The thing is, any kind of large scale coordination to that effect seems more and more like it would require a degree of removal of agency from individuals that I'd call dystopian. You can't be human and free without a freedom to make mistakes. But the higher the stakes, the greater the technological power we wield,...
What looks like an S-risk to you or me may not count as -inf for some people.
True but that's just for relatively "mild" S-risks like "a dystopia in which AI rules the world, sees all and electrocutes anyone who commits a crime by the standards of the year it was created in, forever". It's a bad outcome, you could classify it as S-risk, but it's still among the most aligned AIs imaginable and relatively better than extinction.
I simply don't think many people think about what an S-risk literally worse than extinction would look like. To be fair, I also think these aren't very likely outcomes, as they would require an AI very well aligned to human values - just aligned for evil.
So, we will have nice, specific things like prevention of Alzheimer's, or some safer, more reliable descendant of CRISPR that may cure most genetic disease in existing people. Also, we will need to have some conversation, because the human economy will be obsolete, and with it the incentives for states to care about people.
I feel like the fundamental problem with this is that while scientific and technological progress can be advanced intentionally, I can't think of an actual example of large scale social change happening in some kind o...
I think the shell games point is interesting though. It's not psychoanalysing (one can think that people are in denial or have rational beliefs about this, not much point second guessing too far), it's pointing out a specific fallacy: a sort of god of the gaps in which every person with a focus on subsystem X assumes the problem will be solved in subsystem Y, which they understand or care less about because it's not their specialty. If everyone does it, that does indeed lead to completely ignoring serious problems due to a sort of bystander effect.
I suppose a Gaussian is technically the correct prior for "a very high number of error factors with a completely unknown but bounded probability distribution". But the reality is that this isn't a good description of this specific situation, even with as much ignorance as you want thrown in.
I think for this specific example the superior is wrong because realistically we can form an expectation of the distribution of those factors. Just because we don't know doesn't mean it's necessarily a Gaussian - some factors, like the Coriolis force, are systematic. If the distribution were "a ring of 1 m around the aim point", then you would know for sure you won't hit the terrorist that way, but have no clue whether you'll hit the kid.
Also, even if the distribution were Gaussian, if it's broad enough the difference in probability between hitting the terrorist and hitting the kid may simply be too small to matter.
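To make that concrete, here's a minimal Monte Carlo sketch (hypothetical distances and spreads, plain NumPy): under a broad Gaussian, the aim point and a bystander 1 m away get hit with nearly the same tiny probability; under a 1 m ring error, the aim point is never hit while the bystander sometimes is.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
HIT_RADIUS = 0.1            # hypothetical: within 10 cm counts as a hit
KID = np.array([1.0, 0.0])  # bystander 1 m from the aim point

def hit_rate(shots, target):
    """Fraction of shots landing within HIT_RADIUS of target."""
    return np.mean(np.linalg.norm(shots - target, axis=1) < HIT_RADIUS)

# Model 1: broad Gaussian scatter (sigma = 2 m) around the aim point.
gauss = rng.normal(0.0, 2.0, size=(N, 2))

# Model 2: systematic ring error -- every shot lands exactly 1 m off,
# in a uniformly random direction.
theta = rng.uniform(0.0, 2.0 * np.pi, N)
ring = np.column_stack([np.cos(theta), np.sin(theta)])

for name, shots in [("gaussian", gauss), ("ring", ring)]:
    print(f"{name:8s} P(hit aim point)={hit_rate(shots, np.zeros(2)):.4f} "
          f"P(hit bystander)={hit_rate(shots, KID):.4f}")
```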
I mean, yes, humans make mistakes too. But do our most high-level mistakes, like "Andrew Wiles' first proof of Fermat's Last Theorem was wrong", much affect our ability to be vastly superior to chimpanzees in any conflict with them?
consciousness is inherently linked to quantum particle wavefunction collapse
As someone with quite a bit of professional experience working with QM, that sounds like a bit of a god of the gaps. We don't even know what collapse means, in practice. All we know about consciousness is that it seems like a classical enough phenomenon to experience only one branch of the wavefunction. There's no particular reason why there can't be more "you"s out there in the Hilbert space, equally convinced that their branch is the only one into which everything mysteriously collapsed.
Which other people have described the situation differently, and where? Genuine question; I'm pretty much learning about all of this here.
What? If every couple had only one child, the population would halve at each generation. That's what they mean. Replacement rate requires more than just one child.
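Spelled out as a toy calculation (hypothetical starting number, ignoring generation overlap and mortality):

```python
# One child per couple: every 2 parents produce 1 child,
# so each generation is half the size of the previous one.
pop = 1_000_000  # hypothetical starting generation
for gen in range(5):
    print(f"generation {gen}: {pop:,} people")
    pop //= 2
```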
I mean, the whole point was "how can we have fertility but also not be a dystopia". You just described a dystopia. It's also kind of telling that the only way you can think of to make people have children (something that is supposedly a joyous experience) is "have a tyrannical dictator make it very clear that they'll make sure the alternative is even worse". Someone thinking this way is part of the problem more than they are part of the solution.
I honestly think "find the elixir of immortality within a couple of generations" is not what I'd call a pragmatic plan to solve this. Personally I don't think having 2 or 3 children would necessarily be such a curse in a different kind of world. A few obvious changes that I think would help towards that:
short of immortality, any extension of youth helps. Part of the problem here is that by the time we feel like we've got our shit sorted out, we're almost too old to have children;
artificial wombs. Smooth out the risks of pregnancy and eliminate the bi
I mean, the problem of "my brain gets bad vibes too easily" is more general. Prejudice is a very common manifestation of it, but it can happen in other ways, and in the limit, as mentioned, you get bad vibes from everyone because you're just paranoid, and it isolates you. I think this is more an issue of you trying to get a sense of how good your intuition is in the first place, and possibly examining it to move those intuitive vibes to the conscious level. Like, for example, there are certain patterns in speech and attitude that scream "fake" to me, but it feels like I could at least try describing them.
Thanks! I've actually seen some more recent posts that got pretty popular outlining this same argument, so I guess I'm... happy... that it's gaining some traction? However happy one can be to see the same prophecy of doom repeated and validated by other people who are just as unlikely to change the current trajectory of the world as me.
Possibly perfectionism? I experience this form of creative paralysis a lot - as soon as I get enough into the weeds of one creative form I start seeing the endless ramifications of the tiniest decision and basically can just not move a step without trying to achieve endlessly deep optimisation over the whole. Meanwhile people who can just not give a fuck and let the creative juices flow get shit done.
I think that's a bit too extreme. Are all machines bad? No, obviously better to have mechanised agriculture than be all peasants. But he is grasping something here which we are now dealing with more directly. It's the classic Moloch trap of "if you have enough power to optimise hard enough then all slack is destroyed and eventually life itself". If you thought that was an inevitable end of all technological development (and we haven't proven it isn't yet), you may end up thinking being peasants is better too.
I think some believe it's downright impossible and others that we'll just never create it because we have no use for something so smart it overrides our orders and wishes. That at most we'll make a sort of magical genie still bound by us expressing our wishes.
I feel like this is a bit incorrect. There are imaginable things that are smarter than humans at some tasks, as smart as average humans at others, thus overall superhuman, yet controllable and therefore possible to integrate into an economy without immediately exploding into a utopian (or dystopian) singularity. The question is whether we are liable to build such things before we build the exploding-singularity kind, or if the latter is in some sense easier to build and thus stumbled upon first. Most AI optimists think these limited and controllable intelligences are the default natural outcome of our current trajectory, and thus expect mere boosts in productivity.
I don't know about the Bible itself, but there's a long and storied tradition of self-mortification and denial of corporeity in general in medieval Christian doctrine and mysticism. If we want to be cute we could call that fandom, but after a couple thousand years of it, it ends up being as important as the canon text itself.
I think the fundamental problem is that yes, there are people with that innate tendency, but that is not in the slightest bit helped by creating huge incentives for a whole industry to put its massive resources into finding ways to make that tendency become as bad as possible. Imagine if we had entire companies that somehow profited from depressed people committing suicide and had dedicated teams of behavioural scientists and quants crunching data and designing new strategies to make anyone who already has the tendency maximally suicidal. I doubt we would ...
I definitely think this is a general cultural zeitgeist thing. The progressive thing used to be the positivist "science triumphs over all, humanity rises over petty differences, leaves childish things like religions, nations and races behind and achieves its full potential". But then people have grown sceptical of all grand narratives, seeing them as inherently poisoned because if you worry about grand things you are more inclined to disregard the small ones. Politics built around reclamation of personal identity, community, tradition as forms of resistanc...
This is exactly the kind of thing Egan is reacting to, though—starry-eyed sci-fi enthusiasts assuming LLMs are digital people because they talk, rather than thinking soberly about the technology qua technology.
I feel like this borders on the strawman. When discussing this argument my general position isn't "LLMs are people!". It's "OK, let's say LLMs aren't people, which is also my gut feeling. Given that they still converse as intelligently as, or more intelligently than, some human beings whom we totally acknowledge as people, where the fuck does that leave us as to our a...
Since ChatGPT came out I feel like Egan has really lost the plot on that one, already back when discussing it on Twitter. It felt like a combination of rejection of the "bitter lesson" (understandable: I too find it inelegant and downright offensive to my aesthetic sense that brute-force deep learning seems to work better than elegantly designed GOFAI, but whatever it is, it does undeniably work), and political cognitive dissonance that says that if people who wrongthink support AI, and evil billionaires throw their weight behind AI, then AI is bad, and therefore ...
That sounds more like my intuition, though obviously there still have to be differences given that we keep using self-attention (quadratic in N) instead of MLPs (linear in N).
In the limit of infinite scaling, the fact that MLPs are universal function approximators is a guarantee that you can do anything with them. But obviously we still would rather have something that can actually work with less-than-infinite amounts of compute.
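For concreteness, a back-of-the-envelope sketch of that scaling difference (hypothetical layer sizes, counting only the dominant matmul terms): the per-token MLP cost grows linearly in sequence length n, while the attention score matrix alone grows quadratically and overtakes it once n gets large enough.

```python
def attention_flops(n, d):
    """Dominant cost of self-attention over a length-n sequence of
    width-d tokens: the QK^T score matrix alone is an (n x d) @ (d x n)
    matmul, ~2*n^2*d FLOPs -- quadratic in n."""
    return 2 * n * n * d

def mlp_flops(n, d, expand=4):
    """Dominant cost of a per-token MLP block: two matmuls between
    widths d and expand*d per token, ~4*expand*n*d^2 FLOPs -- linear in n."""
    return 4 * expand * n * d * d

d = 1024  # hypothetical model width
for n in (1_000, 10_000, 100_000):
    print(f"n={n:>7,}: attention ~{attention_flops(n, d):.1e} FLOPs, "
          f"MLP ~{mlp_flops(n, d):.1e} FLOPs")
```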
Interesting. But CNNs were developed for a reason to begin with, and MLP-Mixer does mention a rather specific architecture as well as "modern regularization techniques". I'd say all of that counts as baking some inductive biases into the model, though I agree it's a very light touch.
Does it make sense to say there is no inductive bias at work in modern ML models? It seems clear that literally brute-force searching ALL THE ALGORITHMS would still be unfeasible no matter how much compute you throw at it. Our models are very general, but when e.g. we use a diffusion model for images, it exploits (and is biased towards) the kind of local structure we expect of images; when we use a transformer for text, it exploits (and is biased towards) the kind of sequential pair-correlation you see in natural language; etc.
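To make that "light touch" concrete, here's a minimal parameter-count sketch (hypothetical sizes, plain Python): a convolution bakes in locality and translation invariance, so its parameter count doesn't even see the image size; a dense layer over the same pixels has no spatial bias at all and pays quadratically for it.

```python
H = W = 32                 # hypothetical image size
C_IN, C_OUT, K = 3, 16, 3  # hypothetical channel counts, 3x3 kernel

# Convolution: weights see only a KxK neighbourhood and are shared
# across all positions -- locality + translation invariance baked in.
conv_params = C_OUT * C_IN * K * K + C_OUT  # weights + biases = 448

# Dense layer over the flattened image: every output value may depend
# on every input pixel -- no spatial bias at all.
dense_params = (C_IN * H * W) * (C_OUT * H * W) + C_OUT * H * W  # ~5.0e7

print(f"conv:  {conv_params:,} parameters (independent of H and W)")
print(f"dense: {dense_params:,} parameters (grows as (H*W)^2)")
```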
Generalize this story across a whole field, and we end up with most of the field focused on things which are easy, regardless of whether those things are valuable.
I would say this problem plagues more than just alignment, it plagues all of science. Trying to do everything as a series of individual uncoordinated contributions with an authority on top acting only to filter based on approximate performance metrics has this effect.
On this issue specifically, I feel like the bar for what counts as an actually sane and non-dysfunctional organization to the average user of this website is probably way too lofty for 95% of workplaces out there (to be generous!), so it's not even that strange that this would be the case.
A whole lot of people, the vast majority that I've talked to, can easily answer this - "because they pay me and I'm not sure anyone else will", with a bit of "I know this mediocrity well, and the effort to learn a new one only to find it's not better will drain what little energy I have left".
Or "last time I did that I ended up in this one which is even worse than the previous, so I do not wish to tempt fate again".
Ouch. Sometimes the answer to "Why don't you simply X?" is "What makes you so sure I didn't already 'simply X' in the past, and maybe it just didn't work as well as advertised?".
It's not necessarily that the strategy is bad, but sometimes it needs a few ingredients to make it work, such as specific skills, or luck.
Not just that, but as per manga spoilers:
The US already has a bunch of revived people going, including a Senku-level rationalist and scientist who has discovered the revival fluid in parallel and is in fact much less inclined to be forgiving, wanting the exact opposite of Tsukasa: to take advantage of the hard reset to build a full technocracy. By the time Senku & co arrive there, they already have automatic firearms and WW1-era planes. So essentially Tsukasa's plan was always absolutely doomed. Just like it has happened before, one day backwards iso
It's not about science as a whole, but Assassination Classroom features one of the most beautiful uses of actual, genuine, 100% correctly represented math in fiction I've ever seen.
Spoilers:
During one of the exams, Karma is competing against the Principal's son for top score. One of the problems involves calculating the volume of the Wigner-Seitz cell in a body-centered cubic lattice. This is obviously quite hard for middle schoolers, but believable for an exam whose explicit purpose was to test them to their limits and let the very best rise to the top. T
Senku definitely holds that position, and of the authors I wouldn't be surprised if Boichi at least did - he is famously a big lover of classic science fiction. If you check out his Dr. Stone: Byakuya solo spinoff manga, it starts out as a simple side story, showing the life of Senku's dad and his astronaut companions in space, and then spirals out in a completely insane direction involving essentially an AI singularity (understandably, it's not canon).
There is a certain "Jump heroes shouldn't kill wantonly" vibe I guess but truth be told Jump heroes have...
Tsukasa loses everything and has to settle for being bought off, in essence, because Senku manages to accumulate enough of a technological advantage even without any additional revivals. (The series unconvincingly says that Tsukasa's politics were superficial and so he got what he really wanted in the end, to try to rationalize how it worked out for him. Seems like cope to me!)
The story plays a bit of a sleight of hand there with Tsukasa having the additional motivation of saving his little sister, which is a pity because they could have at least played...
Another Dr. Stone fan here. I will definitely vouch for this show. You have to go in prepared for the anime-ness of it all - this is not actual science and engineering any more than your average "spokon" show represents the actual practice of whatever sport it involves. It's no accident that Riichiro Inagaki, the writer of the manga this show adapts, previously worked on Eyeshield 21, a hilarious and over-the-top take on American football, wherein Japanese high school teams field running backs as fast as anyone in the NFL and there's a guy who is 2....
Doubt: late answer because I just watched this two days ago, but I found it a fantastic exploration of the problem of reasoning and making decisions with incomplete information and very high stakes. Part of the point being that the viewer does not get a privileged outlook and shares information with the PoV characters, meaning the movie makes you experience the problem deeply.
The antibodies not being chirality-dependent doesn't mean there aren't other fundamental links, in the whole chain that leads to antibodies being deployed at all, that may be. Mostly I imagine the risk is that we have a lot of systems optimized for dealing with life of a certain chirality. They may be able to cope with the opposite chirality, but less well. COVID alone showed what happens when something far less alien, but just barely out of distribution for our current immune defenses, arrives: literally everyone in the world gets it in a matter of mon...
I think it's a very visible example that gets brought up particularly often right now. I'm not saying it's all there is to it, but I think the fundamental visceral reaction to the very idea of self-mutilation is an important and often overlooked element of why some people would be put off by the concept. I actually think it's something that makes the whole thing a lot more understandable, in terms of where it comes from, than the generic "well they're just bigoted and evil" stuff people come up with in extremely partisan arguments on the topics. These sort of psychologic...
Well, yes, it's true, and obviously those things do not necessarily all have genuine infinite value. I think what this really means in practice is not that all non-fungible things have infinite value, but that because they are non-fungible, most judgements involving them are not as easy or straightforward as simple numerical comparisons. Preferences end up being expressed anyway, but just because practical needs force a square peg in a round hole doesn't make it fit any better. I think this in practice manifests in high rates of hesitation or regret for de...
Probabilities for physical processes are encoded in quantum wavefunctions one way or another, so I'd put that under the umbrella of "winning a staring contest with the laws of physics", which was basically what the average Spiral Energy user did.
And then again, while optimistic, the series still does show Simon using his power responsibly and essentially renouncing it to avoid causing the Spiral Nemesis. He doesn't just keep growing everything exponentially and decide nothing bad can ever possibly come out of it.
I think the core message of optimism is a positive one, but of course IRL we have to deal with a world whose physical laws do not in fact seem to bend endlessly under sufficient application of MANLY WARRIOR SPIRIT, and thus that forces us to be occasionally Rossiu even when we'd want to be Simon. Memeing ourselves into believing otherwise doesn't really make it true.
People often say that wars are foolish, and both sides would be better off if they didn't fight. And this is standardly called "naive" by those engaging in realpolitik. Sadly, for any particular war, there's a significant chance they're right. Even aside from human stupidity, game theory is not so kind as to allow for peace unending.
Obviously I'm not saying that ALL conflict ever is avoidable or irrational, but there are a lot of conflicts that are:
I think there's one fundamental problem here, which is that not everything is fungible, and thus not everything manages to actually comfortably exist on the same axis of values. Fingers are not fungible. At the current state of technology, once severed, they're gone. In some sense, you could say, that's a limited loss. But for you, as a human being, it may as well be infinite. You just lost something you'll never ever have back. All the trillions and quadrillions of dollars in the world wouldn't be enough to buy it back if you regretted your choice. And th...
It's also not really a movie as much as a live recording of a stage play. But agree it's fantastic (honestly, I'd be comfortable calling it Aladdin rational fanfiction).
Also a little silly detail I love about it in hindsight:
During the big titular musical number, all big Disney villains show on stage to make a case for themselves and why what they wanted was right - though some of their cases were quite stretched. Even amidst this collection of selfish entitled people, when Cruella De Vil shows up to say "I only wanted a coat made of puppies!" she elicits
I think there's a difference though between propaganda and the mix of selection effects that decides what gets attention in profit driven mass media news. Actual intentional propaganda efforts exist. But in general what makes news frustrating is the latter, which is a more organic and less centralised effort.