Have ended up reading project-lawful-avatars-moreinfo.epub as the best of the options. 70 pages into a total of 183 (very long) pages. It is still great. Enjoying it immensely.
Thank you so much for this! I tried to read this thing once before, and didn't get into it at all, despite pretty much loving everything Eliezer has ever written, which experience I now put down to something about the original presentation.
I'm now about 10% of the way through, and loving it. Two days well spent, see y'all in a fortnight or so.....
For anyone in a similar position, I found (and am currently reading):
https://akrolsmir-glowflow-streamlit-app-79n743.streamlit.app
which is much more to my taste.
And I also found https://www.mikescher.com/blo...
I knew that those wise and good benefactors of humanity would turn out to have been warning us of the dangers of polyunsaturated fats all along.
They might want to mention it to people like my father, who, on the advice of his doctor, has been pretty much only eating polyunsaturated fats these last twenty years, for the good of his heart.
Or perhaps to McDonald's, who on the basis of a consumer-led campaign changed their famously good beef-dripping fried chips to vegetable-oil fried chips, coincidentally at about the time obesity and various other nasty diseases with no known cause really became fashionable in America.
Another thing linoleic acid does when there's oxygen around is polymerize into a varnish, which is why linseed oil (lin-oleic) is traditionally used to waterproof cricket bats.
It used to say 'do not eat' in quite large letters on the cricket-bat-varnish bottles. Presumably now it says 'heart-healthy!'.
I wouldn't just write off the naive anti-seed oil position either from a chemical point of view. Metabolism is absurdly complicated and finely tuned. Substituting a slightly different substrate into a poorly understood set of reactions and feedback loops is unlikely to go well.
There was very little linoleic acid in the diet we evolved to eat. Sure, it's essential in small quantities, but using it as a major energy source is likely a very bad idea a priori.
I don't buy this: the curvature of the sea is obvious to sailors, e.g. you see the tops of islands long before you see the beach, and indeed to anyone who has ever swum across a bay! Inland peoples might be able to believe the world is flat, but not anyone with boats.
A Great Man and an inspiration to me and to this community and to all thinking men.
God rest his soul in peace in Paradise.
alt-text is supposed to be: "I'm not even sure they've read Superintelligence"
Forgive me, I have strongly downvoted this dispassionate, interesting, well-written review of what sounds like a good book on an important subject because I want to keep politics out of Less Wrong.
This is the most hot-button of topics, and politics is the mind-killer. We have more important things to think about and I do not want to see any of our political capital used in this cause on either side.
typo-wise you have a few uses of it's (it is) where it should be its (possessive), and "When they Egyptians" should probably read "When the Egyptians".
I did enjoy your review. Thank you for writing it. Would you delete it and put it elsewhere?
Nicely done! I only come here for the humour these days.
Well, this is nice to see! Perhaps a little late, but still good news...
caches out
cashes out?
I wouldn't touch this stuff with someone else's bargepole. It looks like it takes the willpower out of starvation, and as the saying goes, you can starve yourself thin, but you can't starve yourself healthy.
I could be convinced, by many years of safety data and a well understood causal mechanism for both obesity and the action of these drugs, that that's wrong and that they really are a panacea. But I am certainly not currently convinced!
The question that needs answering about obesity is 'why on earth are people with enormous excess fat reserves feeling hungry?'. It's like having a car with the boot full of petrol in jerry cans but the 'fuel low' light is blinking.
depends on facts about physics and psychology
It does, and a superintelligence will understand those facts better than we do.
My basic argument is that there are probably mathematical limits on how fast it is possible to learn.
Doubtless there are! And limits to how much it is possible to learn from given data.
But I think they're surprisingly high, compared to how fast humans and other animals can do it.
There are theoretical limits to how fast you can multiply numbers, given a certain amount of processor power, but that doesn't mean that I'd back the entirety of human civilization to beat a ZX81 in a multiplication contest.
What you need to explain is why learning a...
All of RL’s successes, even the huge ones like AlphaGo (which beat the world champion at Go) or its successors, were not easy to achieve. For one thing, training was very unstable and very sensitive to slight mistakes. The networks had to be designed with inductive biases specifically tuned to each problem.
And the end result was that there was no generalization. Every problem required you to rethink your approach from scratch. And an AI that mastered one task wouldn’t necessarily learn another one any faster.
I had the distinct impression that AlphaZ...
This is great. Strong upvote!
Are you claiming that a physically plausible superintelligence couldn't infer the physical laws from a video, or that AIXI couldn't?
Those seem to be different claims and I wonder which of the two you're aiming at?
For example, you might be much smarter than me and a meteorologist, but you'd find it hard to predict the weather in a year's time better than me if it's a single-shot contest.
Sure, but I'd presumably be quite a lot better at predicting the weather in two days time.
I think this is a great article, and the thesis is true.
The question is, how much intelligence is worth how much material?
Humans are so very slow and stupid compared to what is possible, and the world so complex and capable of surprising behaviour, that my intuition is that even a very modest intelligence advantage would be enough to win from almost any starting position.
You can bet your arse that any AI worthy of the name will act nice until it's already in a winning position.
I would.
If there's some intelligence threshold past which minds pretty much always draw against each other in chess even if there is a giant intelligence gap between them, I wouldn't be that surprised.
Just reinforcing this point. Chess is probably a draw for the same reason Noughts-and-crosses is.
Grandmaster chess is pretty drawish. Computer chess is very drawish. Some people think that computer chess players are already near the standard where they could draw against God.
Noughts-and-crosses is a very simple game and can be formally solved by hand. Chess is ...
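Just to make "formally solved" concrete: it's small enough to do by hand, but here's a minimal minimax sketch (my own illustration, with hypothetical helper names, not anything from the original exchange) that exhausts the game tree and confirms that perfect play from both sides is a draw:

```python
# Exhaustive minimax over noughts-and-crosses. With perfect play from
# both sides the value of the empty board is 0, i.e. a draw.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def value(board, to_move):
    """Minimax value from X's point of view: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if all(sq is not None for sq in board):
        return 0
    results = []
    for i, sq in enumerate(board):
        if sq is None:
            child = board[:i] + [to_move] + board[i + 1:]
            results.append(value(child, 'O' if to_move == 'X' else 'X'))
    return max(results) if to_move == 'X' else min(results)

print(value([None] * 9, 'X'))  # prints 0: perfect play is a draw
```

Chess is presumably the same kind of object, just far too big to exhaust like this.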
That makes sense to me, but making any argument about the "general game of life" seems very hard. Actions in the real world are made under great uncertainty and aggregate in a smooth way. Acting in the world is trying to control (what physicists call) chaos.
In such a situation, great uncertainty means that an intelligence advantage only matters "on average over a very long time". It might not matter for a given limited contest, such as a struggle for world domination. For example, you might be much smarter than me and a meteorologist, but you'd find it har...
The "purpose" of most martial arts is to defeat other martial artists of roughly the same skill level, within the rules of the given martial art.
Optimizing for that is not the same as optimizing for general fighting. If you spent your time on the latter, you'd be less good at the former.
"Beginner's luck" is a thing in almost all games. It's usually what happens when someone tries a strategy so weird that the better player doesn't immediately understand what's going on.
The other day a low-rated chess player did something so weird in his op...
For a clear example of this, in endgames where I have a winning position but have little to no idea how to win, Stockfish's king will often head for the hills, in order to delay the coming mate as long as theoretically possible.
Making my win very easy because the computer's king isn't around to help out in defence.
This is not a theoretical difficulty! It makes it very difficult to practise endgames against the computer.
Paul, this is very thought provoking, and has caused me to update a little. But:
I loathe factory-farming, and I would spend a large fraction of my own resources to end it, if I could.
I believe that makes me unusually kind by human standards, and by your definition.
I like chickens, and I wish them well.
And yet I would not bat an eyelid at the thought of a future with no chickens in it.
I would not think that a perfect world could be improved by adding chickens.
And I would not trade a single happy human soul for an infinity of happy chickens.
I think that your single known example is not as benevolent as you think.
zero-days are a thing, and hell, it's even possible that there are computers connected to the internet somewhere that don't get their patches in a timely manner.
I'm a complete innocent in all this. I've never needed to lose weight before, hence appealing for help here. And I don't know anything about Vilhjálmur Stefánsson or ketogenic diets in general.
I do know that sloth and gluttony aren't the explanation, because I have been a slothful glutton for most of my life and I never gained much weight, nor lost it in the long periods when I was a sporty glutton. That's gone wrong recently, hence my search for reasons and techniques.
Wikipedia seems to imply that Vilhjálmur Stefánsson was interested in eskimo-style all-m...
That seems perfectly reasonable! Enjoy the sunshine while you still can.
Wonderful news! Good old Elon! I was worried that there might be some last fragment of hope lying around uncrushed.
All the computers on my boat are solar-powered. It'll be fine....
You're an AI. You've got an internet connection. You can make copies of yourself. You're capable of reading instructions for how to take over insecure computers.
Within about a minute you're on almost every computer in the world.
Fight that.
Taking over the internet is the first step, but here I discuss a situation where the AI wants to kill everyone and survive. If it kills everyone while existing only on the current internet, the electricity will run out and it will stop.
<nogenies>
Yeah, wouldn't it be great if there was some way to not have a nuclear war or build AI or have everyone die of bird flu?
</nogenies>
I think I know how this game goes.
This is a new one! "Computers will never be able to sort lists of words by arbitrary keys."
Does it require quantum microtubules in the incomprehensibly complex neuron to leverage uncomputable mental powers that can defy Gödel's proof or something?
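For what it's worth, here is the sort of thing being claimed impossible, in a few lines of Python (my own illustration, not anything from the original claim): sorting a list of words by a completely arbitrary key.

```python
# Sort a list of words by an arbitrary key: here by length, then by
# reversed spelling as a tie-break. Any function of the word will do.
words = ["cricket", "bat", "varnish", "linseed", "oil"]

sorted_words = sorted(words, key=lambda w: (len(w), w[::-1]))
print(sorted_words)  # ['oil', 'bat', 'linseed', 'varnish', 'cricket']
```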
Yeah, wouldn't it be great if there was some way to not have a nuclear war or build AI?
If anyone can think of one, they'll have my full support.
I think this is probably right. When all hope is gone, try just telling people the truth and see what happens. I don't expect it will work, I don't expect Eliezer expects it to work, but it may be our last chance to stop it.
And it does seem to have got a bit of traction. A very non-technical friend just sent me the link, on the basis that she knows "I've always been a bit worried about that sort of thing."
Hello Rufus! Welcome to Less Wrong!
I totally get where you're coming from, and if I thought the chance of doom was 1% I'd say "full speed ahead!"
As it is, at fifty-three years old, I'm one of the corpses I'm prepared to throw on the pile to stop AI.
The "bribe" I require is several OOMs more money invested into radical life extension research
Hell yes. That's been needed rather urgently for a while now.
The audience here is mainly Americans so you might want to add an explicit sarcasm tag.
"Fault" seems a strange phrasing. If your problem was that one of your nerves was misfiring, so you were in chronic pain, would you describe that as "your fault"? (In the sense of technical fault/malfunction, that would absolutely be your "fault", but "your fault" usually carries moral implications.)
Where would you place the fault?
I suspect everyone can relate in that everyone has felt this at some point, or even at a few memorable points.
Duncan did you just deny my existence? (Don't worry, I don't mind a bit. :-) )
I'm a grade A weirdo: my own family and friends affirm this, and only the other day someone on Less Wrong (!) called me a rambling madman. My nickname in my favourite cricket club/drinking society was Space Cadet.
And I'm rather smug about this. Everyone else just doesn't seem very good at thinking. Even if they're right they're usually right by accident. Even the clever...
dalmations->dalmatians?
Did someone fiddle with Charlotte?
I went to talk to her after reading this and she was great fun, I quite see how you fell for her.
But I tried again just now and she seems a pale shadow of her former cheerful self; it's the difference between speaking to a human PR drone in her corporate capacity and meeting her at a party where she's had a couple.
Doesn't any such argument also imply that you should commit suicide?
Not necessarily
Suicide will not save you from all sources of s-risk and may make some worse, if quantum immortality is true, for example. If resurrection is possible, that makes things more complicated still.
The possibility for extremely large amounts of value should also be considered. If alignment is solved and we can all live in a Utopia, then killing yourself could deprive yourself of billions+ years of happiness.
I would also argue that choosing to stay alive when you know of the risk is different from inflicting the risk on a new being you have created...
These seemed good (they taste of lavender), but the person trying them got no effect:
https://www.amazon.co.uk/gp/product/B06XPLTLLN/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
Lindens Lavender Essential Oil 80mg Capsules
The person who had it work for her tried something purchased from a shop (Herbal Calms, maybe?): lavender oil in vegetable oil in little capsules. She reports that she can get to sleep now, and can face doing things that she couldn't previously do due to anxiety if she pops a capsule first.
That makes perfect sense, thank you. And maybe, if we've already got the necessary utility function, stability under self-improvement might be solvable as if it were just a really difficult maths problem. It doesn't look that difficult to me, a priori, to change your cognitive abilities whilst keeping your goals.
AlphaZero got its giant inscrutable matrices by working from a straightforward start of 'checkmate is good'. I can imagine something like AlphaZero designing a better AlphaZero (AlphaOne?) and handing over the clean definition of 'checkmate is good...
What I am not convinced of is that, given all those assumptions being true, certain doom necessarily follows, or that there is no possible humanly tractable scheme which avoids doom in whatever time we have left.
OK, cool, I mean "just not building the AI" is a good way to avoid doom, and that still seems at least possible, so we're maybe on the same page there.
And I think you got what I was trying to say: solving 1 and/or 2 can't be done iteratively or by patching together a huge list of desiderata. We have to solve philosophy somehow, without superi...
A good guess, and thank you for the reference, but (although I admit that the prospect of global imminent doom is somewhat anxious-making), anxiety isn't a state of mind I'm terribly familiar with personally. I'm very emotionally stable usually, and I lost all hope years ago. It doesn't bother me much.
It's more that I have the 'taking ideas seriously' thing in full measure, once I get an *idee fixe* I can't let it go until I've solved it. AI Doom is currently third on the list after chess and the seed oil nonsense, but the whole Bing/Sydney thing sta...
To be clear, even if I were somehow granted vivid knowledge of the future through precognition, you’d still seem crazy to me at this point.
(I assume you mean vivid knowledge of the future in which we are destroyed; obviously in the case where everything goes well I've got some problem with my reasoning)
That's a good distinction to make, a man can be right for the wrong reasons.
Even as a doomer among doomers, you, with respect, come off as a rambling madman.
Certainly mad enough to take "madman" as a compliment, thank you!
I'd be interested if you know a general method I could use to tell if I'm mad. The only time I actually know it happened (thyroid overdose caused a manic episode) I noticed pretty quickly and sought help. What test should I try today?
Obviously "everyone disagrees with me and I can't convince most people" is a bad sign. But after long and patient effort I have convinced a number of unfortunates in my circle of fri...
Finally finished it, took about a month-and-a-half at around 3 hours a day. It kind of ate my life. I enjoyed it immensely.
I think the last thing that I liked as much as this was 'Game of Thrones'. I think it's probably a Great Work of Literature. Shame the future isn't going to be long enough for it to get recognised as such...
I wouldn't have wished it shorter. There were a couple of 'Sandbox' chapters that I'd probably cut, in the same way that Lord of the Rings could do without Tom Bombadil, but the main thing is well-paced and consistently both f...