All of johnlawrenceaspden's Comments + Replies

Finally finished it, took about a month-and-a-half at around 3 hours a day. It kind of ate my life. I enjoyed it immensely. 

I think the last thing that I liked as much as this was 'Game of Thrones'. I think it's probably a Great Work of Literature. Shame the future isn't going to be long enough for it to get recognised as such...

I wouldn't have wished it shorter. There were a couple of 'Sandbox' chapters that I'd probably cut, in the same way that Lord of the Rings could do without Tom Bombadil, but the main thing is well-paced and consistently both fun and thought-provoking.

It turned out that my preferred way to read it was to unzip project-lawful-avatars-moreinfo.epub. In the unzipped structure all the chapters become plain HTML files with avatars included, which can be easily read in Firefox.

Have ended up reading project-lawful-avatars-moreinfo.epub as the best of the options. 70 pages into a total of 183 (very long) pages. It is still great. Enjoying it immensely.


Thank you so much for this! I tried to read this thing once before, and didn't get into it at all, despite pretty much loving everything Eliezer has ever written, which experience I now put down to something about the original presentation. 

I'm now about 10% of the way through, and loving it. Two days well spent, see y'all in a fortnight or so.....


For anyone in a similar position, I found (and am currently reading):

https://akrolsmir-glowflow-streamlit-app-79n743.streamlit.app

which is much more to my taste.

And I also found https://www.mikescher.com/blo... (read more)


I knew that those wise and good benefactors of humanity would turn out to have been warning us of the dangers of polyunsaturated fats all along.

They might want to mention it to people like my father, who, on the advice of his doctor, has been pretty much only eating polyunsaturated fats these last twenty years, for the good of his heart.

Or perhaps to McDonalds, who on the basis of a consumer-led campaign changed their famously good beef-dripping fried chips to vegetable-oil fried chips, coincidentally at about the time obesity and various other nasty diseases with no known cause really became fashionable in America.

4Slapstick
I'm unsure exactly what points you're making. I'm saying the idea that it's healthiest to avoid virtually any refined oil is mainstream nutritional understanding. Do you dispute this? I'm not making a point about which refined oils/fats are better than others. I haven't seen anything that has convinced me mainstream nutrition is wrong about that, but I don't think it's particularly important when they can all be avoided. Typical doctors are not particularly reliable nutritional authorities. They have almost no nutrition training. McDonald's fries are clearly very unhealthy regardless of what they're fried in. Do you have evidence that they're healthier when fried in beef tallow? Regardless, the point I was making was that the diets the original commenter mentioned all restrict things that mainstream nutrition already suggests cause health problems. Refined sugar, refined grains, refined fats, and animal products are all things mainstream nutrition suggests cause health problems. All of the diets listed restrict at least one of those things, so it's not surprising that people would report temporary improvements in health relative to a diet that doesn't restrict any of them.

Another thing linoleic acid does when there's oxygen around is polymerize into a varnish, which is why linseed oil (lin-oleic) is traditionally used to waterproof cricket bats. 

It used to say 'do not eat' in quite large letters on the cricket-bat-varnish bottles. Presumably now it says 'heart-healthy!'. 

I wouldn't just write off the naive anti-seed oil position either from a chemical point of view. Metabolism is absurdly complicated and finely tuned. Substituting a slightly different substrate into a poorly understood set of reactions and feedback loops is unlikely to go well.

There was very little linoleic acid in the diet we evolved to eat. Sure, it's essential in small quantities, but using it as a major energy source is likely a very bad idea a priori.

I don't buy this, the curvedness of the sea is obvious to sailors, e.g. you see the tops of islands long before you see the beach, and indeed to anyone who has ever swum across a bay! Inland peoples might be able to believe the world is flat, but not anyone with boats.

1cubefox
What's more likely: You being wrong about the obviousness of the sphere Earth theory to sailors, or the entire written record (which included information from people who had extensive access to the sea) of two thousand years of Chinese history and astronomy somehow omitting the spherical Earth theory? Not to speak of other pre-Hellenistic seafaring cultures which also lack records of having discovered the sphere Earth theory.

 A Great Man and an inspiration to me and to this community and to all thinking men.

God rest his soul in peace in Paradise.

alt-text is supposed to be: "I'm not even sure they've read Superintelligence"

Forgive me, I have strongly downvoted this dispassionate, interesting, well-written review of what sounds like a good book on an important subject because I want to keep politics out of Less Wrong. 

This is the most hot-button of topics, and politics is the mind-killer. We have more important things to think about and I do not want to see any of our political capital used in this cause on either side.

typo-wise you have a few uses of it's (it is) where it should be its (possessive), and "When they Egyptians" should probably read "When the Egyptians".

I did enjoy your review. Thank you for writing it. Would you delete it and put it elsewhere?

9Algon
I disagree-voted because while politics is the mind-killer, I think LW's implicit norm, and that of many LW users, against discussion of politics on the site goes too far. And this article was both informative, and an instance of someone seeking out info to update their models during a time especially adversarial to clear-eyed thinking on the Israel-Palestine conflict. That's attempting to become less-wrong on hard mode, which I want more of. And since this post does it better than the average post on politics here, I strong up-voted it. EDIT: I, ah, forgot to strong up-vote. I feel a bit sheepish about that rant, now. Though I have gone and strong up-voted the article.

Nicely done! I only come here for the humour these days.

Well, this is nice to see! Perhaps a little late, but still good news...

I wouldn't touch this stuff with someone else's bargepole. It looks like it takes the willpower out of starvation, and as the saying goes, you can starve yourself thin, but you can't starve yourself healthy.

I could be convinced, by many years of safety data and a well understood causal mechanism for both obesity and the action of these drugs, that that's wrong and that they really are a panacea. But I am certainly not currently convinced!

The question that needs answering about obesity is 'why on earth are people with enormous excess fat reserves feeling hungry?'. It's like having a car with the boot full of petrol in jerry cans but the 'fuel low' light is blinking. 

55hout
No disagreement from me :)

depends on facts about physics and psychology

 

It does, and a superintelligence will understand those facts better than we do.

My basic argument is that there are probably mathematical limits on how fast it is possible to learn.

 

Doubtless there are! And limits to how much it is possible to learn from given data.

But I think they're surprisingly high, compared to how fast humans and other animals can do it. 

There are theoretical limits to how fast you can multiply numbers, given a certain amount of processor power, but that doesn't mean that I'd back the entirety of human civilization to beat a ZX81 in a multiplication contest.

What you need to explain is why learning a... (read more)

All of RL’s successes, even the huge ones like AlphaGo (which beat the world champion at Go) or its successors, were not easy to train. For one thing, the process was very unstable and very sensitive to slight mistakes. The networks had to be designed with inductive biases specifically tuned to each problem.

And the end result was that there was no generalization. Every problem required you to rethink your approach from scratch. And an AI that mastered one task wouldn’t necessarily learn another one any faster.

 

I had the distinct impression that AlphaZ... (read more)

2Noosphere89
Admittedly, the success of AlphaZero relied on it essentially being able to generate very, very large amounts of very high-quality data, so this is a domain where synthetic data was very successful. So a weaker version of the post is "you need either a lot of high-quality data or a lot of compute, and there's little getting around it."

This is great. Strong upvote!

Are you claiming that a physically plausible superintelligence couldn't infer the physical laws from a video, or that AIXI couldn't?

Those seem to be different claims and I wonder which of the two you're aiming at?

For example, you might be much smarter than me and a meteorologist, but you'd find it hard to predict the weather in a year's time better than me if it's a single-shot-contest.

Sure, but I'd presumably be quite a lot better at predicting the weather in two days time.

5Herb Ingram
What point are you trying to make? I'm not sure how that relates to what I was trying to illustrate with the weather example. Assuming for the moment that you didn't understand my point: the "game" I was referring to was one where it's literally all-or-nothing "predict the weather a year from now"; you get no extra points for tomorrow's weather. This might be artificial but I chose it because it's a common example of the interesting fact that chaos can be easier to control than simulate. Another example: you're trying to win an election and "plan long-term to make the best use of your intelligence advantage", so you need to plan and predict a year ahead. Intelligence doesn't give you a big advantage in predicting tomorrow's polls given today's polls. I can do that reasonably well, too. In this contest, resources and information might matter a lot more than intelligence. Of course, you can use intelligence to obtain information and resources. But this bootstrapping takes time and it's hard to tell how much depending on where you start off.

I think this is a great article, and the thesis is true.

The question is, how much intelligence is worth how much material?

Humans are so very slow and stupid compared to what is possible, and the world so complex and capable of surprising behaviour, that my intuition is that even a very modest intelligence advantage would be enough to win from almost any starting position. 

You can bet your arse that any AI worthy of the name will act nice until it's already in a winning position.

I would.

1Aiyen
Even if we assume that's true (it seems reasonable, though less capable AIs might blunder on this point, whether by failing to understand the need to act nice, failing to understand how to act nice, or believing themselves to be in a winning position before they actually are), what does an AI need to do to get in a winning position? And how easy is it to make those moves without them being seen as hostile? An unfriendly AI can sit on its server saying "I love mankind and want to serve it" all day long, and unless we have solid neural net interpretability or some future equivalent, we might never know it's lying. But not even superintelligence can take over the world just by saying "I love mankind". It needs some kind of lever. Maybe it can flash its message of love at just the right frequency to hack human minds, or to invoke some sort of physical effect that lets it move matter. But whether it can or not depends on facts about physics and psychology, and if that's not an option, it doesn't become an option just because it's a superintelligence trying it.

If there's some intelligence threshold past which minds pretty much always draw against each other in chess even if there is a giant intelligence gap between them, I wouldn't be that surprised.

 

Just reinforcing this point. Chess is probably a draw for the same reason Noughts-and-crosses is.

Grandmaster chess is pretty drawish. Computer chess is very drawish. Some people think that computer chess players are already near the standard where they could draw against God.

Noughts-and-crosses is a very simple game and can be formally solved by hand. Chess is ... (read more)

That makes sense to me but to make any argument about the "general game of life" seems very hard. Actions in the real world are made under great uncertainty and aggregate in a smooth way. Acting in the world is trying to control (what physicists call) chaos.

In such a situation, great uncertainty means that an intelligence advantage only matters "on average over a very long time". It might not matter for a given limited contest, such as a struggle for world domination. For example, you might be much smarter than me and a meteorologist, but you'd find it har... (read more)

The "purpose" of most martial arts is to defeat other martial artists of roughly the same skill level, within the rules of the given martial art. 

Optimizing for that is not the same as optimizing for general fighting. If you spent your time on the latter, you'd be less good at the former. 

"Beginner's luck" is a thing in almost all games. It's usually what happens when someone tries a strategy so weird that the better player doesn't immediately understand what's going on. 

The other day a low-rated chess player did something so weird in his op... (read more)

2green_leaf
This is false - the reason they were created was self-defense. That you can have people of similar weight and belt color spar/fight each other in contests is only a side effect of that. That doesn't work in chess if the difference in skill is large enough - if it did, anyone could simply make up n strategies weird enough, and without any skill, win any title or even the World Chess Championship (where n is the number of victories needed). If you're saying it works as a matter of random fluctuations - i.e. a player without skill could win, let's say, 0.5% games against Magnus Carlsen, because these strategies (supposedly) usually almost never work but sometimes they do, that wouldn't be useful against an AI, because it would still almost certainly win (or, more realistically, I think, simply model us well enough to know when we'd try the weird strategy).
3gwd
Not only skill level, but usually physical capability level (as proxied by weight and sex) as well. As an aside, although I'm not at all knowledgeable about martial arts or MMA, it always seemed like an interesting thing to do might be to use some sort of an Elo system for fighting as well: a really good lightweight might end up fighting a mediocre heavyweight, and the overall winner for a year might be the person in a given <skill, weight, sex> class that had the highest Elo. The only real reason to limit the Elo gap between contestants would be if there were a higher risk of injury, or the resulting fight were consistently just boring. But if GGP is right that a big upset isn't unheard of, it might be worth 9 boring fights for 1 exciting upset.

For a clear example of this, in endgames where I have a winning position but have little to no idea how to win, Stockfish's king will often head for the hills, in order to delay the coming mate as long as theoretically possible. 

Making my win very easy because the computer's king isn't around to help out in defence.

This is not a theoretical difficulty! It makes it very difficult to practise endgames against the computer.

Paul, this is very thought provoking, and has caused me to update a little. But:

I loathe factory-farming, and I would spend a large fraction of my own resources to end it, if I could. 

I believe that makes me unusually kind by human standards, and by your definition.

I like chickens, and I wish them well.

And yet I would not bat an eyelid at the thought of a future with no chickens in it. 

I would not think that a perfect world could be improved by adding chickens.

And I would not trade a single happy human soul for an infinity of happy chickens.

I think that your single known example is not as benevolent as you think.

zero-days are a thing, and hell, it's even possible that there are computers connected to the internet somewhere that don't get their patches in a timely manner.

3the gears to ascension
Yeah, but can you just read some instructions online and take over all computers? If it can do novel vulnerability discovery then sure, all computers it is. But if it's just an ai skiddie I don't think it's enough to be that kind of threat. Certainly there will be models powerful enough to do this any time now, I'm only disagreeing about the specifics of which ones. Being able to read instructions is not enough to take over the internet, because humans who can read instructions but not do novel security research are also trying. Most highly vulnerable computers do not have available GPUs for you to run yourself on and wouldn't be ideal.

I'm a complete innocent in all this. I've never needed to lose weight before, hence appealing for help here. And I don't know anything about Vilhjálmur Stefánsson or ketogenic diets in general.

I do know that sloth and gluttony aren't the explanation, because I have been a slothful glutton for most of my life and I never gained much weight, nor lost it in the long periods when I was a sporty glutton. That's gone wrong recently, hence my search for reasons and techniques.

Wikipedia seems to imply that Vilhjálmur Stefánsson was interested in eskimo-style all-m... (read more)

1angmoh
Stefánsson's "The Fat of the Land" is not really worth reading for any scientific insight today, but it's entertaining early 1900s anthropology.  I don't have much of an opinion on any specific diet approach, but I can tell you my own experience with weight loss: I've always been between 15-25% bodyfat, yoyoing around. This routine isn't ideal, so I too am a 'victim' of the weight gain phenomenon.  I have no satisfying answers for "why are we getting fatter" or "what makes caloric deficits so hard to maintain". I appreciate the diet blogging community that tries to tackle these questions with citizen science.

That seems perfectly reasonable! Enjoy the sunshine while you still can.

Wonderful news! Good old Elon! I was worried that there might be some last fragment of hope lying around uncrushed.

You're an AI. You've got an internet connection. You can make copies of yourself. You're capable of reading instructions for how to take over insecure computers. 

Within about a minute you're on almost every computer in the world. 

Fight that.

3the gears to ascension
so where are these instructions for how to take over all computers? I don't think they currently exist as such - while security is bad, it's nowhere near that bad, holes like that get attacked and patched by humans.
1Going Durden
why do you assume a regular end-user computer would be capable of supporting an AI?

Taking over the internet is the first step, but here I discuss a situation where the AI wants to kill everyone and survive. If it kills everyone while existing only on the current internet, the electricity will run out and it will stop.

<nogenies>

Yeah, wouldn't it be great if there was some way to not have a nuclear war or build AI or have everyone die of bird flu? 

</nogenies>

I think I know how this game goes.

This is a new one! "Computers will never be able to sort lists of words by arbitrary keys."

Does it require quantum microtubules in the incomprehensibly complex neuron to leverage uncomputable mental powers that can defy Gödel's proof or something?
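A minimal sketch, purely as my own illustration (the word list below is made up), of just how mundane "sorting by arbitrary keys" is in practice:

```python
# Sorting words by arbitrary keys: first by length, then by a
# composite key (descending length, then reverse alphabetical).
# No quantum microtubules required.
words = ["penrose", "lucas", "godel", "turing"]

print(sorted(words, key=len))
print(sorted(words, key=lambda w: (len(w), w), reverse=True))
```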

Yeah, wouldn't it be great if there was some way to not have a nuclear war or build AI? 

If anyone can think of one, they'll have my full support.

2avturchin
It would be great. But it should not be bird flu. 

I think this is probably right. When all hope is gone, try just telling people the truth and see what happens. I don't expect it will work, I don't expect Eliezer expects it to work, but it may be our last chance to stop it.

And it does seem to have got a bit of traction. A very non-technical friend just sent me the link, on the basis that she knows "I've always been a bit worried about that sort of thing."

I totally get where you're coming from, and if I thought the chance of doom was 1% I'd say "full speed ahead!"

As it is, at fifty-three years old, I'm one of the corpses I'm prepared to throw on the pile to stop AI. 

The "bribe" I require is several OOMs more money invested into radical life extension research

Hell yes. That's been needed rather urgently for a while now. 

2Chris van Merwijk
"if I thought the chance of doom was 1% I'd say "full speed ahead!" This is not a reasonable view. Not on Longtermism, nor on mainstream common sense ethics. This is the view of someone willing to take unacceptable risks for the whole of humanity. 

The audience here is mainly Americans so you might want to add an explicit sarcasm tag.

1Nicholas / Heather Kross
As an American... yeah pretty much.
8GunZoR
I am wounded.

"Fault" seems a strange phrasing. If your problem was that one of your nerves was misfiring, so you were in chronic pain, would you describe that as "your fault"? (In the sense of technical fault/malfunction, that would absolutely be your "fault", but "your fault" usually carries moral implications.)

Where would you place the fault?

I suspect everyone can relate in that everyone has felt this at some point, or even at a few memorable points.

 

Duncan did you just deny my existence? (Don't worry, I don't mind a bit. :-) )

I'm a grade A weirdo, my own family and friends affirm this, only the other day someone on Less Wrong (!) called me a rambling madman. My nickname in my favourite cricket club/drinking society was Space Cadet.

And I'm rather smug about this. Everyone else just doesn't seem very good at thinking. Even if they're right they're usually right by accident. Even the clever... (read more)

8TheLemmaLlama
I'm with you on this one; I like feeling like an outlier. It makes me feel special :P There are some examples there that did grind my gears though, like the pillow-throwing example and the 'that didn't hurt' example. They felt more like 'I'm going to insist your inner experience isn't real, to the point where I won't believe you (even if only in a joking way) if you told me'. Whereas the 'no-one does that' example and the 'we all love Tom Hanks too much' example felt more like a metaphorical 'everyone' and if you actually said 'no, I'm not like that', the response would be 'oh okay not ~everyone's~ like that'. I'd personally feel hurt by the former class of experiences but not the latter, because for me, it's more about invalidation. It's less 'you don't exist', but rather 'you exist in this particular way (that's contrary to my own experiences and completely alien to what I perceive myself as), AND if you say otherwise you're lying'. Similarly, I'd feel hurt by an implication that someone else doesn't exist, if it's contrary to my own experiences. For instance, if I've argued about X with a lot of people and some of them gave a counterargument Y, and then someone has the counterargument Z. They think I'm strawmanning Z as Y, and they tell me: 'no-one said Y'. It's like ... someone definitely said Y. I distinctly remember a nonzero number of people explicitly saying Y to my face, and I even made sure they actually meant Y and I wasn't misinterpreting them. Even if I know it's a metaphorical 'no-one' and they actually just meant 'most people who appear to be saying Y actually mean Z', it still hurts :\
2Sonata Green
(Typo thread?) "GPT-3" → "GPT-6"?

Did someone fiddle with Charlotte? 

I went to talk to her after reading this and she was great fun, I quite see how you fell for her.

 

But I tried again just now and she seems a pale shadow of her former cheerful self, it's the difference between speaking to a human PR drone in her corporate capacity and meeting her at a party where she's had a couple. 

Doesn't any such argument also imply that you should commit suicide?

Not necessarily

Suicide will not save you from all sources of s-risk and may make some worse (if quantum immortality is true, for example). If resurrection is possible, that makes things more complicated.

The possibility for extremely large amounts of value should also be considered. If alignment is solved and we can all live in a Utopia, then killing yourself could deprive yourself of billions+ years of happiness.

I would also argue that choosing to stay alive when you know of the risk is different from inflicting the risk on a new being you have created... (read more)

These seemed good, they taste of lavender, but the person trying them got no effect:

https://www.amazon.co.uk/gp/product/B06XPLTLLN/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1

Lindens Lavender Essential Oil 80mg Capsules

The person who had it work for them tried something purchased from a shop (Herbal Calms, maybe?): lavender oil in vegetable oil in little capsules. She reports that she can get to sleep now, and that if she pops a capsule first she can face doing things that anxiety previously stopped her from doing.

That makes perfect sense, thank you. And maybe, if we've already got the necessary utility function, stability under self-improvement might be solvable as if it were just a really difficult maths problem. It doesn't look that difficult to me, a priori, to change your cognitive abilities whilst keeping your goals.

AlphaZero got its giant inscrutable matrices by working from a straightforward start of 'checkmate is good'. I can imagine something like AlphaZero designing a better AlphaZero (AlphaOne?) and handing over the clean definition of 'checkmate is good... (read more)

What I am not convinced of, is that given all those assumptions being true, certain doom necessarily follows, or that there is no possible humanly tractable scheme which avoids doom in whatever time we have left.

 

OK, cool, I mean "just not building the AI" is a good way to avoid doom, and that still seems at least possible, so we're maybe on the same page there.

And I think you got what I was trying to say, solving 1 and/or 2 can't be done iteratively or by patching together a huge list of desiderata. We have to solve philosophy somehow, without superi... (read more)

A good guess, and thank you for the reference, but (although I admit that the prospect of global imminent doom is somewhat anxious-making), anxiety isn't a state of mind I'm terribly familiar with personally.  I'm very emotionally stable usually, and I lost all hope years ago. It doesn't bother me much.

It's more that I have the 'taking ideas seriously' thing in full measure: once I get an *idée fixe* I can't let it go until I've solved it. AI Doom is currently third on the list after chess and the seed oil nonsense, but the whole Bing/Sydney thing sta... (read more)

3baturinsky
Thanks for the advice. Looks like my mind works similarly to yours, i.e. it can't give up a task it has latched onto. But my brain draws way more from the rest of my body than is healthy. It's not as bad now as it was in the first couple of weeks, but I still have problems sleeping regularly, because my mind can't switch off the overdrive mode. So I become sleepy AND agitated at the same time, which is quite an unpleasant and unproductive state. There are no Lavender Pills around here, but I take other anxiety medications, and they help, to an extent.

To be clear, even if I were somehow granted vivid knowledge of the future through precognition, you’d still seem crazy to me at this point.

 

(I assume you mean vivid knowledge of the future in which we are destroyed, obviously in the case where everything goes well I've got some problem with my reasoning)

That's a good distinction to make, a man can be right for the wrong reasons. 

Even as a doomer among doomers, you, with respect, come off as a rambling madman.

Certainly mad enough to take "madman" as a compliment, thank you!

I'd be interested if you know a general method I could use to tell if I'm mad. The only time I actually know it happened (thyroid overdose caused a manic episode) I noticed pretty quickly and sought help. What test should I try today?

Obviously "everyone disagrees with me and I can't convince most people" is a bad sign. But after long and patient effort I have convinced a number of unfortunates in my circle of fri... (read more)

7[anonymous]
I get that your argument is essentially as follows: 1.) Solving the problem of what values to put into an ai, even given the other technical issues being solved, is impossibly difficult in real life. 2.) To prove the problem’s impossible difficulty, here’s a much kinder version of reality where the problem still remains impossible. I don’t think you did 2, and it requires me to already accept 1 is true, which I think it probably isn’t, and I think that most would agree with me on this point, at least in principle. I don’t disagree with any of them. I doubt there’s a convincing argument that could get me to disagree with any of those as presented. What I am not convinced of, is that given all those assumptions being true, certain doom necessarily follows, or that there is no possible humanly tractable scheme which avoids doom in whatever time we have left. I’m not clever enough to figure out what the solution is mind you, nor am I especially confident that someone else is necessarily going to. Please don’t confuse me for someone who doesn’t often worry about these things.