Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Open thread, June 5 - June 11, 2017

1 Post author: Elo 05 June 2017 04:23AM

If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Comments (96)

Comment author: Vaniver 08 June 2017 12:04:07AM 9 points [-]

Update on LW2.0: this week, Oliver and I are in Seattle at a CFAR workshop, so only Harmanas is working. This week is some final feature work and bugfixing, as well as user interviews. Next week we plan to start the closed beta. Rather than running for a predefined length of time, the plan is to run until we're happy with the user experience, and then do an open beta, which will probably have a defined length.

Comment author: cousin_it 09 June 2017 10:58:37PM *  4 points [-]

I just realized that Paul Graham's "Make something people want" can be flipped into "Make people want something", which is a better description of many businesses that make money despite creating zero or negative value to society. For example, you can sell something for $1 that gives people $2 in immediate value but also sneakily gives them $3 worth of desire to buy otherwise useless stuff from you. Or you can advertise to create desire where it didn't exist, leading to negative value for everyone who saw your ad but didn't buy your product. Or you can give away your product for free and then charge for the antidote.

This seems like a big loophole in markets which has no market-based solution. It also helps explain why rich people and countries aren't proportionally happier than poor ones, if they are mostly paying to make manufactured pains go away. People's criteria for happiness are too easily raised by what they buy, see or think, but hardly anyone pushes back against that.

My previous thoughts on this topic: 1, 2, 3, 4. I feel like the ideas are coming together into something bigger, but can't put a finger on it yet.

Comment author: Lumifer 10 June 2017 04:24:50AM 0 points [-]

the ideas are coming together into something bigger

It's a wheel.

Comment author: cousin_it 10 June 2017 06:43:58AM 2 points [-]

I feel like it's more general than that. For example, dreaming that an iPhone will make you happy is bad for you, but so is dreaming that becoming a great artist (or even joining an anti-consumerist revolution) will make you happy.

Comment author: ChristianKl 12 June 2017 10:27:14AM *  0 points [-]

What exactly do you mean when you say people dream of an iPhone making them happy? What do you think people expect an iPhone to do that it doesn't?

In general, products that don't deliver on expectations lead to customers complaining that the expectations aren't met. How often do you see such complaints from people who have bought an iPhone?

Comment author: cousin_it 12 June 2017 10:42:17AM *  1 point [-]

That's why I chose the word "dreams" instead of product expectations amenable to customer complaints and such. Watch this ad to see what I mean.

Comment author: entirelyuseless 13 June 2017 04:19:29PM 1 point [-]

Romance seems like an example of a "natural" Marlboro country ad, implanted by nature in people with the intention of making more people, but falsely telling them this is happiness.

Comment author: cousin_it 13 June 2017 04:53:07PM 0 points [-]

Yeah. Though as dreams go, romance isn't the worst. Many of its followers do end up happy, and many of the unhappy ones get over it. Compare with the dream of being a rockstar, where only a handful can ever succeed, and many of the failed ones never let go.

Comment author: ChristianKl 12 June 2017 05:52:33PM 0 points [-]

I don't see any iPhone in that ad.

If you generally believe that the dream of being a ranger is bad for people, do you also judge Winnetou for giving people dreams that they use to escape their daily lives but can never achieve?

Comment author: cousin_it 12 June 2017 06:15:01PM *  0 points [-]

Yeah, I think a lot of entertainment leads to escapism which is partly a downside to me. When Roger Ebert said that video games can't be art, and someone told him that they provide much needed escapism, this was his reply:

I do not have a need "all the time" to take myself away from the oppressive facts of my life, however oppressive they may be, in order to go somewhere where I have control. I need to stay here and take control.

Comment author: entirelyuseless 12 June 2017 01:17:30PM 0 points [-]

I keep hoping that voice activation features will be helpful. Up to now, they haven't been, at least for me. They just do not work consistently enough. Apparently they do for some people, and I expect that at some future time they will for me, but up to now they have consistently failed that expectation, even though I keep hoping it will work.

Comment author: Lumifer 11 June 2017 01:44:20AM 0 points [-]

Dreaming is a fuzzy word -- are you saying that desires are bad for you? Or hopes? Or maybe expectations of good things?

Comment author: Dagon 12 June 2017 09:56:45PM 0 points [-]

I take it as "far-mode, unspecific dreams/hopes/expectations are problematic if the agent doesn't do the work to tie it to near-mode specifics".

Comment author: Lumifer 13 June 2017 01:58:13AM 0 points [-]

Yeah, but there is that very important part of dreams/hopes/expectations providing the much-needed motivation for doing the work. Without them you are just stuck in the near mode, slowly crawling towards the nearest local maximum (e.g. turnips -- warning, the link is NSFW).

Comment author: cousin_it 11 June 2017 11:14:44AM *  0 points [-]

Inflated desires/hopes/expectations, I guess.

Comment author: Lumifer 12 June 2017 03:22:08PM 0 points [-]

That sounds... bleak.

Who determines what's "inflated"?

Comment author: Liron 06 June 2017 05:42:04AM 4 points [-]

If you use online dating, I just launched a site called WittyThumbs to analyze and improve your conversations, in order to get better dates. Let me know what you think!

Comment author: Viliam 05 June 2017 12:46:21PM *  4 points [-]

I am reading the book Bring Up Genius (mentioned at SSC recently), and I am confused. I am still in the first part of the book, but it seems like the author alternates between "every healthy child can become a genius if educated properly" and describing research on and observation of high-IQ children, without ever acknowledging the difference between "every" and "high-IQ". I am trying to write a summary for LW, but I am failing to make a coherent explanation.

If I try hard, I can construct a consistent hypothesis like this: the behavior of high-IQ children gives us hints about the direction we should try to move all children; and the spectacular failures of the educational system with regard to high-IQ children are evidence that education may be failing average children in a similar way, only less visibly -- but here I suspect I am simply making up my own stuff instead of explaining the author's view. I suspect the author may have believed that IQ is largely determined by nurture; he doesn't directly assert or deny this, but it's the one position I can imagine that would make the rest of the book sound coherent. (It is, however, obviously wrong.) A less charitable explanation is that the author simply didn't see high IQ as a privilege, because within his family it was the norm. But that would make his lessons less universal -- although still useful for, e.g., the folks at Less Wrong.

Anyone else reading the book? Unfortunately, I can't find an English version; I am currently reading Eduku Geniulon in Esperanto, uhm, here.

Comment author: Lumifer 07 June 2017 06:10:03PM 3 points [-]


We like to think that we’re hyper-rational, but when we have to choose a technology, we end up in a kind of frenzy — bouncing from one person’s ... comment to another’s blog post until, in a stupor, we float helplessly toward the brightest light and lay prone in front of it, oblivious to what we were looking for in the first place.


Comment author: MrMind 08 June 2017 08:12:03AM 0 points [-]

Also an insight that struck me:

Your goal should be to “solve” the problem mostly within the problem domain, not the solution domain.

Comment author: Vaniver 10 June 2017 07:42:43PM 2 points [-]

I've moved a post about an ongoing legal issue to its author's drafts. They can return it to public discussion when the trial concludes.

Comment author: Zack_M_Davis 11 June 2017 07:58:20PM 2 points [-]

What specific bad things would you expect to happen if the post was left up, with what probabilities? (I'm aware of the standard practice of not discussing ongoing legal cases, but have my doubts about whether allowing the legal system to operate under conditions of secrecy actually makes things better on net.)

Comment author: Vaniver 12 June 2017 06:26:40PM 3 points [-]

What specific bad things would you expect to happen if the post was left up, with what probabilities? (I'm aware of the [standard practice] of not discussing ongoing legal cases, but have my doubts about whether allowing the legal system to operate under conditions of secrecy actually makes things better on net.)

I am following standard practice. I have only weakly considered the relevant norm, and agree that it's not pure good.

Comment author: bogus 12 June 2017 11:09:34AM *  0 points [-]

If the "legal issue" is what I think it is, then having a post about it here at LW is just worthless gossiping. Just because it might involve the real-world "rationality community" in some tangential way doesn't mean it has a place on this site. Many people here don't even care about what MIRI or CFAR might be working on!

Comment author: ChristianKl 12 June 2017 08:57:00AM 0 points [-]

Actually discussing in public the specifics of what Vaniver expects might produce harm similar to leaving the post up.

Comment author: Elo 11 June 2017 03:28:04PM 2 points [-]

The event will be worth a post-mortem when the legal proceedings conclude.

Comment author: sad_dolphin 06 June 2017 01:35:32PM *  2 points [-]

I am considering ending my life because of fears related to AI risk. I am posting here because I want other people to review my reasoning process and help ensure I make the right decision.

First, this is not an emergency situation. I do not currently intend to commit suicide, nor have I made any plan for doing so. No matter what I decide, I will wait several years to be sure of my preference. I am not at all an impulsive person, and I know that ASI is very unlikely to be invented in less than a few decades.

I am not sure if it would be appropriate to talk about this here, and I prefer private conversations anyway, so the purpose of this post is to find people willing to talk with me through PMs. To summarize my issue: I only desire to live because of the possibility of utopia, but I have recently realized that ASI-provided immortal life is significantly likely to be bad rather than good. If you are very familiar with the topics of AI risk, mind uploading, and utilitarianism, please consider sending me a message with a brief explanation of your beliefs and your intent to help me. I especially urge you to contact me if you already have similar fears of AI, even if you are a lurker and are not sure if you should. Because of the sensitive nature of this topic, I may not respond unless you provide an appropriately genuine introduction and/or have a legitimate posting history.

Please do not reply/PM if you just want to tell me to call a suicide prevention hotline, tell me the standard objections to suicide, or give me depression treatment advice. I might take a long time to respond to PMs, especially if several people end up contacting me. If nobody contacts me I will repost this in the next discussion thread or on another website.

Edit: The word limit on LW messages is problematic, so please email me at sad_dolphin@protonmail.com instead.

Comment author: Viliam 07 June 2017 09:27:45PM *  10 points [-]

WTF is this? Please take a step back, and look at what you did here.

Your literally first words on this website are about suicide. Then you say no suicide, and then you explain in detail how people are not supposed to talk about your possible suicide. Half of your total contribution on this website is about your suicide-not-suicide. Thanks; now everyone can understand they are not supposed to think about the pink elephant in the room. So... why have you mentioned it, in the first place? Three times in a row, using a bold font once, just to be sure. Seems like you actually want people to think about your possible suicide, but also to feel guilty if they mention it. Because the same comment, without this mind game, could be written like this:

I have recently realized that ASI-provided immortal life is significantly likely to be bad rather than good. If you are very familiar with the topics of AI risk, mind uploading, and utilitarianism, I am interested in your opinions about this topic.

Much less drama, right?

Next, you provide zero information about yourself. You are a stranger here, and you use anonymized e-mail. And I guess we will not learn more about you here, because you prefer private conversations anyway. However, you "urge" people to contact you, and provide an "appropriately genuine introduction", a brief explanation of their beliefs, and their intent to help you. But they are not supposed to mention your suicide-not-suicide, right? But they are supposed to want to help you. But they are not allowed to suggest seeking expert help. And they are supposed to tell you things about themselves, without knowing anything about you. And this all is supposed to happen off-site, without any observers, inter alia because the word limit on LW messages is problematic. Right. How weird that no one else has realized yet how much this problematic word limit prevents us from debating AI-related topics here.

More red flags than in China on Mao's birthday.

I don't think you are in a risk of suicide. Instead, I think that people who would contact you are in serious risk of being emotionally exploited (and reminded of your suicide-not-suicide, and their intent to help). Something like: "I told you that I am ready to die unless you convince me not to; and you promised you would help me; and you know that I will never seek expert help; and you don't know whether anyone else talks to me; so... if you stop interacting with me, you might be responsible for my death; is that really okay for you as a utilitarian?"

If anyone wants to play this game, go ahead. I have already seen my share of "suicidal" people giving others detailed instructions how to interact with them, and unsurprisingly, decades later all of them are still alive; and the people who interacted with them regret having that experience.

Comment author: Zack_M_Davis 08 June 2017 03:54:39AM 3 points [-]

I corresponded with sad_dolphin. It added a little bit of gloom to my day, but I don't regret doing it: having suffered from similar psychological problems in the past, I want to be there with my hard-won expertise for people working through the same questions. I agree that most people who talk about suicide in such a manner are unlikely to go through with it, but that doesn't mean they're not being subjectively sincere. I'd rather such cries for help not be disincentivized here (as you seem to be trying to do); I'd rather people be able to seek and receive support from people who actually understand their ideas, rather than callously foisted off onto alleged "experts" who don't understand.

Comment author: sad_dolphin 08 June 2017 11:28:24AM 0 points [-]

I am not sure how to even respond to this. I do not know what drives you to hatefully twist my words, depicting my cry for help as some kind of contrived attempt at manipulation, but you are obviously not acting with anything close to an altruistic intent.

Yes, I am entirely serious about this. Far more than you know. Perhaps if you had contacted me to have an intelligent discussion, instead of directly proceeding to accuse me with many critical generalizations, you would have realized that.

I have had several people message me already, and we are currently having civil discussions about potential future scenarios. I am certain they would all attest that they are not being 'emotionally exploited', as you seem to think is my goal. I publicly mentioned suicide because genuine consideration of the possibility was the entire point of the post, and I (correctly, for the most part) assumed that this community was mature enough to handle it without any drama.

You clearly have zero experience dealing with suicidal individuals, and would do well to stay away from this discussion. I had a hard enough time working up the courage to make that post, and I really do not want any drama from this. I hope you will do the mature thing and just leave me alone.

Comment author: Viliam 08 June 2017 03:15:22PM 4 points [-]

The mature way to handle suicidal people is to call professional help, as soon as possible. If the suicidal thinking is caused by some kind of hormonal imbalance -- which the person will report as "I have logically concluded that it is better for me to die", because that is how it feels from inside -- you cannot fix the hormonal imbalance by a clever argument; that would be magical thinking. Most likely, you will talk to the person until their hormonal spike passes, then the person will say "uhm, what you said makes a lot of sense, I already feel better, thanks!", and the next day you will find them hanging from the noose in their room, because another hormonal spike hit them later in the evening, and they "logically concluded" that life actually is meaningless and there is no hope and no reason to delay the inevitable, so they wouldn't even call you or wait until the morning, because that also would be pointless.

(Been there, failed to do the right thing, lost a friend.)

Sure, this seems like an unfalsifiable hypothesis "you believe it is not caused by hormones because that belief is caused by hormones". But that's exactly the reason to seek professional help instead of debating it; to measure your actual level of hormones, and if necessary, to fix it. Body and mind are connected more than most people admit.

That's all from my side. If you are sincere, I wish you luck. Any meaningful help I could offer is exactly what you refuse, so I have nothing more to add.

Comment author: Zack_M_Davis 08 June 2017 05:54:05PM 1 point [-]

The mature way to handle suicidal people is to call professional help, as soon as possible.

It's worth noting that this creates an incentive to never talk about your problems.

My advice for people who value not being kidnapped and forcibly drugged by unaccountable authority figures who won't listen to reason is to never voluntarily talk to psychiatrists, for the same reason you should never talk to cops.

Comment author: Viliam 08 June 2017 06:16:24PM 0 points [-]

It would be great to have a service where you get your blood sample taken and tested anonymously, and then anonymously receive pills to fix the problem. But I guess most suicidal people would (1) refuse to use this service anyway, either because of some principle, or because they would "logically" conclude it is useless and cannot possibly help; and (2) even if a friend pushed them to do so, at some moment they would find a reason to stop taking the pills, and when the effect of the pills wears off, conclude "logically" that life is not worth living.

Comment author: Lumifer 08 June 2017 03:08:19PM 2 points [-]

The more you write, the less sincere you sound.

Comment author: Elo 08 June 2017 12:30:08AM 0 points [-]

I appreciate the comment here.

Comment author: Mitchell_Porter 08 June 2017 07:10:19PM 0 points [-]

If ASI-provided immortal life were possible, you would already be living it.

... because if you're somewhere in an infinite sequence, you're more likely to be in the middle than at the beginning.

Comment author: MrMind 08 June 2017 08:29:11AM 0 points [-]

As an aside, against the most horrific version of UFAI, even suicide won't avoid dystopia. Heh.

Comment author: cousin_it 08 June 2017 10:35:58PM *  2 points [-]

I've talked to some unstable folks who were really upset by ideas like AI blackmail, rescue sims, etc. Did my best to help them, which sometimes worked and sometimes didn't. If such ideas didn't exist, I suspect these folks would latch onto something more accessible with similar nightmare potential, like the Christian hell or the multiverse or simply the amount of suffering in our world. Mostly I agree with Viliam that fixing the mood with chemicals (or mindfulness, workouts, sunshine, etc) is a better idea than trying to reason through it.

Comment author: MaryCh 09 June 2017 09:05:48AM 1 point [-]

Yvain once wrote a cute (but, to my mind, rather pointless) post about "rational poetry" or some such; but do rationalists even like poetry as a form of expression? Empirically?

If you want to say something in more detail, please leave a comment.


Comment author: btrettel 09 June 2017 04:11:27PM 2 points [-]

Poetry, along with some other art forms, always struck me as inherently uninteresting to the point where I find it hard to believe anyone actually enjoys it. I see some people who are obviously moved by poetry, so clearly I'm just at one end of the spectrum. To each their own.

Comment author: MaryCh 09 June 2017 04:23:00PM 0 points [-]

I only rarely find visual art interesting or moving. I can be loads more interested by a description of a picture, but seldom to the same extent as by a piece of poetry. One co-worker (boss, actually) of mine said she just did not get poetry, and I tried to see other differences in how we tick - I think she's more self-assured and appreciative of data drawn in tables, but that's all. Sometimes I really wonder if aesthetics are partly genetic...

Comment author: philh 09 June 2017 09:53:28AM 1 point [-]

I wouldn't say I "like poetry" as such, but there are certainly poems I like; two that come to mind are If and Absolutely Nothing. Oh, and a lot of "lik the bred"s. I've sometimes listened to spoken poetry where I didn't follow the words very well but enjoyed the rhythm.

I think Brienne Yudkowsky has written about poetry.

Comment author: Strangeattractor 12 June 2017 07:08:48AM 0 points [-]

I like some poetry. Often in the form of song lyrics, or Shakespeare's plays.

Comment author: borismus 07 June 2017 04:36:40PM 1 point [-]

Wanted to share this concept of a metaquiz with this community.

The primary goal is that participants do poorly on the “other side” section. Underestimating the other side’s knowledge raises the question “maybe they’re not all stupid?”. Incorrectly stereotyping their beliefs raises the question “maybe they’re not all evil?”. As a secondary goal, if participants do poorly on the quiz itself, they may learn something about climate change. Any feedback on this idea? Links to related concepts?

Here’s an example metaquiz on climate change: https://goo.gl/forms/ZqNQs3y1L1kpMPtF2

Comment author: Lumifer 07 June 2017 04:56:15PM 0 points [-]

The primary goal is that participants do poorly on the “other side” section.

You may want to re-formulate this sentence :-)

The obvious problem is that the "other side" is rarely uniform. You typically get a mix of smart and honest people (but with different values), people who are in there for power and money, the not-too-smart ones duped by propaganda, the edgelords who want attention (and/or the lulz), the social conformists, etc.

Some, but not all are stupid. Some, but not all, are evil.

Comment author: borismus 07 June 2017 05:20:17PM 0 points [-]

The nuance you articulate in the last sentence is kind of the point I'm trying to make. I think many on the fringes would disagree with you.

Further, if such metaquizzes can suggest that in this case "some" is more like "very few", and not "actually quite a lot", I think we'd be in better political shape!

Comment author: Lumifer 07 June 2017 05:46:02PM 1 point [-]

I think many on the fringes would disagree with you.

Clearly they must be both stupid and evil :-D

I think we'd be in better political shape

I see no reason to believe so. Political adversity is NOT driven by misunderstandings.

Comment author: borismus 07 June 2017 08:03:48PM 0 points [-]

Interesting perspective. So you think that both parties have an accurate understanding of one another's viewpoints? Can you provide any evidence for that?

Comment author: Lumifer 07 June 2017 08:12:02PM 1 point [-]

I didn't say they have. I said that if they were to acquire such an accurate understanding, political conflict would not cease.

Comment author: borismus 07 June 2017 10:09:43PM 0 points [-]

Ceasing political conflict is a ridiculously ambitious, unrealistic, maybe even undesirable goal. I'm talking about a slight decrease here.

Comment author: Lumifer 08 June 2017 01:05:45AM 0 points [-]

Right, you claimed that "we'd be in better political shape". Any evidence to back up that belief? Oh, and which political shape is "better"?

Comment author: borismus 08 June 2017 02:10:31AM *  0 points [-]

I attempt to explain in this post: http://smus.com/viewpoint-tolerance-through-curiosity/. What do you think?

Comment author: Lumifer 08 June 2017 04:34:48AM 1 point [-]

Well, that link doesn't explain, since you start with these claims as axioms (that is, you assert them as self-evident, and I'm not quite willing to assume that). And I still don't know what the metric is by which you measure the goodness of the political shape.

As an aside, your quiz requires me to log into Google. Any particular reason for that?

Comment author: madhatter 05 June 2017 10:31:22PM 1 point [-]

Thoughts on Timothy Snyder's "On Tyranny"?

Comment author: DataPacRat 08 June 2017 06:44:58PM 0 points [-]

Due to Life, I now have a 2x3-foot corkboard just above the foot of my bed. What should I pin to it?

Comment author: Elo 08 June 2017 08:28:11PM 1 point [-]

Kanban board

Comment author: DataPacRat 08 June 2017 09:07:28PM 0 points [-]

After a quick Google - a 'to-do/doing/done' list made of sticky-notes seems like it'd be simple, inexpensive, and helpful. Unless someone comes up with a better suggestion by tomorrow, I expect I'm going to start giving this a try as soon as I hit the nearby dollar store. :)

Comment author: Elo 08 June 2017 09:26:43PM *  1 point [-]

Ideally slots for:
- to-do: several
- doing: 1-2
- next: 1
- waiting (for a reply email or something): a few
- done: several

Comment author: Lumifer 08 June 2017 06:53:33PM 1 point [-]

A computer screen.

Comment author: DataPacRat 08 June 2017 07:36:07PM 0 points [-]

An interesting thought.

The current setup is that the back of a dresser is facing my bed, with the corkboard on the back; do you know of any such screens that would be feasible to attach, in whatever manner? Or are you thinking more along the lines of grabbing an El Cheapo tablet, supported by a pile of pushpins?

Comment author: Lumifer 08 June 2017 08:44:10PM 0 points [-]

The issue is size. A tablet might be too small for the purpose, though it has the big advantage of being "complete" out of the box. A computer monitor is going to be larger, but it's just a display; you will still need an actual computer for it. You might be able to use your smartphone as that computer, but depending on the particulars you could still need additional hardware.

The simplest way of attaching the screen would be plain-vanilla velcro. It's not going to be that heavy.

Comment author: DataPacRat 08 June 2017 09:13:55PM 0 points [-]

I think that before I invest myself too heavily in any particular hardware, I should try to find out more about what sorts of software exist for such passive wall displays. For example, I wouldn't mind something like the custom channel used at my local coffee shop, with my own pick of RSS feeds, weather sources, Google Calendar items, and the like; but I don't know offhand of any piece of software, either for Android or Linux, that does that.
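For what it's worth, the RSS half of such a display is small enough to prototype directly. Here is a minimal sketch using only the Python standard library (the function names are mine, and a real wall display would still need a rendering loop on top of this):

```python
import urllib.request
import xml.etree.ElementTree as ET

def headlines(rss_xml, limit=5):
    """Extract the newest item titles from RSS 2.0 XML text."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title") for item in root.iter("item")][:limit]

def fetch_headlines(url, limit=5):
    """Fetch one feed and extract its titles; a display would loop
    over a list of feed URLs and repaint the screen every few minutes."""
    with urllib.request.urlopen(url) as resp:
        return headlines(resp.read(), limit)
```

Feeding the titles into a fullscreen browser page or a small Tk window is then a separate, mostly cosmetic problem.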

Comment author: Elo 08 June 2017 09:50:00PM 1 point [-]

I wouldn't use a computer in bed. I used to, but it generally leads to bad habits around distraction and sleep.

Comment author: MaryCh 07 June 2017 05:10:38PM 0 points [-]

Got another customer who wanted a book for a child of less than 1 y.o. Are there any simple things I can tell them besides "their vision is just developing, come back later"? Because I have the feeling this one didn't quite believe me.

Comment author: Screwtape 07 June 2017 05:35:13PM 1 point [-]

Dr. Seuss has nice pictures. So do travel almanacs. Yeah, the kid probably isn't going to get a whole lot out of them, but you can hold the kid and turn the pages and maybe read a bit at them while they chew on a corner.

Comment author: Viliam 07 June 2017 09:36:36PM *  0 points [-]

Basic shapes, large?

Or perhaps something that seems cute to the parent, and still functions as a large shape for the child. For example, you could make a big dark-green circle on white background, and add some extra lines to make it a frog, while knowing that the child will only see the big green circle on white background.

Comment author: MaryCh 08 June 2017 05:07:36AM 0 points [-]

We don't really have large enough, detail-less books in our shop. There's a cute series of books made from cloth, but we don't carry it either (I think I will try to change this). Really, there's nothing quite large enough. (Or maybe I am just wrong and it's alright? Seems not so.)

Comment author: ChristianKl 12 June 2017 06:04:31PM 0 points [-]

If it's really the case that no existing book serves this use-case, it looks like a market opportunity.

Comment author: ImmortalRationalist 07 June 2017 12:47:54AM 0 points [-]

Has anyone here read Industrial Society And Its Future (the Unabomber manifesto), and if so, what are your thoughts on it?

Comment author: MrMind 08 June 2017 08:25:36AM *  0 points [-]

While I was searching for the manifesto, I noticed a strange incongruence between the English and the Italian Wikipedia. While the latter source is very similar to the former, there is this strange sentence:

il suo documento scritto in 35000 parole La Società Industriale e il Suo Futuro (meglio noto come La Pillola Rossa, chiamato anche "Manifesto di Unabomber")

which translates roughly as "his document 35000 words-long Industrial Society and Its Future (also known as The Red Pill, also called "Unabomber Manifesto").
Wait, what? The Red Pill? Since when?
There's no trace of such name in the English version. Any source on that? Is it plausible? Is it some kind of fucked-up joke?

Comment author: Lumifer 08 June 2017 03:13:04PM 3 points [-]

Wikipedia is a wiki. Anyone can (and does) edit it. There are constant efforts to keep it "clean", but it's not unusual to find, basically, Easter eggs, graffiti, random nonsense, etc. buried in the otherwise reasonable text of some article.

Comment author: Thomas 05 June 2017 08:12:41AM 0 points [-]
Comment author: ZankerH 05 June 2017 01:04:37PM *  0 points [-]

Preliminary solution based on random search

MakeIntVar A
Inc A
Shl A, 5
Inc A
Inc A
Inc A
Shl A, 1

I've hit on a bunch of similar solutions, but 2 * (1 + 34^2) seems to be the common thread.
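A random-search generator for this instruction set fits in a few lines. Here is a sketch in Python (the op encoding, the 10^12 value cap, and the register names are illustrative assumptions of mine, not the actual generator used):

```python
import random

def run(program):
    """Interpret a program over named integer registers."""
    regs = {}
    for op in program:
        if op[0] == "new":        # MakeIntVar X  ->  X = 0
            regs[op[1]] = 0
        elif op[0] == "inc":      # Inc X         ->  X += 1
            regs[op[1]] += 1
        elif op[0] == "shl":      # Shl X, Y      ->  X <<= Y
            regs[op[1]] <<= regs[op[2]]
        elif op[0] == "add":      # X = X + Y
            regs[op[1]] += regs[op[2]]
        elif op[0] == "mul":      # X = X * Y
            regs[op[1]] *= regs[op[2]]
        if any(v > 10**12 for v in regs.values()):
            return {}             # discard runaway programs early
    return regs

def random_program(length, names=("a", "b")):
    """Declarations first, then random ops up to the line budget."""
    prog = [("new", n) for n in names]
    while len(prog) < length:
        x, y = random.choice(names), random.choice(names)
        kind = random.choice(["inc", "shl", "add", "mul"])
        prog.append((kind, x) if kind == "inc" else (kind, x, y))
    return prog

def search(target, length, tries=100_000):
    """Return the first random program that hits the target, or None."""
    for _ in range(tries):
        prog = random_program(length)
        if target in run(prog).values():
            return prog
    return None
```

Restricting `shl` to variable shift amounts (as the puzzle requires) sharply reduces the density of valid programs, which matches the ~4-orders-of-magnitude drop reported further down the thread.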

Comment author: Lumifer 05 June 2017 02:47:57PM *  0 points [-]

Let's rewrite this in something C-like:

int a // a = 0
int b // b = 0
int c // c = 0
a++ // a = 1
a++ // a = 2
b = a * a // b = 4
c = a << a // c = 8
c = b * c // c = 32
c = c + a // c = 34
c = c * c // c = 1156
c++ // c = 1157
c = c * a // c = 2314

12 lines.

Comment author: Thomas 06 June 2017 06:08:31AM *  0 points [-]
1 int a //a=0
2 int b //b=0
3 inc a //a=1
4 inc a //a=2
5 shl a,a //a=8
6 b=a*a //b=64
7 inc a //a=9
8 b=b+b //b=128
9 b=b+a //b=137
10 a=a*b //a=1233
11 a=a*b //a=168921
12 inc a //a=168922
13 a=b*a //a=23142314
Comment author: Lumifer 06 June 2017 05:31:46PM 0 points [-]

Yep. Effectively you're just writing code in a very restricted subset of assembly language.

A more interesting exercise would be to write a program which would output such code with certain (I suspect, limited) guarantees of optimality.
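One way to get a real optimality guarantee: treat each multiset of register values as a node, each instruction (including the MakeIntVar declarations) as a unit-cost edge, and run uniform-cost search. A sketch in Python; the register limit, value cap, and 64-bit shift bound are my own assumed restrictions, so the result is only provably minimal within those bounds:

```python
from heapq import heappush, heappop

def min_lines(target, max_regs=2, cap=10**8):
    """Fewest lines that put `target` into some register, counting the
    declarations, within the given register and value bounds."""
    start = ()                                   # no registers declared yet
    best = {start: 0}
    heap = [(0, start)]
    while heap:
        lines, state = heappop(heap)
        if lines > best.get(state, float("inf")):
            continue                             # stale heap entry
        if target in state:
            return lines
        succs = []
        if len(state) < max_regs:                # MakeIntVar -> new zero
            succs.append(tuple(sorted(state + (0,))))
        for i, x in enumerate(state):
            rest = state[:i] + state[i + 1:]
            cands = [x + 1]                      # Inc
            for y in state:
                cands.append(x + y)              # add
                cands.append(x * y)              # mul (includes squaring)
                if y < 64:
                    cands.append(x << y)         # shl, shift amount bounded
            succs += [tuple(sorted(rest + (v,))) for v in cands if v <= cap]
        for nxt in succs:
            if lines + 1 < best.get(nxt, float("inf")):
                best[nxt] = lines + 1
                heappush(heap, (lines + 1, nxt))
    return None                                  # unreachable under bounds
```

States are stored as sorted tuples, so programs differing only in register naming collapse into one node; since every edge costs exactly one line, the first time a state containing the target is popped, its cost is minimal.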

Comment author: Thomas 06 June 2017 07:17:16PM *  0 points [-]

Name a number (below 1 billion) and I'll give you (optimal) code.

Comment author: Lumifer 06 June 2017 08:13:30PM 0 points [-]

You had a genetic algorithm at some point -- is that what you are using?

Comment author: Thomas 06 June 2017 08:27:55PM 0 points [-]

I wouldn't call it genetic. But yes, I have an algorithm to solve this kind of problem.

Comment author: Thomas 05 June 2017 06:23:12PM 0 points [-]

13 lines are also just enough for 23142314.

Comment author: Thomas 05 June 2017 01:13:42PM 0 points [-]

You can't do

Shl A, 5

You must first create 5 in a variable, say B.

Comment author: ZankerH 05 June 2017 01:20:58PM *  0 points [-]

Well, that does complicate things quite a bit. I threw those lines out of my algorithm generator and the frequency of valid programs generated dropped by ~4 orders of magnitude.

Comment author: Thomas 05 June 2017 01:28:39PM 0 points [-]

You can't even shift by 1. You have to create 1 first, out of zero. Just like God.

Comment author: ZankerH 05 June 2017 02:21:54PM 1 point [-]

In which case, best I can do is 10 lines

MakeIntVar A
Inc A
Inc A
Inc A
Inc A
Comment author: Thomas 05 June 2017 03:24:57PM 0 points [-]

Good enough, congratulations!

The next (weekly) question might be: how do you optimally produce an arbitrary large number out of zero? For example, 15 lines is enough to produce 23142314. But is this the minimum?

Comment author: ZankerH 05 June 2017 05:46:32PM 0 points [-]

Define "optimal". Optimizing for the utility function of min(my effort), I could misuse more company resources to run random search on.

Comment author: Thomas 05 June 2017 05:57:49PM 0 points [-]

The optimal is to minimize either the energy or the time required, in my book. Or to minimize algorithmic steps. It doesn't really matter which of those definitions you adopt; they are closely related.

It's like Kolmogorov complexity. Which programming language should be used as the reference? It doesn't really matter. Just use the one I gave, or modify it in any sensible way. Then find a very good solution for 23142314 - or any other interesting number. They are all interesting.

Comment author: Lumifer 05 June 2017 04:01:34PM *  0 points [-]

Just doing left shifts will scale you up very quickly:

int a // a = 0
a++ // a = 1
a++ // a = 2
a = a << a // a = 8
a = a << a // a = 2048
a = a << a // a = large number
Comment author: Thomas 05 June 2017 04:07:39PM 0 points [-]

Sure. But those numbers are like hubs; you need a "local line" to get to a "non-hub" number, and non-hubs are the majority.

Comment author: Thomas 05 June 2017 03:27:22PM 0 points [-]

It isn't.

Comment author: ZankerH 05 June 2017 11:44:27AM *  0 points [-]

Define "shortest". Least lines? Smallest file size? Least (characters * nats/char)?

Comment author: Thomas 05 June 2017 12:19:33PM 0 points [-]

Least lines.