Comment author: Viliam 16 September 2016 02:52:44PM 3 points [-]

Even just converting science into a Wikipedia-like format would be useful for the sake of open access. Imagine if all citations in a paper were a hyperlink away, and the abstract would display if you hovered your mouse over the link.

YES! YES! YES! And this could be done pretty much automatically. Also, links in the reverse direction: "who cited this paper?" with abstracts in tooltips.

But there is much more that could be done in the hypothetical Science Wiki. For example, imagine that the reverse citations that disagree with the original paper appeared in a different color or with a different icon, so you could immediately check "who disagrees with this paper?". That would already require some human work (unfortunately, with all the problems that follow, such as edit wars and editor corruption). Or imagine having a "Talk page" for each of these papers. Imagine people trying to write better third-party abstracts: more accessible, fewer buzzwords, adding some context from later research. Imagine people trying to write a simpler version of the more popular papers...

Science could be made more accessible and popular.

Comment author: WhySpace 17 September 2016 05:09:00AM *  2 points [-]

One of my first thoughts was glosses.

If I recall, in the early Middle Ages, one of the main ways by which philosophy and proto-science advanced was through the extensive use of glosses (adapted from biblical glosses). Contemporary thinkers would all write commentaries on various works of Aristotle. At first, these were confined to the margins of the manuscripts being copied, but later they were published separately.

Since Aristotle had a fairly comprehensive philosophy, reading all the glosses on a particular work of his brought you up to speed with the current state of knowledge on that topic. This had the effect of creating domains of knowledge, and scholarly specialization first nucleated around individual texts.

I say this, because one of the main problems with science today is just that there is so much of it. This makes it difficult to have interdisciplinary exchange of knowledge and meaningful communication and coordination.

Having a search engine like Google Scholar helps enormously, but it can be difficult to sift through a body of knowledge if you don't already know all the right keywords to search for. The existence of review articles also helps summarize, but it's still a somewhat clumsy solution. Why not replace the review article with a wiki?

It would be nice to have everything arranged formally and hierarchically, by field, sub-field, and then by topic within that sub-field. Each level could have its own publicly editable summary, if there were enough human effort to maintain it. Imagine all that also ordered by citation index, and with links to all relevant news articles, blogs, and reddit threads commenting on each article.

Read a tabloid headline starting with "Scientists Say..."? Go directly to the wiki, and check what other scientists and the internet think of the research quality, background context, reputability, etc. Maybe even have a prediction market on whether the findings will replicate.

A huge part of the scientific discourse is no longer happening in the journal articles themselves, but this could capture it all in one place.

Comment author: DataPacRat 12 September 2016 10:09:14PM 4 points [-]

Time to rebuild a library

My 5 terabyte hard drive went poof this morning, and silly me hadn't bought data-recovery insurance. Fortunately, I still have other copies of all my important data, and it'll just take a while to download everything else I'd been collecting.

Which brings up the question: What info do you feel it's important to have offline copies of, gathered from the whole gosh-dang internet? A recent copy of Wikipedia and the Project Gutenberg DVD are the obvious starting places... which other info do you think pays the rent of its storage space?

Comment author: WhySpace 16 September 2016 12:36:12AM *  0 points [-]

Depends how much storage space you are willing to buy.

One of my fantasies is a Raspberry Pi that automatically downloads all Wikipedia updates each month or so, to keep a local copy. The ultimate version of this would do the same for every new academic article available on Sci-Hub.

Sci-Hub is the largest collection of scientific papers on the planet, with over 58 million academic papers. If they average 100 kB apiece, that's only 5.8 TB. If they average 1 MB each, that's 58 TB, so you would need to shell out some decent cash, but you could in theory download all available academic papers.
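For what it's worth, the back-of-the-envelope numbers above check out. Here's the arithmetic as a quick sketch (the 58 million paper count comes from the comment above; the average per-paper sizes are just illustrative guesses):

```python
# Rough storage estimate for mirroring a ~58-million-paper collection.
# The paper count is the figure cited above; average per-paper sizes
# are assumptions for illustration, not measured values.

PAPERS = 58_000_000

def total_tb(avg_bytes_per_paper: int) -> float:
    """Total collection size in terabytes (decimal: 1 TB = 10**12 bytes)."""
    return PAPERS * avg_bytes_per_paper / 1e12

small = total_tb(100 * 1000)    # 100 kB per paper -> 5.8 TB
large = total_tb(1000 * 1000)   # 1 MB per paper  -> 58 TB
```

Note these are decimal terabytes, the same units drive manufacturers advertise, so a single consumer drive covers the optimistic estimate and a small disk array covers the pessimistic one.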

Someone may even have already done something like this, and put the script on GitHub or somewhere. (I haven't looked.)

(Also, nice username. :) )

EDIT: It turns out there's a custom built app for downloading and viewing Wikipedia in various languages. It's available on PCs, Android phones, and there's already a version made specially for the Pi: http://xowa.org/home/wiki/Help/Download_XOWA.html

I wonder how difficult it would be to translate all of Sci-Hub into a wiki format that the app could import and read. You'd probably have to modify the app slightly in order to divide up all the Sci-Hub articles among multiple hard drives. It might make the in-app search feature take forever, for instance. And obviously it wouldn't work for the Android app, since there's not enough space on a MicroSD card. (Although maybe a smaller version could be made, containing only the top 32 GB of journal articles with the most citations, plus all review articles.)

Even just converting science into a Wikipedia-like format would be useful for the sake of open access. Imagine if all citations in a paper were a hyperlink away, and the abstract would display if you hovered your mouse over the link. (The XOWA app does this for Wikipedia links.)

Comment author: Soothsilver 12 September 2016 12:09:43PM 4 points [-]

Being around here has made me think that I know everything interesting about the world, and it has suppressed my excitement and joy in many minor things I could do. I also feel like my sense of wonder has diminished. As I write this, I am a little unhappy and in a period of depression, but I had similar feelings, if less intense, even before this period.

I was wondering whether you have any advice on how to restore this; or even better, how to "forget" as much rationality and transhumanism as possible (if not actually forgetting, then at least "to think and feel as I did before I read the Sequences")?

Comment author: WhySpace 15 September 2016 11:49:20PM *  0 points [-]

If you are looking for a sense of wonder in particular, I'd recommend pretty much anything by Carl Sagan: audiobooks, the Cosmos show, etc. My thought is that the social proof will transfer his positive valence to you. Ideally, thinking about anything on a grand scale would then trigger an association with that sense of wonder.

I hesitated to say this though, since it is also likely to make a sense of smallness more salient, if it isn't already salient. Use your best judgement.

Comment author: Nick5a1 06 September 2016 03:57:19PM *  3 points [-]

[Forgetting Important Lessons Learned]

Does this happen to you?

I'm not necessarily talking about mistakes you've made which have caused significant emotional pain, and which you've learnt an important lesson from. I think these tend to be easier to remember. I'm more referring to personal processes you've optimized, or things you've spent time thinking about and decided on the best way to approach that type of problem. ...and then a similar situation or problem appears months or years later and you either (a) fail to recognize it's a similar situation, (b) completely forget about the previous situation and your previous conclusion as to the best way to handle this type of problem, or (c) fail to even really think about the new situation as a problem you may have previously solved.

Anyone else frustrated by this?

Do you have any strategies you use to overcome this problem?

Comment author: WhySpace 09 September 2016 10:29:25PM 0 points [-]

I'm not sure how much this would help you in particular, but spaced repetition, when done right, should jog your memory and make you work to recall something just before you would have forgotten it.
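The "just before you would have forgotten it" behavior comes from growing the review interval multiplicatively after each successful recall. Here's a minimal sketch in the spirit of the SM-2 algorithm that Anki's scheduler descends from; the constants are illustrative and this simplifies several details of the real algorithm:

```python
# Minimal SM-2-style spaced-repetition scheduler (simplified sketch;
# real Anki adds interval fuzzing, learning steps, lapse handling, etc.).

def next_review(interval_days: float, ease: float, quality: int):
    """Return (new_interval_days, new_ease) after one review.

    quality: 0-5 self-rating of recall; 3 or higher counts as a success.
    """
    if quality < 3:
        # Failed recall: start the card over at a one-day interval.
        return 1.0, ease
    # Successful recall: grow the interval multiplicatively, and nudge
    # the ease factor based on how easy the recall felt (floor of 1.3).
    new_interval = max(1.0, interval_days) * ease
    new_ease = max(1.3, ease + 0.1 - (5 - quality) * 0.08)
    return new_interval, new_ease

# Intervals grow roughly exponentially while recall keeps succeeding:
interval, ease = 1.0, 2.5
for _ in range(4):
    interval, ease = next_review(interval, ease, quality=4)
```

With a starting ease of 2.5, successful reviews land at roughly 1, 2.5, 6, 16, 41... days, which is what lets a deck of thousands of cards cost only a few minutes per day.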

In order to learn and remember to apply useful concepts, I have an Anki deck containing the following:

Comment author: fubarobfusco 05 September 2016 05:49:02PM 2 points [-]

It's not as if LW has a problem of too much material these days.

Comment author: WhySpace 05 September 2016 09:28:25PM 2 points [-]

For perspective, I find Elo's writing interesting maybe half the time. That's about on par with a random LW post, for me. (Whereas >99% of Facebook posts are uninteresting.)

If he published more than about once a day, or put a little less effort into each post, I think he'd lower the LW average. (According to my subjective judgement.) Conversely, another hour or so on each post, or a slightly higher self-filter might raise the average. (Assuming his idea of what makes a good post is fairly representative of mine.)

Comment author: ArisKatsaris 01 September 2016 09:58:03AM 1 point [-]

Online Videos Thread

Comment author: WhySpace 03 September 2016 04:16:45AM *  0 points [-]

EDIT: Perhaps I should say why this is relevant. X-risk isn't just things which could destroy humanity outright, but also things from which we never recover. I'm also interested in building robust institutions which can survive unexpected circumstances and have a positive impact over centuries to influence the far future. (Nobel Prize Foundation, DARPA's 100 Year Starship, Long Now Foundation, etc.) Perhaps cryonicists will also find it interesting.

There was a recent TED talk on what makes systems robust in changing environments.

He starts with the example of the immune system, then mentions long-lived social systems (the Catholic Church, the Roman Empire), but goes on to focus mainly on applications to businesses that want to survive black swan events and industry disruption.

This flows somewhat counter to conventional wisdom, which says to optimize for growth by putting all your eggs into whichever basket has the largest growth rate or market.

He lists six characteristics that all these robust, long-lived systems have in common: Redundancy, Diversity, Modularity, Adaptation, Prudence, and Embeddedness.

Comment author: MrMind 30 August 2016 08:46:48AM *  1 point [-]

I think a problem arises with conclusion 4: I can agree that humans imperfectly steering the world toward their own values has resulted in a world that is, on average, ok, but AI will possibly be much more powerful than humans.
Insofar as corporations and sovereign states can be seen as super-human entities, we can see that imperfect value optimization has created massive suffering: think of all the damage a ruthless corporation can inflict, e.g. by polluting the environment, or a state where political assassination is easy and widespread.
An imperfectly aligned value optimization might result in a world that is ok on average, but possibly this world would be split into a heaven and a hell, which I think is not an acceptable outcome.

Comment author: WhySpace 30 August 2016 03:22:41PM *  0 points [-]

This is a good point. Pretty much all the things we're optimizing for which aren't our values are due to coordination problems. (There's also Akrasia/addiction sorts of things, but that's optimizing for values which we don't endorse upon reflection, and so arguably isn't as bad as optimizing for a random part of value-space.)

So, Moloch might optimize for things like GDP instead of Gross National Happiness, and individuals might throw a thousand starving orphans under the bus for a slightly bigger yacht or whatever, but neither is fully detached from human values. Even if U(orphans)>>U(yacht), at least there’s an awesome yacht to counterbalance the mountain of suck.

I guess the question is precisely how diverse human values are in the grand scheme of things, and what the odds are of hitting a human value when picking a random or semi-random subset of value-space. If we get FAI slightly wrong, precisely how wrong does it have to be before it leaves our little island of value-space? Tiling the universe with smiley faces is obviously out, but what about hedonium, or wire heading everyone? Faced with an unwinnable AI arms race and no time for true FAI, I’d probably consider those better than nothing.

That's a really, really tiny sliver of my values though, so I'm not sure I'd even endorse such a strategy if the odds were 100:1 against FAI. If that's the best we could do by compromising, I'd still rate the expected utility of MIRI's current approach higher, and hold out hope for FAI.

Comment author: Dagon 30 August 2016 02:01:02PM 4 points [-]

You can also point out the contradiction that they don't seem to be in a hurry to take the obvious first step by killing themselves, which proves that they see at least one human life as a net positive. Then talk about everyone else they don't want to kill or prevent from being born.

Be aware, though, that this isn't truth-seeking. It's debate for the fun of it.

Comment author: WhySpace 30 August 2016 02:42:20PM *  7 points [-]

I think there's also a near/far thing going on. I can't find it now, but somewhere in the rationalist diaspora someone discussed a study showing that people will donate more to help a smaller number of injured birds. That's one reason why charity adds focus on 1 person or family's story, rather than faceless statistics.

Combining this with what you pointed out, maybe a fun place to take the discussion would be to suggest that we start with a specific one of our friends. "Exactly. Let's start with Bob. Alice next, then you. I'll volunteer to go last. After all, I wouldn't want you guys to have to suffer through the loss of all your friends, one by one. No need to thank me; it is its own reward."

EDIT: I was thinking of scope insensitivity, but couldn't remember the name. It's not just a LW concept, but also an empirically studied bias with a Wikipedia page and everything.

However, I mis-remembered it above. It's true that I could cherry-pick numbers and say that donations went down with scope in one case, but I'm guessing that's probably not statistically significant. People are probably willing to donate a little more, not less, to have an impact a hundred times as large. Perhaps there are effects from misleading vividness at a small scale, as I imply, but on a large scale the slope is likely still positive, even if just barely.

Comment author: WhySpace 30 August 2016 05:01:24AM *  7 points [-]

Here's the problem with talking x-risk with cynics who believe humanity is a net negative, and also a couple possible solutions.

Frequently, when discussing the great filter, or averting nuclear war, someone will bring up the notion that human extinction would be a good thing. Humanity has such a bad track record with environmental responsibility and human rights abuses toward less advanced civilizations that the planet, and by extension the universe, would be better off without us. Or so the argument goes. I've even seen some countersignaling severe enough to argue, somewhat seriously, in favor of building more nukes and weapons, out of a vague but general hatred for our collective insanity, politics, pettiness, etc.

Obviously these aren't exactly careful, step by step arguments, where if I refute some point they'll reverse their decision and decide we should spread humanity to the stars. It's a very general, diffuse dissatisfaction, and if I were to refute any one part, the response would be "ok sure, but what about [lists a thousand other things that are wrong with the world]". It's like fighting fog, because it's not their true objection, at least not quite. It's not like either of us feels like we're on opposite sides of a debate or anything though, so usually pointing out a few simple facts is enough to get a concession that there are exceptions to the rule "humanity sucks". However, obviously refuting all thousand things, one by one, isn't a sound strategy. There really is a lot of bad stuff that humanity has done, and will continue to do I'm sure.

Usually, I try to point at broad improving trends like infant mortality, war, extreme poverty, etc. I'll argue that the media biases our fears by magnifying all the problems that remain. I paint a rosy picture: people fighting debtors' prisons in the past, debating universal healthcare today, and in the future arguing fiercely over whether money and work are needed at all in their post-scarcity Star Trek economy. Political rights for minorities yesterday, social justice today, arguing over any minor inconveniences tomorrow. Starvation yesterday, healthy food for all today, gourmet delicacies free next to drinking fountains tomorrow. I figure they're more likely to accept a future where we never stop arguing, but do so over progressively more petty things, and never realize we're in a utopia.

However, I think I might have better luck trying to counter-countersignal. "Yeah, humanity is pretty messed up, but why do you want to put us out of our misery? Shouldn't we be made to suffer through climate change and everything else we've brought on ourselves, instead of getting off easy? Imagine another thousand years of inane cubicle work and a dozen more Trump presidencies. Maybe we'll learn our lesson." [Obviously, I'm joking here.]

I think this might have the advantage of aligning their cynicism with their more charitable impulses, at least the way my conversations tend to go. And there's no impulse to counter-counter-countersignal, because I've gone up a meta-level and made the countersignaling game explicit, which drains the fun out of being contrarian and moves the conversation toward new sources of amusement. I'll bet we could then proceed to have interesting discussions on how to solve the world's problems. If whoever I'm musing with comes up with a few ideas of their own, maybe they'll even take ownership of the ideas, and start to actually care about saving the world in their own way. I can dream, I suppose.

Comment author: morganism 29 August 2016 09:58:01PM 6 points [-]

Academic Torrents site, for large scale database transfers

http://academictorrents.com/

Comment author: WhySpace 30 August 2016 03:38:25AM *  6 points [-]

LWers who liked this may also like: http://sci-hub.bz/

About: https://en.wikipedia.org/wiki/Sci-Hub

Basically, if you search for something and they don't have it, there's a huge network of scientists with access to pay-walled journals, and one of them will add a PDF. They've grown larger than any of the journal subscription companies, and have the world's largest collection of scientific papers.
