All of WhySpace_duplicate0.9261692129075527's Comments + Replies

I'm not really sure how shortform stuff could be implemented either, but I have a suggestion on how it can be used: jokes!

Seriously. If you look at Scott's writing, for example, one of the things which makes it so gripping is the liberal use of amusing phrasing, and mildly comedic exaggerations. Not the sort of thing that makes you actually laugh, but just the sort of thing that is mildly amusing. And, I believe he specifically recommended it in his blog post on writing advice. He didn't phrase his reasoning quite like this, but I think of it as a little bit... (read more)

Note to self, in case I come back to this problem: the Vienna Circle fits the bill.

:)

Honestly, there are a bunch of links I don't click, because the 2 or 3 word titles aren't descriptive enough. I'm a big fan of the community norm on more technically minded subreddits, where you can usually find a summary in one of the top couple comments.

So, I'm doing what I can to encourage this here. But mostly, I thought it was important on the AI front, and wanted to give a summary which more people would actually read and discuss.

Here are some thoughts on the viability of Brain Computer Interfaces. I know nothing, and am just doing my usual reality checks and initial exploration of random ideas, so please let me know if I'm making any dumb assumptions.

They seem to prefer devices in the blood vessels, due to the low invasiveness. The two specific form factors mentioned are stents and neural dust. Whatever was chosen would have to fit in the larger blood vessels, or flow freely through all of them. Just for fun, let's choose the second, much narrower constraint, and play with some nu... (read more)

1turchin
I would do it by using genetically modified human cells like macrophages, which sit inside blood vessels and register the electrical activity of their surroundings. Each could send information by dumping its log, as a DNA chain, back into the bloodstream. Downstream, such DNA chains would be sorted and read, though this would create time delays. Converting cells into DNA machines this way would eventually lead to bio-nanorobots able to do everything the original nanobots were intended to do, including acting as neural dust. Another option is to deliver genetic vectors with genes into some astrocytes, and create inside them a small transmission element, like a fluorescent protein reacting to changes in the surrounding electric field. The best solution would be a receptor-binding drug, like an antidepressant (which is legal to deliver into the brain), which is also able to transmit information about where and how it has bound, perhaps enabling high-resolution non-invasive scans.

The article only touches on it briefly, but it suggests that faster AI takeoffs are worse. "Fast," however, is only relative to the fastest human minds.

Has there been much examination of the benefits of slow takeoff scenarios, or takeoffs that happen after human enhancements become available? I vaguely recall a MIRI fundraiser saying that they would start putting marginal resources toward investigating a possible post-Age of EM takeoff, but I have no idea if they got to that funding goal.

Personally, I don't see Brain-Computer Interfaces as useful for AI takeoff... (read more)

1eternal_neophyte
Perhaps Elon doesn't believe we are I/O bound, but that he is I/O bound. ;] There's a more serious problem which I've not seen most of the Neuralink-related articles talk about* - which is that layering intelligence augmentations around an overclocked baboon brain will probably actually increase the risk of a non-friendly takeoff. * haven't read the linked article through yet

TL;DR of the article:

This piece describes a lot of why Elon Musk wanted to start Neuralink, how Brain-Computer Interfaces (BCIs) currently work, and how they might be implemented in the future. It's a really, really broad article, and aims for breadth while still having enough depth to be useful. If you already have a grasp of the evolution of the brain, Dual Process Theory, parts of the brain, how neurons fire, etc., you can skip those parts, as I have below.

AI is dangerous, because it could achieve superhuman abilities and operate at superhuman speeds. Th... (read more)

0[anonymous]
Thanks for the summary and overview!

TL;DR: What are some movements you would put in the same reference class as the Rationality movement? Did they also spend significant effort trying not to be wrong?

Context: I've been thinking about SSC's Yes, We have noticed the skulls. They point out that aspiring Rationalists are well aware of the flaws in straw Vulcans, and actively try to avoid making such mistakes. More generally, most movements are well aware of the criticisms of at least the last similar movement, since those are the criticisms they are constantly defending against.

However, searchin... (read more)

1WhySpace_duplicate0.9261692129075527
Note to self, in case I come back to this problem: the Vienna Circle fits the bill.
2fubarobfusco
Just a few groups that have either aimed at similar goals, or have been culturally influential in ways that keep showing up in these parts —

  • The Ethical Culture movement (Felix Adler).
  • Pragmatism / pragmaticism in philosophy (William James, Charles Sanders Peirce).
  • General Semantics (Alfred Korzybski).
  • The Discordian Movement (Kerry Thornley, Robert Anton Wilson).
  • The skeptic/debunker movement within science popularization (Carl Sagan, Martin Gardner, James Randi).

General Semantics is possibly the closest to the stated LW (and CFAR) goals of improving human rationality, since it aimed at improving human thought through adopting explicit techniques to increase awareness of cognitive processes such as abstraction. "The map is not the territory" is a g.s. catchphrase.
0ChristianKl
It's hard to find the reference class because our rationality movement owes its existence to the internet. If you take a pre-internet self-development movement like Landmark Education, it's different in many ways, and it would be hard for me to say that it's in the same reference class as our rationality movement.
0Viliam
There is always going to be some difference, so I am going to ignore medium-sized differences here and cast a wide net:

  • scientists -- obviously, right?
  • atheists -- they usually have "reason" as their applause light (whether deservedly or not)
  • "social engineers" of all political flavors, including SJWs -- believe themselves to know better than the uneducated folks
  • psychoanalysts
  • behaviorists
  • mathematicians
  • philosophers

I'm not so sure. Would your underlying intuition be the same if the torture and death was the result of passive inaction, rather than of deliberate action? I think in that case, the torture and death would make only a small difference in how good or bad we judged the world to be.

For example, consider a corporate culture with so much of this dominance hierarchy that it has a high suicide rate.

Also:

Moloch whose buildings are judgment! ... Lacklove and manless in Moloch! ... Moloch who frightened me out of my natural ecstasy!

... Real holy laughter in the ri

... (read more)

I'd add that it also starts to formalise the phenomenon where one's best judgement oscillates back and forth with each layer of an argument. It's not clear what to do when something seems a strong net positive, then a strong negative, then a strong positive again after more consideration. If the value of information is high, but it's difficult to make any headway, what should we even do?

This is especially common for complex problems like xrisk. It also makes us extremely prone to bias, since we by default question conclusions we don't like more than ones we do.

This is really sad. I'm sorry to hear things didn't work out, but I'm still left wondering why not.

I guess I was really hoping for a couple thousand+ word post-mortem, describing the history of the project, and which hypotheses you tested, with a thorough explanation of the results.

If you weren't getting enough math input, why do you think that throwing more people at the problem wouldn't generate better content? Just having a bunch of links to the most intuitive and elegant explanations, gathered in one place, would be a huge help to both readers and writ... (read more)

7Alexei
Yes, many students would benefit from a math explanation platform. But it was hard for us to find writers, and we weren't getting as much traction with them as we wanted. We reached out to some forums and to many individuals. That version of Arbital was also promoted by Eliezer on FB. When we switched away from math, it wasn't because we thought it was hopeless. We had a lot of ideas left to try out. But when it's not going well, you have to call it quits at some point, and so we did. There was also the consideration that if we built a platform for (math) explanations, it would be hard to eventually transition to a platform that solved debates (which always seemed like the more important part). I think if someone wanted to give it a shot with another explanation platform and had a good strategy for getting writers, I'd feel pretty optimistic about their chance of success.

I don't see any reason why AI has to act coherently. If it prefers A to B, B to C, and C to A, it might not care. You could program it to prefer that utility function.*

If not, maybe the A-liking aspects will reprogram B and C out of its utility function, or maybe not. What happens would depend entirely on the details of how it was programmed.

Maybe it would spend all the universe's energy turning our future light cone from C to B, then from B to A, and also from A to C. Maybe it would do this all at once, if it was programmed to follow one "goal"... (read more)
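The classic worry about an agent with cyclic preferences is that it can be "money-pumped." A toy sketch (all names, the fee, and the trade structure here are illustrative, not from the original comment):

```python
# An agent that prefers B over A, C over B, and A over C (a cycle)
# will accept a chain of small paid trades and end up back where it
# started, only poorer. Toy illustration of the money-pump argument.

PREFERS = {("B", "A"), ("C", "B"), ("A", "C")}  # (preferred, over)
NEXT_OFFER = {"A": "B", "B": "C", "C": "A"}

def money_pump(start_item="A", fee=1, cycles=3):
    item, money = start_item, 100
    for _ in range(3 * cycles):          # three trades per full cycle
        offer = NEXT_OFFER[item]
        if (offer, item) in PREFERS:     # agent accepts any trade it prefers
            item, money = offer, money - fee
    return item, money

print(money_pump())  # ('A', 91): same item as the start, 9 units poorer
```

Whether a real AI would "not care" about this, as the comment suggests, depends entirely on whether anything in its environment is positioned to run the pump against it.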

I rather like this way of thinking. Clever intuition pump.

What are we actually optimizing the level-two map for, though?

Hmmm, I guess we're optimizing our meta-map to produce accurate maps. It's mental cartography, I guess. I like that name for it.

So, Occam's Razor and formal logic are great tools of philosophical cartographers. Scientists sometimes need a sharper instrument, so they crafted Solomonoff induction and Bayes' theorem.

Formal logic is a special case of Bayesian updating, one where only p=0 and p=1 values are allowed. There are third alternat... (read more)
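The p ∈ {0, 1} special case can be illustrated with a toy Bayes update (a sketch; the function and the numbers are illustrative): once a hypothesis sits at probability exactly 0 or 1, no evidence can move it, which is how deductive certainty behaves.

```python
# Posterior P(H|E) from prior P(H), likelihoods P(E|H) and P(E|~H).
# At prior 0 or 1 the update is inert, mirroring classical logic.

def bayes_update(prior, likelihood_h, likelihood_not_h):
    numerator = prior * likelihood_h
    denominator = numerator + (1 - prior) * likelihood_not_h
    return numerator / denominator if denominator else prior

print(bayes_update(1.0, 0.01, 0.99))  # 1.0 -- certainty is immovable
print(bayes_update(0.0, 0.99, 0.01))  # 0.0 -- likewise
print(bayes_update(0.5, 0.8, 0.2))    # 0.8 -- ordinary probabilities move
```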

True. Maybe we could still celebrate our minor celebrities more, along with just individual good work, to avoid orbiting too much around any one person. I don't know what the optimum incentive gradient is between small steps and huge accomplishments. However, I suspect that on the margin more positive reinforcement is better along the entire length, at least for getting more content.

(There are also benefits to adversarial review and what not, but I think we're already plenty good at nitpicking, so positive reinforcement is what needs the most attentio... (read more)

3Viliam
This. Different people need different advice. People prone to worship and groupthink need to be told about the dangers of following the herd. People prone to nitpicking and contrarianism need to be told about how much power they lose by being unable to cooperate. Unfortunately, in real life most people will choose exactly the opposite message -- the groupthinkers will remind themselves of the dangers of disagreement, and the nitpickers will remind themselves of the dangers of agreement.

Awesome link, and a fantastic way of thinking about how human institutions/movements/subcultures work in the abstract.

I'm not sure the quote conveys the full force of the argument out of that context though, so I recommend reading the full thing if the quote doesn't ring true with you (or even if it does).

0Elo
Lesswrong doesn't celebrate heroes much. I think that's on purpose though...

I agree that philosophy and neuroscience haven't confirmed that the qualia I perceive as red is the same thing as the qualia you experience when you look at something red. My red could be your blue, etc. (Or, more likely, completely unrelated sensations chosen randomly from trillions of possibilities.) Similarly, we can't know exactly what it's like to be someone else, or to be an animal or something.

However, it's perfectly reasonable to group all possible human experiences into one set, and group all possible things that an ant might experience in another... (read more)

0Dagon
Deeper than that. Nobody's even credibly suggested that it's possible for such a thing to exist in the measurable, physical world, or how anyone might even start to confirm it. Even if you talk about measurable things, it's not about "exact"; it's about the relative amount of overlap with different clusters of others. Taking as an example whether a transwoman is more like a ciswoman or a cisman, scanning doesn't give you much help, no matter how much data you collect. The overlap of scans with ants is pretty small, and there's no significant difference between the overlap of (for example) ciswomen compared to queen ants versus transwomen compared to queen ants. There is a TON of differences and a TON of overlap (depending on granularity of scan) between ciswomen, cismen, and transwomen, and asking "is the average transwoman closer to the average cisman or the average ciswoman" is just a useless thing. It depends on how you weight the differences, and in most cases the differences between individuals in the same category are as significant as the differences in averages across categories. No matter what objective evidence you put together, it's going to come down to "there's some clustering, but it's kind of arbitrary whether you think it's important". How you feel is internal and unmeasurable. If you want to talk about something other than politics or other people's expectations (same thing), dissolve the topic -- are you talking about feelings, or biological/behavioral clustering? In either case, why do you care about "typical", as opposed to "existent" or "experienced" feelings/behaviors/measurements?
3Zack_M_Davis
I claim that we already have enough empirical evidence to conclude with very high confidence that "gender identity" is not a useful construct for understanding the psychology of gender dysphoria. For the trans women in particular, I claim that a solid majority of them will have an "autogynephilic" etiology: that is, non-exclusively-androphilic trans women are basically straight men who let their fixation on erotic cross-dressing and cross-gender fantasy spiral out of control and get reified into a highly-valued self-identity. (Something like this may be true of a minority of trans men, but that's much rarer.) This claim predicts that self-reported confidence in one's gender identity will not correlate with any sexually-dimorphic brain features in MRI studies. For more information on the findings supporting these claims, see Kay Brown's FAQ, or Anne Lawrence's monograph Men Trapped in Men's Bodies: Narratives of Autogynephilic Transsexualism.

that still rules out the globehopping, couchsurfing lifestyle.

Not necessarily. I'd be fine with it if my girlfriend decided to hitchhike around Europe for a month or two, and I'm pretty sure she'd be fine with me doing the same. There's no reason the one with the job couldn't take a vacation in the middle, too.

If the unemployed partner did this twice a year, for 2 months at a time, that'd be 1/3 of their time spent globetrotting. If they did this 3x a year, (2 months home, then 2 months exploring, then 2 months home again) that'd be pushing it, but might be stable long term if they could find ways to make sure the working party didn't feel used or left out.

This was a useful article, and it's nice to know the proper word for it. Let me see if I can add to it slightly.

Maybe a prisoner is on death row, and if they run away they are unlikely to suffer the consequences, since they'll be dead anyway. However, even knowing this, they may still decide to spend their last day on earth nursing bruises, because they value the defiance itself far more than any pain that could be inflicted on them. Perhaps they'd even rather die fighting.

It looks like you don't reflectively endorse actions taken for explicitly semiotic r... (read more)

Agreed. I'd love to see even more of all of these sorts of things, but the low margin nature of the industry makes this somewhat difficult to attack directly, so there isn't anywhere near as much money being invested in that direction as I would like.

I believe NASA has gotten crop yields high enough that a single person can be fed off of only ~25 m^2 of land (the figure may be off, or may be m^3 or something, but that's what I vaguely recall), but that would have been with fancy hydroponic/aquaponic/aeroponic setups or something, and extremely high crop densi... (read more)

2ChristianKl
Fancy hydroponic setups aren't almost-zero cost. Vertical farming is not cost-competitive with the farming that we have. There are companies like http://rowbot.com/ who develop farm robots to use fertiliser more effectively. There are plenty of devices for growing food at home on Kickstarter. Having fruit trees isn't free. Having fruit trees in cities costs the cities enough that they prefer other kinds of trees. It's worth being suspicious of "free". Most of the time there's some work involved, and it's worth understanding the hidden costs.

A job is a cost

Agreed. When I said the "cost to local jobs" I was being informal, but referring to the (supposed) increase in unemployment as Walmart displaces local, less efficient small businesses.

Paying people to do a job which can be eliminated is like paying people to dig holes and fill them back in. I'd rather just give them the money than pay them to do useless work, but I'll take the second option over them being unemployed.

As an interesting side note, I think this might put me on the opposite side of the standard anti-Walmart argument... (read more)

0ChristianKl
Very few politicians on either side argue that Walmart is bad, at least outside of the local level.

these are called "recessions" and ... "depressions".

Ha, very good point. Our current society is largely built around growth, and when growth stops the negative effects absolutely do trickle down, even to people who don't own stocks. In fact, companies were counting on those increases, and so have major issues when they don't materialize, and need to get rid of workers to cut costs.

I will mention that through most of history and prehistory, the economic growth rate has been much, much smaller. I haven't read it, so I can't vouch for ... (read more)

0Lumifer
True, and I think not many people want a return to those times (some do, though, mostly on environmentalist grounds). That's the whole growing-inequality debate, a separate highly complicated topic. Economically speaking, a job is a cost. If you can produce the same value with fewer jobs, that's a good thing called an increase in productivity.

I'd like to take this a step further, even. If you are a utilitarian, for example, what you really care about is happiness, which is only distantly linked to things like GDP and productivity, at least in developed nations. People who have more money spend more money,^[citation_needed] so economic growth is disproportionately focused on better meeting the desires of those with more money. Maybe the economy doubles every couple decades, but that doesn't mean that the poor have twice as much too. I would be interested to know precisely how much more they actu... (read more)

2ChristianKl
Selling groceries is a low-margin business. There isn't much room for a startup to sell radically cheaper groceries. At the same time, companies like Amazon do invest in optimizing grocery sales. Technology such as self-driving trucks has the potential to lower grocery prices. There are also various companies working on making farming more efficient. Both GMOs and putting more robots on farmland have the potential to make food production more efficient.
1Lumifer
This implies that no one cares much about periods when the GDP growth turns negative -- and these are called "recessions" and, when the contraction is severe enough, "depressions". Is that so? I don't think that's the case. Take a recent economic growth success story -- China. The major effect was lifting hundreds of millions of people out of poverty. I don't think that's the case as well (as long as we are talking about consequences rather than intentions). Uber is cheaper than taxis, AirBnB is cheaper than hotels, etc. And Walmart was a startup before startups were cool :-P

Here, have a related podcast from You Are Not So Smart.

TL;DR:

A scientist realized that the Change My View subreddit is basically a pile of already-structured arguments, with precisely what changed the person's mind clearly labeled with a "Δ". He decided to data mine it, and look at what correlated with changed minds.

Conclusions:

  • Apparently people with longer, more detailed, better structured initial views were more likely to award a delta. (Maybe that's just because they changed their mind on one of the minor points though, and not the bigger to

... (read more)

If you read it, I'd be interested to know what specific techniques they endorse, and how those differ from the sorts of things LW writes.

The general 4 categories of goals/subgoals Wikipedia lists seem right, though. I've seen people get stuck on 3 without having any idea what the physical problem was (2), and without more than a 1-hour meeting to set a "strategy" (1) to solve the problems that weren't understood.

  1. In consideration of a vision or direction...

  2. Grasp the current condition.

  3. Define the next target condition.

  4. Move toward that target condition iteratively, which uncovers obstacles that need to be worked on.

0MrMind
I feel that marketing, entrepreneurship, science and many other human activities share a common model of exploring a landscape of potential: you're trying to reach some maximum without really knowing much more than your immediate surroundings. Backward chaining can really only work when you already have a more accurate map of reality.

Someone on Brain Debugging Discussion (the former LW Facebook group) runs a channel called Story Brain. He decomposes movies and such, and tries to figure out what makes them work and why we like certain things.

It seems weird to me to talk about reddit as a bad example. Look at /r/ChangeMyView, /r/AskScience, /r/SpaceX, etc, not the joke subs that aren't even trying for epistemic honesty. /r/SpaceX is basically a giant mechanism that takes thousands of nerds and geeks in on one end, and slowly converts them into dozens of rocket scientists, and spits them out the other side. For example, this is from yesterday. Even the majority that never buy a textbook or take a class learn a lot.

I think this is largely because Reddit is one of the best available architectures... (read more)

1Viliam
That is a great point!!! Can we turn it into actionable advice for "creating better LW"? Maybe there is a chance to create a vivid (LW-style) rationalist community on Reddit. We just have to find out what the secret is that makes the three subreddits you mentioned different from the rest of Reddit, and make it work for a LW-style forum. I noticed CMV has about 30 moderators, AskScience has several hundred, and SpaceX has nine. I don't know what the average is, but at this moment I have the impression that a large-ish number of active moderators is a must. Another ingredient is probably that these sites have clear rules on what is okay; what one should optimize for. In CMV, it's replies that "change OP's mind"; in AskScience it's replies compatible with respected science. -- I am afraid we couldn't have a similarly clear rule for "x-rationality". EDIT: I like the anti-advice page for CMV. (And I find it quite amusing that a lot of them pretty much describe how RationalWiki works.) I posted that link on LW.

Nick Bostrom's Apostasy post

For anyone who comes this way in the future, I found Nick Bostrom's post through a self-critique of Effective Altruism.

I rather like this concept, and probably put higher credence on it than you. However, I don't think we are actually modeling that many layers deep. As far as I can tell, it's actually rare to model even 1 layer deep. I think your hypothesis is close, but not quite there. We are definitely doing something, but I don't think it can properly be described as modeling, at least in such fast-paced circumstances. It's something close to modeling, but not quite it. It's more like what a machine learning algorithm does, I think, and less like a computer simulation.... (read more)

1the gears to ascension
I enthusiastically agree with you. I actually do machine learning as my day job, and its ability to store "lookup table" style mappings with generalization was exactly what I was thinking of when referring to "modeling". I'm pleased I pointed to the right concept, and somewhat disappointed that my writing wasn't high enough quality to clarify this from the beginning. What you mention about obsessing seems extremely true to me, and seems related to Satvik's internalization of it as "rapid fire simulations". In general I think of S1 as "fast lookup-table style reasoning" and S2 as "cpu-and-program style reasoning". My goal here was to say:

  1. humans have a hell of a lot of modeling power in the fast lookup style of reasoning
  2. that style of reasoning can embed recursive modeling
  3. a huge part of social interaction is a complicated thing that gets baked into lookup style reasoning ^•^
0Raemon
I'm not sure you actually disagree with the OP. I think you are probably right about the mechanism by which people identify and react to social situations. I think the main claims of the OP hold whether you're making hyper-fast calculations, or lookup checks. The lookup checks still correspond roughly to what the hyperfast calculations would be, and I read the OP mainly as a cautionary tale for people who attempt to use System 2 reasoning to analyze social situations (and, especially, if you're attempting to change social norms). Aspiring rationalists are often the sort of people who look for inefficiencies in social norms and try to change them. But this often results in missing important pieces of all the nuances that System 1 was handling.

I use Metaculus a lot, and have made predictions on the /r/SpaceX subreddit which I need to go back and make a calibration graph for.

(They regularly bet donations of reddit gold, and have occasional prediction threads, like just before large SpaceX announcements. They would make an excellent target audience for better prediction tools.)

I've toyed with the idea of making a bot which searched for keywords on Reddit/LW, and tracked people's predictions for them. However, since LW is moving away from the reddit code base, I'm not sure if building such a bot would make sense right now.

I'm worried that I found the study far more convincing than I should have. If I recall, it was something like "this would be awesome if it replicates. Regression toward the mean suggests the effect size will shrink, but still." This thought didn't stop me from still updating substantially, though.

I remember being vaguely annoyed at them just throwing out the timeout losses, but didn't discard the whole thing after reading that. Perhaps I should have.

I know about confirmation bias and p-hacking and half a dozen other such things, but none of that stopped me from overupdating on evidence I wanted to believe. So, thanks for your comment.

2Lumifer
An interesting concept -- un-updating. It should happen when you updated on evidence that turned out to be wrong/mistaken, so you need to update back. I suspect that some biases will be involved here :-/

In that case, let me give a quick summary of what I know of that segment of effective altruism.

For context, there are basically 4 clusters. While many/most people concentrate on traditional human charities, some people think animal suffering matters more than 1/100th as much as human suffering, and so think animal charities are therefore more cost-effective. Those are the first 2 clusters of ideas.

Then you have people who think that movement growth is more important, since organizations like Raising for Effective Giving have so far been able to move l... (read more)

0morganism
B612 Foundation is working on impact risks by trying to get some IR cameras out to L2, L3 at least, and hopefully at S5. And Planetary Resources says that objects found with their IR cameras for mining will go into the PDSS database.
0whpearson
Thanks! I'll get in touch with the EA community in a bit. I've got practical work to finish and I find forums too engaging.

I checked their karma before replying, so I could tailor my answer to them if they were new. They have 1350 karma though, so I assume they are already familiar with us.

Same likely goes for the existential risk segment of EA. These are the only such discussion forums I'm aware of, but neither is x-risk only.

0whpearson
I'm a cryonaut from a few years back. I had deep philosophical differences with most of the arguments for AI Gods, which you may be able to determine from some of my recent discussions. I still think that it's not completely crazy to try to create a beneficial AI God (taking into consideration my fallible hardware and all), but I put a lot more weight on futures where the future of intelligence is very important, but not as potent as a god. Thanks for your pointers toward the EA segment; I wasn't aware that there was a segment.

math, physics, and computer science

Yes, yes, and yes.

surveys of subjects or subfields

You mean like a literature review, but aimed at people entirely new to the field? If so, Yes. If not, probably also yes, but I'll hold off on committing until I understand what I'm committing to.

instrumental rationality

No. Just kidding, of course it's a Yes.

Personally, I think that changing the world is a multi-armed bandit problem, and that EA has been overly narrow in the explore/exploit tradeoff, in part due to the importance/tractableness/neglectedness heuris... (read more)

Edit: TL;DR: Mathematics is largely Ra worship, perhaps worse than even the more abstract social sciences. This means that That Magic Click never happens for most people. It's a prime example of "most people do not expect to understand things", to the point where even math teachers don't expect to understand math, and they pass that on to their students in a vicious cycle.

Surely as soon as you see the formula ... you know that you are dealing with some notion of addition that has been extended from the usual rules of addition.

Only if you know... (read more)

2Oscar_Cunningham
Yeah I definitely agree with all of this. It's just that the original post was phrasing it as "Someone has claimed that 1+2+3+...=-1/12, do you believe them or not?" and it struck me that it doesn't mean anything to believe it or not unless you first understand what it would even mean for 1+2+3+... to equal -1/12. In order to understand this you first have to be aware that the notion of addition can be extended. If you aren't aware of this (as you point out most people aren't) the original post is even less useful; it's asking a question that you can't possibly answer.
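For readers who want the "extended notion of addition" spelled out: the standard reading (one common regularization, not the only one) goes through analytic continuation of the Riemann zeta function.

```latex
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}} \quad (\operatorname{Re}(s) > 1),
\qquad \text{continued analytically:} \quad \zeta(-1) = -\tfrac{1}{12}.
```

The series itself only converges for Re(s) > 1; ζ has a unique analytic continuation to the rest of the complex plane (except s = 1), and evaluating that continuation at s = −1 gives −1/12. So "1 + 2 + 3 + ... = −1/12" is shorthand for the continued value, not a claim that the partial sums converge.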

That looks like a useful way of decreasing this failure mode, which I suspect we LWers are especially susceptible to.

Does anyone know any useful measures (or better yet heuristics) for how many gears are inside various black boxes? Kolmogorov complexity (from Solomonoff induction) is useless here, but I have this vague idea that chaotic systems > weather forecasting > average physics simulation > simple math problems I can solve exactly by hand.

However, that's not really useful if I want to know how long it would take to do something novel. Fo... (read more)

off-by-one errors would go away.

I always have to consciously adjust for off-by-one errors, so this sounded appealing at first. (In the same way that Tau is appealing.)

However, after a bit of thought, I'm unable to come up with an example where this would help. Usually, the issue is that the 7th thru 10th seats aren't actually 10-7=3 seats, but 4 seats. (the 7th, 8th, 9th, and 10th seats).

Calling them the 6th thru 9th seats doesn't solve this. Can someone give an example of what it does solve, outside of timekeeping? (i.e., anything that counting time in beats wouldn't solve?)

1 · cousin_it
Some inconsistencies with one-based counting:

  * The first ten things have one-digit numbers, except the last one, which has a two-digit number.
  * The second ten things have numbers starting with 1, except the last one, which starts with 2.
  * The first thing in the first ten things has number 01, so "first" means simultaneously 0 and 1.

All of those go away with zero-based counting. Zero-based counting also has many benefits for programming:

  * The first memory address is all zeroes, not 00000001.
  * The first grid cell originates at (0, 0), not (1, 1).
  * The idioms a[i / n], a[i % n], a[i * n + j] don't work as well with one-based counting.

I think the underlying reason for all of those is the way modular arithmetic works. If a and b are positive integers, then the possible results of both a / b (integer division) and a % b (modulus) range from 0 rather than 1. Since digits in a number, indices in a sequence, etc. are often defined by these operations, zero-based counting feels more natural.

As to your problem, it can be solved by consistently using half-open ranges, which is another good idea that works well with mine :-)
1 · bogus
Of course they're three seats. "10th" is not a seat, it's a fencepost! If you want "the 7th, 8th, 9th, and 10th seats" you should say 7th thru 11th.
0 · [anonymous]
The main examples to me are timekeeping, modular arithmetic, and memory addressing. The first moment of time, the first modulus and the first memory address are all 0. It seems harder to come up with natural examples where the first something is 1. Your problem would be solved by using half-open ranges everywhere, which is another good idea that works well with mine.
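The half-open-range fix both replies mention can be shown in a couple of lines of Python, whose `range` is itself half-open (a minimal illustration, not from the thread):

```python
# Half-open ranges [start, end) make seat-counting arithmetic trivial:
# the count is just end - start, with no "+1" correction term.
seats = range(7, 11)               # the 7th, 8th, 9th, and 10th seats
assert len(seats) == 11 - 7 == 4   # closed range [7, 10] needs 10 - 7 + 1

# Half-open ranges also tile with no overlap and no gaps:
assert list(range(0, 3)) + list(range(3, 6)) == list(range(0, 6))
```

This is exactly the off-by-one trap in the original question: the closed range "7th thru 10th" needs the extra +1, while the half-open version doesn't.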

BTW it's "canon" not "cannon" - cheers!

Thanks for the correction. I always worry that I'll make similar mistakes in more formal writing.

I don't really understand the reasons behind a lot of the proposed site mechanics, but I've been toying around with an idea similar to your slider, but for a somewhat different purpose.

Consider this paradox:

  1. As far as I can tell, humor and social interaction is crucial to keeping a site fun and alive. People have to be free to say whatever is on their mind, without worrying too much about social repercussions. They have to feel safe and be able to talk freely.

  2. This is, to some extent, at odds with keeping quality high. Having extremely high standards is

... (read more)
0 · Paul Crowley
Thinking about it, I'd rather not make the self-rating visible. I'd rather encourage everyone to assume that the self-rating was always 2, and encourage that by non-technical means.
0 · Paul Crowley
Making the self-rating visible for the purpose you state has real value. Will think about that. BTW it's "canon" not "cannon" - cheers!
1 · Lumifer
I think it's a good idea to have a tag dictionary that allows (or maybe even forces) posters to tag their posts with things like "shower thought", "rant", "wild guess", "exploratory digging", "this is 100% true, I promise", etc.

It would be awesome to convert these tags to a cannon scheme where "did I post this? I must have been drunk" corresponds to a wooden cannon, a decent post would be a bronze 42-pounder, and an instant classic would get a Big Bertha symbol.

Accordingly, the users themselves could be classified by the Royal Navy's rating scheme. Pipsqueaks would be unrated vessels, and then we'd go up all the way to the 1st-rate ships of the line with over a hundred guns on board.

Precisely my reaction. I aim for midnight-1:00, but consider 2:00 or 3:00 a mistake. 4:00 or 5:00 is regret incarnate.

Personally, I'm excited about the formation of solid metallic hydrogen (SMH) in the lab. (Although it only has 52% odds of being a big deal, as measured by citation count.) SMH may be stable at room temperature, and the SMH-to-gas phase transition could release more energy than chemical reactions do, making it more energy dense than rocket fuel. Additionally, there's a ~35% chance of it superconducting at room temperature.

(As a side note, does anyone know whether something like this might make fusion pressures easier to achieve? I realize starting off a li... (read more)

0 · darius
I'm not a physicist, but if I wanted to fuse metallic hydrogen I'd think about a really direct approach: shooting two deuterium/tritium bullets at each other at 1.5% of c (for a Coulomb barrier of 0.1 MeV according to Wikipedia). The most questionable part I can see is that a nucleus from one bullet could be expected to miss thousands of nuclei from the other, before it hit one, and I would worry about losing too much energy to bremsstrahlung in those encounters.
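As a rough sanity check on those numbers (my reconstruction, not darius's arithmetic): treating the collision non-relativistically, the relative speed at which two deuterons' center-of-mass kinetic energy reaches a ~0.1 MeV Coulomb barrier does come out near 1.5% of c:

```python
from math import sqrt

m_deuteron = 1875.6   # deuteron rest mass, MeV/c^2
barrier = 0.1         # approximate D-T Coulomb barrier, MeV (per Wikipedia)
mu = m_deuteron / 2   # reduced mass of two equal colliding bodies

# Solve (1/2) * mu * v_rel**2 = barrier for v_rel (in units of c):
v_rel = sqrt(2 * barrier / mu)
print(v_rel)          # ~0.0146, i.e. roughly 1.5% of c
```

This reads "1.5% of c" as the relative speed of the two bullets; if each bullet moves at 1.5% of c toward the other, the center-of-mass energy would be about four times the barrier.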

group theory / symmetry

The Wikipedia page for group theory seems fairly impenetrable. Do you have a link you'd recommend as a good place to get one’s feet wet in the topic? Same with symmetry.

Thanks!

1 · sen
"Group" is a generalization of "symmetry" in the common sense. I can explain group theory pretty simply, but I'm going to suggest something else. Start with category theory. It is doable, and it will give you the magical ability of understanding many math pages on Wikipedia, or at least the hope of being able to understand them. I cannot overstate how large an advantage this gives you when trying to understand mathematical concepts. Also, I don't believe starting with group theory will give you any advantage when trying to understand category theory, and you're going to want to understand category theory if you're interested in reasoning.

When I was getting started with category theory, I went back and forth between several pages (Category Theory, Functor, Universal Property, Universal Object, Limits, Adjoint Functors, Monomorphism, Epimorphism). Here are some of the insights that made things click for me:

  * An "object" in category theory corresponds to a set in set theory. If you're a programmer, it's easier to think of a single categorical object as a collection (class) of OOP objects. It's also valid and occasionally useful to think of a single categorical object as a single OOP object (e.g., a collection of fields).
  * A "morphism" in category theory corresponds to a function in set theory. If you think of a categorical object as a collection of OOP objects, then a morphism takes as input a single OOP object at a time.
  * It's perfectly valid for a diagram to contain the same categorical object twice. Diagrams only show relations, and it's perfectly valid for an OOP object to be related to another OOP object of the same class. When looking at commutative diagrams that seem to contain the same categorical object twice, think of them as distinct categorical objects.
  * Diagrams don't only show relationships between OOP objects. They can also show relationships between categorical objects. For example, a diagram might state that there is a bijection between two

...reactions like this,...

The relevant bit from the link:

... I'll happily volunteer a few hours a week.

EDIT: AAAUUUGH REDDIT'S DB USES KEY-VALUE PAIRS AIIEEEE IT ONLY HAS TWO TABLES OH GOD WHY WHY SAVE ME YOG-SOTHOTH I HAVE GAZED INTO THE ABYSS AAAAAAAIIIIGH okay. I'll still do it. whimper

My reaction was the complete opposite: an excellent signaling tool.

If I just made a connection between 2 things, and want to bounce ideas off people, I can just say Epistemic effort: Thought about it musingly, and wanted to bounce the idea off a few people and no one will judge me for having a partially formed idea. Perhaps more importantly, anyone not interested in such things will skip the article, instead of wasting their time and feeling the need to discourage my offending low-quality post.

I'm not a fan of "brainstorming" in particular, but th... (read more)

Thanks again. I haven't actually read the book, just Yvain's review, but maybe I should make the time investment after all.

Thanks for the comment. It's humbling to get a glimpse of vastly different modes of thought, optimized for radically different types of problems.

Like, I feel like I have this cognitive toolbox I've been filling up with tools for carpentry. If a mode of thought looks useful, I add it. But then I learn that there's such a thing as a shipyard, and that they use an entirely different set of tools, and I wonder how many such tools people have tried to explain to me only for me to roll my eyes and imagine how poorly it would work to drive a nail. When all you ha... (read more)

2 · Benquo
This seems consistent with Tetlock's guess in Superforecasting that hedgehogs are better for knowing what questions to ask.

Maybe we could start tagging such stuff with epistemic status: exploratory or epistemic status: exploring hypotheses or something similar? Sort of the opposite of Crocker's rules, in effect. Do you guys think this is a community norm worth adding?

We have a couple concepts around here that could also help if they turned into community norms on these sorts of posts. For example:

  • triangulating meaning: If we didn't have a word for "bird", I might provide a penguin, an ostrich, and an eagle as the most extreme examples which only share their "bi

... (read more)

I'd agree with you that most abstract beliefs aren't needed for us to simply live our lives. However, it looks like you were making a normative claim that to minimize bias, we shouldn't deal too much with abstract beliefs when we can avoid it.

Similarly, IIRC, this is also your answer to things like discussion over whether EMs will really "be us", and other such abstract philosophical arguments. Perhaps such discussion isn't tractable, but to me it does still seem important for determining whether such a world is a utopia or a dystopia.

So, I would... (read more)

Believe Less.

As in, believe fewer things and believe them less strongly? By assigning lower odds to beliefs, in order to fight overconfidence? Just making sure I'm interpreting correctly.

don't bother to hold beliefs on the kind of abstract topics

I've read this sentiment from you a couple times, and don't understand the motive. Have you written about it more in depth somewhere?

I would have argued the opposite. It seems like societal acceptance is almost irrelevant as evidence of whether that world is desirable.

3 · RobinHanson
Yes, believe fewer things and believe them less strongly. On abstract beliefs I'm not following you. The usual motive for most people is that they don't need most abstract beliefs to live their lives.

Normally, I'm pretty good at remembering sources I get info from, or at least enough that I can find them again quickly. Not so much in this case. This was about halfway through a TED talk, but unfortunately TED doesn't search their "interactive transcripts" when you use the search function on their page. A normal web search for the sorts of terms I remember doesn't seem to be coming up with anything.

I scanned through all the TED talks in my browser history without much luck, but I have this vague notion that the speaker used the example to make a ... (read more)
