:)
Honestly, there are a bunch of links I don't click, because the 2 or 3 word titles aren't descriptive enough. I'm a big fan of the community norm on more technically minded subreddits, where you can usually find a summary in one of the top couple comments.
So, I'm doing what I can to encourage this here. But mostly, I thought it was important on the AI front, and wanted to give a summary which more people would actually read and discuss.
Here are some thoughts on the viability of Brain Computer Interfaces. I know nothing, and am just doing my usual reality checks and initial exploration of random ideas, so please let me know if I'm making any dumb assumptions.
They seem to prefer devices in the blood vessels, due to the low invasiveness. The two specific form factors mentioned are stents and neural dust. Whatever was chosen would have to fit in the larger blood vessels, or flow freely through all of them. Just for fun, let's choose the second, much narrower constraint, and play with some nu...
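For the "flows freely through all of them" case, here's the kind of back-of-envelope I have in mind. The figures are my own textbook-ballpark assumptions (capillaries run roughly 5-10 μm across), not numbers from the article:

```python
import math

# Assumed figures (ballpark, not from the article):
capillary_diameter_um = 5.0   # smallest capillaries are roughly 5-10 micrometers wide
safety_factor = 0.5           # stay well under the lumen width to avoid clogging

# A free-flowing "neural dust" mote would have to be smaller than this:
max_mote_diameter_um = capillary_diameter_um * safety_factor

# Volume of a spherical mote of that diameter, to get a feel for how little
# room there is for electronics, power, and communication:
r = max_mote_diameter_um / 2
max_volume_um3 = (4 / 3) * math.pi * r ** 3

print(f"max mote diameter: {max_mote_diameter_um:.1f} um")
print(f"max mote volume:   {max_volume_um3:.1f} um^3")
```

Even with generous assumptions, that's a volume of only a few cubic micrometers per mote, which gives a sense of how hard the "narrower constraint" really is.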
The article only touches on it briefly, but suggests faster AI takeoffs are worse, though "fast" is only relative to the fastest human minds.
Has there been much examination of the benefits of slow takeoff scenarios, or takeoffs that happen after human enhancements become available? I vaguely recall a MIRI fundraiser saying that they would start putting marginal resources toward investigating a possible post-Age of Em takeoff, but I have no idea if they got to that funding goal.
Personally, I don't see Brain-Computer Interfaces as useful for AI takeoff...
TL;DR of the article:
This piece describes a lot of why Elon Musk wanted to start Neuralink, how Brain-Computer Interfaces (BCIs) currently work, and how they might be implemented in the future. It's a really, really broad article, and aims for breadth while still having enough depth to be useful. If you already have a grasp of the evolution of the brain, Dual Process Theory, the parts of the brain, how neurons fire, etc., you can skip those parts, as I have below.
AI is dangerous, because it could achieve superhuman abilities and operate at superhuman speeds. Th...
TL;DR: What are some movements you would put in the same reference class as the Rationality movement? Did they also spend significant effort trying not to be wrong?
Context: I've been thinking about SSC's "Yes, We Have Noticed the Skulls". They point out that aspiring Rationalists are well aware of the flaws in straw Vulcans, and actively try to avoid making such mistakes. More generally, most movements are well aware of the criticisms of at least the last similar movement, since those are the criticisms they are constantly defending against.
However, searchin...
I'm not so sure. Would your underlying intuition be the same if the torture and death was the result of passive inaction, rather than of deliberate action? I think in that case, the torture and death would make only a small difference in how good or bad we judged the world to be.
For example, consider a corporate culture with so much of this dominance hierarchy that it has a high suicide rate.
Also:
...Moloch whose buildings are judgment! ... Lacklove and manless in Moloch! ... Moloch who frightened me out of my natural ecstasy!
... Real holy laughter in the ri
I'd add that it also starts to formalise the phenomenon where one's best judgement oscillates back and forth with each layer of an argument. It's not clear what to do when something seems a strong net positive, then a strong negative, then a strong positive again after more consideration. If the value of information is high, but it's difficult to make any headway, what should we even do?
This is especially common for complex problems like xrisk. It also makes us extremely prone to bias, since we by default question conclusions we don't like more than ones we do.
This is really sad. I'm sorry to hear things didn't work out, but I'm still left wondering why not.
I guess I was really hoping for a couple thousand+ word post-mortem, describing the history of the project, and which hypotheses you tested, with a thorough explanation of the results.
If you weren't getting enough math input, why do you think that throwing more people at the problem wouldn't generate better content? Just having a bunch of links to the most intuitive and elegant explanations, gathered in one place, would be a huge help to both readers and writ...
I don't see any reason why AI has to act coherently. If it prefers A to B, B to C, and C to A, it might not care. You could program it to prefer that utility function.*
If not, maybe the A-liking aspects will reprogram B and C out of its utility function, or maybe not. What happens would depend entirely on the details of how it was programmed.
Maybe it would spend all the universe's energy turning our future light cone from C to B, then from B to A, and also from A to C. Maybe it would do this all at once, if it was programmed to follow one "goal"...
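A toy sketch of that "doesn't care about coherence" case: an agent with the circular preference A over B, B over C, C over A never settles, because from every state there's some other state it prefers. (The names and setup here are purely illustrative, mine rather than anything from the comment.)

```python
# Toy agent with an intransitive preference: prefers A over B, B over C, C over A.
# Maps each current state to the state the agent would rather have instead.
prefers = {"B": "A", "C": "B", "A": "C"}

state = "C"
history = [state]
for _ in range(6):          # each step is another round of converting the light cone
    state = prefers[state]  # move to the locally preferred state
    history.append(state)

print(history)  # cycles forever: C -> B -> A -> C -> B -> A -> C
```

Nothing in the loop ever terminates on its own; whether a real system would burn resources cycling like this, or reprogram the cycle away, depends entirely on the surrounding machinery, as the comment says.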
I rather like this way of thinking. Clever intuition pump.
What are we actually optimizing the level-two map for, though?
Hmmm, I guess we're optimizing our meta-map to produce accurate maps. It's mental cartography, I guess. I like that name for it.
So, Occam's Razor and formal logic are great tools of philosophical cartographers. Scientists sometimes need a sharper instrument, so they crafted Solomonoff induction and Bayes' theorem.
Formal logic being a special case of Bayesian updating, where only p=0 and p=1 values are allowed. There are third alternat...
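A quick sketch of that special case (my own illustration): under Bayes' theorem, a prior of exactly 0 or 1 is immovable by any evidence, which is exactly the all-or-nothing behavior of classical logic.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]"""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    return numerator / denominator

# A p=1 "theorem" and a p=0 "contradiction" are unmoved by any evidence:
print(bayes_update(1.0, 0.2, 0.9))  # -> 1.0
print(bayes_update(0.0, 0.9, 0.2))  # -> 0.0

# An intermediate credence actually updates:
print(bayes_update(0.5, 0.8, 0.4))  # -> 0.666...
```

So formal logic drops out as the degenerate corner of Bayesian updating where credences are pinned to the endpoints.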
True. Maybe we could still celebrate our minor celebrities more, along with just individual good work, to avoid orbiting too much around any one person. I don't know what the optimum incentive gradient is between small steps and huge accomplishments. However, I suspect that on the margin more positive reinforcement is better along the entire length, at least for getting more content.
(There are also benefits to adversarial review and what not, but I think we're already plenty good at nitpicking, so positive reinforcement is what needs the most attentio...
Awesome link, and a fantastic way of thinking about how human institutions/movements/subcultures work in the abstract.
I'm not sure the quote conveys the full force of the argument out of that context though, so I recommend reading the full thing if the quote doesn't ring true with you (or even if it does).
I agree that philosophy and neuroscience haven't confirmed that the qualia I perceive as red is the same thing as the qualia you experience when you look at something red. My red could be your blue, etc. (Or, more likely, completely unrelated sensations chosen randomly from trillions of possibilities.) Similarly, we can't know exactly what it's like to be someone else, or to be an animal or something.
However, it's perfectly reasonable to group all possible human experiences into one set, and group all possible things that an ant might experience in another...
that still rules out the globehopping, couchsurfing lifestyle.
Not necessarily. I'd be fine with it if my girlfriend decided to hitchhike around Europe for a month or two, and I'm pretty sure she'd be fine with me doing the same. There's no reason the one with the job couldn't take a vacation in the middle, too.
If the unemployed partner did this twice a year, for 2 months at a time, that'd be 1/3 of their time spent globetrotting. If they did this 3x a year, (2 months home, then 2 months exploring, then 2 months home again) that'd be pushing it, but might be stable long term if they could find ways to make sure the working party didn't feel used or left out.
This was a useful article, and it's nice to know the proper word for it. Let me see if I can add to it slightly.
Maybe a prisoner is on death row, and if they run away they are unlikely to suffer the consequences, since they'll be dead anyway. However, even knowing this, they may still decide to spend their last day on earth nursing bruises, because they value the defiance itself far more than any pain that could be inflicted on them. Perhaps they'd even rather die fighting.
It looks like you don't reflectively endorse actions taken for explicitly semiotic r...
Agreed. I'd love to see even more of all of these sorts of things, but the low margin nature of the industry makes this somewhat difficult to attack directly, so there isn't anywhere near as much money being invested in that direction as I would like.
I believe NASA has gotten crop yields high enough that a single person can be fed off of only ~25 m^2 of land (the figure may be off, or may be m^3 or something, but that's what I vaguely recall), but that would have been with fancy hydroponic/aquaponic/aeroponic setups or something, and extremely high crop densi...
A job is a cost
Agreed. When I said the "cost to local jobs" I was being informal, but referring to the (supposed) increase in unemployment as Walmart displaces local, less efficient small businesses.
Paying people to do a job which can be eliminated is like paying people to dig holes and fill them back in. I'd rather just give them the money than pay them to do useless work, but I'll take the second option over them being unemployed.
As an interesting side note, I think this might put me on the opposite side of the standard anti-Walmart argument...
these are called "recessions" and ... "depressions".
Ha, very good point. Our current society is largely built around growth, and when growth stops the negative effects absolutely do trickle down, even to people who don't own stocks. In fact, companies were counting on those increases, and so have major issues when they don't materialize, and need to get rid of workers to cut costs.
I will mention that through most of history and prehistory, the economic growth rate has been much, much smaller. I haven't read it, so I can't vouch for ...
I'd like to take this a step further, even. If you are a utilitarian, for example, what you really care about is happiness, which is only distantly linked to things like GDP and productivity, at least in developed nations. People who have more money spend more money,^[citation_needed] so economic growth is disproportionately focused on better meeting the desires of those with more money. Maybe the economy doubles every couple decades, but that doesn't mean that the poor have twice as much too. I would be interested to know precisely how much more they actu...
Here, have a related podcast from You Are Not So Smart.
TL;DR:
A scientist realized that the Change My View subreddit is basically a pile of already-structured arguments, with precisely what changed the person's mind clearly labeled with a "Δ". He decided to data mine it, and look at what correlated with changed minds.
Conclusions:
Apparently people with longer, more detailed, better structured initial views were more likely to award a delta. (Maybe that's just because they changed their mind on one of the minor points though, and not the bigger to
If you read it, I'd be interested to know what specific techniques they endorse, and how those differ from the sorts of things LW writes.
The general 4 categories of goals/subgoals Wikipedia lists seem right though. I've seen people get stuck on 3 without having any idea what the physical problem was (2) and without more than a 1 hr meeting to set a "strategy" (1) to solve the problems that weren't understood.
In consideration of a vision or direction...
Grasp the current condition.
Define the next target condition.
Move toward that target condition iteratively, which uncovers obstacles that need to be worked on.
Someone on Brain Debugging Discussion (the former LW Facebook group) runs a channel called Story Brain. He decomposes movies and such, and tries to figure out what makes them work and why we like certain things.
It seems weird to me to talk about reddit as a bad example. Look at /r/ChangeMyView, /r/AskScience, /r/SpaceX, etc, not the joke subs that aren't even trying for epistemic honesty. /r/SpaceX is basically a giant mechanism that takes thousands of nerds and geeks in on one end, and slowly converts them into dozens of rocket scientists, and spits them out the other side. For example, this is from yesterday. Even the majority that never buy a textbook or take a class learn a lot.
I think this is largely because Reddit is one of the best available architectures...
Nick Bostrom's Apostasy post
For anyone who comes this way in the future, I found Nick Bostrom's post through a self-critique of Effective Altruism.
I rather like this concept, and probably put higher credence on it than you. However, I don't think we are actually modeling that many layers deep. As far as I can tell, it's actually rare to model even 1 layer deep. Your hypothesis is close, but not quite there. We are definitely doing something, but I don't think it can properly be described as modeling, at least in such fast-paced circumstances. It's more like what a machine learning algorithm does, I think, and less like a computer simulation....
I use Metaculus a lot, and have made predictions on the /r/SpaceX subreddit which I need to go back and make a calibration graph for.
(They regularly bet donations of reddit gold, and have occasional prediction threads, like just before large SpaceX announcements. They would make an excellent target audience for better prediction tools.)
I've toyed with the idea of making a bot which searched for keywords on Reddit/LW, and tracked people's predictions for them. However, since LW is moving away from the reddit code base, I'm not sure if building such a bot would make sense right now.
I'm worried that I found the study far more convincing than I should have. If I recall, it was something like "this would be awesome if it replicates. Regression toward the mean suggests the effect size will shrink, but still." This thought didn't stop me from still updating substantially, though.
I remember being vaguely annoyed at them just throwing out the timeout losses, but didn't discard the whole thing after reading that. Perhaps I should have.
I know about confirmation bias and p-hacking and half a dozen other such things, but none of that stopped me from overupdating on evidence I wanted to believe. So, thanks for your comment.
Ping pong.
In that case, let me give a quick summary of what I know of that segment of effective altruism.
For context, there are basically 4 clusters. While many/most people concentrate on traditional human charities, some people think an animal's suffering matters more than 1/100th as much as a human's, and so think animal charities are therefore more cost-effective. Those are the first 2 clusters of ideas.
Then you have people who think that movement growth is more important, since organizations like Raising for Effective Giving have so far been able to move l...
I checked their karma before replying, so I could tailor my answer to them if they were new. They have 1350 karma though, so I assume they are already familiar with us.
Same likely goes for the existential risk segment of EA. These are the only such discussion forums I'm aware of, but neither is x-risk only.
math, physics, and computer science
Yes, yes, and yes.
surveys of subjects or subfields
You mean like a literature review, but aimed at people entirely new to the field? If so, Yes. If not, probably also yes, but I'll hold off on committing until I understand what I'm committing to.
instrumental rationality
No. Just kidding, of course it's a Yes.
Personally, I think that changing the world is a multi-armed bandit problem, and that EA has been overly narrow in the explore/exploit tradeoff, in part due to the importance/tractability/neglectedness heuris...
Edit: TL;DR: Mathematics is largely Ra worship, perhaps worse than even the more abstract social sciences. This means that That Magic Click never happens for most people. It's a prime example of "most people do not expect to understand things", to the point where even math teachers don't expect to understand math, and they pass that on to their students in a vicious cycle.
Surely as soon as you see the formula ... you know that you are dealing with some notion of addition that has been extended from the usual rules of addition.
Only if you know...
That looks like a useful way of decreasing this failure mode, which I suspect we LWers are especially susceptible to.
Does anyone know any useful measures (or better yet heuristics) for how many gears are inside various black boxes? Kolmogorov complexity (from Solomonoff induction) is useless here, but I have this vague idea that chaos theory systems > weather forecasting > average physics simulation > simple math problems I can solve exactly by hand
However, that's not really useful if I want to know how long it would take to do something novel. Fo...
off-by-one errors would go away.
I always have to consciously adjust for off-by-one errors, so this sounded appealing at first. (In the same way that Tau is appealing.)
However, after a bit of thought, I'm unable to come up with an example where this would help. Usually, the issue is that the 7th thru 10th seats aren't actually 10-7=3 seats, but 4 seats (the 7th, 8th, 9th, and 10th).
Calling them the 6th thru 9th seats doesn't solve this. Can someone give an example of what it does solve, outside of timekeeping? (i.e., anything that counting time in beats wouldn't solve?)
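For what it's worth, my best guess at an answer to my own question: the thing that dissolves the fencepost problem isn't zero-indexing by itself, but pairing it with half-open intervals [start, end), where the count is just end - start with no +1, and adjacent ranges butt together with no overlap or gap. A small sketch:

```python
# Inclusive ranges (ordinary "7th thru 10th" talk) need the awkward +1:
first, last = 7, 10
inclusive_count = last - first + 1        # 4 seats: the 7th, 8th, 9th, 10th

# Half-open ranges [start, end) make the count a plain difference.
# Zero-indexed, the same four seats are 6, 7, 8, 9:
start, end = 6, 10
half_open_count = end - start             # 4, no +1 needed
seats = list(range(start, end))           # [6, 7, 8, 9] -- Python's range is half-open

# Adjacent half-open ranges [0, 4) and [4, 8) share the boundary 4
# without double-counting it, which is where inclusive ranges trip up.
print(inclusive_count, half_open_count, seats)
```

So renaming the seats alone doesn't help, agreed; it's the half-open convention that removes the +1s, which is presumably why zero-indexed languages pair the two.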
BTW it's "canon" not "cannon" - cheers!
Thanks for the correction. I always worry that I'll make similar mistakes in more formal writing.
I don't really understand the reasons behind a lot of the proposed site mechanics, but I've been toying around with an idea similar to your slider, but for a somewhat different purpose.
Consider this paradox:
As far as I can tell, humor and social interaction are crucial to keeping a site fun and alive. People have to be free to say whatever is on their mind, without worrying too much about social repercussions. They have to feel safe and be able to talk freely.
This is, to some extent, at odds with keeping quality high. Having extremely high standards is
Precisely my reaction. I aim for midnight-1:00, but consider 2:00 or 3:00 a mistake. 4:00 or 5:00 is regret incarnate.
Personally, I'm excited about the formation of Solid Metallic Hydrogen in the lab. (Although, it only has 52% odds of being a big deal, as measured by citation count.) SMH may be stable at room temperature, and the SMH to gas phase transition could release more energy than chemical reactions do, making it more energy dense than rocket fuel. Additionally, there's like a ~35% chance of it superconducting at room temperature.
(As a side note, does anyone know whether something like this might make fusion pressures easier to achieve? I realize starting off a li...
group theory / symmetry
The Wikipedia page for group theory seems fairly impenetrable. Do you have a link you'd recommend as a good place to get one’s feet wet in the topic? Same with symmetry.
Thanks!
...reactions like this,...
The relevant bit from the link:
... I'll happily volunteer a few hours a week.
EDIT: AAAUUUGH REDDIT'S DB USES KEY-VALUE PAIRS AIIEEEE IT ONLY HAS TWO TABLES OH GOD WHY WHY SAVE ME YOG-SOTHOTH I HAVE GAZED INTO THE ABYSS AAAAAAAIIIIGH okay. I'll still do it. whimper
My reaction was the complete opposite: an excellent signaling tool.
If I just made a connection between 2 things, and want to bounce ideas off people, I can just say Epistemic effort: Thought about it musingly, and wanted to bounce the idea off a few people and no one will judge me for having a partially formed idea. Perhaps more importantly, anyone not interested in such things will skip the article, instead of wasting their time, and feeling the need to discourage my offending low quality post.
I'm not a fan of "brainstorming" in particular, but th...
Thanks again. I haven't actually read the book, just Yvain's review, but maybe I should make the time investment after all.
Thanks for the comment. It's humbling to get a glimpse of vastly different modes of thought, optimized for radically different types of problems.
Like, I feel like I have this cognitive toolbox I've been filling up with tools for carpentry. If a mode of thought looks useful, I add it. But then I learn that there's such a thing as a shipyard, and that they use an entirely different set of tools, and I wonder how many such tools people have tried to explain to me only for me to roll my eyes and imagine how poorly it would work to drive a nail. When all you ha...
Maybe we could start tagging such stuff with epistemic status: exploratory or epistemic status: exploring hypotheses or something similar? Sort of the opposite of Crocker's rules, in effect. Do you guys think this is a community norm worth adding?
We have a couple concepts around here that could also help if they turned into community norms on these sorts of posts. For example:
triangulating meaning: If we didn't have a word for "bird", I might provide a penguin, an ostrich, and an eagle as the most extreme examples which only share their "bi
I'd agree with you that most abstract beliefs aren't needed for us to simply live our lives. However, it looks like you were making a normative claim that to minimize bias, we shouldn't deal too much with abstract beliefs when we can avoid it.
Similarly, IIRC, this is also your answer to things like discussion over whether EMs will really "be us", and other such abstract philosophical arguments. Perhaps such discussion isn't tractable, but to me it does still seem important for determining whether such a world is a utopia or a dystopia.
So, I would...
Believe Less.
As in, believe fewer things and believe them less strongly? By assigning lower odds to beliefs, in order to fight overconfidence? Just making sure I'm interpreting correctly.
don't bother to hold beliefs on the kind of abstract topics
I've read this sentiment from you a couple times, and don't understand the motive. Have you written about it more in depth somewhere?
I would have argued the opposite. It seems like societal acceptance is almost irrelevant as evidence of whether that world is desirable.
Normally, I'm pretty good at remembering sources I get info from, or at least enough that I can find it again quickly. Not so much in this case. This was about halfway through a TED talk, but unfortunately TED doesn't search their "interactive transcripts" when you use the search function on their page. A normal web search for the sorts of terms I remember doesn't seem to be coming up with anything.
I scanned through all the TED talks in my browser history without much luck, but I have this vague notion that the speaker used the example to make a ...
I'm not really sure how shortform stuff could be implemented either, but I have a suggestion on how it can be used: jokes!
Seriously. If you look at Scott's writing, for example, one of the things which makes it so gripping is the liberal use of amusing phrasing, and mildly comedic exaggerations. Not the sort of thing that makes you actually laugh, but just the sort of thing that is mildly amusing. And, I believe he specifically recommended it in his blog post on writing advice. He didn't phrase his reasoning quite like this, but I think of it as a little bit...