If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


Recent news suggests that having measles weakens the immune system for 2 to 3 years afterwards, and therefore the measles vaccine reduces a lot of childhood deaths that weren't thought to be measles-related.

NPR article; academic paper.

It looks like AI is overtaking Arimaa, which is notable because Arimaa was created specifically as a challenge to AI. Congratulations to the programmer, David Wu.

2Kindly9y
On the subject of Arimaa, I've noted a general feeling of "This game is hard for computers to play -- and that makes it a much better game!" Progress of AI research aside, why should I care if I choose a game in which the top computer beats the top human, or one in which the top human beats the top computer? (Presumably both the top human and the top computer can beat me, in either case.) Is it that in go, you can aspire (unrealistically, perhaps) to be the top player in the world, while in chess, the highest you can ever go is a top human that will still be defeated by computers? Or is it that chess, which computers are good at, feels like a solved problem, while go still feels mysterious and exciting? Not that we've solved either game in the sense of having solved tic-tac-toe or checkers. And I don't think we should care too much about having solved checkers either, for the purposes of actually playing the game.
2jacob_cannell9y
I hadn't heard of Arimaa before, but based on about 5 minutes' worth of reading about the game, I don't understand how it is significantly more suited to natural reasoning than chess. It inherits many of chess's general features that make serial planning more effective than value function knowledge - thus favoring fast thinkers over slow deep thinkers. Go is much more of a natural reasoning game.
0ShardPhoenix9y
From the sound of it, the AI works more or less like chess AI - search with a hand-tuned evaluation function.

Transhumanism in the real world

Rugby players who get a bottle opener to replace a missing tooth.

-4Lumifer9y
That's no more transhumanism than this.
7drethelin9y
False! It's adding functionality rather than just a cosmetic change.
-2Lumifer9y
Cosmetic changes can be highly functional. Ask any girl :-) On a slightly more serious note, I tend to think of transhumanist modifications as ones which confer abilities that unenhanced humans do not have. Opening beer bottles isn't one of them.
7Epictetus9y
Having been in a group of drunk people who found that they had no bottle opener, and having seen what bizarre ideas they concoct to get the bottles open, I'd say a bottle opener in one's tooth merits the status of transhumanist modification.
1[anonymous]9y
There was a saying in my youth: "There is no item that is not a beer opener." There was a bit of a competition for creative moves (drinking beer was considered a high-status adult move for teenagers, and opening bottles in creative ways even more so). Keys. Lighters. Doors - the part of the frame where the "tongue" goes in; not sure of the English term. Edges of tables or edges of anything. Using two bottles, locking the caps to each other and pulling them apart. I still consider the coolest manly way to open a beer, when you sit at a fairly invulnerable (e.g. stone) table, to be just putting the cap against the edge and hitting it. Another 101 ways.
0Nornagest9y
Strike plate?
-2Lumifer9y
You can open a beer bottle with your natural teeth easily enough. These people lacked in knowledge, not in tools :-P
8NancyLebovitz9y
However, you can damage a tooth by using it to open bottles.
6TezlaKoil9y
Would you consider a Wikipedia brain implant to be a transhumanist modification? After all, ordinary humans can query Wikipedia too!
1Lumifer9y
That's a weird way of putting it. Would I consider an implant which consists of a large chunk of memory with some processing and an efficient neural interface to be transhumanist? Yes, of course. It will give a lot of useful abilities and just filling it with Wikipedia looks like a waste of potential. I don't think trivializing transhumanism to minor cosmetics is a useful approach. Artificial nails make better screwdrivers than natural nails, so is that also a transhumanist modification?
2[anonymous]9y
But nigh-effortless verbatim memorization is, so if you carry around a pen and a pad of paper...
0Lumifer9y
"nigh-effortless" "a pen and a pad of paper" Ahem.
7RowanE9y
Well, you know what they say about "one man's modus ponens".
1Lumifer9y
Here is your first transhumanist then, from pre-Columbian Maya...
1RowanE9y
I think the attitude toward the modifications is a relevant factor as well - wanting to be "more than human" in some respect, even if only a trivial respect such as "more awesome-looking than a regular human" or "more able to open beer bottles than a regular human" - but given that, yeah I'd be totally on board with considering some pre-Columbian Maya or other stone-age person "the first transhumanist".
0Lumifer9y
I see a smooth transition into tool-using, then. Picking up a stick certainly gives you more capabilities compared to a stick-less hominid, and probably makes you much more awesome as well.
2Ishaan9y
NancyLebovitz didn't imply the rugby player was showing signs of ideological transhumanism - only that they're doing something transhumanist. Transhumanists don't have the monopoly on self modification. It's the same sense that Christians refer to kind acts as Christian and bad acts as un-Christian. Transhumanists would claim the first intentional use of fire and writing and all that as transhuman-ish things. (And yes, I would consider self decoration to be a transhumanish thing too. Step into the paleolithic - what's the very first thing you notice which is different about the humans? They have clothes and strings and beads and tattoos, which turn out to have pretty complex social functions. Adam and Eve and all that, it's literally the stuff of myth.)
0Lumifer9y
So, using tools. Traditionally, tool-using is said to be what distinguishes humans from apes. That makes it just human, not transhuman.
0Ishaan9y
Yes, I bite that bullet: I think "you ought to use tools to do things better" counts as a foundational principle of transhuman ideology. It's supposed to be fundamentally about being human.
0Lumifer9y
Well, we might just be having a terminology difference. My understanding of "transhuman" involves being more than just human. Picking up a tool, even a sophisticated tool, doesn't qualify. And "more" implies that your standard garden-variety human doesn't qualify either. I'm not claiming there is an easily discernible bright line, but just as contact lenses don't make you a cyborg, a weirdly shaped metal tooth does not make you a transhuman.
2Ishaan9y
But that's because everyone uses glasses, as a matter of course - it's the status quo now. The person who thought "well, why should we have to walk around squinting all the time when we can just wear these weird contraptions on our heads", at a time when people might look at you funny for wearing glass on your face - I think that's pretty transhuman. As is the guy who said "let's take it further, and put the refractive material directly on our eyeball" back when people would have looked at you real funny if you suggested they put plastic in their eyes ("are you crazy, that sounds so uncomfortable"). Now of course, it's easy to look at these things and say "meh". Edit: If you look at the history of contact lenses, though, what actually happened is less people saying "let's improve" and more people saying "I wonder how the eye works" and doing weird experiments that probably seemed pointless at the time. Something of a case study against the "basic research isn't useful" argument, I think, not that there are many who espouse that here.
[anonymous]9y90

(Random) by analogy with what Quirrellmort led people to believe about his imminent death, it would be cool to read a fic, 6th year AU, in which Dumbledore teaches defence...

2knb9y
What does 6th year AU mean?
3zedzed9y
6th year = book 6 (6th year at Hogwarts) = Half-Blood Prince. AU = Alternate Universe.

Pretty awesome set of trolley problems

Sample:

There’s an out of control trolley speeding towards Immanuel Kant. You have the ability to pull a lever and change the trolley’s path so it hits Jeremy Bentham instead. Jeremy Bentham clutches the only existing copy of Kant’s Groundwork of the Metaphysic of Morals. Kant holds the only existing copy of Bentham’s The Principles of Morals and Legislation. Both of them are shouting at you that they have recently started to reconsider their ethical stances.

Is utilitarianism foundational to LessWrong? Asking because for a while I've been toying with the idea of writing a few posts with morality as a theme, from the standpoint of, broadly, virtue ethics -- with some pragmatic and descriptive ethics thrown in. (The themes are quite generous and interlocking, and to be honest I don't know where to start or whether I'll finish it.) This perspective treats stable character traits, with their associated emotions, drives, and motives as the most reasonably likely determiner of moral behaviour, and means to encourage... (read more)

9ChristianKl9y
If it helps you, the 2014 census gave for moral beliefs:

Moral views:
Accept/lean towards consequentialism: 901, 60.0%
Accept/lean towards deontology: 50, 3.3%
Accept/lean towards natural law: 48, 3.2%
Accept/lean towards virtue ethics: 150, 10.0%
Accept/lean towards contractualism: 79, 5.3%
Other/no answer: 239, 15.9%

Meta-ethics:
Constructivism: 474, 31.5%
Error theory: 60, 4.0%
Non-cognitivism: 129, 8.6%
Subjectivism: 324, 21.6%
Substantive realism: 209, 13.9%

In general I don't think there are foundational ideas on LW that shouldn't be questioned. Any idea is up for investigation provided the case is well argued.
9falenas1089y
But there are certain ideas that will be downvoted and dismissed because people feel like they aren't useful to be talking about, like whether God exists. I think the OP was asking if this was a topic that fell under that category.
6ChristianKl9y
The problem with "does God exist" isn't about the fact that LW is atheist. It's that it's hard to say interesting things about the subject and provide a well-argued case. I don't expect to learn something new when I read another post about whether or not God exists. If someone knows the subject well enough to tell me something new, then there's no problem with them writing a post to communicate that insight.
7ilzolende9y
I endorse discussion of virtue ethics on LW mostly because I haven't seen many arguments for why I should use it or discussions of how using it works. I've seen a lot of pro-utilitarianism and "how to do things with utilitarianism" pieces and a lot of discussion of deontology in the form of credible precommitments and also as heuristics and rule utilitarianism, but I haven't really seen a virtue ethics piece that remotely approaches Yvain's Consequentialism FAQ in terms of readability and usability.
7pianoforte6119y
When you say virtue ethics, it sounds like you are describing consequentialism implemented on human software. If we're talking about the philosopher's virtue ethics, this question should clarify: Are virtues virtuous because they lead to moral behavior? Or is behavior moral because it cultivates virtue? The first is just applied consequentialism. The second is the philosopher's virtue ethics.
2Dahlen9y
The thing is... that's really beyond the scope of what I care to argue about. I understand the difference, but it's so small as to not be worth the typing time. It's precisely the kind of splitting hairs I don't want to go into. The theme that would get treated is morality, not ethics. It kind of starts off assuming that it is self-evident why good is good, and that human beings do not hold wildly divergent morals or have wildly different internal states in the same situation. Mostly. Sample topics that I'm likely to touch on are: rationality as wisdom; the self-perception of a humble person and how that may be an improvement from the baseline; the intent with which one enters an interaction; a call towards being more understanding to others; respect and disrespect; how to deflect (and why to avoid making) arguments in bad faith; malicious dispositions, and more. Lots of things relevant to community maintenance. These essays aren't yet written, so perhaps that's why it all sounds (and is) so chaotic. There may be more topics which conflict more obviously with utilitarianism, especially if there's a large number of individuals concerned. As for conflicts with consequentialism, they're less likely, but still probable.
2pianoforte6119y
If you don't want to talk about the difference then I respect that, and I wasn't suggesting that you do. If anything I would suggest avoiding the term "virtue ethics" entirely and instead talking about virtue, which is more general and a component of most moral systems. I disagree that it is splitting hairs, though, or a small difference. It makes a large difference whether you wish to cultivate virtue for its own sake (regardless or independent of consequence), or because it helps you achieve other goals. The latter makes fewer assumptions about the goals of your reader.
5BrassLion9y
Consequentialism, where morality is viewed through a lens of what happens due to human actions, is a major part of LessWrong. Utilitarianism specifically, where you judge an act by the results, is a subset of consequentialism and not nearly as widely accepted. Virtue ethics is generally well liked, and it's often said around here that "Consequentialism is what's right, Virtue Ethics are what works." I think that a practical guide to virtue ethics would be well received.
5Vaniver9y
No. Individual utility calculations are, as a component of decision theory, but decision-theoretic utility and interpersonal-comparison utility are different things with different assumptions. This is a solid view, and one of the main ones I take--but I observe that listing out goals and developing training regimens have different purposes and uses.
3[anonymous]9y
I think virtue ethics is sufficiently edgy, new, different these days to be interesting. Go on.


I agree, scholarship is a problem.

5[anonymous]9y
Okay, ancient enough, but it fell into disuse around the Enlightenment, was hardly considered 100-120 years ago, returned among academic philosophers like Philippa Foot, Catholics like MacIntyre tried to keep it alive, and it is only roughly now that it is slowly being considered again by the hip young atheist literati classes, for whom karma is merely a metaphor and who do not literally believe in bad deeds putting a stain on the soul - so in this sense it is only a newly fashionable thing again.
1Gunnar_Zarncke9y
Again I recommend a poll: Is utilitarianism foundational to LessWrong? (use the middle option to see results only) [pollid:964]
0Vaniver9y
Also, have you read this post? The virtue tag only points at it and one other, but searching will likely find more.
-10OrphanWilde9y

To any physicists out there:

This idea came to me while I was replaying the game Portal. Basically, suppose humanity one day developed the ability to create wormholes. Would one be able to generate an infinite amount of energy by placing one end of a wormhole directly below the other before dropping an object into the lower portal (thus periodically resetting said object's gravitational potential energy while leaving its kinetic energy unaffected)? This seems like a blatant violation of the first law of thermodynamics, so I'm guessing it would fail due to s... (read more)
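
To make the apparent paradox concrete (a sketch assuming a uniform Newtonian field of strength g, a drop height h between the two mouths, and a portal that simply resets position with no other effect):

```latex
% Energy bookkeeping under the naive "height reset" picture
\Delta E_{\text{lap}} = m g h, \qquad
E_{\text{kin}}(n \text{ laps}) = E_{\text{kin}}(0) + n\, m g h \;\longrightarrow\; \infty
```

That unbounded growth is exactly the first-law violation being asked about; the replies below spell out what the naive picture leaves out.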

8[anonymous]9y
Gravity is a conservative vector field. Any closed path through a gravitational potential leaves you with the same energy you started with. And if it doesn't, you've stolen energy from whatever was creating the gravity in the first place, leaving less for the next circuit to take, and are thus just transforming energy from one form to another.
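
Stated in symbols (an idealized static Newtonian sketch; the general-relativistic statement is more involved):

```latex
\vec{g} = -\nabla \Phi
\quad\Longrightarrow\quad
W_{\text{loop}} = m \oint_C \vec{g} \cdot d\vec{\ell}
               = -m \oint_C \nabla \Phi \cdot d\vec{\ell} = 0
\quad \text{for any closed path } C
```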
0DanielLC9y
Does that apply when the space isn't simply connected? It could be conservative in every neighborhood, but not conservative overall if you allow portals.
4[anonymous]9y
Gravity would propagate through the connected space. The potential would probably be very VERY weird shaped, but I see no reason it wouldn't be conservative or otherwise consistent with GR (the math of which is far beyond me). Though keep in mind that in GR space curvature IS gravity and can change over time; I doubt you could maintain a knife-edge-thin aperture, it would all smooth out. What's really fun though are gravitomagnetic effects. These are to gravity what magnetism is to the electric field. Both the electric field and the gravitational field are conservative. But changing or accelerating charges generate magnetic fields which are NOT conservative, hence how an electron spinning around a coil in a field in a generator gains energy even though it returns to its starting point. However, in doing so it accelerates up to velocity as well, generating a counteracting field that cancels out some of the field accelerating it, the motion moving it, or both. Thus the nonconservative fields have a potential energy associated with them that can be extracted, or used to couple two phenomena that are both coupled to them. To get gravitomagnetic effects you need huuuuuge mass flows and accelerations. But you can similarly steal the energy that drives them. Think frame dragging and extraction of black hole rotation.
0DanielLC9y
I'm assuming GR holds. Does it actually prove the field is conservative, or just irrotational? If it's irrotational and simply connected, then it's conservative, but if you stick a portal in it, it might not be. Portals don't need a knife edge. For example: Flight Through a Wormhole. You do need negative energy density or it will collapse, but that on its own shouldn't break conservation of energy.
4Squark9y
Wormholes don't quite behave like portals in the game. When something drops into a wormhole with zero velocity, the apparent mass of the entry end increases by the mass of the object and the apparent mass of the exit end decreases by the mass of the object. At some point one of the ends should acquire negative mass. I'm not sure what that means: either it literally behaves as a negative mass object or this is an indication of the wormhole becoming unstable and collapsing. Similarly, when something with momentum drops into a wormhole, the momentum is added to the apparent momentum of the entry end and subtracted from the apparent momentum of the exit end. The apparent masses change in a way that ensures energy conservation. This means that the gain in energy of the "cycling" object comes from wormhole mass loss and transfer of mass from the high end to the low end. Again, if it's true that the wormhole becomes unstable when its mass is supposed to go negative, that would be the end of the process.
2shminux9y
If you already postulate having enough negative energy to create a wormhole, there are no extra issues due to one of the throats having negative mass, except the weird acceleration effect, as I mentioned in my other reply.
0Squark9y
Maybe. However, what will the geometry look like when the sign flip occurs? Will it be non-singular?
0shminux9y
There isn't as much difference between negative- and positive-mass wormholes as between negative- and positive-mass black holes. Negative-mass black holes have no horizons and a naked repulsive timelike singularity. A negative- (at infinity) mass wormhole would look basically like a regular wormhole. The local spacetime curvature would, of course, be different, but the topology would remain the same, S^2xRxR or similar.
3shminux9y
I have a PhD in Physics and my thesis was, in part, related to wormholes, so here it goes. (Squark covered most of your question already, though.) If something falls into a black hole, it increases the black hole mass. If something escapes a black hole (such as Hawking radiation), it decreases the black hole mass. Same with white holes. A wormhole is basically two black/white holes connected by a throat. One pass through the portal would increase the mass of the entrance and decrease the mass of the exit by the mass of the passing object. A portal with two ends having opposite masses would behave rather strangely: they sort of repel (the equivalent of Newton's law of gravity), but the gravitational force acting on the negative-mass end propels it toward the positive-mass end. As a result, the portal as a whole will tend to accelerate toward the positive end (entrance) and fly away, albeit rather slowly. In addition, due to momentum and angular momentum conservation, the portal will start spinning to counteract the motion of the passing object.
2ZankerH9y
At a glance, it seems like you're asking for extrapolation from a "suppose X - therefore X" - type statement, where X is the invalidation of conservation laws.
4dxu9y
I don't quite understand this statement. The only real premise I can see in my original comment is the existence of wormholes. (Please feel free to correct me if you were in fact referring to some other premise.) Wormholes are generally agreed to be a possible solution to Einstein's equations--they don't, in and of themselves, violate conservation of energy. The scenario I proposed above is a method for generating infinite energy if physics actually worked that way, but since I'm confident that it doesn't, the proposed scenario is almost certainly flawed in some way. I asked my question because I wasn't sure how it was flawed. Whatever the flaw is, however, I doubt it lies in the wormhole premise. EDIT: Also see the replies from shminux and Squark.
0Slider9y
I am just taking wormholes to mean "altered connectivity of space" and leaving out the "massive concentrations of mass" aspect. The curious thing about portals is that they somehow magically know to flip gravity when an object travels through. If the portal is just ordinary space, there shouldn't be a sudden gradient in the gravity field; it should go smoothly from one direction to the other. In addition, gravity ought to work through portals. That would mean that if you have a portal in a ceiling, it ought to pull stuff through it towards the ceiling (towards the center of mass beyond the portal). That is, in a standard "infinite fall" portal setup you should feel equal gravity up and down midway between the portals. That kind of setup could be used to store kinetic energy, but it doesn't generate it per se. However, if portals affected the gravity fields, the non-standard gravity environment could be a major problem and would work even when you didn't want it to. That is, since the net-zero-gravity point of an infinite-fall setup needs to transition smoothly to the "standard gravity environment", quite a ways "outside" the portal pair there would likely be a reduced-gravity environment.
0falenas1089y
You can probably think about it as the lines of the gravity field also going through the wormhole, and I believe the gravitational force would be 0 around the wormhole. The actual answer involves thinking about gravity and spacetime as geometry, which I don't think you need in order to answer your question.

So there was recently an advance related to chips for running neural networks. I'm having a hard time figuring out if we should be happy or sad. I'm not sure if this qualifies as a "computing power" advance or a "cell modeling" one.

0Houshalter9y
I doubt it will help scientists reverse engineer the function of the brain any faster. However it could potentially be very useful for helping AI researchers develop artificial neural networks. ANNs aren't really tied to neuroscience research, and they probably won't help with emulations. But they are the current leading approach to AI, and increased computing power would significantly help AI research, as it has in the past.
0ChristianKl9y
Neural network chips aren't neurons. Neurons are much more complex than nodes in artificial neural networks.
1jacob_cannell9y
Technically true but also irrelevant. At the physical level, a modern digital transistor-based computer running an ANN simulation is also vastly more complex than the node-level ANN model. In terms of simulation complexity, a modern GPU is actually more complex than the brain. It would take at most on the order of 10^17 op/s to simulate a brain (10^14 synapses @ 10^3 hz), but it takes more than 10^18 op/s to simulate a GPU (10^9 transistors @ 10^9 hz). Simulating a brain at any detail level beyond its actual computational power is pointless for AI - the ANN level is exactly the correct level of abstraction for actual performance.
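
For concreteness, the arithmetic behind those two figures is just counts times rates; a minimal sketch (the constants are the round numbers quoted above, not independently sourced values):

```python
# Back-of-envelope estimates quoted above: simulation cost ~ element count x update rate.

brain_synapses = 1e14        # rough synapse count for a human brain
brain_update_rate_hz = 1e3   # upper-bound firing/update rate per synapse
brain_ops_per_sec = brain_synapses * brain_update_rate_hz
print(f"brain at synapse level : ~{brain_ops_per_sec:.0e} ops/s")          # ~1e+17

gpu_transistors = 1e9        # order-of-magnitude transistor count for a ~2015 GPU
gpu_clock_hz = 1e9           # ~1 GHz clock
gpu_ops_per_sec = gpu_transistors * gpu_clock_hz
print(f"GPU at transistor level: ~{gpu_ops_per_sec:.0e} switch-events/s")  # ~1e+18
```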
3ChristianKl9y
ANNs are not neurons. We can't accurately simulate even what a single neuron does. Neurons can express proteins when specific hormones are in their environment. The functioning of roughly a third of the human genome is unknown. Increasing or decreasing the number of channels for various substances in the cell membrane takes proteins. That's part of long-term plasticity. That simulation completely ignores neurotransmitters floating around in the brain and many other factors. You can simulate an ANN at one op/synapse, but a simulation of a real brain is very incomplete at that level.
0[anonymous]9y
This blew my mind a bit. So why the heck are researchers trying to train neural nets when the nodes of those nets are clearly subpar?
1jacob_cannell9y
Actually the exact opposite is true - ANN neurons and synapses are more powerful per neuron per synapse than their biological equivalents. ANN neurons signal and compute with high precision real numbers with 16 or 32 bits of precision, rather than 1 bit binary pulses with low precision analog summation. The difference depends entirely on the problem, and the ideal strategy probably involves a complex heterogeneous mix of units of varying precision (which you see in the synaptic distribution in the cortex, btw), but in general with high precision neurons/synapses you need less units to implement the same circuit. Also, I should mention that some biological neural circuits implement temporal coding (as in the hippocampus), which allows a neuron to send somewhat higher precision signals (on the order of 5 to 8 bits per spike or so). This has other tradeoffs though, so it isn't worth it in all cases. Brains are more powerful than current ANNs because current ANNs are incredibly small. All of the recent success in deep learning where ANNs are suddenly dominating everywhere was enabled by using GPUs to train ANNs in the range of 1 to 10 million neurons and 1 to 10 billion synapses - which is basically insect to lizard brain size range. (we aren't even up to mouse sized ANNs yet) That is still 3 to 4 orders of magnitude smaller than the human brain - we have a long ways to go still in terms of performance. Thankfully ANN performance is more than doubling every year (combined hardware and software increase).
1ChristianKl9y
The fact that they are subpar doesn't mean that you can learn nothing from ANNs. It also doesn't mean that you can't do a variety of machine learning tasks with them.
-2jacob_cannell9y
Everything depends on your assumed simulation scale and accuracy. If you want to be pedantic, you could say we can't even simulate transistors, because clearly our simulations of transistors are not accurate down to the quantum level. However, the physics of computation allows us to estimate the approximate level of computational scale separation that any conventional (irreversible) physical computer must have to function correctly (signal reliably in a noisy environment). The Landauer limit on switching energies is one bound, but most of the energy (in brains or modern computers) goes to wire transmission energy, and one can derive bounds on signal propagation energy in the vicinity of ~1pJ / bit / mm for reliable signaling. From this we can then plug in the average interconnect distance between synapses and neurons (both directions) and you get a maximum computation rate on the order of 10^15 ops/s, probably closer to 10^13 low-precision ops/s. Deriving all that is well beyond the scope of a little comment.
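
A rough version of that plug-in calculation (the power budget and the average path length per synaptic event below are assumed placeholders, not figures given above, so treat the output as an order-of-magnitude illustration only):

```python
# Wire-energy bound on brain-like computation: events/s <= power / (energy per bit-mm x path length).

signaling_power_w = 10.0       # assumed: roughly the brain's whole power budget, taken as an upper bound
energy_per_bit_mm_j = 1e-12    # ~1 pJ per bit per mm, the figure quoted above
avg_path_length_mm = 1.0       # assumed mean wire length per synaptic event

energy_per_event_j = energy_per_bit_mm_j * avg_path_length_mm
max_events_per_sec = signaling_power_w / energy_per_event_j
print(f"~{max_events_per_sec:.0e} low-precision ops/s")   # ~1e+13 with these inputs
```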
-1ChristianKl9y
The energy count for signal transmission doesn't include changing the number of ion channels a neuron has. You might model short-term plasticity, but you don't get long-term plasticity. You also don't model how hormones and other neurotransmitters float around in the brain. An ANN deals only with electrical signal transmission and misses essential parts of how the brain works. That doesn't make it bad for the purposes of being an ANN, but it's lacking as a model of the brain.
0jacob_cannell9y
Sure, all of that is true, but of the brain's 10 watt budget, more than half is spent on electric signaling and computation, so all the other stuff you mention at most increases the intrinsic simulation complexity by a factor of 2.
1ChristianKl9y
Are you aware of the complexity of the folding of a single protein? It might not take much energy, but it's very complex. If you have 1000 different types of proteins swimming around in a neuron, interacting with each other, I don't think you get that by adding a factor of two.

I recently found this blog post by Ben Kuhn where he briefly summarizes ~5 classic LW posts in the space of one blog post. A couple points:

  • I don't think that much of the content of the original posts is lost in Ben's summary, and it's a lot faster to read. Do others agree? Do we think producing a condensed summary of the LW archives at some point might be valuable? (It's possible that, for instance, the longer treatment of these concepts in the original posts pushes them deeper in to your brain, or that since people are so used to skimming, conceptua

... (read more)
8Kindly9y
This ought to be verified by someone to whom the ideas are genuinely unfamiliar.

Just posted this in the previous open thread; reposting here: Has anyone here used fancyhands.com or a similar personal-assistant service? If so, what was your experience like?

(context: I have anxiety issues making phone calls to strangers and certain other ugh fields, and am thinking I may be better off paying someone else to take care of such things rather than trying to bull through the ugh fields.)

2[anonymous]9y
Even with my own business, I found it incredibly hard to find lots of tasks which I could hand off to an assistant whose strengths I didn't know and whom I couldn't train.

Hi. I don't post much, but if anyone who knows me can vouch for me here, I would appreciate it.

I have a bit of a Situation, and I would like some help. I'm fairly sure it will be positive utility, not just positive fuzzies. Doesn't stop me feeling ridiculous for needing it. But if any of you can, I would appreciate donations, feedback, or anything else over here: http://www.gofundme.com/usc9j4

I can't understand what my girlfriend is saying when she uses her cell phone to call my cell phone. Often entire words are simply dropped from the audio stream. Is there anything I can do to improve the voice quality?

0moreati9y
A few thoughts based on eliminating/ruling out possible causes:

* Can you avoid making cell -> cell calls? If you're both on a smartphone with wifi, could you use e.g. Skype or Messenger?
* Can you both use a hands-free kit? This should eliminate poor positioning of the microphone/earpiece.
* Are you or your girlfriend in a poor signal area? Does going outside to make the call reduce the problem? There are options such as GSM signal boosters and femtocells that might be worth exploring.
* Some carriers have deployed HD Voice service. See if your phone and carrier(s) support this.
0CronoDAS9y
I have a Nokia Lumia 822 (a Windows Phone). My girlfriend has an iPhone 5s. Our carrier is Verizon.
0moreati9y
Ah, sorry. I wasn't seeking answers to those questions. They were meant to suggest lines of enquiry you should follow, and workarounds you could try.

PSA: If you wear glasses, you might want to take a look behind the little nosepads. Some... stuff... can build up there. According to this unverified source it is oxidized copper from glasses frame + your sweat, and can be cleaned with an old toothbrush + toothpaste.

9Dorikka9y
Sounds like the only disutility of the stuff is that it annoys some people, but it can't annoy you if you don't notice it... so why bring it up?

It mildly annoys me, but I hadn't thought to use toothpaste on it, so there's that.

For some time I've been thinking about just how much of our understanding of the world is tied up in stories and narratives.

Let's take gravity. Even children playing with balls have a good idea of where a ball is going to land after they throw it. They don't know anything about spacetime curvature or Newton's laws. Instead, they amass a lot of data about the behavior of previously-thrown balls and from this they can predict where a newly-thrown ball will land. With experience, this does not even require conscious thought--a skilled ball-player is already m... (read more)

8Lumifer9y
I think what would be useful is to distinguish a story (a typically linear narrative) and a model (a known-to-be-simplified map of some piece of reality). They are sufficiently different and often serve different goals. In particular, stories are rarely quantitative and models usually are.
0Epictetus9y
I like to think about how the two complement each other. You can build a model out of a mass of data, but extrapolation outside the data is tricky business. You can also start with a qualitative description of the phenomena involved and work out the details. A lot of models start off by making some assumptions and figuring out the consequences. Example: you can figure out gas laws by taking lots of measurements, or you can start with the assumption that gases are made of molecules that bounce around and go from there.
4Lumifer9y
We might be understanding the word "story" differently. To me a "story" is a narrative (a linear sequence of words/sentences/paragraphs/etc.) with the general aim of convincing your System 1. It must be simple enough for System 1 and must be able to be internalized to become effective. There are no calculations in stories and they generally latch onto some basic hardwired human instincts. For example, a simple and successful story is "There are tiny organisms called germs which cause disease. Wash your hands and generally keep clean to avoid disease". No numbers, plugs into the purity/disgust template, mostly works. The three laws of Newton are not a story to me, to pick a counter-example. Nor is the premise that gas consists of identical independent molecules in chaotic motion -- that's an assumption which underlies a particular class of models. Models, as opposed to stories, are usually "boxes" in the sense that you can throw some inputs into the hopper, turn the crank, and get some outputs from the chute. They don't have to be intuitive or even understandable (in which case the box is black), they just have to output correct predictions. Newton's laws, for example, make correct predictions (within their sphere of applicability and to a limited degree of precision), but we still have no idea how gravity really works.
1Epictetus9y
I was using "story" in a much more general sense. Perhaps I should have chosen a different word. I saw a story as some bit of exposition devised to explain a process. In that sense, I would view the kinetic theory of gases as a story. A gas has pressure because all these tiny particles are bumping into the walls of its container. Temperature is related to the average kinetic energy of the particles. The point here is that we can't see these particles, nor can we directly measure their state. Consider, in contrast, the presentation in Fermi's introductory Thermodynamics book. He eschewed an explanation of what exactly was happening internally and derived his main results from macroscopic behavior. Temperature was defined initially as that which a gas thermometer measures, and later on he developed a thermodynamic definition based on the behavior of reversible heat engines. This sort of approach treats the inner workings of a gas as unknown and only uses that which we can directly observe through instrumental readings. I guess what I really want to distinguish are black boxes from our attempts to guess what's in the box. The latter is what I tried to encapsulate by "story".
0Lumifer9y
Isn't that what science usually calls a "theory"?
2IlyaShpitser9y
You are talking about prediction vs causality. I agree, we understand via causality, and causality lets us take data beyond what is actually observed into the realm of the hypothetical. Good post.

I've begun to notice discussion of AI risk in more and more places in the last year. Many of them reference Superintelligence. It doesn't seem like confirmation bias or the Baader-Meinhof effect, not really. It's quite an unexpected change. Have others encountered a similar broadening in the sorts of people you encounter talking about this?

5Manfred9y
Yup. Nick Bostrom is basically the man. Above and beyond being the man, he's a respectable focal point for a sea change that has been happening for broader reasons.

Anybody care to weigh in on adding a flag to newbies, and making it part of the LessWrong culture to explain downvotes to flagged newbies?

Identifying what you've done incorrectly to provoke downvotes is a skill that requires training. (Especially since voting behavior in Discussion is much less consistent than voting behavior in Main.)

1Gunnar_Zarncke9y
You can detect newbies by their low karma and moderate positive ratio. The registration age doesn't mean much really.
0philh9y
You have to click through to discover that though, and there are exceptions who have a low ratio but don't need downvotes explained to them. (I don't know if there are such users with a low ratio and low total, though.)
0Gunnar_Zarncke9y
You can see the ratio in the tooltip over the karma. Interestingly, there are users with positive ratios arbitrarily close to 50%.
4philh9y
You can see a comment's karma ratio by hovering on the comment's karma, but to see the user's ratio you need to click through to the user's page and hover over their karma there.

Is this the place to ask technical questions about how the site works? If so, then I'm wondering why I can't find any of the rationality quote threads on the main discussion page anymore (I thought we'd just stopped doing those, until I saw it pop up in the side bar just now). If not, then I think I just asked anyway. :P

9NancyLebovitz9y
This is a good place to ask about how the site works.
8gjm9y
Here -- it's in Main rather than Discussion.
4Silver_Swift9y
Thanks!

I have nearly finished the second of the Seven Secular Sermons, which is going to premiere at the European Less Wrong Community Weekend in Berlin in June. For final polishing, I'm looking for constructive feedback, especially from native speakers of English. If you'd like to help out, PM me for the current draft.

1[anonymous]9y
Offtopic, but I like your theory of depression. Have you ever written about it at more length elsewhere? Or any recommended online readings?
1chaosmage9y
Glad you like it. No I haven't written about it at more length than in that post, and it is entirely my own speculation, based only on the phenomenology of clinical depression and the rank theory I referenced. I don't have any reading on depression to recommend that is anywhere near as good as SSC. And that's despite my working as a research associate at a depression-focused nonprofit.
4NancyLebovitz9y
The status possibility doesn't explain post-partum depression.
0chaosmage9y
Why not?
0NancyLebovitz9y
The baby is very vulnerable to the mother.
1ChristianKl9y
But the baby is also able to often dictate when the mother sleeps and has power over the mother. At least if the mother lets it.
2NancyLebovitz9y
Here's a quote from the link above: "What triggers the depression response is a lack of obviously relatively weaker (dependent or safe to bully) group members".
1ChristianKl9y
"Weakness" isn't a straightforward word. It's not a precise word. In general this theory hasn't had the amount of work needed to be precise. It can be that the thing that matters for weakness is having power over other people. The power relationship between a mother and her child is complex.
0chaosmage9y
Good point. While this theory does predict that nobody who has recently won a physical fight or successfully bullied someone (in a non-virtual setting) should have acute depression symptoms, I'd rather be cautious about less obviously one-sided imbalances. After all, kids are quite dependent for several more years after postpartum subsides, and they evidently don't confer immunity from depression symptoms for that entire period.
0NancyLebovitz9y
It wouldn't surprise me if some bosses are seriously depressed even though they have a complex relationship with employees.
0Elo9y
I would also like to remain updated about this theory and subsequent writings. (repeat to ping you too)
0Elo9y
I would also like to remain updated about this theory and subsequent writings.
[anonymous]9y10

The rationalist Tumblr community looks interesting. Any tips on how to start?

8Gondolinian9y
Well, for a start there's the Rationalist Masterlist currently hosted by Yxoque (MathiasZaman here on LW). You could announce your presence there and ask to be added to the list, or just lurk around some of the blogs for a while and send anonymous asks to people to get a feel for the community before you set up an account.
2[anonymous]9y
Thanks!

According to this article, a traumatic brain injury turned a furniture salesman into a mathematician. (Not without side effects, but still.)

There is a bit of conventional wisdom in evolutionary biology that drastic improvements in efficacy are not available through trivial modifications (and that nontrivial modifications which are random are not improvements). This is an example of the principle that evolution is supposed to have already 'harvested' any 'low-hanging fruit'. Although I don't think much of this type of website (note the lack of external l... (read more)

[This comment is no longer endorsed by its author]

Suffering and AIs

Disclaimer - Under utilitarianism, suffering is an intrinsically bad thing. While I am not a utilitarian, many people are, and I will treat it as true for this post because it is the easiest approach for this issue. Also, apologies if others have already discussed this idea, which seems quite possible.

One future moral issue is that AIs may be created for the purpose of doing things that are unpleasant for humans to do. Let's say an AI is designed with the ability to have pain, fear, hope and pleasure of some kind. It might be reasonable to ex... (read more)

0hairyfigment9y
It does seem familiar.
0the-citizen9y
That seems like an interesting article, though I think it is focused on the issue of free-will and morality which is not my focus.

Article on transhumanism (intro bit perfunctory) - but has interviews with Anders Sandberg and Steve Fuller on implications of transhumanist thought. Quite interesting in parts - http://www.theworldweekly.com/reader/i/humanity-20/3757

Same journalist did a reasonable job of introducing AI dangers last month - http://www.theworldweekly.com/reader/i/irresistible-rise-ai/3379

[anonymous]9y00

Asking for article recommendations: the difference between intelligence and intellectualism, and how a superintelligence is not the same as a superintellectual.

An idea: auto-generated anki-style flashcards for mathematical notation.

Let's say you struggle reading set builder notation. This system would prompt you with progressively more complicated set builder expressions to parse, keeping track of what you find easy or difficult, and providing tooltips/highlighting for each individual term in the expression. If it were an anki card, the B-side would be how you'd read the expression out in natural language. This wouldn't be a substitute for learning how to use set builder notation, but it would give you a lot of p... (read more)

1Richard_Kennaway9y
Auto-generated exercises might be better. Compared with e.g. learning a language, there aren't many elementary components to mathematical notation to be memorised. The exercises might be auto-rated for complexity, and a generalised Anki for this sort of material would generate random examples of various degrees of complexity, and make the distribution of complexity depend in some way on the distribution of your errors with respect to complexity. Language learning materials might be similarly generalised from the simple vocabulary lists that flashcards are usually used for.
0sixes_and_sevens9y
I agree that auto-generated exercises would be a superior utility, but that seems like a much trickier proposition. Also, for clarification, this wouldn't be used for memorising notation, but for training fluency in it. My use of Anki as a comparison might have been misguided.
0Strangeattractor9y
I like the idea of making it easier to understand mathematical notation, and get more practice at it. However, using flash cards to implement it could be problematic. As I learned more and more mathematical notation while studying engineering, it became clear that a lot of the interpretation of the notation depends upon context. For example, if you see vertical lines to either side of an expression, does that mean absolute value or the determinant of a matrix? Is i representing the imaginary number, or current, or the vectors in the same direction as the x-axis? (As an example, electrical engineers use j for the imaginary number, since I represents current.) For a sufficiently narrow topic, the flashcards might be useful, but it might set up false expectations that the meaning of the symbols will apply outside that narrow topic. There is not a one-to-one correspondence between symbols and meaning.
0sixes_and_sevens9y
I was envisioning some sort of context-system, in part for the reason you describe and in part because people probably have specific learning needs, and at any given time they'd probably be focusing on a specific context. Also I reiterate what I've said to other commenters: likening it to Anki flashcards was probably misguided on my part. I'm not talking about generating a bunch of static flashcards, but about presenting a user with a dynamically-generated statement for them to parse. The interface would be reminiscent of something like Anki, but it would probably never show you the same statement twice.
0ChristianKl9y
It's important to understand the notation before you put it into Anki. Automatically generated cards with mathematical notation that the person doesn't yet understand is asking for trouble.
1sixes_and_sevens9y
I may not have presented this well in the original comment. This wouldn't be generating random static cards to put into an Anki deck, but a separate system which dynamically presents expressions made up of known components, and tracks those components instead of specific cards. It seems plausible to restrict these expressions to those composed of notation you've already encountered. In fact, this could work to its advantage. It also seems plausible to determine which components are bottlenecks, and therefore which concepts are the most effective point of intervention for the person studying. If the user hasn't learned, say, hat-and-tilde notation for estimators, and introducing that notation would result in a greater order of available expressions than the next most bottleneck-y piece of notation, it could prompt the user with "hey, this is hat-and-tilde notation for estimators, and it's stopping you from reading a bunch of stuff". It could then direct them to some appropriate material on the subject.
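
A minimal sketch of the kind of generator being described, restricted to set-builder notation (the component lists, templates, and difficulty heuristic are illustrative placeholders, not a worked-out design):

```python
import random

# Build a small set-builder expression out of reusable components and pair it
# with a natural-language reading (the "B-side" of the card). A real system
# would track per-component error rates and pick the next expression accordingly.

DOMAINS = [("\\mathbb{N}", "natural numbers"),
           ("\\mathbb{Z}", "integers"),
           ("\\mathbb{R}", "real numbers")]
PREDICATES = [("x > 0", "x is positive"),
              ("x^2 < 10", "x squared is less than 10"),
              ("x \\bmod 2 = 0", "x is even")]

def make_card(rng: random.Random) -> tuple[str, str]:
    domain_sym, domain_name = rng.choice(DOMAINS)
    preds = rng.sample(PREDICATES, k=rng.randint(1, 2))  # more predicates = harder card
    expr = "\\{ x \\in %s \\mid %s \\}" % (domain_sym, " \\wedge ".join(p for p, _ in preds))
    reading = "the set of all %s x such that %s" % (domain_name, " and ".join(r for _, r in preds))
    return expr, reading

rng = random.Random(0)
front, back = make_card(rng)
print(front)   # e.g. \{ x \in \mathbb{Z} \mid x > 0 \}
print(back)    # e.g. the set of all integers x such that x is positive
```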
[anonymous]9y00

In what conceivable (which does not imply logicality) universes would Rationalism fall short, in the sense of unearthing only some truths, not all truths? In what universes would some realms of truth be hidden from Rationalists? To simplify it, I mean largely the aspect of empiricism, of tying ideas to observations via prediction. What conceivable universes have non-observational truths, for example Platonic/Kantian "pure a priori deduction" types of mental-only truths? Imagine for convenience's sake a Matrix-type simulated universe, not necessarily a natural one,... (read more)

6IlyaShpitser9y
Don't need to posit crazy things, just think about selection bias -- are the sorts of people that tend to become rationalist randomly sampled from the population? If not, why wouldn't there be blind spots in such people just based on that?
0[anonymous]9y
Yes, but if I get the idea right, it is to learn to think in a self-correcting, self-improving way. For example, maybe Kanazawa is right that intelligence suppresses instincts / common sense, but a consistent application of rationality would sooner or later lead to discovering it and forming strategies to correct it. For this reason, it is more about the rules (of self-correction, self-improvement, self-updating sets of beliefs) than the people. What kinds of truths would be potentially invisible to a self-correcting observationalist ruleset even if it were practiced by all kinds of people?
6IlyaShpitser9y
Just pick any of a large set of things the LW-sphere gets consistently wrong. You can't separate the "ism" from the people (the "ists"), in my opinion. The proof of the effectiveness of the "ism" lies in the "ists".
3NancyLebovitz9y
Which things are you thinking of?
3IlyaShpitser9y
A lot of opinions much of LW inherited uncritically from EY, for example. That isn't to say that EY doesn't have many correct opinions, he certainly does, but a lot of his opinions are also idiosyncratic, weird, and technically incorrect. As is true for most of us. The recipe here is to be widely read (LW has a poor scholarship problem too). Not moving away from EY's more idiosyncratic opinions is sort of a bad sign for the "ism."
1NancyLebovitz9y
Could you mention some of the specific beliefs you think are wrong?

Having strong opinions on QM interpretations is "not even wrong."

LW's attitude on B is, at best, "arguable."

Donating to MIRI as an effective use of money is, at best, "arguable."

LW consequentialism is, at best, "arguable."

Shitting on philosophy.

Rationalism as part of identity (aspiring rationalist) is kind of dangerous.

etc.


What I personally find valuable is "adapting the rationalist kung fu stance" for certain purposes.

4NancyLebovitz9y
Thank you. B?
0IlyaShpitser9y
Bayesian.
2Douglas_Knight9y
I read that "B" and assumed that you had a reason for not spelling it out, so I concluded that you meant Basilisk.
0IlyaShpitser9y
Sorry, bad habit, I guess.
2OrphanWilde9y
[Edited formatting] Strongly agree. http://lesswrong.com/lw/huk/emotional_basilisks/ is an experiment I ran which demonstrates the issue. Eliezer was unable to -consider- the hypothetical; it "had" to be fought. The reason being, the hypothetical implies a contradiction in rationality as Eliezer defines it; if rationalism requires atheism, and atheism doesn't "win" as well as religion, then the "rationality is winning" definition Eliezer uses breaks; suddenly rationality, via winning, can require irrational behavior. Less Wrong has a -massive- blind spot where rationality is concerned; for a web site which spends a significant amount of time discussing how to update "correctness" algorithms, actually posing challenges to "correctness" algorithms is one of the quickest ways to shut somebody's brain down and put them in a reactionary mode.
1Richard_Kennaway9y
It seems to me that he did consider your hypothetical, and argued that it should be fought. I agree: your hypothetical is just another in the tedious series of hypotheticals on LessWrong of the form, "Suppose P were true? Then P would be true!" BTW, you never answered his answer. Should I conclude that you are unable to consider his answer? Eliezer also has Harry Potter in MoR withholding knowledge of the True Patronus from Dumbledore, because he realises that Dumbledore would not be able to cast it, and would no longer be able to cast the ordinary Patronus. Now, he has a war against the Dark Lord to fight, and cannot take the time and risk of trying to persuade Dumbledore to an inner conviction that death is a great evil in order to enable him to cast the True Patronus. It might be worth pursuing after winning that war, if they both survive. All this has a parallel with your hypothetical.
1Jiro9y
The hypothetical (P) is used to get people to draw some conclusions from it. These conclusions must, by definition, be logically implied by the original hypothetical or nobody would be able to make them, so you can describe them as being equivalent to P. Thus, all hypotheticals can be described, using your reasoning, as "Suppose P were true? Then P would be true!" Furthermore, that also means "given Euclid's premises, the sum of the angles of a triangle is 180 degrees" is a type of "Suppose P were true? Then P would be true!"--it begins with a P (Euclid's premises) and concludes something that is logically equivalent to P. I suggest that an argument which begins with P and ends with something logically equivalent to P cannot be usefully described as "Suppose P would be true? Then P would be true!" This makes OW's hypothetical legitimate.
0Richard_Kennaway9y
The argument has to go some distance. OrphanWilde is simply writing his hypothesis into his conclusion.
0Jiro9y
His hypothetical is "suppose atheism doesn't win". His conclusion is not "then atheism doesn't win", so he's not writing his hypothesis into his conclusion. Rather, his conclusion is "then rationality doesn't mean what one of your other premises says it means". That is not saying P and concluding P; it is saying P and concluding something logically equivalent to P.
0TheAncientGeek9y
But that would be a misleading description.
0Jiro9y
Of course it's a misleading description, that's my point. RK said that OW's post was "Suppose P would be true? Then P would be true!" His reason for saying that, as far as I could tell, is that the conclusions of the hypothetical were logically implied by the hypothetical. I don't buy that.
0Miguelatron9y
While the MoR example is a good one, don't bother defending Eliezer's response to the linked post. "Something bad is now arbitrarily good, what do you do?" is a poor strawman to counter "Two good things are opposed to each other in a trade space, how do you optimize?" Don't get me wrong, I like most of what Eliezer has put out here on this site, but it seems that he gets wound up pretty easily and off the cuff comments from him aren't always as well reasoned as his main posts. To allow someone to slide based on the halo effect on a blog about rationality is just wrong. Calling people out when they do something wrong - and being civil about it - is constructive, and let's not forget it's in the name of the site.
0Richard_Kennaway9y
OW's linked post still looks to me more like "Two good things are hypothetically opposed to each other because I arbitrarily say so."
-1OrphanWilde9y
If it isn't worth trying to persuade (whoever), he shouldn't have commented in the first place. There are -lots- of posts that go through Less Wrong. -That- one bothered him. Bothered him on a fundamental level. As it was intended to. I'll note that it bothered you too. It was intended to. And the parallel is... apt, although probably not in the way that you think. I'm not Dumbledore, in this parallel. As for his question? It's not meant for me. I wouldn't agonize over the choice, and no matter what decision I made, I wouldn't feel bad about it afterwards. I have zero issue considering the hypothetical, and find it an inelegant and blunt way of pitting two moral absolutes against one another in an attempt to force somebody else to admit to an ethical hierarchy. The fact that Eliezer himself described the baby eater hypothetical as one which must be fought is the intellectual equivalent to mining the road and running away; he, as far as I know, -invented- that hypothetical, he's the one who set it up as the ultimate butcher block for non-utilitarian ethical systems. "Some hypotheticals must be fought", in this context, just means "That hypothetical is dangerous". It isn't, really. It just requires giving up a single falsehood: That knowing the truth always makes you better off. That that which can be destroyed by the truth, should be. He already implicitly accepts that lesson; his endless fiction of secret societies keeping dangerous knowledge from the rest of society demonstrate this. The truth doesn't always make things better. The truth is a very amoral creature; it doesn't care if things are made better, or worse, it just is. To call -that- a dangerous idea is just stubbornness. Not to say there -isn't- danger in that post, but it is not, in fact, from the hypothetical.
1Richard_Kennaway9y
Ah. People disagreeing prove you right.
-1OrphanWilde9y
We may disagree about what it means to "disagree".
1Richard_Kennaway9y
Eliezer's complete response to your original posting was: This, you take as evidence that he is "bothered on a fundamental level", and you imply that this being "bothered on a fundamental level", whatever that is, is evidence that he is wrong and should just give up the "simple falsehood" that truth is desirable. This is argument by trying to bother people and claiming victory when you judge them to be bothered.
0OrphanWilde9y
Since my argument in this case is that people can be "bothered", then yes, it would be a victory. However, since as far as I know Eliezer didn't claim to be "unbotherable", that doesn't make Eliezer wrong, at least within the context of that discussion. Eliezer didn't disagree with me; he simply rejected the legitimacy of the hypothetical.
0TheAncientGeek9y
I've noticed that problem, but I think it is a bit dramatic to call it rationality-breaking. I think it's more of a problem of calling two things, the winning thing and the truth-seeking thing, by one name.
1OrphanWilde9y
Do you really think there's a strong firewall in the minds of most of this community between the two concepts? More, do you think the word "rationality", in view of the fact that it happens to refer to two concepts which are in occasional opposition, makes for a mentally healthy part of one's identity? Eliezer's sequences certainly don't treat the two ideas as distinct. Indeed, if they did, we'd be calling "the winning thing" by its proper name, pragmatism.
0TheAncientGeek9y
Which values am I supposed to answer that by? Obviously it would be bad by epistemic rationality, but it keeps going because instrumental rationality brings benefits to people who can create a united front against the Enemy.
0OrphanWilde9y
That presumes an enemy. If deliberate, the most likely candidate for the enemy in this case, to my eyes, would be the epistemological rationalists themselves.
0TheAncientGeek9y
I was thinking of the fundies
-2ChristianKl9y
I don't think that's argued. It's also worth noting that the majority of MIRI's funding over its history comes from a theist.
0Luke_A_Somers9y
Well...

QM: Having strong positive beliefs on the subject would be not-even-wrong. Ruling out some is much less so. And that's what he did. Note, I came to the same conclusion long before.

MIRI: It's not uncritically accepted on LW more than you'd expect given who runs the joint.

Identity: If you're not letting it trap you by thinking it makes you right, if you're not letting it trap you by thinking it makes others wrong, then what dangers are you thinking of? People will get identities. This particular one seems well-suited to mitigating the dangers of identities.

Others: more clarification required.
0ChristianKl9y
I think there's plenty of criticism voiced about that concept on LW, and there are articles advocating keeping one's identity small.
4IlyaShpitser9y
And yet...
0ChristianKl9y
From time to time people use the label aspiring rationalist but I don't think a majority of people on LW do.
4OrphanWilde9y
Depends on how you decide what truth is, and what qualifies it to be "unearthed." But for one universe in which some truth, for some value of truth, can be unearthed, for some value of unearthed, while other truth can't be:

Imagine a universe in which 12.879% (exactly) of all matter is a unique kind of matter that shares no qualities in common with any other matter, is almost entirely nonreactive with all other kinds of matter, and was created by a process not shared in common with any other matter, which had no effect whatsoever on any other matter. Any truths about this matter, including its existence and the percentage of the universe composed of it, would be completely non-observational.

The only reaction this matter has with any other matter is when it is in a specific configuration which requires extremely high levels of the local equivalent of negative entropy, at which point it emits a single electromagnetic pulse. This was used once by an intelligent species composed of this unique matter, who then went on to die in massive wars, to encode in a series of flashes of light every detail they knew about physics, and was observed by one human-equivalent monk ascetic, who used a language similar to Morse code to write down the sequence of pulses, which he described as a holy vision.

Centuries later, these pulses were translated into mathematical equations which described the unique physics of this concurrent universe of exotic matter, but provided no mechanism for proving the existence or nonexistence of this exotic matter, save that the equations are far beyond the mathematics of anyone alive at the time the signal was encoded, and it has become a controversial matter whether or not it was an elaborate hoax by a genius.
4ChristianKl9y
What do you mean by "Rationalism"? The LW standard definition is that it's about systematized winning. If the Matrix overlords punish everybody who tries to do systematized winning, then it's bad to engage in it. Especially when the Matrix overlords do it via mind reading. The Christian God might see it as a sin. If you don't use the LW definition of rationalism, then rationalism and empiricism are not the same thing. Rationalism generally refers to gathering knowledge by reasoning as opposed to gathering it by other ways such as experiments or divine revelation. Gödel did prove that it's impossible to find all truths. This website is called LessWrong because it's not about learning all truths but just about becoming less wrong.
2ike9y
That's misleading. With a finite amount of processing power/storage/etc, you can't find all proofs in any infinite system. We need to show that short truths can't be found, which is a bit harder.
0Houshalter9y
I don't think that's correct. My best understanding of Godel's theorem is that if your system of logic is powerful enough to express itself, then you can create a statement like "this sentence is unprovable". That's pretty short and doesn't rely on infiniteness.
0ike9y
The statement "this sentence is unprovable" necessarily includes all information on how to prove things, so it's always larger than your logical system. It's usually much larger, because "this sentence" requires some tricks to encode. To see this another way, the halting problem can be seen as equivalent to Godel's theorem. But it's trivially possible to have a program of length X+C that solves the halting problem for all programs of length X, where C is a rather low constant; see https://en.wikipedia.org/wiki/Chaitin's_constant#Relationship_to_the_halting_problem for how.
0Houshalter9y
I'm not sure how much space it would take to write down formally, and I'm not sure it matters. At worst it's a few pages, but not entire books, let alone some exponentially huge thing you'd never encounter in reality. It's also not totally arbitrary axioms that would never be encountered in reality. There are reasons why someone might want to define the rules of logic within logic, and then 99% of the hard work is done. But regardless, the interesting thing is that such an unprovable sentence exists at all. That it's not possible to prove all true statements with any system of logic. It's possible that the problem is limited to this single edge case, but for all I know these unprovable sentences could be everywhere. Or worse, that it is possible to prove them, and therefore possible to prove false statements. I think the halting problem is related, but I don't see how it's exactly equivalent. In any case the halting-problem workaround is totally impractical, since it would take multiple ages of the universe to prove the haltingness of a simple loop. That is, if you are referring to the limited-memory version; otherwise I'm extremely skeptical.
0ike9y
That's only if your logical system is simple. If you're a human, then the system you're using is probably not a real logical system, and is anyway going to be rather large. See http://www.solipsistslog.com/halting-consequences-godel/
0ChristianKl9y
DeVliegendeHollander's post didn't speak about short truths but about all truths.
2ike9y
If we're talking about all truths, then a finiteness argument shows we can never get all truths, no need for Godel. Godel shows that given infinite computing power, we still can't generate all truths, which seems irrelevant to the question. If we can prove all truths smaller than the size of the universe, that would be pretty good, and it isn't ruled out by Godel.
0Douglas_Knight9y
While Gödel killed Hilbert's program as a matter of historical fact, it was later Tarski who proved the theorem that truth is undefinable.
4[anonymous]9y
There's no guarantee we should be able to find any truths using any method. It's a miracle that the universe is at all comprehensible. The question isn't "when can't we learn everything?", it's "why can we learn anything at all?".
-3Lumifer9y
Because entities which can't, do not survive.
2CronoDAS9y
Counterexample: Plants. Do they learn?
1Lumifer9y
Of course. Leaves turn to follow the sun, roots grow in the direction of more moist soil...
2CronoDAS9y
Is that really learning, or just reacting to stimuli in a fixed, predetermined pattern?
0[anonymous]9y
Does vaccination imply memory? Does being warned by another's volatile metabolites that a herbivore is attacking the population? (Higher) plants are organized by very different principles than animals; there is a never-ending debate about what constitutes 'identity' in them. Without first deciding upon that, can one speak about learning? I don't think they have it, but their patterns of predetermined answers can be very specific.
0[anonymous]9y
Also, there is an interesting study, 'Kin recognition, not competitive interactions, predicts root allocation in young Cakile edentula seedling pairs'. This seems to be more difficult to do than following the sun!
2[anonymous]9y
That just pushes the question back a step. Why can any entity learn?
2tim9y
In the spirit of Lumifer's comment, anything we would consider an entity would have to be able to learn or we wouldn't be considering it at all.
-1DanielLC9y
That would explain why all entities learn. Not why any entities learn. Ignoring things that can't learn doesn't explain the existence of things that can.
0Richard_Kennaway9y
A more useful question to ask would be "how do entities, in fact, learn?" This avoids the trite answer, "because if they didn't, we wouldn't be asking the question".
0Lumifer9y
I think if we follow this chain of questions, what we'll find at the end (except for turtles, of course) is the question "Why is the universe stable/regular instead of utterly chaotic?" A similar question is "Why does the universe even have negentropy?" I don't know any answer to these questions except for "That's what our universe is".
0[anonymous]9y
I suppose what I want to know is the answer to "What features of our universe make it possible for entities to learn?". Which sounds remarkably similar to DeVliegendeHollander's question, perhaps with an implicit assumption that learning won't be present in many (most?) universes.
0Lumifer9y
The fact that the universe is stable/regular enough to be predictable. Such predictability is a necessary requirement for learning.
0fubarobfusco9y
For that matter, a world in which it is impossible for an organism to become better at surviving by modeling its environment (i.e. learning) is one in which intelligence can't evolve. (And a world in which it is impossible for one organism to be better at surviving than another organism, is one in which evolution doesn't happen at all; indeed, life wouldn't happen.)
2drethelin9y
A universe where humans are running on brains with certain glitches that prevent them from coming to correct conclusions through reasoning about specific topics.

Acting on A Gut Feeling

I've been planning an overnight camping trip for sometime this week; but something about the idea is making me feel... disquiet. Uneasy. I can't figure out why; I've got a nice set of equipment, I have people who know where I'm going, and so on. But I can't shake something resembling an "ugh field" that eases when I think of /not/ taking the trip.

And so, I'm concluding that the rational thing to do is to pay attention to my gut, on the chance that one part of my mind is aware of some detail that the rest of my mind hasn't figured out, and postpone my camping trip until I'm feeling more self-assured about the whole thing.

0Dorikka9y
Hm. I have been camping quite a few times, but would not really be comfortable camping alone. Might be true for you as well, since having company gives a perceived lowering of risk. ETA: This is more of a preference thing for me than an actual concern thing.
0wadavis9y
It is because you forgot to pack TP. Bring TP and things will be ok.
1DataPacRat9y
:) Don't worry, I've got the essentials. And enough luxuries, like a folding solar panel, that I could head out for a week or more, if I were so inclined, and bought an upgrade to my cellphone dataplan. Considering it from various perspectives: a trip to some nearby city and staying at an Airbnb or hotel raises more interest than disquiet; so it seems to be something about going camping, rather than taking a trip, which is bothering me. An imagined day-hike only raises questions about transportation, not unease, so it seems to be something about overnighting. Cooking? Water source? Sleeping? First-aid kit? Emergency plans in case of zombie outbreak (or more probable disasters)? I can't quite put my finger on it. And since almost the whole point of such a trip is to /improve/ my psychological condition by the end of it, I'm starting to feel a tad annoyed at myself for being less than clear about my own motivations. :P
0DataPacRat9y
After some further mental gymnastics, the plan I've come up with which seems to most greatly reduce the disquiet is to buy a backup cellphone, small enough to turn off, stick in a pocket and forget about until I drop my smartphone in a stream. Something along the lines of taking one of the watchphones from http://www.dx.com/s/850%2b1900?category=529&PriceSort=up and snipping off the wristband, or one of the smaller entries in http://www.dx.com/s/850%2b1900?PriceSort=up&category=531 ; along with the $25/year plan from http://www.speakout7eleven.ca/ . Something on the order of $65 to $85 seems a moderate price for peace of mind. I am, however, going to take at least a day before placing any such order, to find out if such a plan still seems like it /will/ offer increased peace of mind. Not to mention, whether I can come up with (or get suggestions for) any plans which reveal that my actual disquiet arises from some other cause.
2Miguelatron9y
Have you gone camping like this before? If you have, were you by yourself when you did? I'm just trying to eliminate the source of your unease being something simple like stepping out of your comfort zone.
0DataPacRat9y
I have, indeed, gone camping like this before, though it's been a few years since I've done anything solo. The last few times I've gone camping have been with a relative to campgrounds with showers and such amenities, as opposed to solo in a conservation area or along a trail, which is/was my goal for my next hike. My original motivation for the overnighter was to make sure I hadn't forgotten anything important about soloing, and that all my gear's ready for longer trips. I'm in the general Niagara area, and the city papers laud the local rescue teams whenever a tourist needs to get pulled out of the Niagara gorge, so as long as I can dial 911, I should be able to get rescued from any situation I get myself in that's actually worth all this worrying about. The particular spot I'm thinking of going to (43.0911426, -79.284342) is roughly an hour's walk from a city bus stop - half an hour's walk from where I could wave to frequently passing cars, if my phone's dead. My plans for this whole trip have been to make it as simple and easy as possible. Amble down some trails for an hour or two, hang my hammock, cook my dinner, read my ebook, and amble on out the next day, enjoying the peace and quiet and so on. It's the smallest step I can think of beyond camping in a backyard - and since I don't have a backyard, it's pretty much as far within my comfort zone as any camping could be. If /that's/ now outside my comfort zone... then I've got a trunk full of camping gear that's suddenly a lot less useful to me.
0Miguelatron9y
Sounds like it will be a blast. The nerves may just be from going solo then. Sounds like you know what you're about though, so I'd just override any trepidation and go for it. I did something similar a few weeks ago (admittedly with some friends). We were probably 40 miles from anywhere where we could flag down a car, and hiked into the woods several miles along the trail. My backpack broke inside the first mile, one of my friends slipped and fell into a stream, there were coyotes in the camp at night, and of course it rained. We all made it out sleepy, sore, and soggy the next day, but definitely felt better for having gone. Would do again. You'll have a good time, no worries.

Transhumanism-related blog posts:

In Praise of Life (Let’s Ditch the Cult of Longevity)

http://www.patheos.com/blogs/friendlyatheist/2015/05/08/in-praise-of-life-lets-ditch-the-cult-of-longevity/

Overcoming Bias: Why Not?

http://futurisms.thenewatlantis.com/2015/05/overcoming-bias-why-not.html

Also noteworthy:

Prepping for cataclysms, neglecting ordinary emergencies

http://akinokure.blogspot.com/2015/05/prepping-for-cataclysms-neglecting.html

Interesting books:

A cryonics novel:

The New World: A Novel Hardcover – May 5, 2015 by Chris Adrian (Author), Eli Horowitz (... (read more)

Despite medical and police personnel aware of his Alcor bracelet, he was taken to the medical examiner’s office in Santa Barbara, as they did not understand Alcor’s process and assumed that the circumstances surrounding his death would pre-empt any possible donation directives. Since this all transpired late on a Friday evening, Alcor was not notified of the incident until the following Monday morning.

How the hell are they treating this as a successful preservation? The body spent two days "warm and dead".

Looking at their past case reports, this seems to be fairly normal. Unless you're dying of a known terminal condition and go die in their hospice in Arizona, odds are the only thing getting frozen is a mindless, decaying corpse.

Cryonicist Ben Best has put a lot of effort into studying and testing personal alarm gadgets you can wear which signal cardiac arrest, to try to reduce the incidence of these unattended deanimations and long delays before cryopreservation. I plan to look into those myself.

Ironically, I've noticed that cryonicists talk a lot about how much they believe in scientific, medical and technological progress, but then they don't seem to want to act on it when you present them with evidence of the correctable deficiencies of real, existing cryonics.

Reference:

Personal Alarm Systems for Cryonicists

http://www.benbest.com/cryonics/alarms.html

In Praise of Life (Let’s Ditch the Cult of Longevity)

That article would be better titled "In Praise of Death", and is a string of the usual platitudes and circularities.

Overcoming Bias: Why Not?

Why not? Because (the article says) rationalists are cold, emotionless Vulcans, and valuing reason is a mere prejudice.

Prepping for cataclysms, neglecting ordinary emergencies

Maybe there are people who do that, but the article is pure story-telling, without a single claim of fact. File this one under "fiction".

A cryonics novel:

The New World: A Novel Hardcover – May 5, 2015 by Chris Adrian (Author), Eli Horowitz (Author)

The previous links scored 0 out of 3 for rational content, so coming to this one, I thought, what am I likely to find? Clearly, the way to bet is that it's against cryonics. There's only about a blogpost's worth of story in the idea of corpsicles just being unrevivable, so the novel will have to have revival working, but either it works horribly badly, or the revived people find themselves in a bad situation.

Click through...and I am, I think, pleasantly surprised to find that it might, in the end, be favourable to the idea. Or maybe not, ther... (read more)

6advancedatheist9y
I know "preppers" in Arizona who don't have any savings because they have spent all their money on this survivalist nonsense. They would do better to have put that money in the bank and applied for subsidized health insurance. The blogger agnostic does have a point about how the prepper mentality shows an abandonment of wanting to produce for and sustain the existing society, so that instead you can position yourself to become a scavenger and a parasite on the wealth produced by others if some apocalyptic collapse happens. That ridiculous Walking Dead series, which amounts to nonstop prepper porn, feeds some very damaging fantasies that I don't think we should encourage.
2CAE_Jones9y
I'm now curious: where are the essays that make actual arguments in favor of death? The linked article doesn't make any; it just asserts that death is OK and we're being silly for fighting it, without actually providing a reason (they cite Borges's dystopias at the end, but this paragraph has practically nothing in common with the rest of the article, which seems to assume immortality is impossible anyway). Preference goes to arguments against Elven-style immortality (resistant but not completely immune to murder or disaster, suicide is an option, age-related disabilities are not a thing).
2jefftk9y
Here's my argument for why death isn't the supreme enemy: http://www.jefftk.com/p/not-very-anti-death
8Lumifer9y
I have a feeling a lot of discussions of life extension suffer from being conditioned on the implicit set point of what's normal now. Let's imagine that humans are actually replicants and their lifespan runs out in their 40s. That lifespan has a "control dial" and you can turn it to extend the human average life expectancy into the 80s. Would all your arguments apply and construct a case against meddling with that control dial?
6Kawoomba9y
That's a good argument if you were to construct the world from first principles. You wouldn't get the current world order, certainly. But just as arguments against, say, nation-states, or multi-national corporations, or what have you, do little to dissuade believers, the same applies to let-the-natural-order-of-things-proceed advocates. Inertia is what it's all about. The normative power of the present state, if you will. Never mind that "natural" includes antibiotics, but not gene modification. This may seem self-evident, but what I'm pointing out is that by saying "consider this world: would you still think the same way in that world?" you'd be skipping the actual step of difficulty: overcoming said inertia, leaving the cozy home of our local minimum.
8Lumifer9y
That's fine as long as you understand it and are not deluding yourself with a collection of reasons why this cozy local minimum is actually the best ever. The considerable power wielded by inertia should be explicit.
0jefftk9y
Huh? It feels like you're responding to a common thing people say, but not to anything I've said (or believe).
0Lumifer9y
I meant this as a response specifically to
2jefftk9y
More context: I don't think our current lifespan is the perfect length, but there's a lot of room between "longer is probably better" and "effectively unlimited is ideal".
0Lumifer9y
Yes, but are you saying there's going to be a maximum somewhere in that space -- some metric will flip over and start going down? What might that metric be?
0jefftk9y
As I wrote in that post, there are some factors that lead to us thinking longer lives would be better, and others that shorter would be better. Maybe this is easier to think about with a related question: what is the ideal length of tenure at a company? Do companies do best when all their employees are there for life, or is it helpful to have some churn? (Ignoring that people can come in with useful relevant knowledge they got working elsewhere.) Clearly too much churn is very bad for the company, but introducing new people to your practices and teaching them help you adapt and modernize, while if everyone has been there forever it can be hard to make adjustments to changing situations. The main issue is that people tend to fixate some on what they learn when they're younger, so if people get much older on average then it would be harder to make progress.
3Lumifer9y
A rather important question here is what's "ideal" and from whose point of view? From the point of view of the company, sure, you want some churn, but I don't know what the company would correspond to in the discussion of the aging of humanity. You're likely thinking about "society", but, as opposed to companies, societies do not and should not optimize for profit (or even GDP) at any cost. It's not that hard to get to the "put your old geezers on ice floes and push them off into the ocean" practices. That's true; as a paraphrase of Max Planck puts it, "Science advances one funeral at a time". However, it also depends on what "live forever" means. Being stabilized at the biological age of 70 would probably lead to very different consequences from being stabilized at the biological age of 25.
0jefftk9y
This probably also depends a lot on the particulars of what "stabilized at the biological age of 25" means. Most 25 year-olds are relatively open to experience, but does that come from being biologically younger or just having had less time to become set in their ways? This also seems like something that may be fixable with better pharma technology if we can figure out how to temporarily put people into a more childlike exploratory open-to-experience state.
1Lumifer9y
I think humans are sacks of chemicals to a much greater degree than most of LW believes. As a simple example, note that injections of testosterone into older men tend to change their personality quite a bit. I don't know if being less open to new experiences is purely a function of the underlying hardware, but it certainly is to a large extent a function of physiology, hormonal balance, etc. I hope you realize you're firmly in the "better living through chemistry" territory now. The idea of putting LSD into the public water supply is not a new one :-)
0OrphanWilde9y
Anecdotally, LSD.
0[anonymous]9y
My take: there's a big difference between calling something good and dealing with a fact.
-2passive_fist9y
Just a PSA: advancedatheist has a fixation on dehumanizing rationalists with an especial focus on rationalists 'not being able to get laid'. Here's some of his posts on this matter: http://lesswrong.com/lw/lzb/open_thread_apr_01_apr_05_2015/c7gr http://lesswrong.com/lw/m4h/when_does_technological_enhancement_feel_natural/cc09 http://lesswrong.com/lw/m1p/open_thread_apr_13_apr_19_2015/cams http://lesswrong.com/lw/dqz/a_marriage_ceremony_for_aspiring_rationalists/72wr It's best not to 'feed the trolls', so to speak.

So why lash out at him for this now when he isn't currently doing that? In any case I don't think he was trolling (deliberately trying to cause anger) so much as he was just morbidly fixated on a topic, and couldn't stop bringing it up.

1passive_fist9y
I'm pointing it out for the benefit of others who may not understand where AA is coming from.

I recommend responding to whatever specific problematic things he might say rather than issuing a general warning.

-1passive_fist9y
I am responding to quite specific problematic things he's saying. My comment is in response to AA's, and is in reply to a reply to his comment. If I were to directly reply to him saying the same thing, my intentions would probably be misunderstood.
3philh9y
Another thing AA seems to do quite a lot is link to pro-death blog posts and articles that he doesn't endorse. I get the impression that's what he was doing with some of the above links. IIRC he's signed up for cryonics, so it seems unlikely that he's trying to push a pro-death agenda.
2Richard_Kennaway9y
So, AA, if you're reading down here, why are you signed up for cryonics while posting pro-death links and complaining at length about never getting laid? Optimism for a hereafter, despair for the present, and bitterness for the past. This is not a good conjunction.
0OrphanWilde9y
Maybe he just sees value in challenging the status quo?
2philh9y
I interpret it more as "look at these awful things people are saying about us".
7Luke_A_Somers9y
That's a weirdly weak collection of posts to complain about. It seems more like AA is noting his OWN lack of ability to get laid and has a degree of curiosity on the subject that would naturally result from such a situation. He also (correctly, I expect) anticipates that a noticeable number of people who are or have been in the same boat as him are on LW. I have seen some really obnoxious posts by AA, but these don't strike me as great examples. I am not about to go digging for them.
-6passive_fist9y
4Richard_Kennaway9y
I've noticed. While it certainly informs my attitude to everything he posts, he is mostly still at the level of being worth responding to.
2Error9y
Just FYI, it looks like you goofed that last link.
6[anonymous]9y
Should be http://www.alcor.org/blog/dr-laurence-pilgeram-becomes-alcors-135th-patient-on-april-15-2015/
0Dorikka9y
Works now for me
0Error9y
That's odd, because it still doesn't for me. For me the last link is a duplicate of the second-to-last. Um?
2Dorikka9y
Um, sorry. I reflexively checked the last link and it went to a valid page, didn't notice it was the same as the ones above. User error creates the weirdest problems, eh?
[-][anonymous]9y-30

Would this work at least as an early crude hypothesis of how neurotypicals function?

Neurotypicals like social mingling primarily because they play a constant game of social status points, both in the eyes of others (that is real status) and just feeling like getting status (this is more like self-esteem). This should not be understood as a harsh, cruel, Machiavellian game. Usually not. Often it is very warm and friendly. For example, we on the spectrum often find things like greeting each other superfluous. A needless custom. You notice when people arrive ... (read more)

6Zubon9y
That sounds like a subset of neurotypical behavior. I'm neurotypical and from the very first sentence ("Neurotypicals like social mingling primarily because they play a constant game of social status points, both in the eyes of others (that is real status) and just feeling like getting status (this is more like self-esteem).") I found it contrary to my experience. Which is not to say it is wrong, and it certainly looks like behavior I have seen, but it kind of suggests that there is One Neurotypical Experience as opposed to a spectrum. That is reading the initial "neurotypicals" as "all/most neurotypicals" as opposed to "some neurotypicals" or "some subset of neurotypicals." I think you are trying to describe typical neurotypical behavior, so I would read that "neurotypicals" as trying to describe how most neurotypicals behave.

But I am not the most central example of a neurotypical, so others may find it a more accurate description of their social experiences. I don't like social mingling, and I avoid most games of social status points. My extroversion score is 7 out of 100, which is likely a factor in not seeing myself in your description of neurotypicals.

There seem to be several assumptions built into that unpacking. For example, it suggests that all/most nerds are on the spectrum. My characterization of neurotypical socialization would include mindlessly following social customs as well as enjoying the social game. I don't think highly social neurotypicals would describe their behavior as a "status micropayment exchange"; that seems like the wrong metaphor and suggests the dominant model is a fixed-sum status game, whereas many (most?) social interactions have no need for an exchange of status points. Even when a social status point game is in play, I would expect more interactions to involve the recognition of point totals rather than an exchange. "Mutual reassurance or reinforcement of each others status" seems on point.

If the above is the start of a hy
2ChristianKl9y
If you say Bob likes X because of Y, what do you mean by it? Do you mean that if Y weren't there, Bob wouldn't like X? I don't think there's a good reason to believe that if you take status away, no neurotically would engage in social mingling or like engaging in it. Apart from that, "status" is a word that's quite abstract. It's much more "map" than "territory". That creates the danger of getting into too-vague-to-be-wrong territory.
2[anonymous]9y
Let's get more meta here. Usually the map-territory distinction is used to describe how human minds interpret the chunks of reality that are not man-made. When we are talking about something that arises from the behavior of humans, how can we draw that distinction? Plato's classic "What is justice?": is that map or territory? Here the territory is in human minds too, as justice exists only inside minds and nowhere else, so the distinction seems to be more like: is it the grand shared map or a more private map of maps? And the same with status. It does not exist outside the human perception of it. Similar to money, esp. paper/computer-number money.

I will consider it a typo, assuming you meant neurotypicals like I did, i.e. people outside the autism spectrum, or in other words non-geeks. I got the idea from here. If status microtransactions are so important... (to be continued, gotta go now)
0ChristianKl9y
If you look at the link you posted, it argues: That means that dominance and submission map more directly to the territory than status does. The author doesn't argue that people care about mutual reinforcement of each other's status as being high, but that people also consciously make moves to submit and place themselves at a low-status position. The text invalidates your idea that people engage primarily in social interaction to maximize the amount of status. You don't pick that up if you make the error of treating status not as a model but as reality. Reality is complex. Models simplify reality. Sometimes the simplification keeps the essential elements of what you want to describe. Other times it doesn't.

Yes, it's a typo, likely because my spellchecker didn't know "neurotypicals".