How many rationalists live in Africa, and especially South Africa? I'm kind of surprised that there is no LW meetup anywhere in Africa, I would have guessed that at least South Africa or Nigeria are sufficiently developed and have sufficiently prevalent internet access to have one. Should somebody who has more conscientiousness than I do (at least for now) start one here in South Africa?
If you're rational and you're in South Africa, why are you still in South Africa? How much do you value your life over the trivial inconvenience of moving?
Oh, dear... From Marginal Revolution's comment section:
What if “Satoshi Nakamoto” is an evil AI, and the whole concept of the blockchain was invented to see if it could devise a way to harvest the processing power of billions of computers. Currently they are just doing meaningless (or seemingly meaningless) math problems. But what if the math problems they were doing weren’t meaningless? What if they were trying to solve some sort of physics problem necessary to create wormholes or something?
What if Satoshi Nakamoto is Roko’s Basilisk?
The bitcoin mining computations are pretty provably meaningless - all mining does is search for nonces whose hashes fall below a difficulty target (a partial-preimage search, not a useful computation). If you want examples of convincing millions of people to donate their computing power for meaningful computation, with no financial incentive, look at folding@home or rosetta@home.
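To make the point concrete, here is a minimal toy sketch of that search (the function name, data, and difficulty are illustrative, not Bitcoin's actual header format): the only "work" is incrementing a nonce until the hash happens to start with enough zeros.

```python
import hashlib

def mine(block_data: bytes, difficulty: int) -> tuple[int, str]:
    """Toy proof-of-work: find a nonce such that SHA-256(data + nonce)
    begins with `difficulty` hex zeros. The result proves effort was
    spent, but the computation itself solves nothing."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine(b"toy block header", 4)
print(nonce, digest)  # digest begins with "0000"
```

Each extra zero of difficulty multiplies the expected work by 16, which is why mining consumes so much power while producing nothing reusable.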
What are your thoughts on the following Great Filter hypothesis:

1. Reward-based learning is the only efficient way to create AI.
2. AGI is easy, but FAI is hard to invent because the universe is so unpredictable (intelligent systems themselves being the most unpredictable structures) and nearly all reward functions will diverge once the AI starts to self-improve and create copies of itself.
3. The reward functions needed for a friendly reinforcement learner reflect reality in complex ways. In the case of humans they are learned by trial and error during evolution.
4. Because of this, the invention of FAI requires a simulation in which it can safely learn complex reward functions via evolution or narrow AI, which is time-consuming.
5. However, once AGI is widely regarded as feasible, people will realize that whoever invents it first will have nearly unlimited power. An AI arms race will ensue in which unfriendly AGIs are much more likely to arise.
I don't see why an unfriendly AGI would be significantly less likely to leave a trail of astronomical evidence of its existence than a friendly AI or an interstellar civilisation in general.
Since we'd rather look at well-dressed people than badly dressed people, good clothes have positive externalities, and should therefore be subsidized. (The main problem with this is who would get to decide which clothes count as "good" for this purpose.)
Since we'd rather look at fit people than fat people, physical fitness has positive externalities, and should therefore be subsidised.
Since we'd rather look at people we can visually identify with than people we can't, ethnic segregation has positive externalities, and should therefore be subsidised.
Dead enough by Walter Glannon
To honour donors, we should harvest organs that have the best chance of helping others – before, not after, death
Now imagine that before the stroke our hypothetical patient had expressed a wish to donate his organs after his death. If neurologists could determine that the patient had no chance of recovery, then would that patient really be harmed if transplant surgeons removed life-support, such as ventilators and feeding tubes, and took his organs, instead of waiting for death by natural means? Certainly, the organ recipient would gain: waiting too long before declaring a patient dead could allow the disease process to impair organ function by decreasing blood flow to them, making those organs unsuitable for transplant.
But I contend that the donor would gain too: by harvesting his organs when he can contribute most, we would have honoured his wish to save other lives. And chances are high that we would be taking nothing from him of value. This permanently comatose patient will never see, hear, feel or even perceive the world again whether we leave his organs to wither inside him or not.
This might have the side-effect of putting even more people off signing up for donation. Most people I've talked to about it who are opposed cite horror stories about doctors prematurely "giving up" on donors to get at their organs.
Bonus Stupid Question
I remember reading about how some biologists took some wild foxes, and allowed ones which were friendlier to humans to breed. In the next generation of fox offspring, they let the friendliest ones of those litters reproduce. They repeated this several times. After some number of generations, they found these friendliest of foxes had droopy ears like domesticated dogs. This demonstrates how a simple process of artificial selection, like just selecting for friendlier animal companions, may have been sufficient to lead to the domestication of dogs.
Now, my question is, could we humans do the same thing with octopi? Could we just take a population of octopi, and identify the ones which can meaningfully interact with humans in a friendly and docile way, and let them breed, and iterate this process until we have some kind of domesticated octopi?
If they're not long-lived, they wouldn't make good work animals, but I want to know if octopi could at all be domesticated regardless. The fact they're short-lived might mean humans could breed domesticated octopi even faster.
octopi
Octopuses / octopodes. It's Greek, not Latin.
SJW / NRX (or symmetric positions more towards the centre on both sides as appropriate)
Collectivism / individualism
Virtue ethics / consequentialism
Apparently since the Enlightenment this idea has gotten about that all the previous generations didn't know how to live properly, even our parents' generation; but that we somehow mysteriously know how to do it right, or at least better. But then if we have offspring, many of them might develop the same attitude towards us.
This really doesn't make sense, because incompetent people generally don't leave descendants. Our ancestors must have gotten a lot of important things right on average for us to have come into existence in the first place. Yet we think we can just reverse the wisdom of the ages in all kinds of areas and not screw things up. It looks like a kind of evolution-denialism, in fact.
How can people who say they believe in evolution also hold the conflicting idea that they know better than the principles derived from the collective evolutionary experiences of human survival?
As far as I can tell, in postmodern western value systems the idea isn't that they know better than Gnon, it's the idea that "principles derived from the collective evolutionary experiences of human survival" shouldn't matter in comparison to postmodern cultural sensibilities, and therefore it's worth expending effort to counter them as opposed to making use of them.
Could putting an 850 nm wavelength LED light array on my forehead for ten minutes or so a day do me any harm? It generates a small amount of heat.
Edit: See this video for a motivation: http://selfhacked.com/2015/07/18/interview-with-dr-michael-hamblin-harvard-professor-and-infrared-therapy-expert/
No way to know unless you measure its output spectrum, but as long as it's mostly IR, it shouldn't.
Question though, why would you do that? I don't see the difference between using this or applying any other heat source.
I've never been in this situation, but one question I'm confused about is how both sides arrive at an appropriately sized bribe. What's to stop them taking everything you have?
There's usually an informal standard that's large enough to represent a significant boost to a police officer's income, but small enough that it's worth it for most people to pay rather than risk more fines or worse. There's not much negotiation involved.