Free Will: Good Cognitive Citizenship with Will Wilkinson and Eliezer Yudkowsky <-- This link contains the wrong video, I think. Anyone have the correct video?
The gated version link seems down - try https://www.sciencedirect.com/science/article/abs/pii/016230958990006X ?
..."Update: many people have read this post and suggested that, in the first file example, you should use the much simpler protocol of copying the file to modified to a temp file, modifying the temp file, and then renaming the temp file to overwrite the original file. In fact, that’s probably the most common comment I’ve gotten on this post. If you think this solves the problem, I’m going to ask you to pause for five seconds and consider the problems this might have. (...) The fact that so many people thought that this was a simple solution to the probl
The acceleratingfuture domain's registration has expired (referenced in the starting quote) (http://acceleratingfuture.com/?reqp=1&reqr=)
I think the concept of death is extremely poorly defined under most variations of posthuman societies; death as we interpret it today depends on a number of concepts that are very likely to break down or be irrelevant in a post-human-verse.
Take, for example, the interpretation of death as the permanent end to a continuous, distinct identity:
If I create several thousand partially conscious partial clones of myself to complete a task (say, build a rocketship), and then reabsorb and compress their experiences, have those partial clones died? If I lose 99.5% o...
RIASEC link is broken ( in "a RIASEC personality test might help") - google returns this: http://personality-testing.info/tests/RIASEC.php as the top alternative
“It’s not a kid’s television show,” Andy told me, “where the antagonist makes the Machiavellian plan and then abandons that plan completely the first time it fails. People fail, they revise, they adjust parameters, and then you achieve victory through persistence and hard work.”
J. C. McCrae, Pact (web serial)
In retrospect, that's a highly field-specific bit of information and difficult to obtain without significant exposure; it's probably a bad example.
For context:
Friendster failed at 100M+ users. That's several orders of magnitude more attention than the vast majority of startups ever obtain before failing, and a very unusual point to fail due to scalability problems (with that much attention, and experience scaling, scaling should really be a function of adequate funding more than anything else).
There's a selection effect for startups, at least the one...
I largely agree in context, but I think it's not an entirely accurate picture of reality.
There are definite, well-known, documented methods for increasing the resources available to the brain, as well as for doing the equivalent of decompilation, debugging, etc. Sure, the methods are a lot less reliable than what we have available for most simple computer programs.
Also, once you get to debugging/adding resources to programming systems that even remotely approximate the complexity of the brain, that difference becomes much smaller than you'd expect. In...
This whole incident is a perfect illustration of how technology is equalizing capability. In both the original attack against Sony, and this attack against North Korea, we can't tell the difference between a couple of hackers and a government.
You quote Feynman, then proceed to ignore the thing you quoted.
You're ignoring two options that fall right out of the quote:
Google him? From the first three search results:
Utilons, hedons, altruist-ons, successfully getting others to win: by most measures, few people have won as much, as quickly, as he has, at about 60% of the way through his life expectancy.
I don't understand. What's the point of going to all the trouble required to wake up at 3 am, only to then waste your time by being tired and/or depressed?
Why do you assume that someone who has the intelligence, self-control, and dedication required to identify that waking up at 3 am is a requirement for success, who makes a plan to ensure he can deliver on that requirement and then follows through, would then fail so terribly on other fronts?
That doesn't sound terribly rational. One's performance when tired is a well-known case where the lens sees itself very darkly. If you're going to mess with your sleep pattern, it is imperative to quantify: measure the thing you care about, experiment, and see whether it is making you worse or better.
[in the context of creatively solving a programming problem]
"You will be wrong. You're going to think of better ideas. ... The facts change. ... When the facts change, do not dig in. Do it over again. See if your answer is still valid in light of the new requirements, the new facts. And if it isn't, change your mind, and don't apologize."
-- Rich Hickey
(note that, in context, he tries to differentiate between reasoning with incomplete information, which you don't need to apologize for - just change your mind and move on - and genuine mistakes or errors)
I haven't seen them mentioned in this thread, so I thought I'd add them, since they're probably valid and worth thinking about:
The utility of understanding math, combined with the skills required for doing things such as mathematical proofs (or having a deep understanding of physics), is low for most humans. Much lower than rote memorization of some simple mathematical and algebraic rules. Consider, especially, the level of education that most will attain, and that the amount of exposure to abstract math and physics in that time is very small. Teaching such
“The first magical step you can do after a flood,” he said, “is get a pump and try to redirect water.”
-- Richard James, founding priest of a Toronto-based Wiccan church, quoted in a thegridto article
When reading this paper and the background, I have a recurring intuition that the best approach to this problem is a distributed, probabilistic one. I can't seem to make this more coherent on my own, so I'm posting thoughts in the hope that discussion will make it clearer:
I.e., have a group of related agents, with various levels of trust in each other's judgement, each individually assess how likely a descendant will be to take actions which only progress towards a given set of goals.
While each individual agent may only be able to assess a subset of an individual desc...
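To make the intuition slightly more concrete, here is a toy Python sketch (the report/trust structure is my own invention, not anything from the paper) of combining the agents' individual estimates, weighted by how much the group trusts each agent's judgement:

```python
def combined_assessment(reports, trust):
    """Toy trust-weighted aggregation of agents' estimates.

    reports: {agent: p} where p is that agent's estimate of the
             probability that a descendant only takes actions which
             progress towards the given goal set.
    trust:   {agent: w} non-negative weight the group places on that
             agent's judgement.
    Returns the trust-weighted average estimate.
    """
    total = sum(trust[agent] for agent in reports)
    if total == 0:
        raise ValueError("no trusted reports to combine")
    return sum(trust[agent] * p for agent, p in reports.items()) / total
```

So if agent "a" (trusted twice as much as "b") estimates 0.9 and "b" estimates 0.7, `combined_assessment({"a": 0.9, "b": 0.7}, {"a": 2.0, "b": 1.0})` gives roughly 0.83. A real version would presumably need each agent to assess only the subset it can observe, and the trust weights themselves to be updated over time.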
The story clearly states Harry's explicit interest in not attending school, so he wouldn't have tried anything to change his sleep pattern for that purpose, and I doubt that by the age of 10 he'd found any other reason important enough to motivate therapy to change his sleep pattern.
I also doubt his parents' preferences matter, here, and even if they did prefer he change his habits, I doubt they'd press him into therapy without his explicit, cooperative, interest.
I found the quote amusing specifically because of this ambiguity (modulo your first point - the question of values seems tangential to me).
I found the mix of optimism (i.e., the assumptions that no extinction-type events will occur, and that there will be a continuous descendant-type relationship between generations far into our future, etc.) and pessimism (i.e., the assumption that, on a large enough time scale, most architectural components traceable to now-humans will become obsolete) poignant.
Bokonon: One day the enhanced humans of the future will dig through their code, until they come to the core of their own minds. And there they will find a mass of what appears to be the most poorly written mess of spaghetti code ever devised, its flaws patched over by a massive series of hacks.
Koheleth: And then they will attempt to rewrite that code, destroying the last of their humanity in the process.
Our brains are closest to being sane and functioning rationally at a conscious level near our birth (or maybe earlier). Early childhood behaviour is clear evidence of this.
"Neurons" and "brains" are the damaged/mutated results of a mutated "space virus", or equivalent. All of our individual actions and collective behaviours are biased, in ways that are externally obvious but not visible to us, optimizing for:
terraforming the planet in expectation of invasion (e.g., global warming, high CO2 pollution)
spreading the virus into space, with a built-in bias for spreading away from our origin (Voyager's direction)
I've since learned that some people use the word "rationality" to mean "skills we use to win arguments and convince people to take our point of view to be true", as opposed to the definition which I've come to expect on this site (currently, on an overly poetic whim, I'd summarize it as "a meta-recursively applied, optimized, truth-finding and decision making process" - actual definition here).
I'm trying to understand where the bad is in this idea.
Are you maybe opposed to details of the implementation? Would you think the idea is bad if the option to filter out results is opt-in and explicitly stated? For example, offer users an "only use votes from teenagers when displaying data on the site" option, which they can enable or disable at will.
Intelligent thought and free will, as experienced and exhibited by individual humans is an illusion. Social signalling and other effects have allowed for a handful of meta-intelligences to arise, where individuals are functioning as computational units within the larger coherent whole.
The AI itself is the result of an attempt by the meta-intelligences to reproduce, as well as to build themselves a more reliable substrate to live in; it has already successfully found methods to destroy / disrupt the other intelligences and has high confidence that it...