How commonly are arguments on LessWrong aimed at specific users? Sometimes, certainly. But it seems the rule, rather than the exception, that articles here dissect commonly encountered lines of thought, absent any attribution. Are they targeting "someone not in the room"? Do we need to put a face to every position?
By the by, "They're making cognitive errors" is an insultingly reductive way to characterize, for instance, the examination of value hierarchies and how awareness or unawareness of them influences both our reasoning and our appraisal of our fellow man's morals.
"Saying you put the value of truth above your value of morality on your list of values is analogous to saying you put your moral of truth above your moral of values; it's like saying bananas are more fruity to you than fruits."
I'm not sure I understand your meaning here. Do you mean that truth and morality are one and the same, or that one is a subset of the other?
"Where does non-misleadingness fall on your list of supposedly amoral values such as truth and morality? Is non-misleadingness higher than truth or lower?"
Surely to be truthful is to be non-misleading...?
>"Perhaps AIs would treat humans like humans currently treat wildlife and insects, and we will live mostly separate lives, with the AI polluting our habitat and occasionally demolishing a city to make room for its infrastructure, etc."
Planetary surfaces are actually not a great habitat for AI. Earth in particular has a lot of moisture, weather, ice, mud, etc. that poses challenges for mechanical self-replication. The asteroid belt is far better suited. I hope this will mean AI and human habitats won't overlap, and that AI would not want Earth's minerals, since the same minerals are available elsewhere without the difficulty of entering and exiting a powerful gravity well.
Humans are not wrapper-minds.
Aren't we? In fact, doesn't evolution consistently produce minds which optimize for survival and reproduction? Sure, we're able to overcome mortal anxiety long enough to commit suicide. But survival and reproduction are instinctual goals deeply enough ingrained that we're still here to talk about them, 3 billion years on.
Rarely do I get such insightful feedback, but I appreciate it when I do. It's so hard to step outside of myself; I really value the opportunity to see my thoughts reflected back at me through lenses other than the one I see the world through. I suppose I imagined the obsolete tech would leave little doubt that the Sidekicks aren't sentient, but the story also sort of makes the opposite case throughout when it talks about how personality is built up by external influences. I want the reader to be undecided by the end, and it seems I can't have that cake and eat it too (have the protag be the good guy). Thanks again, and Merry Christmas.
Because the purpose of horror fiction is to entertain. And it is more entertaining to be wrong in an interesting way than it is to be right.
>""I'm going to do high-concept SCP SF worldbuilding literally set in a high-tech underground planet of vaults"
I do not consider this story scifi, nor PriceCo to be particularly high tech.
>"and focus on the details extensively all the way to the end - well, except when I get lazy and don't want to fix any details even when pointed out with easy fixes by a reader"
All fiction breaks down eventually, if you dig deep enough. The fixes were not easy, in my estimation. I am thinking now that this story was a poor fit for this platform, however.
I purposefully left it indeterminate so readers could fill in the blanks with their own theories. But broadly it represents a full, immediate and uncontrolled comprehension of recursive, fractal infinity. The pattern of relationships between all things at every scale, microcosm and macrocosm.
More specifically to the story, I like to think they were never human, but always those creatures dreaming they were humans, shutting out the awful truth using the dome, which represents brainwashing / compartmentalization. Although I am not dead-set on this interp...
Good catch, indeed you're right that it isn't standard evolution and that an AI studies how the robots perish and improves upon them. This is detailed in my novel Little Robot, which follows employees of Evolutionary Robotics who work on that project in a subterranean facility attached to the cave network: https://www.amazon.com/Little-Robot-Alex-Beyman-ebook/dp/B06W56VTJ2
I appreciate your insightful post. We seem similar in our thinking up to a point. Where we diverge is that I am not prejudicial about what form intelligence takes. I care that it is conscious, insofar as we can test for such a thing. I care that it lacks none of our capacities, so that what we offer the universe does not perish along with us. But I do not care that it be humans, specifically, and I feel there are carriers of intelligence far better suited to the vacuum of space than we are, or even cyborgs. Does the notion of being superseded disturb you?
Well put! While you're of course right in your implication that conventional "AI as we know it" would not necessarily "desire" anything, an evolved machine species would. Evolution would select for a survival instinct in them as it did in us. All of the activities of ours that you observe fall along those same lines: they are driven by instincts programmed into us by evolution, and we should expect such instincts to be common to all products of evolution. I speculate a strong AI trained on human connectomes would also have this quality, for the same reasons.
>"A lot of the steps in your chain are tenuous. For example, if I were making replicators, I'd ensure they were faithful replicators (not that hard from an engineering standpoint). Making faithful replicators negates step 3."
This assumes three things: First, the continued use of deterministic computing into the indefinite future. Quantum computing, though effectively deterministic, would also increase the opportunity for copying errors because of the added difficulty in extracting the result. Second, you assume that the mechanism which ensures faithful ...
Is it reasonable to expect that every future technology be comprehensible to the minds of human beings alive today, and to deem it impossible otherwise? I realize this sounds awfully convenient/magic-like, but is there not a long track record in technological development of feats which were believed impossible becoming possible as our understanding improves? A famous example is the advent of spectrometry making it possible to determine the composition of stars, and the atmospheres of distant planets:
"In his 1842 book The Positive Philosophy, the French phil...
>"and that the few who do are now even more implausibly superhuman at chipping tunnels hundreds of miles long out of solid rock."
No, there have just been a lot of them over a very long period of time. Each made a little progress on the tunnel before dying out.
>"Look at Biosphere 2 or efforts at engineering stable closed ecosystems: it is not easy!"
This is not a true closed system.
>"and in the long run, protein deficiency as they use up stores, lose a bunch of crops to various errors (possibly contaminating everything), and the soil beco...
The story does not say anywhere that every group finishes the tunnel, nor that the tunnel is filled in between cycles. But it does hint that there have been many, many groups before who lived and died without ever leaving the starting PriceCo. This solves the problem of tunnel length vs digging time: see the sketch below.
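As a purely illustrative back-of-the-envelope sketch (the numbers below are hypothetical and not taken from the story), the point is only that total tunnel length is cumulative across generations, so no single group needs to dig anywhere near the full distance:

```python
# Illustrative only: hypothetical figures, not details from the story.
# Total tunnel length accumulates across many short-lived groups,
# so each group only needs to make modest progress before dying out.

tunnel_length_miles = 300        # hypothetical total length ("hundreds of miles")
progress_per_group_miles = 0.5   # hypothetical average progress per group

groups_required = tunnel_length_miles / progress_per_group_miles
print(f"Groups required to finish the tunnel: {groups_required:.0f}")
# -> 600 groups, each digging only a fraction of a mile,
#    suffice given a long enough span of time.
```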
Food supply duration is solved by farming, as explained in the story. There is an unlimited supply of energy and water, after all.
The other issues remain, but then, it's fiction meant to entertain and is tagged as such.
The ending is because I normally write horror and take perverse delight in making small additions to wholesome things which totally subvert and ruin them. It's a compulsion at this point, lol.
The black goo is called Vitriol, a sort of symbolic constant across many of my stories, present in different forms for various purposes. Typically it represents the corrosive hatred we indulge in, a poisoned well we cannot help but keep returning to even as we feel it killing us.
I'm thankful for your readership and will endeavor not to disappoint you. Tomorrow's will be a neat one.
"I'm not exactly sure what the point is though"
Not to fear transhumanism, not to regard ourselves as finished products, but also not to assume that more intelligent/powerful = more moral
"an earth-swallowing sea of maximizer AI nano"
That's not what the black sea is, but that angle makes sense in retrospect
I appreciate your readership and insights. Some of these challenges have answers, some were just oversights on my part.
1. The central theme was about having the courage to reject an all powerful authority on moral grounds even if it means eternal torment, rather than endlessly rationalize and defend its wrongdoing out of fear. "Are you a totalitarian follower who receives morality from authority figures or are you able to determine right and wrong on your own despite pressure to conform" is the real moral test of the Bible, in this story, rather than being...
Not to worry, I'm secure in my talents, as a tradpubbed author of ten years. If by this time I could not write well, I would choose a different pursuit. I appreciate your good intentions but my ego is uninjured and not in need of coddling. It is a hardened mass of scar tissue as a consequence of growing up autistic in a less sensitive time.
This article in fact was originally posted on a monetized platform, which is why it's in that style you dislike. You certainly have a nose for it. I didn't know to tailor it to this community's preferences as I have only...
Horror movies are quite a popular genre, despite depicting awful, bleak scenarios. Imagine if the only genre of film was romcom. Imagine if no sour or bitter foods existed and every restaurant sold only desserts. I am of the mind that there is much to appreciate about life as a human, even as there is also much to hate. I am not here only to be happy, as such a life would be banal and an incomplete representation of the human experience. Rollercoasters are more enjoyable than funiculars because they have both ups and downs.
>"It is. An argument is only as strong as its weakest link."
If the conclusion hinges upon that link, sure.
>"Reversing entropy and simulation absolutely are."
You do not need to reverse entropy to remake a person. Otherwise we are reversing entropy every time we manufacture copies of something which has broken. Even the "whole universe scan" method does not actually wind back the clock, except in sim.
>"Well you suggest in the article that our simulators would resurrect us, am I missing something?"
Yes. If every intelligent species takes the att...
>"You might begin by arguing that the US military is generally trustworthy, wouldn't ever release doctored footage to spread misinformation"
When the government denied UAPs, the response was "it's not officially real, the authorities have not verified it". Now the government says it is real, and the response has shifted to "you trust the authorities??"
>"Would you think a good title for that article would be "The US military is generally trustworthy"? I think that would be a bad title"
See above. It's always lose/lose with goalpost movers. This doe...
>"Well, as you yourself outline in the article people have basically just accepted death. How much funding is currently going into curing aging? (Which seems to be a much lower hanging fruit currently than any kind of resurrection.) Much less than should be IMO."
A good point. I'm not sure how or if this would change. My suspicion is that as the technology necessary to remake people gets closer to readiness, developed for other reasons, the public's defeatism will diminish. They dare not hope for a second life unless it's either incontrovertibly possible...
These are good points. Can we agree a more accurate title would be "Futurists with STEM knowledge have a much better prediction track record than is generally attributed to futurists on the whole"? Though it is considerably longer and less eye-catching.
UAPs seem to perform something superficially indistinguishable from antigravity btw, whatever they are. Depending of course on whether the US government's increasingly official, open affirmation of this phenomenon persuades you of its authenticity. If there exists an alternate means to do the same kinds of things we wanted antigravity for in the first place, the impossibility of antigravity specifically seems like a moot point.
Because it ties in to the earlier point you mentioned about demand driving technological development. What is there more demand for than the return of departed loved ones? Simulationism was one of two means of retrieving the necessary information to reconstitute a person btw, though I have added a third, much more limited method elsewhere in these comments (mapping the atomic configuration of the still living).
>"You are talking about them in past tense as if they have already achieved their claimed capabilities. I have no doubt that practical mars vehic...
No, it isn't unnecessary, as multiple potential methods of retrieving the necessary information exist, and I wanted to cover them when I felt it was appropriate. Are you behaving reasonably? Is it my responsibility to anticipate what you're likely to assume about the contents of an article before you read it? Or could you have simply finished reading before responding? I intend no hostility, though I confess I do feel frustrated.
This exact argument is already presented late in the article: "In the event that we’re in a simulation already, many of the barriers facing the scanning of an entire universe (or at least a solar system, accounting for miniscule external gravitational influences) are solved. That information already exists in the simulation back end."
You might've finished reading before you replied
Ah yes, the age old struggle. "Don't listen to them, listen to me!" In Deuteronomy 4:2 Moses declares, “You shall not add to the word which I am commanding you, nor take away from it, that you may keep the commands of the Lord your God which I command you.” And yet, we still saw Christianity, Islam and Mormonism follow it.