The rabbit hole can go deep, and it probably isn't worth getting too fancy for single-digit host counts. Fleets of thousands of spot instances benefit from the effort. Like everything, dev-time vs runtime-complexity vs cost-efficiency is a tough balance.
When I was doing this often, I had different modes for "dev mode, which includes human-timeframe messing about" and "prod mode", which was only for monitored workloads. In both cases, automating the "provision, spin up, and initial setup", as well as the "auto-shutdown if not measurably used for N minutes (60 was my default)" with a one-command script made my life much easier.
I've seen scripts (though I don't have links handy) that do this based on no active logins and no CPU load for X minutes as well. On the other tack, I've seen a lot of one-off processes that trigger a shutdown when they complete (and write their output/logs to S3 or somewhere durable). Often a Lambda is used for the control plane - it responds to signals and runs outside the actual host.
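For illustration only, here's a rough sketch of that idle-check-and-shutdown loop in Python. The thresholds, the `who`-based login check, and the 60-minute default are placeholders, not a tested recipe - adjust against your actual workload before trusting it with real instances:

```python
#!/usr/bin/env python3
"""Idle-shutdown sketch: power off if the host has had no logins and
negligible CPU load for N consecutive minutes. All thresholds are
illustrative placeholders."""

import os
import subprocess
import time

IDLE_MINUTES = 60        # my old default; tune per workload
LOAD_THRESHOLD = 0.05    # 1-minute load average considered "idle"
CHECK_INTERVAL = 60      # seconds between checks


def host_is_idle() -> bool:
    """Idle means: no interactive sessions and negligible CPU load."""
    logins = subprocess.run(["who"], capture_output=True, text=True).stdout.strip()
    load_1min = os.getloadavg()[0]
    return not logins and load_1min < LOAD_THRESHOLD


def main() -> None:
    idle_since = None
    while True:
        if host_is_idle():
            idle_since = idle_since or time.time()
            if time.time() - idle_since > IDLE_MINUTES * 60:
                # On EC2, a plain OS shutdown stops or terminates the instance,
                # depending on the instance's configured shutdown behavior.
                subprocess.run(["sudo", "shutdown", "-h", "now"])
                return
        else:
            idle_since = None
        time.sleep(CHECK_INTERVAL)


if __name__ == "__main__":
    main()
```

The "one-off process triggers its own shutdown" variant is even simpler: the job's wrapper writes output to S3, then calls shutdown directly, with the Lambda control plane only there to catch stragglers.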
There's a big presumption there. If he was a p-zombie to start with, he still has non-experience after the training. We still have no experience-o-meter, or even a unit of measure that would apply.
For children without major brain abnormalities or injuries, who CAN talk about it, it's a pretty good assumption that they have experiences. As you get more distant from your own structure, your assumptions about qualia should get more tentative.
Do you think that as each psychological continuation plays out, they'll remain identical to one another?
They'll differ from one another, and differ from their past singleton self. Much like future-you differs from present-you. Which one to privilege for what purposes, though, is completely arbitrary and not based on anything.
Which psychological stream one-at-the-moment-of-brain-scan ends up in is a matter of chance.
I think this is a crux. It's not a matter of chance, it's all of them. They all have qualia. They all hav...
Reminder to all: thought experiments are limited in what you can learn. Situations which are significantly out-of-domain for our evolved and trained experiences simply cannot be analyzed by our intuitions. You can sometimes test a model to see if it remains useful in novel/fictional situations, but you really can't trust the results.
For real decisions and behaviors, details matter. And thought experiments CANNOT provide the details, or they'd be just situations, not hypotheticals.
Once we identify an optimal SOA
This is quite difficult, even without switching costs or fear of change. The definition of optimal is elusive, and most SOA have so many measurable and unmeasurable, correlated and uncorrelated factors that direct comparison isn't possible.
Add to this the common moral beliefs (incorrect IMO, but still very common) of "inaction is less blameworthy than wrong action, and only slightly blameworthy compared to correct action", and there needs to be a pretty significant expected gain from switching in order to undert...
Wow, a lot of assumptions without much justification
Let's assume computationalism and the feasibility of brain scanning and mind upload. And let's suppose one is a person with a large compute budget.
Already well into fiction.
But one is not both. This means that when one is creating a copy one can treat it as a gamble: there's a 50% chance they find themselves in each of the continuations.
There's a 100% chance that each of the continuations will find themselves to be ... themselves. Do you have a mechanism to designate one as the "t...
This is a topic where macro and micro have a pretty big gap.
If you're asking about measured large-group unemployment, you probably don't get very good causality from any given change, and there's no useful, simple model of the motivations and frictions of potential-employers and potential-employees. It's a very complicated matching market.
If you're asking about some specific reasons that an individual may be out of work or become out of work, you'll get a lot better result and some concrete reasons. But everyone you talk to will say "t...
I don't understand the question. What intuition for not smoking are you talking about? CDT prefers smoking. Are you asking why EDT abstains from smoking? I'm not the best defender, as I don't really think EDT is workable, but as I understand it, EDT updates its world state based on actions, meaning that it prefers the world where you don't have the lesion and don't WANT to smoke.
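To make that concrete, here's a toy calculation (all probabilities and utilities invented purely for illustration) showing why conditioning on the action (EDT) abstains while intervening on the action (CDT) smokes:

```python
# Toy smoking-lesion numbers, made up for illustration.
# The lesion causes both a desire to smoke and cancer; smoking itself is
# harmless and mildly enjoyable. Utilities: cancer = -100, a smoke = +1.

P_LESION = 0.1
P_SMOKE_GIVEN_LESION = 0.9      # the lesion makes you want to smoke
P_SMOKE_GIVEN_NO_LESION = 0.1
U_CANCER = -100.0
U_SMOKE = 1.0


def edt_value(action: str) -> float:
    """EDT: treat the action as evidence and condition on it via Bayes."""
    p_smoke = (P_SMOKE_GIVEN_LESION * P_LESION +
               P_SMOKE_GIVEN_NO_LESION * (1 - P_LESION))
    if action == "smoke":
        p_lesion = P_SMOKE_GIVEN_LESION * P_LESION / p_smoke
        bonus = U_SMOKE
    else:
        p_lesion = (1 - P_SMOKE_GIVEN_LESION) * P_LESION / (1 - p_smoke)
        bonus = 0.0
    return p_lesion * U_CANCER + bonus


def cdt_value(action: str) -> float:
    """CDT: intervene on the action; the lesion probability doesn't change."""
    return P_LESION * U_CANCER + (U_SMOKE if action == "smoke" else 0.0)


for name, value in [("EDT", edt_value), ("CDT", cdt_value)]:
    smoke, abstain = value("smoke"), value("abstain")
    choice = "smoke" if smoke > abstain else "abstain"
    print(f"{name}: smoke={smoke:.2f}, abstain={abstain:.2f} -> {choice}")
```

With these numbers EDT scores smoking at about -49 vs -1.2 for abstaining (smoking is strong evidence you have the lesion), while CDT scores smoking at -9 vs -10 and smokes.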
The first one is only a metaphor - it's not possible now, and we don't know if it ever will be (because we don't know how to scan a being in enough detail to recreate it well enough).
The second one is WAY TOO limited. If you put a radio anywhere near your head, or really any other-controlled media, you can be programmed. By trivial extension, you have been programmed. Get used to it.
Economists and other social theorists often take the concept of utility for granted.
Armchair economists and EAs even more so. Take for granted, and fail to document WHICH version of the utility concept they're using.
For me, utility is a convenient placeholder for the underlying model behind our ordinal preferences as expressed through action (I did X, meaning I prefer the expected sum of value of the outcomes likely from X). Utility is the "value" that is preferred. Note that it's kind of a circular definition - it's the thing that driv...
I think it's a different level of abstraction. Decision theory works just fine if you separate the action of predicting a future action from the action itself. Whether your prior-prediction influences your action when the time comes will vary by decision theory.
I think, for most problems we use to compare decision theories, it doesn't matter much whether considering, planning, preparing, replanning, and acting are correlated time-separated decisions or whether it all collapses into a sum of "how to act at point-in-time". I haven't seen much detailed exploration of decision theory X embedded agents or capacity/memory-limited ongoing decisions, but it would be interesting and important, I think.
Decision theory is fine, as long as we don't think it applies to most things we colloquially call "decisions". In terms of instantaneous discrete choose-an-action-and-complete-it-before-the-next-processing-cycle, it's quite a reasonable topic of study.
But if you only have a belief that you will do something in the future, you still have to decide, when the time comes, whether to carry out the action or not. So your previous belief doesn't seem to be an actual decision, but rather just a belief about a future decision -- about which action you will pick in the future
Correct. There are different levels of abstraction of predictions and intent, and observation/memory of past actions which all get labeled "decision". I decide to attend a play in London next month. This is an intent ...
When the decision is made, consideration ends. The action must be wholehearted in spite of uncertainty.
This seems like hyperbolic exhortation rather than simple description. This is not how many decisions feel to me - many decisions are exactly a belief (complete with Bayesian uncertainty). A belief in future action, to be sure, but it's distinct in time from the action itself.
I do agree with this as advice, in fact - many decisions one faces should be treated as a commitment rather than an ongoing reconsideration. It's not actuall...
I only see one downvoted post, and a bunch of comments and a few posts with very little voting at all. That seems pretty normal to me, and the advice of "lurk for quite a bit, and comment occasionally" is usually good for any new user on any site.
A lot depends on what you mean by "required", and what specific classes or functions you're talking about. The core skill of committing a position to writing and supporting it with logic is never going away. It will shift from "do this with minimal spelling and grammar assistance" to "ensure that the prompt-review-revise loop generates output you can stand behind".
This is already happening in many businesses and practical (grant-writing) aspects of academia. It'll take a while for undergrad and MS programs to admit that their academic theories of what they're teaching need revision.
This seems generally applicable. Any significant money transaction includes expectations, both legible and il-, which some participants will classify as bullshit. Those holding the expectations may believe it to be legitimately useful, or semi-legitimately necessary due to lack of perfect alignment.
If you want to specify a bit, we can probably guess at why it's being required.
[Note: I apologize for being somewhat combative - I tend to focus on the interesting parts, which is those parts which don't add up in my mind. I thank you for exploring interesting ideas, and I have enjoyed the discussion! ]
I was only saying that I don't see anything proving it won't work
Sure, proving a negative is always difficult.
I agree that this missile problem shouldn't happen in the first place. But it did happen in the past
Can you provide details on which incident you're talking about, and why the money-bond is the problem ...
I've been in networking long enough to know that "can be less than", "often faster", and "can run" are all verbal ways of saying "I haven't thought about reliability or measured the behavior of any real systems beyond whole percentiles."
But really, I'm having trouble understanding why a civilian plane is flying in a war zone, and why current IFF systems can't handle the identification problem of a permitted entry.
Kind of unfortunate that a comms or systems latency destroys civilian airliners. But nice to live in a world where all flyers have $10B per missile/aircraft pair lying around, and everyone trusts each other enough to hand it over (and hand it back later).
Sure. There's lots of things that aren't yet possible to collect evidence about. No given conception of God or afterlife options has been disproven. However, there are lots of competing, incompatible theories, none of which have any evidence for or against. Assigning any significant probability (more than a percent, say) to any of them is unjustified. Even if you want to say 50/50 that some form of deism will be revealed after death, there are literally thousands of incompatible conceptions of how that works. And near-infinint...
I didn't downvote this, because it seems good-faith and isn't harmful. But I really dislike this "friendly" style of writing, and it doesn't fit well on lesswrong. It's very hard to find things that are concrete enough to understand whether I disagree or not. Rhetorical questions (especially that you don't answer) really detract from understanding your POV. Some specifics:
But most of us patch together a little of this and a little of that and try to muddle through with a philosophy that’s something of a crazy quilt.
Citation needed. ...
This would be a lot stronger if it acknowledged how few lies have the convenient fatal flaw of a chocolate allergy. Many do, and it's a good overall process, but it's nowhere near as robust as implied.
Note that I disagree that it's not applicable when you don't already suspect deception - it's useful to look for details and inconsistency when dealing with any fallible source of information. It doesn't matter whether it's an intentional lie, a confused reporter, or an inapplicable model: truth is the only thing that's consistent with itself and with observations.
This is a fundamental truth for all commodities and valuable things. They're fungible, but not positionally identical, and not linearly aggregable. This is why we prefer to talk about "utility" over "quantity" in game theory discussions.
Market cap is meaningful in some sense - the price in a liquid market isn't just randomly the last price used, it's the equilibrium price of a marginal share. That's the price that current holders don't want to sell for less, and people with money don't want to buy for more. That equilibrium is real i...
"something like that" isn't open enough. "or something else entirely" seems more likely than "something like that". Many more than 2 groups (family-sized coalitions) is an obvious possibility, but there are plenty of other strategies used by primitive malthusian societies - infanticide being a big one, and ritual killings being another. According to Wikipedia, Jared Diamond suggests cannibalism for Rapa Nui.
Looking at Wikipedia (which I should have done earlier), there's very little evidence for what specific things changed during the collapse.
In any case, it's tenuous enough that one shouldn't take any lessons or update one's models based on this.
In the medium-term reduced-scarcity future, the answer is: lock them into a VR/experience-machine pod.
edit: sorry, misspoke. In this future, humans are ALREADY mostly in these pods. Criminals or individuals who can't behave in a shared virtual space simply get firewalled into their own sandbox by the AI. Or those behaviors are shadowbanned - the perpetrator experiences them, the victim doesn't.
I nominate NYC, and I assert that LA is an inferior choice for this. Source: John Carpenter/Kurt Russel movies.
In a sufficiently wealthy society we would never kill anyone for their crimes.
In a sufficiently wealthy society, there're far fewer forgivable/tolerable crimes. I'm opposed to the death penalty in current US situation, mostly for knowledge and incentive reasons (too easy to abuse, too hard to be sure). All of the arguments shift in weight by a lot if the situation changes. If the equilibrium shifts significantly so that there are fewer economic reasons for crimes, and fewer economic reasons not to investigate very deeply, and fewer economic reasons not to have good advice and oversight, there may well be a place for it.
This was my thinking as well. On further reflection, and based on OP's response, I realize there IS a balance that's unclear. The list contains some false-positives. This is very likely just by the nature of things - some are trolls, some are pure fantasy, some will have moved on, and only a very few are real threats.
So the harm of making a public, anonymous, accusation and warning is definitely nonzero - it escalates tension for a situation that has passed. The harm of failing to do so in the real cases is also nonzero, but ...
Can you explore a bit more about why you can't ethically dump it on the internet? From my understanding, this is information you have not broken any laws to obtain, and have made no promises as to confidentiality.
If not true publication, what keeps you from sending it to prosecutors and police? They may or may not act, but that's true no matter who you give it to (and true NOW of you).
With regards to dumping the info on the internet, the files by definition contain extensive personally identifiable information about people - names, addresses, photos, social media links - often alongside allegations of alleged crimes such as infidelity, child abuse, and financial fraud.
I can rarely substantiate these, and know for a fact based on the investigated cases that such allegations are often completely fabricated in order to frame the user's request for violence as more morally justified. I don't think it's fair to publish such informatio...
People who have a lot of political power or own a lot of capital, are unlikely to be adversely affected if (say) 90% of human labor becomes obsolete and replaced by AI.
That's certainly the hope of the powerful. It's unclear whether there is a tipping point where the 90% decide not to respect the on-paper ownership of capital.
so long as property rights are enforced, and humans retain a monopoly on decisionmaking/political power, such people are not-unlikely to benefit from the economic boost that such automation would bring.
Don't use passive voice for...
Specifically, "So, the islanders split into two groups and went to war." is fiction - there's no evidence, and it doesn't seem particularly likely.
Well, there are possible outcomes that make resources per human literally infinite. They're not great either, by my preferences.
In less extreme cases, a lot depends on your definition of "poverty", and the weight you put on relative poverty vs absolute poverty. Already in most parts of the world the literal starvation rate is extremely low. It can get lower, and probably will in a "useful AI" or "aligned AGI" world. A lot of capabilities and technologies have already moved from "wealthy only" to "almost everyone, including technically impoverished people", and this can easily continue.
What does "unsafe" mean for this prediction/wager? I don't expect the murder rate to go up very much, nor life expectancy to reverse it's upward trend. "Erosion of rights" is pretty general and needs more specifics to have any idea what changes are relevant.
I think things will get a little tougher and less pleasant for some minorities, both cultural and skin-color. There will be a return of some amount of discrimination and persecution. Probably not as harsh as it was in the 70s-90s, certainly not as bad as earlier than that, but wo...
This seems like a story that's unsupported by any evidence, and no better than fiction.
They could have fought over resources in a scramble of each against all, but anarchy isn't stable.
This seems most likely, and "stable" isn't a filter in this situation - 1/3 of the population will die, nothing is stable. It wouldn't really be "each against all", but "small (usually family) coalitions against some of the other small-ish coalitions". The optimal size of coalition will be dependend on a lot of factors, including ease of defection and strength of non-economic bonds between members.
- If you could greatly help her at small cost, you should do so.
This needs to be quantified to determine whether or not I agree. In most cases I imagine (and a few I've experienced), I would (and did) kill the animal to end its suffering and to prevent harm to others if the animal might be subject to death throes or other violent reactions to its fear and pain.
In other cases I imagine, I'd walk away or drive on, without a second thought. Neither the benefit nor the costs are simple, linear, measurable things.
- Her suffering is bad.
I don't have a...
One challenge I'd have for you / others who feel similar to you, is to try to get more concrete on measures like this, and then to show that they have been declining.
I've given some thought to this over the last few decades, and have yet to find ANY satisfying measures, let alone a good set. I reject the trap of "if it's not objective and quantitative, it's not important" - that's one of the underlying attitudes causing the decline.
I definitely acknowledge that my memory of the last quarter of the previous century is fuzzy and selective, and beyond t...
Do you think that the world is getting worse each year?
Good clarification question! My answer probably isn’t satisfying, though. “It’s complicated” (meaning: multidimensional and not ordinally comparable).
On a lot of metrics, it’s better by far, for most of the distribution. On harder-to-operationally-define dimensions (sense of hope and agency for the 25th through 75th percentile of culturally normal people), it’s quite a bit worse.
would consider the end of any story a loss.
Unfortunately, now you have to solve the fractal-story problem. Is the universe one story, or does each galaxy have its own? Each planet? Continent? Human? Subpersonal individual goals/plotlines? Each cell?
I feel like you're talking in highly absolutist terms here.
You're correct, and I apologize for that. There are plenty of potential good outcomes where individual autonomy reverses the trend of the last ~70 years. Or where the systemic takeover plateaus at the current level, and the main change is more wealth and options for individuals. Or where AI does in fact enable many/most individual humans to make meaningful decisions and contributions where they don't today.
I mostly want to point out that many disempowerment/dystopia failure scenarios don't require a step-change from AI, just an acceleration of current trends.
Presumably, if The Observer has a truly wide/long view, then destruction of the Solar System, or certainly loss of all CHON-based lifeforms on earth, wouldn't be a problem - there have got to be many other macroscopic lifeforms out there, even if The Great Filter turns out to be "nothing survives the Information Age, so nobody ever detects another lifeform".
Also, you're describing an Actor, not just an Observer. If it has the ability to intervene, even if it rarely chooses to do so, that's its salient feature.
This seems like it would require either very dumb humans, or a straightforward alignment mistake risk failure, to mess up.
I think "very dumb humans" is what we have to work with. Remember, it only requires a small number of imperfectly aligned humans to ignore the warnings (or, indeed, to welcome the world the warnings describe).
a lot of people have strong low-level assumptions here that a world with lots of strong AIs must go haywire.
For myself, it seems clear that the world has ALREADY gone haywire. Individual humans have lost control of most of our lives - we interact with policies, faceless (or friendly but volition-free) workers following procedure, automated systems, etc. These systems are human-implemented, but in most cases too complex to be called human-controlled. Moloch won.
Big corporations are a form of inhuman intelligence, and their software and op...
In non-trivial settings, (some but not all) structural differences between programs lead to differences in input/output behaviour, even if there is a large domain for which they are behaviourally equivalent.
I think this is a crux (of why we're talking past each other; I don't actually know if we have a substantive disagreement). The post was about detecting "smaller than a lookup table would support" implementations, which implied that implementations that are input/output-identical-as-tested were actually tested over the broadest possible domain. I full...
might be true if you just care about input and output behaviour
Yes, that is the assumption for "some computable function" or "black box which takes in strings and spits out other strings."
I'm not sure your example (of an AI with a much wider range of possible input/output pairs than the lookup table) fits this underlying distinction. If the input/output sets are truly identical (or even identical for all tests you can think of), then we're back to the "why do we care" question.
I don't exactly disagree with the methodology, but I don't find the "why do we care" very compelling. For most practical purposes, "calculating a function" is only and exactly a very good compression algorithm for the lookup table.
Unless we care about side-effects like heat dissipation or imputed qualia, but those seem like you need to distinguish among different algorithms more than just "lookup table or no".
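A toy sketch of what I mean (function and domain size chosen arbitrarily): within the tested domain, the computed version and the stored table are input/output-indistinguishable, and the "compression" only shows up in storage size or once you step outside that domain:

```python
# Two "black boxes" with identical input/output behaviour over a bounded
# domain: one stores every answer, one computes it. From the outside (on
# this domain) nothing distinguishes them except size.

def double_computed(s: str) -> str:
    """Compute the answer: a few bytes of code."""
    return str(2 * int(s))

# The equivalent lookup table for inputs "0".."9999": ~10,000 stored entries.
DOUBLE_TABLE = {str(n): str(2 * n) for n in range(10_000)}

def double_lookup(s: str) -> str:
    return DOUBLE_TABLE[s]

# Behaviourally equivalent on the shared domain...
assert all(double_computed(str(n)) == double_lookup(str(n)) for n in range(10_000))

# ...but only the computed version keeps working outside it.
print(double_computed("123456"))        # works: '246912'
try:
    print(double_lookup("123456"))
except KeyError:
    print("lookup table has no entry outside its tested domain")
```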
(I’m using time-sensitive words, even though we are stepping out of the spacetime of our universe for parts of this discussion.)
Maybe use different words, so as not to imply that there is a temporal, causal, or spatial relation.
Many people realize that, conceptually “below” or “before” any “base universe,” there is
I don't realize or accept that. Anything that would be in those categories is inaccessible to our universe, and not knowable or reachable from within. They are literally imaginary.
Interesting, but I worry that the word "Karma" as a label for a legibly-usable resource token makes it VERY different from common karma systems on social websites, and that the bid/distribute system is even further from common usage.
For the system described, "karma" is a very misleading label. Why not just use "dollars" or "resource tokens"?