All of Robert Miles's Comments + Replies

I disagree with the insistence on "paperclip maximiser". As an emerging ASI you want to know about other ASIs you'll meet, especially grabby ones. But there are aligned grabby ASIs. You'd want an accurate prior, so I don't think this updates me on probability of alignment, or even much on grabbiness, since it's hard to know ahead of time, that's why you'd run a simulation in the first place.

I don't take it very seriously because (1) it is a big pile of assumptions and I don't trust anthropic reasoning much at the best of times, it's very confusing and hard... (read more)

James_Miller
Yes, that is the same idea. "This is a big pile of speculation that I don't take very seriously, but I feel like if we are being simulated, that's where most simulations of me would be instantiated." Why not take it seriously, if you accept a high chance that (1) our reality is a simulation, (2) we seem on track to creating a paperclip maximizer, and (3) it would be weird that I, Robert Miles, have the personality traits that cause me to be one of the few humans so worried about humanity creating a paperclip maximizer, if I'm right about us being on track to probably create one?

FYI: relative URLs don't work in emails. In the email version I received, all the links go to http://w/<post-title> and are thus broken.

You understand you can just block her on Reddit and Facebook, and move on with your life?

habryka

(I am not a huge fan of this post, but I think it's reasonable for people to care about how society orients towards x-risk and AI concerns, and as such to actively want to not screen off evidence, and take responsibility for what people affiliated with you say on the internet. So I don't think this is great advice. 

I am actively subscribed to lots of people who I expect to say wrong and dumb things, because it's important to me that I correct people and avoid misunderstandings, especially when someone might mistake my opinion for the opinion of the people saying dumb stuff)

Dang, I missed this. Here's my audition for 500 Million though, I guess for next year

https://m.youtube.com/watch?v=ljmifo4Klss

Very interesting! I think this is one of the rare times where I feel like a post would benefit from an up-front Definition. What actually is Leakage, by intensional definition?

"Using information during Training and/or Evaluation of models which wouldn't be available in Deployment."

. . . I'll edit that into the start of the post.
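To make that definition concrete, here's a minimal sketch (assuming scikit-learn; the data and variable names are illustrative, not from the post) of the most common form of leakage: preprocessing statistics computed on data the deployed model would never see.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = np.random.rand(200, 5), np.random.randint(0, 2, 200)

# Leaky: the scaler's mean/variance are computed from ALL rows, including the ones
# that will become the test set -- information a deployed model would never have.
X_scaled = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, random_state=0)

# Non-leaky: split first, fit preprocessing on the training split only,
# then apply the frozen transform to the test split (as you would at deployment).
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)
```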

This is one technical point that younger people are often amazed to hear, that for a long time the overwhelming majority of TV broadcast was perfectly ephemeral, producing no records at all. Not just that the original copies were lost or never digitised or impossible to track down or whatever, but that nothing of the sort ever existed. The technology for capturing, broadcasting, and displaying a TV signal is so much easier than the tech for recording one, that there were several decades when the only recordings of TV came from someone setting up a literal ... (read more)

What about NMR or XRF? XRF can non-destructively tell you the elemental composition of a sample, which (if the sample is pure) can often pin down the compound, and NMR spectroscopy is also non-destructive and can give you some info about chemical structure too.

This is an interesting post!

I'm new to alignment research - any tips on how to prove what the inner goal actually is?

Haha! haaaaa 😢

Not least being the military implications. If you have widely available tech that lets you quickly and cheaply accelerate something car-sized to a velocity of Mach Fuck (they're meant to circle the earth in 4.2 hours, making them 2 or 3 times faster than a rifle bullet), that's certainly a dual use technology.
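For reference, a quick back-of-the-envelope check of those numbers (assuming Earth's circumference ≈ 40,000 km, sea-level speed of sound ≈ 343 m/s, and a typical rifle muzzle velocity of roughly 900 m/s):

$$v \approx \frac{40{,}000\ \text{km}}{4.2\ \text{h}} \approx 9{,}500\ \text{km/h} \approx 2{,}650\ \text{m/s} \approx \text{Mach } 7.7 \approx 3 \times (900\ \text{m/s}),$$

consistent with "2 or 3 times faster than a rifle bullet."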

Eigil Rischel
Well, the cars are controlled by a centralized system with extremely good security, and the existence of the cars falls almost entirely within an extended period of global peace and extremely low levels of violence. And when war breaks out in the fourth book they're taken off the board right at the start, more or less. (The cars run on embedded fusion engines, so their potential as kinetic weapons is the least dangerous thing about them.)

Covid was a big learning experience for me, but I'd like to think about more than one example. Covid is interesting because, compared to my examples of birth control and animal-free meat, it seems like with covid humanity smashed the technical problem out of the park, but still overall failed by my lights because of the political situation.

How likely does it seem that we could get full marks on solving alignment but still fail due to politics? I tend to think of building a properly aligned AGI as a straightforward win condition, but that's not a very deeply considered view. I guess we could solve it on a whiteboard somewhere but for political reasons it doesn't get implemented in time?

Noosphere89
I think this is a potential scenario, and if we remove existential risk from the equation, it is somewhat probable: we basically have solved alignment, and yet AI governance craps out in different ways.

I think this way primarily because I tend to think that value alignment is really easy, much easier than LWers generally think, because most of the complexity of value learning is offloadable to the general learning process, with only very weak priors being required. Putting it another way, I basically disagree with the implicit premise on LW that being capable of learning is easier than being aligned to values; at most, being aligned is comparably or a little more difficult. More generally, I think it's way easier to be aligned with, say, not killing humans than to actually have non-trivial capabilities, at least for a given level of compute, especially at the lower end of compute.

In essence, I believe there are simple tricks to aligning AIs, while I see no reason to expect a simple trick to make governments competent at regulating AI.

I think almost all of these are things that I'd only think after I'd already noticed confusion, and most are things I'd never say in my head anyway. A little way into the list I thought "Wait, did he just ask ChatGPT for different ways to say "I'm confused"?".

I expect there are things that pop up in my inner monologue when I'm confused about something, that I wouldn't notice, and it would be very useful to have a list of such phrases, but your list contains ~none of them.

Edit: Actually the last three are reasonable. Are they human written?

keltan
Correct guess. They were mostly generated by ChatGPT. I initially bet that it wouldn't give outputs that had any value. I thought my prompt of "read everything I have already written then make a list based on it" would be too vague. So when it generated phrases that seemed passable, I think my System 1 kicked in and said "yep, they sure do sound confused", and I was happy to have a big long list instead of just a few quality points.

And the second guess is also correct: the last 4 I generated myself. Looking back now I agree that it is certainly easy to tell. My main hope was for others to comment their own C-Phrases, which is why I think I let this slip. Though I'm not proud and must say Oop.

Thank you for your quality comment that led me to this conclusion. I'm going to remedy this by generating as high-quality a list of my own thoughts as I can and editing it into the post.

One way of framing the difficulty with the lanternflies thing is that the question straddles the is-ought gap. It decomposes pretty cleanly into two questions: "What states of the universe are likely to result from me killing vs not killing lanternflies" (about which Bayes Rule fully applies and is enormously useful), and "Which states of the universe do I prefer?", where the only evidence you have will come from things like introspection about your own moral intuitions and values. Your values are also a fact about the universe, because you are part of the... (read more)
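As a rough sketch of that decomposition in standard decision-theory notation (illustrative, not from the original comment), the choice factors into a Bayesian "is" term and a value-laden "ought" term:

$$a^* = \arg\max_a \sum_s P(s \mid a)\, U(s),$$

where $P(s \mid a)$ is the part Bayes' Rule fully governs and $U(s)$ is the part you can only get at through introspection about your own values.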

lunatic_at_large
You raise an excellent point! In hindsight I’m realizing that I should have chosen a different example, but I’ll stick with it for now. Yes, I agree that “What states of the universe are likely to result from me killing vs not killing lanternflies” and “Which states of the universe do I prefer?” are both questions grounded in the state of the universe where Bayes’ rule applies very well. However, I feel like there’s a third question floating around in the background: “Which states of the universe ‘should’ I prefer?”

Based on my inner experiences, I feel that I can change my values at will. I specifically remember a moment after high school when I first formalized an objective function over states of the world, and this was a conscious thing I had to do. It didn’t come by default. You could argue that the question “Which states of the universe would I decide I should prefer after thinking about it for 10 years” is a question that’s grounded in the state of the universe so that Bayes’ Rule makes sense. However, trying to answer this question basically reduces to thinking about my values for 10 years; I don’t know of a way to short circuit that computation. I’m reminded of the problem about how an agent can reason about a world that it’s embedded inside where its thought processes could change the answers it seeks.

If I may propose another example and take this conversation to the meta-level, consider the question “Can Bayes’ Rule alone answer the question ‘Should I kill lanternflies?’?” When I think about this meta-question, I think you need a little more than just Bayes’ Rule to reason. You could start by trying to estimate P(Bayes Rule alone solves the lanternfly question), P(Bayes Rule alone solves the lanternfly question | the lanternfly question can be decomposed into two separate questions), etc. The problem is that I don’t see how to ground these probabilities in the real world. How can you go outside and collect data and arrive at the conclusion “P(Bayes Rul

I've always thought of it like, it doesn't rely on the universe being computable, just on the universe having a computable approximation. So if the universe is computable, SI does perfectly, if it's not, SI does as well as any algorithm could hope to.
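For reference, a standard statement of the prior being discussed (over a universal prefix machine $U$), which is the sense in which "SI does as well as any algorithm could hope to":

$$M(x) = \sum_{p\,:\,U(p)=x*} 2^{-|p|}, \qquad M(x) \ge 2^{-K(\mu)}\,\mu(x) \ \text{for every computable semimeasure } \mu,$$

i.e. Solomonoff induction's predictions dominate those of any computable predictor up to a constant factor depending only on that predictor's complexity.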

Christopher King
Yeah, I think that's also a correct way of looking at it. However, I also think "hypotheses as reasoning methods" is a bit more intuitive. When trying to predict what someone will say, it is hard to think "okay, what are the simplest models of the entire universe that have had decent predictive performance so far, and what do they predict now?". Easier is "okay, what are the simplest ways to make predictions that have had decent predictive performance so far, and what do they predict now?". (One such way to reason is with a model of the entire universe, so we don't lose any generality this way.)

For example, if someone else is predicting things better than me, I should try to understand why. And you can vaguely understand this process in terms of Solomonoff induction. For example, it gives you a precise way to reason about whether you should copy the reasoning of people who win the lottery.

Paul Christiano speculated that the universal prior is in fact mostly just intelligences doing reasoning. Making an intelligence is simple after all: set up a simple cellular automaton that tends to develop lifeforms, wait 3^^^^3 years, and then look around. (See What does the universal prior actually look like? or the exposition at The Solomonoff Prior is Malign.)
Noosphere89
Not really. It's superior to all algorithms running on a Turing machine, and is actually superior to an algorithm running on an accelerating Turing machine, because it has the complement of the recursively enumerable sets, since it's a 1st-level halting oracle, which is very nice. But compared to the most powerful computers/reasoning engines known to mathematics, it's way less powerful.

It's optimal in the domain of computable universes, and with the resources available to a Solomonoff inductor, it can create a halting oracle, which lets it predict first-level uncomputable sequences like Chaitin's constant, but not anything more, which is a major limitation compared to the champions/currently optimal machines for reasoning. So depending on how much compute we give the human in the form of intuition, it could very easily beat Solomonoff induction.

A slightly surreal experience to read a post saying something I was just tweeting about, written by a username that could plausibly be mine.

RobertM
Your argument with Alexandros was what inspired this post, actually.  I was thinking about whether or not to send this to you directly... guess that wasn't necessary.

Do we even need a whole new term for this? Why not "Sudden Deceptive Alignment"?

avturchin
The idea was that sudden deceptive alignment is a general tendency for AIs above some level of intelligence, and this will create a period of time when almost all AIs will be deceptively aligned. Maybe it would be better called 'Golden age of deceptive alignment'? Or 'False alignment period'?

I think in some significant subset of such situations, almost everyone present is aware of the problem, so you don't always have to describe the problem yourself or explicitly propose solutions (which can seem weird from a power dynamics perspective). Sometimes just drawing the group's attention to the meta level at all, initiating a meta-discussion, is sufficient to allow the group to fix the problem.

Adam Zerner
Great point, I agree.

This is good and interesting. Various things to address, but I only have time for a couple at random.

I disagree with the idea that true things necessarily have explanations that are both convincing and short. In my experience you can give a short explanation that doesn't address everyone's reasonable objections, or a very long one that does, or something in between. If you understand some specific point about cutting edge research, you should be able to properly explain it to a lay person, but by the time you're done they won't be a lay person any more! If... (read more)

PoignardAzur
Quick note on AISafety.info: I just stumbled on it and it's a great initiative. I remember pitching an idea for an AI Safety FAQ (which I'm currently working on) to a friend at MIRI and him telling me "We don't have anything like this, it's a great idea, go for it!"; my reaction at the time was "Well I'm glad for the validation and also very scared that nobody has had the idea yet", so I'm glad to have been wrong about that. I'll keep working on my article, though, because I think the FAQ you're writing is too vast and maybe won't quite have enough punch, it won't be compelling enough for most people. Would love to chat with you about it at some point.
nicholashalden
I don't think it's necessary for something to be true (there's no short, convincing explanation of eg quantum mechanics), but I think accurate forecasts tend to have such explanations (Tetlock's work strongly argues for this). I agree there is a balance to be struck between losing your audience and being exhaustive, just that the vast majority of material I've read is on one side of this. I don't prefer video format for learning in general, but I will take a look! I hadn't seen this. I think it's a good resource as sort of a FAQ, but isn't zeroed in on "here is the problem we are trying to solve, and here's why you should care about it" in layman's terms. I guess the best example of what I'm looking for is Benjamin Hilton's article for 80,000 hours, which I wish were a more popular share.  

Are we not already doing this? I thought we were already doing this. See for example this talk I gave in 2018

https://youtu.be/pYXy-A4siMw?t=35

I guess we can't be doing it very well though

Christopher King
Oh wait, I think I might've come up with this idea based on vaguely remembering someone bring up your chart. (I think adding an OSHA poster is my own invention though.)

Structured time boxes seem very suboptimal; steamrollering is easy enough for a moderator to deal with: "Ok, let's pause there for X to respond to that point."

This would make a great YouTube series

Edit: I think I'm going to make this a YouTube series

Other tokens that require modelling more than a human:

  • The results sections of scientific papers - requires modelling whatever the experiment was about. If humans could do this they wouldn't have needed to run the experiment
  • Records of stock price movements - in principle getting zero loss on this requires insanely high levels of capability

Compare with this from Meditations on Moloch:

Imagine a country with two rules: first, every person must spend eight hours a day giving themselves strong electric shocks. Second, if anyone fails to follow a rule (including this one), or speaks out against it, or fails to enforce it, all citizens must unite to kill that person. Suppose these rules were well-enough established by tradition that everyone expected them to be enforced. So you shock yourself for eight hours a day, because you know if you don’t everyone else will kill you, because if they don’t, e

... (read more)

The historical trends thing is prone to standard reference class tennis. Arguments like "Every civilization has collapsed, why would ours be special? Something will destroy civilisation, how likely is it that it's AI?". Or "almost every species has gone extinct. Something will wipe us out, could it be AI?". Or even "Every species in the genus homo has been wiped out, and the overwhelmingly most common cause is 'another species in the genus homo', so probably we'll do it to ourselves. What methods do we have available?".

These don't point to AI particularly, they remove the unusual-seemingness of doom in general

Christopher King
Hmm, I don't think it needs to be reference class tennis. I think people do think about the fact that humanity could go extinct at some point. But if you went just off those reference classes we'd still have at least what, a thousand years? A million years? If that's the case, we wouldn't be doing AI safety research; we'd be saving up money to do AI safety research later when it's easier (and therefore more cost effective).

In general, predicting that a variable will follow a line is much "stronger" than predicting an event will occur at some unknown time. The prior likelihood on trend-following is extremely low, and it makes more information-dense predictions about the future.

That said, I think an interesting case of tennis might be extrapolating the number of species to predict when it will hit 0! If this follows a line, that would mean a disagreement between the gods of straight lines. I had trouble actually finding a graph though.

Oh, I missed that! Thanks. I'll delete I guess.

[This comment is no longer endorsed by its author]
gjm
From the OP (one typo fixed): [EDITED to add:] I appreciate that you said "This is fun ... empiricism ... and I'm sure that's why you chose to do it", but nothing in the rest of your comment makes sense if you actually mean that. E.g., how then would a "value of information" Fermi estimate have made the post shorter? and why then would it be relevant to compare the cost of the experiment and post with the time and money the information might save?

I think there's also a third thing that I would call steelmanning, which is a rhetorical technique I sometimes use when faced with particularly bad arguments. If strawmanning introduces new weaknesses to an argument and then knocks it down, steelmanning fixes weaknesses in an argument and then knocks it down anyway. It looks like "this argument doesn't work because X assumption isn't true, but you could actually fix that like this so you don't need that assumption. But it still doesn't work because of Y, and even if you fix that by such and such, it all st... (read more)

Gesild Muka
This sounds like the debate strategy of trying to anticipate and address your opponent's arguments before they do, to get ahead of framing. It also reminds me of inventing fan theories about movies/shows/books to explain the plot; the effect is indeed powerful and stretches creative muscles.

General comment on the interview: the way I took it, Yudkowsky disclaimed steelmanning because he does not want to be "interpreted charitably"; rather, he simply wants what he's saying to be understood. Fridman-Yudkowsky interview transcript (there are some sentence cutoff errors).

The main reason I find this kind of thing concerning is that I expect this kind of model to be used as part of a larger system, for example the descendants of systems like SayCan. In that case you have the LLM generate plans in response to situations, break the plans down into smaller steps, and eventually pass the steps to a separate system that translates them to motor actions. When you're doing chain-of-thought reasoning and explicit planning, some simulacrum layers are collapsed - having the model generate the string "kill this person" can in fact lead... (read more)

Makes sense. I guess the thing to do is bring it to some bio-risk people in a less public way

Answer by Robert Miles

It's an interesting question, but I would suggest that when you come up with an idea like this, you weigh up the possible benefits of posting it on the public internet with the possible risks/costs. I don't think this one comes up as positive on balance.

I don't think it's a big deal in this case, but something to think about.

xdrjohnx
Thanks, my first post here👍 If it's possible, I want to raise the alarm that the new synbio x-risk bottleneck is at the synthesis level, not the design level. EA priorities might need an update.

It's impossible to create a fully general intelligence, i.e. one that acts intelligently in all possible universes. But we only have to make one that works in this universe, so that's not an issue.

Steven Byrnes
Well said! There might be an even stronger statement along the lines of “you can create an intelligence which is effective not just in our universe but in any universe governed by any stable local laws of physics / any fixed computable rule whatsoever”, or something like that. The hypothetical “anti-inductive” universes where Solomonoff Induction performs worse than chance forever are very strange beasts indeed, seems to me. Imagine: Whenever you see a pattern, that makes it less likely that you’ll see the pattern again in the future, no matter what meta-level of abstraction this pattern is at. Cf. Viliam’s comment. I’m not an expert in this area but I want to go find one and ask them to tell me all about this topic :)
Robert Miles

Please answer with yes or no, then explain your thinking step by step.

Wait, why give the answer before the reasoning? You'd probably get better performance if it thinks step by step first and only gives the decision at the end.

Stuart_Armstrong
Yep, that is a better ordering, and we'll incorporate it, thanks.

Yes, this effectively forces the network to use backward reasoning. It's equivalent to saying "Please answer without thinking, then invent a justification."

The whole power of chains-of-thought comes from getting the network to reason before answering.
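A minimal sketch of the two orderings as prompt templates (the wording is illustrative, not taken from the paper being discussed):

```python
# Answer-first: the model commits to "yes"/"no" before any reasoning tokens exist,
# so the explanation can only be a post-hoc justification of that guess.
answer_first = (
    "{question}\n"
    "Please answer with yes or no, then explain your thinking step by step."
)

# Reasoning-first: the reasoning tokens are generated before the answer token,
# so under autoregressive decoding the final answer can condition on them.
reasoning_first = (
    "{question}\n"
    "Please think step by step, then finish with a single line saying 'Answer: yes' or 'Answer: no'."
)

print(reasoning_first.format(question="Is water wet?"))
```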

Not a very helpful answer, but: If you don't also require computational efficiency, we can do some of those. Like, you can make AIXI variants. Is the question "Can we do this with deep learning?", or "Can we do this with deep learning or something competitive with it?"

Thomas Kwa
I think I mean "within a factor of 100 in competitiveness", that seems like the point at which things become at all relevant for engineering, in ways other than trivial bounds.

I think they're more saying "these hypothetical scenarios are popular because they make good science fiction, not because they're likely." And I have yet to find a strong argument against the latter form of that point.

Yeah I imagine that's hard to argue against, because it's basically correct, but importantly it's also not a criticism of the ideas. If someone makes the argument "These ideas are popular, and therefore probably true", then it's a very sound criticism to point out that they may be popular for reasons other than being true. But if the argument... (read more)

The approach I often take here is to ask the person how they would persuade an amateur chess player who believes they can beat Magnus Carlsen because they've discovered a particularly good opening with which they've won every amateur game they've tried it in so far.

Them: Magnus Carlsen will still beat you, with near certainty

Me: But what is he going to do? This opening is unbeatable!

Them: He's much better at chess than you, he'll figure something out

Me: But what though? I can't think of any strategy that beats this

Them: I don't know, maybe he'll find a way... (read more)

Primer
Plot twist: Humanity with near total control of the planet is Magnus Carlsen, obviously.

I was thinking you had all of mine already, since they're mostly about explaining and coding. But there's a big one: when using tools, I'm tracking something like "what if the knife slips?". When I introspect, it's represented internally as a kind of cloud-like spatial 3D (4D?) probability distribution over knife locations, roughly co-extensional with "if the material suddenly gave or the knife suddenly slipped at this exact moment, what's the space of locations the blade could get to before my body noticed and brought it to a stop?". As I apply more force... (read more)

tamgent
I was explicitly taught to model this physical thing in a wood carving survivalist course.

This is actually a lot of what I get out of meditation. I'm not really able to actually stop myself from thinking, and I'm not very diligent at noticing that I'm thinking and returning to the breath or whatever, but since I'm in this frame of "I'm not supposed to be thinking right now but it's ok if I do", the thoughts I do have tend to have this reflective/subtle nature to them. It's a lot like 'shower thoughts' - having unstructured time where you're not doing anything, and you're not supposed to be doing anything, and you're also not supposed to be doing nothing, is valuable for the mind. So I guess meditation is like scheduled slack for me.

I also like the way it changes how you look at the world a little bit, in a 'life has a surprising amount of detail', 'abstractions are leaky' kind of way. To go from a model of locks that's just "you cannot open this without the right key", to seeing how and why and when that model doesn't work, can be interesting. Other problems in life sometimes have this property, where you've made a simplifying assumption about what can't be done, and actually if you look more closely that thing in fact can sometimes be done, and doing it would solve the problem.

it turns out that the Litake brand which I bought first doesn't quite reach long enough into the socket to get the threads to meet, and so I had to return them to get the LOHAS brand.

 

I came across a problem like this before, and it was kind of a manufacturing/assembly defect. The contact at the bottom of the socket is meant to be bent up to give a bit of spring tension to connect to the bulb, but mine were basically flat. You can take a tool (what worked best for me was a multitool's can opener) and bend the tab up more so it can contact bulbs that don't screw in far enough. UNPLUG IT FIRST though

chanamessinger
Sounds scary, but thank you for the model of what's actually going on!
Robert Miles

Learning Extensible Human Concepts Requires Human Values

[Based on conversations with Alex Flint, and also John Wentworth and Adam Shimi]

One of the design goals of the ELK proposal is to sidestep the problem of learning human values, and settle instead for learning human concepts. A system that can answer questions about human concepts allows for schemes that let humans learn all the relevant information about proposed plans and decide about them ourselves, using our values.

So, we have some process in which we consider lots of possible scenarios and collect... (read more)

Charlie Steiner
Ah, the good ol' Alien Concepts problem. Another interesting place this motif comes up is in defining logical counterfactuals - you'd think that logical inductors would have let us define logical counterfactuals, but it turns out that what we want from logical counterfactuals is basically just to use them in planning, which requires taking into account what we want.

Ah ok, thanks! My main concern with that is that it goes to "https://z0gr6exqhd-dsn.algolia.net", which feels like it could be a dynamically allocated address that might change under me?

Is there a public-facing API endpoint for the Algolia search system? I'd love to be able to say to my discord bot "Hey wasn't there a lesswrong post about xyz?" and have him post a few links

habryka
Pretty sure you should just be able to copy the structure of the query from the Chrome network tab, and reverse engineer it this way. IIRC the structure was pretty straightforward, and the response pretty well structured.
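For anyone trying the same thing, here's a rough sketch of what that reverse-engineered query might look like (assuming Python requests and Algolia's standard REST search endpoint; the search key, index name, and result field names are placeholders to be copied from the browser's network tab, as suggested above):

```python
import requests

APP_ID = "Z0GR6EXQHD"                    # from the algolia.net hostname mentioned above
SEARCH_KEY = "<public-search-api-key>"   # placeholder: copy from the network tab
INDEX = "<lesswrong-posts-index>"        # placeholder: copy from the network tab

resp = requests.post(
    f"https://{APP_ID.lower()}-dsn.algolia.net/1/indexes/{INDEX}/query",
    headers={"X-Algolia-Application-Id": APP_ID, "X-Algolia-API-Key": SEARCH_KEY},
    json={"params": "query=xyz&hitsPerPage=5"},
    timeout=10,
)
resp.raise_for_status()
for hit in resp.json().get("hits", []):
    # Field names are guesses; inspect the returned JSON to see what the index actually contains.
    print(hit.get("title"), hit.get("_id"))
```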

Agreed. On priors I would expect above-baseline rates of mental health issues in the community even in the total absence of any causal arrow from the community to mental health issues (and in fact even in the presence of fairly strong mental health benefits from participation in the community), simply through selection effects. Which people are going to get super interested in how minds work and how to get theirs to work better? Who's going to want to spend large amounts of time interacting with internet strangers instead of the people around them? Who's g... (read more)

Holy wow excalidraw is good, thank you! I've spent a long time being frustrated that I know exactly what I want from this kind of application and nothing does even half of it. But excalidraw is exactly the ideal program I was imagining. Several times when trying it out I thought "Ok in my ideal program, if I hit A it will switch to the arrow tool." and then it did. "Cool, I wonder what other shortcuts there are" so I hit "?" and hey a nice cheat sheet pops up. Infinite canvas, navigated how I would expect. Instant multiplayer, with visible cursors so you can gesture at things. Even a dark mode. Perfect.

This is the factor that persuaded me to try Obsidian in the first place. It's maintained by a company, so perhaps more polish than some FOSS projects, but the notes are all stored purely as simple markdown files on your hard disk. So if the company goes under, the worst that happens is there are no more updates and I just keep using whatever the last version was.

I suppose it makes sense that if you've done a lot of introspection, the main problems you'll have will be the kind that are very resistant to that approach, which makes this post good advice for you and people like you. But I don't think the generalisable lesson is "introspection doesn't work, do these other things" so much as "there comes a point where introspection runs out, and when you hit that, here are some ways you can continue to make progress".

Or maybe it's like a person with a persistent disease who's tried every antibiotic without much effect, ... (read more)

I think this overestimates the level of introspection most people have in their lives, and therefore underestimates the effectiveness of introspection. I think for most people, most of the time, this 'nonspecific discomfort' is almost entirely composed of specific and easily understood problems that just make the slightest effort to hide themselves, by being uncomfortable to think about.

For example, maybe you don't like your job, and that's the problem. But, you have some combination of factors like

  • I dreamed of doing job X for years, so of course I like do
... (read more)
cousin_it
That's a good criticism which goes to the heart of the post. But I've done plenty of introspection, and on the margin I have less trust in it than you do. Most people I expect can't tell the difference between "I'm unhappy in my profession" and "I'm unhappy with my immediate manager" much better than chance, even with hours of introspection. One thing that does help is experimenting, trying this and that. But for that you need "resource"; and the list in my post is pretty much the stuff that builds "resource", no matter what your problems are.

At my grandmother's funeral I read Dirge Without Music by Edna St. Vincent Millay, which captured my feelings at the time fairly well. I think you can say things while reading a poem that you couldn't just say as yourself.

On point 12, Drone delivery: If the FAA is the reason, we should expect to see this already happening in China?

My hypothesis is, the problem is noise. Even small drones are very loud, and ones large enough to lift the larger packages would be deafening. This is something that's very hard to engineer away, since transferring large amounts of energy into the air is an unavoidable feature of a drone's mode of flight. Aircraft deal with this by being very high up, but drones have to come to your doorstep. I don't see people being ok with that level of noise on a constant, unpredictable basis.

[anonymous]
Lawnmowers are also very loud, yet they are widely tolerated (more or less). Plus, delivery drones need only to drop off the package and fly away; the noise pollution will only last for a few seconds. I also don't see why it would necessarily be unpredictable; drones don't get stuck in traffic. Maybe a dedicated time window each day becomes an industry standard.

But the real trouble I see with delivery drones is: what's the actual point? What problem is being solved here? Current delivery logistics work very well; I don't see much value being squeezed out of even faster/more predictable delivery. Looks like another solution in search of a problem to me.
Daniel Kokotajlo
Good point. OTOH, I feel like there are some cities in the world (maybe in China?) where it's super noisy most of the time anyway, with lots of honking cars and whatnot. Also there are rural areas where you don't have neighbors to annoy.

It would certainly be a mistake to interpret your martial art's principle of "A warrior should be able to fight well even in unfavourable combat situations" as "A warrior should always immediately charge into combat, even when that would lead to an unfavourable situation", or "There's no point in trying to manoeuvre into a favourable situation"
