Well, I've heard those bank APIs break a lot. What I'm trying to say is that software lifespan is not at all what it was 10-15 years ago. Software is just not a *thing* that gets depreciated; it's a thing that never stops changing. My company, too, separates infrastructure engineering from software, but that's not how the big kids play, and I am learning some bitter lessons about why. It really is better if the developers are in charge of deployment, or at least constantly collaborating with the DevOps crew and the Ops crew. Granted, every project has its special requirements, so no idea works everywhere. But "throw it over the wall" is going away.

Maybe this is all just this year's buzzwords, but I don't think so. I am seeing some startups going after rust-belt manufacturing software, where the plants are often still running on XP and dare not change anything. These startups want to sell support for a much more highly automated process, with much more flexibility. Good business model or not, you just can't do that sort of thing with a waterfall release process.

Sorry, but the software world described here has little to do with my daily work in software. As most apps have moved to webapps, most servers have moved to the cloud, and most devices have become IoT cloud-connected, the paradigm for software has evolved toward maximizing change.

Software itself never was very reusable, but frameworks and APIs turned out to have huge value, so now we have systems everywhere based on a layered approach from OS up to application, where application software is quite abstracted from the OS, the hardware, and the supporting software (e.g. web server or database). However, frameworks also change quickly these days: jQuery, then Angular, then React, then Vue.js.

Cloud engineering is all about reliability, scalability, and a very rapid change process. This is accomplished through infrastructure and process automation. Well-organized shops aim to release daily while maintaining very good quality. We use CI/CD patterns that automate every step from build to deployment.
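
As a rough illustration of what "automate every step" means, here's a minimal sketch of a pipeline driver in Python. The stage names and commands (the make, docker, and kubectl invocations) are hypothetical placeholders, not any particular shop's setup; in practice this logic lives in a CI system's own config (Jenkins, GitLab CI, GitHub Actions, etc.).

```python
import subprocess
import sys

# Hypothetical pipeline stages; each command is a stand-in for whatever
# the shop's real build, test, package, and deploy steps are.
STAGES = [
    ("build",   ["make", "build"]),
    ("test",    ["make", "test"]),
    ("package", ["docker", "build", "-t", "myapp:latest", "."]),
    ("deploy",  ["kubectl", "apply", "-f", "deploy.yaml"]),
]

def run_pipeline():
    for name, cmd in STAGES:
        print(f"--- stage: {name} ---")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # Fail fast: a broken stage stops the release, so only
            # changes that pass every gate reach production.
            sys.exit(f"stage '{name}' failed; aborting release")

if __name__ == "__main__":
    run_pipeline()
```

The fail-fast structure is the whole point: daily releases stay compatible with good quality because nothing reaches deployment without passing every earlier gate.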

Containers are everywhere, but the next step is Kubernetes and serverless in the cloud, where we hardly touch the infrastructure and focus on code and APIs. I see no chance that code will last long enough to depreciate.

Making high-quality software is all about the process and the architecture. You just can't meet today's requirements building monoliths on manually managed servers.

It feels to me like you are straying from the technical issues by looking at a huge picture.

In this case, a picture so huge it's unsolvable. So here's an assertion which might be interesting: it's better to focus on clusters of small, manageable machine-ethics problems and gradually build up to a Grand Scheme, or more likely, in my guess, a Grand Messy But Workable System, rather than teasing out a Bible of global ethical abstraction. There's no working consensus on ethical rules anyway, outside the Three Laws.

An example, maybe already solved: autonomous cars are coming quite soon, much sooner than most of us thought. Several people have wondered about the machine ethics of a car in a crash situation, assuming you accept Google's position that humans will never react fast enough to resume control. Various trolley-problem-like scenarios of minimizing irrevocable hurt to humans have been kicked around. But I think I already read a solution to the decision problem in the discussion:

a) Ethical decisions during a crash will be a very rare occurrence.

b) The overall reduction in accidents is much more significant than the small subset of accidents theoretically made worse by the robot cars.

c) Humans can't agree on complex algorithms for the hypothetical proposed scenarios anyway.

d) Machines always need a default mode for when the planned-for reactions conflict.

So if you accept a-d above, then you'll probably agree that simply having the car slow to a stop and pull over to the side as best it can is the default which will produce the least damage. It's the same routine to follow if the car comes upon debris in the road, a wreck, confusing safety beacons, some catastrophe with the road itself, and so forth. It's pretty much what you'd tell your teenager to do.
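
To make that default concrete, here's a minimal sketch of how the narrowing decision tree might bottom out. The maneuver names and the feasible/conflicts flags are hypothetical, purely an illustration of falling through to a default when the planned-for reactions conflict:

```python
from dataclasses import dataclass

# Hypothetical maneuver options; a real planner would carry far richer state.
@dataclass
class Maneuver:
    name: str
    feasible: bool   # can the car still physically execute this?
    conflicts: bool  # does it conflict with another planned reaction?

# The always-available fallback: slow to a stop and pull over.
DEFAULT = Maneuver("pull_over_and_stop", feasible=True, conflicts=False)

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # As the crash unfolds, options drop out of the tree one by one.
    viable = [m for m in options if m.feasible and not m.conflicts]
    if viable:
        return viable[0]  # the planner's preferred remaining option
    return DEFAULT        # nothing left to evaluate: just stop

# Example: every planned reaction has become infeasible or conflicted.
options = [
    Maneuver("swerve_left", feasible=False, conflicts=False),
    Maneuver("brake_hard_in_lane", feasible=True, conflicts=True),
]
print(choose_maneuver(options).name)  # pull_over_and_stop
```

Note there is no ethical calculus here at all; the structure guarantees the car ends up at the teenager's answer when everything else drops out.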

But I think there are lessons to draw from the robot cars:

1) The robot, though fully autonomous in everyday situations, will encounter in an accident an ever-narrowing range of options in its decision tree, until it ends up with the default option only. In contrast, a human will panic and take action which often adds options to an already overloaded decision tree, options which can't be evaluated in real time and whose outcomes are probably worse than just stopping as fast as possible anyway.

2) Robots don't have to be perfect; they just have to be better than humans in the aggregate and, see #1, default to avoiding action when disaster strikes.

3) Once you get to #2, you are already better than humans and therefore saving lives and property. At this point the engineers can further tune the robot to improve gradually.

So what about the paper-clip monster, the AGI that wants to run the world and, most important, writes its own code? I agree it could be done in theory, just as we'll surely have computers running artificial-evolution scenarios with DNA, and data-mining/surveillance on a scale so huge it makes the Stasi look like kindergarten. But as everyone has noted, writing your own code is utterly uncharted territory. A lot of LW commentators treat the prospect with myth: they propose an AGI that is better described as an alien overlord than a machine. Myth may be the only way humans can wrap their brains around an idea so big. Engineers won't even try. They'll break the problem up into bits, do a lot of error-checking at a level of action they do understand, and run it in the lab to see what happens. For instance, if there is still a layered approach to software, the OS might have the safety mechanisms built in, and maybe won't be self-upgradable, while the self-written code runs in apps that rely on the OS; then, after a hundred similar steps of divide-and-conquer, the system will be useful and controllable. But truly, I too am just hand-waving in a vacuum. Please continue...
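
Purely to give the hand-waving a shape: a minimal sketch of that layered idea, where the self-written code can only act through a fixed API surface it cannot modify. Everything here is hypothetical, and a real containment layer would be enormously harder (Python's exec is famously escapable); this only illustrates the divide-and-conquer structure.

```python
# Hypothetical containment layer: a fixed, audited API stands in for the
# non-self-upgradable "OS", and self-written code runs against it alone.
ALLOWED_API = {
    "read_sensor": lambda name: 0.0,          # stub: read-only access
    "log": lambda msg: print("[app]", msg),   # stub: observable side effect
}

def run_untrusted(source: str) -> None:
    # Strip the builtins and expose only the allowed calls. NOT a real
    # sandbox (exec-based sandboxes in Python are escapable); shape only.
    exec(source, {"__builtins__": {}, **ALLOWED_API})

# The "self-written" app layer can log and read sensors, nothing else.
run_untrusted('log(read_sensor("lidar"))')
```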

Sugar. Fruit, or a glass of good juice, or whatever works for you. The brain consumes quite a lot of energy, as probably all of you can quantify better than I can. It is well understood in the software world that nobody can work well for hours straight; everybody needs to take breaks. Young people believe they can do good work for hours on no sleep; I don't agree.

Quiet. I am a bit deaf now, enough to have trouble parsing conversations. When I put on hearing protectors (10 dB? 20 dB? They work pretty well for $20), my IQ rises by 20 points. Really.

Habit. Many years ago I had a friend who was the most prolific American author after Asimov. He had fantastic work habits, of course. Every night at about 10 PM he unplugged the phone and required himself to sit at the typewriter until 3 or 4, when he went to bed. If he wrote nothing, he told me, he didn't berate himself; just sitting there was his job. He did his research in the afternoon. I think part of his success was that he didn't expect himself to be good at starting work; he expected distractions.