
From Rule Followers to Decision Makers: How the Maturity of Software Mirrors Our Own

In our youth, we humans are simply incapable of rational decisions. The part of our brain that truly anticipates consequences, the prefrontal cortex, matures for most around age 25. During that maturation process, in lieu of responsible decision making, we are taught The Rules. The rules are kept simple at first – they must be easy to understand and remember. As we mature, our capabilities increase, as does our set of rules.

Software systems have matured similarly. The most rudimentary programs are strict sets of rules: Do this sequence of steps in exactly this order. If this happens, do that. No arguing, no contemplation, just do it. As hardware capabilities increased – more memory, faster processing – the complexity of algorithms increased as well. Larger and larger problems were tackled by more and more complex programs – rule sets.

Human intelligence, as noted above, does naturally develop into decision making, and a key part of this is prediction of consequence. Without this ability, we would be limited to doing only what the rules prescribe. We would be stuck the first time we encountered a situation outside of those rules. The rules themselves would stagnate, since we only add to them when we discover new capabilities or improve existing ones.

We are now in the age of similar maturation in our software systems. While Artificial Intelligence (AI) has been pervasive in the news for a while now, its adoption across the entire landscape of enterprise computing systems is still a work in progress.

At LogistiVIEW, our Warehouse Execution System (WES) certainly knows how to follow the rules. No warehouse or distribution center is allowed to ignore physical storage requirements, safety, or employment agreements. I don’t think anyone would argue that the ice cream must be stored in the freezer, for instance.

But a system that stops there is only capable of doing exactly what is defined in the strict set of rules. So our WES also considers consequences. Consequences include knowing the downstream impact of prioritizing one task over another. Consequences include anticipating that some workers will run into overtime while others would be sent home early, warranting a rebalancing.

Anticipating consequences requires access to information: past performance, present workload, and the operating framework.
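The difference between a rule follower and a decision maker can be sketched in a few lines of code. The following is a hypothetical illustration, not LogistiVIEW's actual WES logic: hard rules (frozen goods stay in the freezer) remain non-negotiable filters, while consequence-aware scoring weighs predicted downstream value against anticipated overtime. All names, fields, and weights are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    zone: str                 # where the work happens
    est_minutes: int          # predicted duration, drawn from past performance
    downstream_value: float   # e.g., orders unblocked by finishing this task

def violates_hard_rules(task: Task, frozen_zones: set, item_is_frozen: bool) -> bool:
    """Hard rules are non-negotiable: frozen goods stay in freezer zones."""
    return item_is_frozen and task.zone not in frozen_zones

def priority(task: Task, worker_minutes_left: int) -> float:
    """Score a task by anticipated consequences, not just rule compliance.

    Reward tasks that unblock more downstream work; penalize tasks that
    would push the worker into overtime. The weight 2.0 is illustrative.
    """
    overtime = max(0, task.est_minutes - worker_minutes_left)
    return task.downstream_value - 2.0 * overtime

tasks = [
    Task("replenish-A", "ambient-3", est_minutes=20, downstream_value=5.0),
    Task("pick-wave-7", "ambient-1", est_minutes=45, downstream_value=12.0),
]
# The worker has 30 minutes left in the shift: pick-wave-7 would mean
# 15 minutes of overtime, so the lower-value task actually scores higher.
best = max(tasks, key=lambda t: priority(t, worker_minutes_left=30))
print(best.name)  # replenish-A
```

A pure rule follower would pick whichever task a static rule put first; the consequence-aware score changes the answer as shift time runs down.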

One interesting by-product of this maturation in intelligence is a rethinking of what requires a “rule” at all. Some rules coded into software systems exist not because of business requirements (the ice cream needs to be frozen) but because legacy systems couldn’t anticipate consequences. For example, earlier systems may have required hard assignment of workers to zones by rule because that was the only practical way to manage the work, not because any physical requirement enforces that limit.

Just as with humans, as systems grow in their understanding, some “rules” evolve into “guidelines,” and some become obsolete entirely when they no longer automatically produce good results.
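The shift from rule to guideline can be expressed directly in code: a legacy hard rule filters candidates out entirely, while a guideline merely penalizes them, letting a high enough downstream value override the preference. This is a hypothetical sketch; the worker names, zones, and penalty weight are illustrative, not drawn from any real system.

```python
# Hypothetical sketch: a legacy hard rule versus a consequence-aware guideline.
workers_home_zone = {"alice": "zone-1", "bob": "zone-2"}

def eligible_legacy(worker: str, task_zone: str) -> bool:
    """Legacy hard rule: workers may only take tasks in their assigned zone."""
    return workers_home_zone[worker] == task_zone

def score_guideline(worker: str, task_zone: str, base_value: float) -> float:
    """Guideline: out-of-zone work is allowed but carries a travel penalty.

    The penalty of 3.0 is an illustrative weight; a real system would tune
    it from observed travel times and staffing levels.
    """
    penalty = 0.0 if workers_home_zone[worker] == task_zone else 3.0
    return base_value - penalty

# Under the legacy rule, alice can never take zone-2 work, even if
# zone-2 is swamped and zone-1 is idle.
print(eligible_legacy("alice", "zone-2"))                  # False
# Under the guideline, a valuable out-of-zone task can still win.
print(score_guideline("alice", "zone-2", base_value=10.0))  # 7.0
print(score_guideline("alice", "zone-1", base_value=10.0))  # 10.0
```

Deleting the hard filter and keeping only the weighted score is exactly the moment a rule becomes a guideline; setting the penalty weight very high recovers the old behavior, so nothing is lost in the transition.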
