Feeding Data Analytics from Augmented Reality
"You cannot, not communicate."
So opened the first class of my college career. Decades later, I still clearly remember Dr. Hemmer educating the freshmen in Communications 101 that we are always communicating, whether we are trying or not.
At LogistiVIEW, our augmented reality VIEW devices gather so much data to communicate that we run the risk of "TMI." In our world, you cannot, not gather data! In this post, we explore some of the valuable information to be gained just by virtue of what we do.
A VIEW device is constantly gathering information from its environment in order to perform its duties. If it is indeed creating augmented reality, it must continuously gather data on the reality it is trying to augment.
Like us humans, the VIEW device is continually processing visual and audible input. Additionally, it may be processing geographic information from wireless sensors.
Also as with us humans, most of this sensory information is processed and forgotten, winnowing the retained information down to just what is needed to serve the immediate purpose. In the case of most business execution systems, such as WMS, TMS, ERP, etc., only task completion notices are worth reporting.
The space between, however, is loaded with potentially interesting information. How did the human perform on the task? Did it go as planned or were there deviations? How long did each step take?
Imagine if we were able to make note of every such moment: each deviation from the plan, each step that took longer than expected, each task that did not go as designed.
Individually, each such event is insignificant. For the sake of data analytics, however, we can look for common factors, such as individuals, crews, shifts, areas, etc.
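To make the idea concrete, here is a minimal sketch of that kind of aggregation. The event records and field names below are purely illustrative, not an actual LogistiVIEW schema; the point is that once individually insignificant events are grouped by a common factor, patterns surface on their own.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical task-step events a VIEW device might report.
# Field names are illustrative only.
events = [
    {"worker": "A", "shift": "day",   "area": "dock", "step_seconds": 12},
    {"worker": "B", "shift": "day",   "area": "dock", "step_seconds": 14},
    {"worker": "C", "shift": "night", "area": "dock", "step_seconds": 31},
    {"worker": "D", "shift": "night", "area": "mezz", "step_seconds": 33},
    {"worker": "E", "shift": "day",   "area": "mezz", "step_seconds": 13},
]

def average_by(events, factor):
    """Group events by a common factor and average the step time."""
    groups = defaultdict(list)
    for event in events:
        groups[event[factor]].append(event["step_seconds"])
    return {key: mean(times) for key, times in groups.items()}

# A night shift running much slower than the day shift is a prompt
# for observation and conversation, not a verdict on its own.
print(average_by(events, "shift"))
print(average_by(events, "area"))
```

The same handful of events, sliced by shift instead of by worker, suddenly suggests a question worth asking on the floor.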
Such findings can lead to further observations, training, and most importantly, communication with the workforce. Whether the factors are environmental (maybe the lighting is bad at certain times of day?), systemic (the stuff is never where the system thinks it is in this part of the building), or just plain human learning curve, the data is there to figure it out.