Production line downtime can cost tens of thousands of dollars per minute, so how do we better anticipate robot failures in order to optimize output?
In this blog we show a method to rapidly acquire industrial robot data in order to train and test Edge ML models, which we then apply to a predictive maintenance use case.
Capturing robot data was previously a complex and fragmented task. READY Robotics now makes it possible to access data through a single standardized interface for hundreds of models of industrial robots and cobots (from ABB, EPSON, FANUC, Kawasaki, Stäubli, Yaskawa, Universal Robots, and more).
Using the Edge Impulse machine learning platform we were then able to ingest that data, train and test an accurate ML model in under one hour.
I’ll be back (with some sensor data)
We're using a 6-axis robot in which six individual motors drive its joints, each with position and torque sensors providing feedback to the joint controller. Increased torque is needed to compensate for worn mechanical gears, so our hypothesis was that this wear should be detectable by machine learning using torque sensor data from the robot joints.
We were able to obtain data streams for position and torque for each of the six rotational joints using READY Robotics Forge/OS 5.
We found a sample rate of 10 Hz was sufficient, although higher rates are possible. Internally, Forge/OS uses protobuf for efficiency, but we converted the stream to JSON for simplicity:
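To make the format concrete, here is a sketch of what one converted 10 Hz sample might look like. The field names and values are illustrative assumptions, not the actual Forge/OS message schema:

```javascript
// Hypothetical shape of one 10 Hz sample after conversion from protobuf to
// JSON. Field names are illustrative -- the real Forge/OS schema may differ.
const sample = {
  timestamp: Date.now(),           // ms since epoch
  joints: [1, 2, 3, 4, 5, 6].map((j) => ({
    joint: j,                      // joint index (1-6 on a 6-axis robot)
    position: 0.0,                 // joint angle, e.g. radians
    torque: 0.0,                   // joint torque, e.g. Nm
  })),
};

console.log(JSON.stringify(sample, null, 2));
```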
Sending training data to Edge Impulse
At READY Labs we have robots behind firewalls running a custom plugin which, if the user permits it, enables an outbound-only connection for robot data. This is the setup we chose for acquiring ML training data, although you could do the same with a log file on a USB stick if your robot is air-gapped.
We uploaded the live robot JSON data to Edge Impulse using a small NodeJS client based on their Ingestion API code example. We also integrated with Edge Impulse Studio using the very nice Remote Management Protocol. Now, when a user presses the capture button in their Edge Impulse project, live robot data appears. Magic!
Integrating our Forge/OS robot data stream with Edge Impulse made data capture a breeze
A tale of two robots
We sourced two identical robots - one new, and one old with known joint issues. (Ideally, we would do a longitudinal study with just one robot degrading over time in order to eliminate the possibility of unanticipated variables, but we’ll save that for another blog).
We sampled torque from both robots running an identical program, with samples taken on identical movements. These were labeled “good” or “bad” according to which robot they were acquired from.
On visual inspection, the two sample sets looked very, very similar. Small variations aside, your humble author had to double-check we hadn't collected data from the same robot twice.
Edge Impulse’s Feature Explorer automatically detected that Torque RMS (Root Mean Square) was distinctly higher for the bad joints on our old robot! This reflects the fact that a greater amount of torque was applied over the entire sample period, even though there were no obvious peaks in the raw data.
This is easily seen in the feature explorer graph, giving us confidence our ML algorithms will be able to distinguish between the two cases.
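The RMS feature itself is simple: square each torque reading, average the squares, and take the square root. A quick sketch (with made-up torque values) shows why sustained higher torque raises RMS even without obvious peaks:

```javascript
// RMS (Root Mean Square) of a series of torque readings.
function rms(samples) {
  const meanSquare = samples.reduce((acc, x) => acc + x * x, 0) / samples.length;
  return Math.sqrt(meanSquare);
}

// Illustrative values: the worn joint draws consistently higher torque
// with no dramatic spikes, yet its RMS is clearly higher.
const goodJoint = [0.10, 0.12, 0.09, 0.11];
const wornJoint = [0.18, 0.20, 0.17, 0.19];

console.log(rms(goodJoint) < rms(wornJoint)); // → true
```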
Next we passed the features into the Anomaly Detection (K-means) algorithm. The idea is to train the algorithm on data from a known-good system; it then monitors the system, alerting us if any new data looks unusual.
This method is unsupervised learning: the labels we applied to the samples are not used by the algorithm. Instead, we made sure the training set contained only “good” samples, so the algorithm learned that the entire training set represented expected behavior. The anomaly score is based on how far a new item falls from the centers of the clusters in that training data, as shown in the graph below.
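A toy sketch of that scoring idea: given centroids fitted to “good” feature vectors, a new point is scored by its distance to the nearest centroid. (Edge Impulse's actual implementation is more involved; the centroids and samples below are invented for illustration.)

```javascript
// Euclidean distance between two equal-length feature vectors.
function distance(a, b) {
  return Math.sqrt(a.reduce((acc, x, i) => acc + (x - b[i]) ** 2, 0));
}

// Anomaly score = distance from the nearest cluster centroid.
function anomalyScore(centroids, point) {
  return Math.min(...centroids.map((c) => distance(c, point)));
}

// Centroids fitted to "good" RMS features (illustrative values):
const centroids = [[0.10, 0.11], [0.12, 0.10]];

const goodSample = [0.11, 0.10]; // near a centroid => low score
const badSample = [0.19, 0.21];  // far from every centroid => high score

console.log(anomalyScore(centroids, goodSample) < anomalyScore(centroids, badSample)); // → true
```

A deployment would then simply flag any sample whose score exceeds a chosen threshold.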
Testing the anomaly detection showed 98.75% accuracy. This is a great start, and with more refinement it could certainly form the basis of automated anomaly detection for our robot in deployment.
Training a Neural Network (NN) Classifier
Next we tried deep learning. The NN Classifier uses supervised learning, which means it takes labeled training data as input, so we moved a mix of both “good” and “bad” labeled samples back into our training set.
After a minute or so the Edge Impulse platform had trained a TensorFlow Lite classifier based on our robot data. The confusion matrix showed that against the validation set our initial ML model had an accuracy of 96.6%. In fact, the model always succeeded in identifying the “bad” robot. Where it fell down was two false positives, where it mistook the “good” robot for “bad”.
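To make the confusion matrix reading concrete, here is how accuracy falls out of the counts. The per-class window counts below are assumptions chosen to match the reported figures (every “bad” window caught, two “good” windows misclassified), not the actual validation set:

```javascript
// Accuracy from a confusion matrix: correct (diagonal) / total predictions.
function accuracy(confusion) {
  let correct = 0, total = 0;
  confusion.forEach((row, i) => row.forEach((n, j) => {
    total += n;
    if (i === j) correct += n; // diagonal = predicted label matches actual
  }));
  return correct / total;
}

// Rows = actual label, columns = predicted label: [good, bad].
// Counts are illustrative assumptions matching the reported results.
const confusion = [
  [28, 2],  // actual "good": 2 windows misclassified as "bad"
  [0, 29],  // actual "bad": always identified correctly
];

console.log((accuracy(confusion) * 100).toFixed(1) + '%'); // → "96.6%"
```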
We could add more labels given the context of the robot’s operation known by Forge/OS (the program stages, payload, robot state, error codes etc.) and the classifier could be trained to identify even more robot conditions. This has other valuable uses that we can explore further in the future.
With Forge/OS 5 we were able to quickly stream live robot joint and torque data in a standardized format. Combined with Edge Impulse’s ML platform we were able to generate valuable insights from that data in under an hour.
At around 20 kilobytes in size, the ML model was extremely lightweight so could easily be deployed to the same machine controlling the robot with Forge/OS for low-latency predictive maintenance without requiring Internet connectivity.
We’re excited to be enabling a future with Forge/OS where data from industrial robots used in production is finally accessible to modern, high-productivity analysis tools like Edge Impulse. There are a ton of possibilities; this is just the start!