Month: May 2020

Speed Test: How Fast is Forge/OS?

Introduction

Even with extensive training, robot programming is typically complex and time-consuming. With PolyScope, Universal Robots made programming significantly faster and easier – as long as you are programming their robot. Unfortunately, 99% of the global robot installed base is made up of other robot brands, each with its own programming language.

Forge/OS enables easy programming on not just one, but multiple brands of robots. How fast and easy is programming with Forge/OS? We conducted a “speed test” against PolyScope to find out.

Overview

The Forge/OS programming application Task Canvas is a significant improvement over the programming paradigms in use today for controlling industrial robots. Task Canvas addresses the problem that industrial robots are too hard to use and require too much training. In addition, Task Canvas is cross-brand: once you learn to program one brand of robot, you can program many other brands.

In this article we provide quantitative support for how much easier Task Canvas is to use than Universal Robots’ (UR) PolyScope environment when programming a common task: picking and placing parts into a grid.

In summary, our findings show that in comparison to PolyScope, Task Canvas:

  • Reduces training time by 22% 
  • Enables users to program a grid task 23% faster
  • Results in fewer points of confusion while developing the application

Not only is Task Canvas easier to learn, faster to program, and less frustrating to use, but it also lets users program any supported robot. We even get comments that programming in Task Canvas is fun!

The Study

The study consisted of 20 participants who had never programmed a robot and had no automation experience. We randomly divided them into two groups, one for Task Canvas and one for PolyScope. One Task Canvas participant dropped out during the study.

Each participant was trained on how to program a grid task and taught each node/block they would need for it. Participants were trained using plastic disks and then had to program a similar grid task without guidance from the instructor.

When participants programmed the task, we measured how long it took them to develop it as well as the “points of confusion” they ran into.

Training Details

The instructor trained participants individually on how to program a grid task using 12 plastic disks, a suction gripper, and a 3×4 grid. Participants were taught every feature and step they would need to program the grid task, as well as the order in which those nodes/blocks should be added; features were presented in the order in which one would most efficiently program the task. They were free to ask questions at any time.

The training task was programmatically identical to the task participants were asked to program in the task programming section. The only exception was the gripper: because the suction gripper was replaced with a 3-finger Schunk gripper, participants also had to be taught how to actuate it. Training sessions were timed.

Task Programming

After completing training, participants were asked to program a grid task using 9 cuts of steel pipe, a 3-finger Schunk gripper, and a 3×3 grid. This task was more challenging than the training task and required more precision: parts had to be picked up and placed precisely or they would not fit into the grid.

We measured both the total time it took participants to complete the task (Total Programming Time) and the time it took them to complete the “logic” of the program (Logic Programming Time). Logic Programming Time includes not only placing all of the task’s nodes/blocks correctly but also aligning waypoints roughly where they need to go. Total Programming Time additionally includes the time it took to execute the completed task (typically around 2 minutes and 10 seconds) as well as any time users spent making grid edits. Not all users needed to make grid edits, and the time those edits took varied widely, which skewed the Total Programming Time data. Although READY still outperformed UR by an average of 6 minutes and 57 seconds on that measure, this analysis focuses on Logic Programming Time.

In addition to time, we also measured the number of Points of Confusion for participants. Points of Confusion are defined as the sum of:

  1. The number of times that the participant had to ask a question. Participants were encouraged to ask the instructor a question if they couldn’t proceed or if they thought it would take more than 30 seconds to troubleshoot.
  2. The number of errors a program contained when the participant tried to run it for the first time. All of these errors were explained to the participant for them to fix after the initial task execution attempt.
  3. The number of times that the instructor had to step in to prevent the participant from making a major error. 
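
To make the scoring concrete, the short sketch below (with hypothetical counts, not data from the study) shows that a participant’s score is simply the sum of these three counts:

    # Minimal sketch with hypothetical counts: a participant's Points of Confusion
    # score is the sum of the three counts defined above.
    def points_of_confusion(questions_asked: int,
                            first_run_errors: int,
                            instructor_interventions: int) -> int:
        return questions_asked + first_run_errors + instructor_interventions

    # Example: 2 questions asked, 1 error on the first run, 0 interventions -> 3
    print(points_of_confusion(2, 1, 0))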

The Results

We used one-tailed tests with an alpha of 0.05 to evaluate these results for statistical significance.
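
As an illustration of this kind of one-tailed comparison between two independent groups, the sketch below uses Python’s scipy.stats.ttest_ind on hypothetical per-participant times. The numbers are made up and the specific choice of Welch’s t-test is an assumption for the example; it is not the study’s actual data or analysis script.

    # A minimal sketch, not the study's analysis script. The per-participant times
    # below (in seconds) are hypothetical, and Welch's t-test is an assumption:
    # the study used one-tailed tests with an alpha of 0.05.
    from scipy import stats

    task_canvas_times = [1500, 1620, 1480, 1710, 1590, 1550, 1660, 1530, 1700]      # 9 participants (hypothetical)
    polyscope_times = [2010, 1950, 2100, 2230, 1980, 2150, 2060, 2120, 1990, 2040]  # 10 participants (hypothetical)

    # One-tailed alternative: the Task Canvas mean is less than the PolyScope mean.
    result = stats.ttest_ind(task_canvas_times, polyscope_times,
                             equal_var=False, alternative="less")
    print(f"t = {result.statistic:.3f}, one-tailed p = {result.pvalue:.4f}")
    print("Significant at alpha = 0.05" if result.pvalue < 0.05 else "Not significant at alpha = 0.05")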

  • Training Time – Statistically significant (p = .0017): Training sessions were on average 6 minutes and 47 seconds (22%) longer for the PolyScope group than for the Task Canvas group.
  • Logic Programming Time – Statistically significant (p = .0313): Task Canvas users took 23% less time to program their task than PolyScope users, with a mean of 26 minutes and 33 seconds in Task Canvas versus 34 minutes and 16 seconds in PolyScope.
  • Points of Confusion – Approaching statistical significance (p = .0545): Task Canvas users had 36% fewer points of confusion, meaning they had fewer questions or difficulties with steps while programming the task.

Results Analysis

Task Canvas’ favorable scores for Training Time, Logic Programming Time, and Points of Confusion are likely the result of it being more intuitive and easier to understand than PolyScope. Some processes in Task Canvas are simpler and more streamlined than in PolyScope. For example, Forge/OS’s native support for pneumatic grippers simplified actuating the Schunk 3-finger gripper: whereas 3 nodes are required for this action in PolyScope, only one block is required in Task Canvas. Another factor that contributes, albeit slightly, to the time difference is that Task Canvas requires users to align only 3 grid corners instead of 4.

Conclusion

Anecdotally, we see that Task Canvas is easier for users to program with than other robot interfaces. We have customers, such as Alicat Scientific, who were able to implement a task on their own and run lights-out in just a week after receiving their system. A common refrain we hear from prospects is that after a lot of work in a robot’s native interface they finally get a program running, only to struggle with how long it takes to get the next program up and running. Task Canvas makes programming robots simpler and, better yet, works on many different robot brands.

READY Robotics developed Forge/OS, an industrial operating system, to enable vendor independence and plug-and-play use of robots and peripherals. Running on Forge/OS, Task Canvas is a visual, flowchart-based programming application for automation and the only cross-brand, general-purpose robot programming application on the market. Task Canvas democratizes robot programming by enabling manufacturers to augment and upskill existing staff. Through these capabilities, READY can help you reach ROIs previously thought unattainable.