For image processing that reacts in real time with greater flexibility and throughput, 3D printers should leverage fully integrated machine vision technology using a PC-based control approach
In additive manufacturing, just one small issue can waste hours of printing. The extruder’s flow rate, the machine’s feed rate and other parameters must synchronize perfectly to guarantee a printed part meets specifications. Otherwise, deposited material may be too thin, too thick, missing or poorly bonded to previous layers.
The ability to detect and correct for these problems quickly allows us to salvage the time, material and part before it’s too late. However, it’s not feasible to have someone stand around the machine 24/7 to watch its progress.
There’s no better watchdog in these cases than a machine vision system. When integrated directly into the machine control environment, machine vision can respond in real time, so the controller can automatically adjust and optimize the material deposition rate and machine feed to guarantee part quality.
Nearsightedness can hamper vision technology implementations
Unfortunately, many vision technologies remain closed off, operating as “black boxes” within the control architecture. Smart cameras with on-board processors provide a low-cost option, but with limitations in processing power, communication and integration. Standalone vision systems with dedicated processors require separate programming tools, often managed by outside experts, which adds cost and complexity. Machine vision software running on a separate PC provides greater power, but communication and processing delays between the separate CPUs slow reaction times.
In all of these approaches, the results must be sent back to the main controller with each inspection. Sometimes the results are simple pass/fail data. Other times, they’re images with large amounts of data for full traceability. The transfer time is another factor to consider, one that impacts real-time capabilities.
A clearer vision of image processing
For additive manufacturing, a fourth option makes the most sense: fully integrated machine vision running on the same industrial PC (IPC) that also serves as the machine controller. The PC-based approach delivers a reliable framework for real-time calling of modular software elements as well as inherent scalability to grow vision functionality with your machines.
TwinCAT Vision software, for example, uses the same engineering and runtime environments as the rest of Beckhoff’s universal automation platform. This means vision development and execution happen in conjunction with key functions like CNC and PLC, as well as more advanced capabilities, such as IoT, simulation and machine learning.
In addition, outside experts are not needed for programming: PLC programmers can implement vision applications in the familiar IEC 61131-3 languages and Continuous Function Chart (CFC). Keeping this expertise in-house with your primary control engineering staff saves time and cost, especially for small adjustments.
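As a purely illustrative sketch of what such in-house vision logic can look like in Structured Text, consider a simple per-layer tolerance check. The function block and variable names below are hypothetical placeholders, not part of the TwinCAT Vision library; in a real project, the measured value would be supplied by the vision functions running on the same controller.

    // Hypothetical stand-in for a per-layer vision check in IEC 61131-3
    // Structured Text. All names and values are illustrative assumptions.
    FUNCTION_BLOCK FB_LayerCheck
    VAR_INPUT
        rMeasuredHeight : REAL;          // layer height reported by the vision routine, in mm
        rNominalHeight  : REAL := 0.2;   // expected layer height in mm (example value)
        rTolerance      : REAL := 0.05;  // allowed deviation in mm (example value)
    END_VAR
    VAR_OUTPUT
        bLayerOk : BOOL;                 // TRUE if the layer is within tolerance
    END_VAR

    // Compare the measurement against the nominal value; the PLC sequence
    // can pause or adjust the print before the next layer starts.
    bLayerOk := ABS(rMeasuredHeight - rNominalHeight) <= rTolerance;
    END_FUNCTION_BLOCK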
The runtime is closely synchronized with fieldbus updates, axis positions and other variables, so camera triggers and lighting operate in concert with the additive manufacturing process. In addition to modularizing multiple PLC, CNC, machine vision and other functions in software on the controller, you can use core isolation technology to enhance performance by assigning each process to a specific CPU core.
By leveraging the EtherCAT industrial Ethernet system on the networking side, you can maintain real-time communication between the camera, motion axes and print head. This leads to substantial gains in throughput and part quality.
Broader camera selection with integrated vision
TwinCAT Vision uses the GigE Vision standard, which provides a standardized, efficient communication protocol based on Gigabit Ethernet with scalable speeds. As such, you can select the camera, along with any necessary lighting and lensing, that works best for your process.
With this open approach, the camera hardware can range from a simple monochrome model to an advanced 3CCD color or thermal camera, enabling the machine to take more complex corrective measures in real time. In any case, you can connect any manufacturer’s GigE Vision camera and program the application using TwinCAT Vision.
If you want the machine to adjust the extruder’s flow rate, for example, you can implement a profilometer. Want to automatically optimize the feed rate? Try an infrared camera. You can also run standard inspection algorithms or your own in-house IP to detect a wide range of potential manufacturing flaws.
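For illustration, a minimal Structured Text sketch of such a flow-rate correction is shown below. The function block, variable names, gain and clamp limits are assumptions chosen for the example; a real implementation would be tuned to the specific extruder, material and measurement source.

    (* Minimal proportional flow-rate correction from a measured bead width,
       e.g. from a profilometer. All names and values are illustrative
       assumptions, not figures from this article. *)
    FUNCTION_BLOCK FB_FlowCorrection
    VAR_INPUT
        rMeasuredWidth : REAL;          // bead width measured by the vision system, in mm
        rTargetWidth   : REAL := 0.45;  // nominal bead width in mm (example value)
        rGain          : REAL := 0.5;   // proportional gain (example value)
    END_VAR
    VAR_OUTPUT
        rFlowFactor    : REAL := 1.0;   // multiplier applied to the programmed flow rate
    END_VAR
    VAR
        rError : REAL;                  // relative width error
    END_VAR

    // A bead that is too wide means too much material, so the factor drops
    // below 1.0; a bead that is too narrow pushes it above 1.0.
    rError      := (rTargetWidth - rMeasuredWidth) / rTargetWidth;
    rFlowFactor := 1.0 + rGain * rError;

    // Clamp to a safe band so a bad measurement cannot drive the extruder to extremes.
    rFlowFactor := LIMIT(0.8, rFlowFactor, 1.2);
    END_FUNCTION_BLOCK

Because logic like this runs in the same real-time context as the CNC and PLC code, the corrected flow factor can be applied on the next control cycle rather than after a round trip to an external vision processor.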
Whatever the future development of the system may turn out to be, it will require no fundamental changes to the IPC or networking technology. The fully integrated approach to machine vision offers the same advantages as 3D printing: the freedom to build exactly what you want without having to fit a mold.
Paxton Shantz is the Digital Manufacturing Industry Manager for Beckhoff Automation LLC.