2019 was a great year for SpotOn! Let us take you on a short year in review.
Earlier in the year, our own Bruce Bayne was featured on the IDEAlliance GAMUT Printing and Packaging podcast. The episode focuses on new BrandQ requirements and the Formula to Connect the Global Supply Chain. Listen to the podcast.
This year, we also completed some significant updates to both our Analyze/Verify and Flexo software. Both sets of software had two new releases this year.
Many of the 2019 updates to Flexo were general bug fixes, but two new features are worth noting.
Flexo is now able to measure SCTV, or Spot Color Tonal Value. The addition of SCTV allows users to calibrate tonal values of spot colors as defined in ISO 20654:2017. The methodology works with all inks and all printing conditions.
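Conceptually, SCTV compares two colorimetric distances: how far a tint patch has moved from the paper color versus how far the solid has moved. The sketch below is only an illustration of that idea with made-up Lab values; a real implementation should follow the ISO 20654 text exactly.

```python
import math

def sctv(lab_paper, lab_solid, lab_tint):
    """Spot Color Tonal Value: ratio of the colorimetric distance
    paper-to-tint versus paper-to-solid, expressed as a percentage
    (a simplified sketch of the ISO 20654 approach)."""
    def dist_sq(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return 100.0 * math.sqrt(dist_sq(lab_tint, lab_paper) / dist_sq(lab_solid, lab_paper))

# Hypothetical Lab measurements: paper white, solid spot ink, and a 50% tint
paper = (95.0, 0.0, -2.0)
solid = (48.0, 74.0, -3.0)
tint_50 = (70.0, 38.0, -2.5)
print(f"SCTV of the 50% tint: {sctv(paper, solid, tint_50):.1f}%")
```

A patch identical to the paper yields 0% and one identical to the solid yields 100%, which is what makes the scale usable for calibrating spot color tone ramps on any ink and printing condition.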
The other feature is the addition of a new measurement report. Now Flexo has two reports: a measurement report and a job report.
The difference between the two: the measurement report shows a single measurement instance (which may include multiple spot colors), while the job report shows the trend of all measurements across a job.
Analyze and Verify Updates
Like Flexo, many of this year’s updates to SpotOn! Analyze and Verify were bug fixes.
Besides those fixes, here are some other improvements to the software:
Improved license check functionality to reduce error messages
Updated eXact driver
Improved Verify device report layout
Added instrument and mode to Verify when importing Barbieri data
Updated G7 Colorspace to 2019 specs
With the updates to Flexo, Analyze, and Verify, 32-bit Windows operating systems are no longer supported. The minimum supported configurations are now Windows 7 SP1 (64-bit) and Mac OS 10.10.5; anything older is no longer supported.
We are happy to continue to improve upon our products and services. We have already started working on Version 3 of SpotOn! and we are looking forward to its release in 2020.
If you have any questions about any of our software or other services, please email us at email@example.com.
Calibrating a single printing device is not always the easiest task, and matching multiple printers to one another is an even bigger challenge. One question that comes up frequently, especially with the rise of digital printing, is: “What is the best way to profile multiple devices of the same model?” If you are trying to achieve a close visual match between printing devices, there are three key things to consider before putting ink on the sheet:
1) Printer gamuts have to be reasonably close between devices. This has a lot to do with substrate and ink texture: rough media and UV inks scatter more light due to their surface roughness than solvent inks on glossy substrates do.
2) It is necessary to evaluate more than just the worst ∆E value. You need to know how all the patches in a control strip compare in ∆E, not just the worst or the average. When choosing a control strip, the more patches, the better, as long as the chart doesn’t become too large for practical daily use. The more patches under 1 ∆E, the more likely the printing is visually close, because you are comparing every patch in the strip between devices and ranking them on visual closeness.
3) You can’t compare to an industry reference, like GRACoL, when visually comparing devices. You have to make one device the reference for the other, because that’s what you’re looking at in the viewing area. You can’t see GRACoL, as there is no perfect GRACoL proof, but you certainly can see the difference between printer A and printer B, so make printer A the reference when comparing those two devices. With grouping tests, you can then compare multiple devices against a single reference device.
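The comparisons in points 2 and 3 boil down to a per-patch ∆E calculation with printer A as the reference. Here is a minimal sketch using the simple CIE76 ∆E formula; the patch values are invented for illustration, a real control strip would have far more patches, and you might prefer a different ∆E formula such as ∆E00:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference between two Lab values."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical Lab measurements of the same control strip on two printers
printer_a = [(95.0, 0.0, -2.0), (55.0, -37.0, -50.0),
             (48.0, 74.0, -3.0), (89.0, -5.0, 93.0)]
printer_b = [(94.6, 0.2, -1.8), (55.8, -36.1, -49.2),
             (47.1, 75.4, -3.9), (88.7, -4.6, 92.4)]

# Printer A is the reference; compare every patch, not just the worst one
diffs = [delta_e76(a, b) for a, b in zip(printer_a, printer_b)]
under_1 = sum(d < 1.0 for d in diffs)
print(f"avg {sum(diffs) / len(diffs):.2f}, worst {max(diffs):.2f}, "
      f"{under_1}/{len(diffs)} patches under 1.0 dE")
```

Ranking device pairs by the fraction of patches under 1 ∆E is one simple way to decide which printers to group together.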
Tight calibration of the device, and the ability to truly recalibrate back to the same known state the device was in when profiled, is key. In my experience, the automated “recalibration” process does not always work well in the field. Some RIPs are better than others, but the bottom line is that for true recalibration to work, it has to be a two-part process: first, you have to achieve the same solid ink value that was in the original calibration; second, you have to recreate the same curve along the values between 0% and 100%. Most RIPs do the latter, but few actually do the former during automated recalibration. This matters because if you can’t fully recalibrate the printer, the original profile will eventually drift too far off the mark to be useful.
Also consider: two identical devices of exactly the same age very rarely print the same color right out of the box. I’ve proven this many times when evaluating color output data during calibration sessions. There is no way to use a single profile for multiple devices that aren’t even close and still achieve a tight visual match. My advice is to target the same source reference space (GRACoL, for example) for each device, then calibrate and profile each device as carefully as possible to achieve as tight a match as the RIP can provide to that reference space. When finished, you can compare how close the devices are to one another by printing a test chart and comparing the measured results. That said, RIPs with iterative optimization have a much better chance of achieving a tight calibration between multiple devices than RIPs that rely only on ink limits, linearization, and ICC profiles.
You certainly can and should run comparison tests between all your devices (ideally on a single substrate all devices can print on) to identify which devices are closest to one another and group them accordingly. The point here is to get to know each and every device (its gamut, how consistently it prints, etc.). Maybe you get lucky and find several devices that actually are close enough to calibrate using a single profile. Only after going through the process of calibration and evaluating the results can you truly know the color capability of each device.
I have installed many pairs of Epson aqueous printers and have never found two that calibrate or profile the same. However, following the process described above will get them to the closest possible visual match.
SpotOn! Verify is the ideal tool for comparing the calibration results of each printing device. SpotOn! Analyze is the ideal tool for setting ink limits and examining the color differences between each printing device. Try them for yourself!
Recently, a color process control manager at a large print production facility wanted to know if there is a more comprehensive chart available for daily digital color evaluations than a 12647-7 proofing wedge. He pointed out that the IT8.7-4 has too many patches, and the P2P51 has too many gray finder patches. Reiterating a thought we’ve all had many times, he asked: “Am I overthinking the value of additional patches?”
There is a tradeoff between patch count and how effective a chart is at gathering QC information, and both extremes carry risks. Too many patches on a noisy (grainy, low screen ruling, etc.) printing device can introduce unwanted noise into the measurement data (like using a 1-pixel eyedropper in Photoshop to determine the dot percentage in a noisy image). Too few patches, and you are not sampling enough colors to accurately model how the device is printing.
I just dissected the TC3.5 patch set and found it lacking in three-color grays: there are not many such patches, and none are G7-compliant gray patches. In my opinion, this eliminates the TC3.5 for any G7 evaluation. In fact, most of the currently available charts are not very good in the gray areas, especially if you are trying to evaluate G7 compliance. Idealliance built the TC1617 to address this lack of G7 gray patches in the IT8.7-4, but even this chart has too many patches for day-to-day evaluations.
The 3-row 2013 12647-7 chart (the replacement for the 2009 2-row chart) was built as a very good compromise between patch count and patch value. It has a decent number of patches to effectively evaluate print consistency, which includes G7 compliant gray patches, the typical array of CMYKRGB tone ramps, pastel patches, saturated patches, and a good assortment of dirty patches. These dirty patches were purposely built with CMY values and then with 100% GCR values excluding the 3rd color and replacing it with K. This was done because many separations, especially those done with ink reduction products, are made with GCR these days. It’s hard to beat what’s in that 3-row, 84-patch control strip.
While considering charts and patch values, it’s almost more important to note the metrics and tolerances we place on these patches for conformance to specifications. If you look at the metrics we currently use for pass/fail, they are very CMYK-press-centric. Commercial print, specifically offset printing, has been at the forefront of most industry standard and best practice development. Therefore, much of the data gathering and evaluation is based on printing devices where C, M, Y, and K ink thicknesses are controllable by the operator. This means most metrics are tied to effective control of those ink thicknesses, which is largely irrelevant to the digital world.
We should be asking: “What are we passing and failing?”
For the G7 Colorspace metrics (currently the most stringent) we are evaluating:
Substrate – Paper color is good to evaluate
Solid CMYK – Very useful to press operators, but not much of a typical image or job is just solid C, M, Y, or K. This makes these patches poor for evaluating digital print consistency, especially visual consistency.
Solid RGB overprints – In my opinion, this is more important than Solid CMYK, as overprinted colors are what we see when we look at printed material. Still, these are only the solids, no tints.
CMY gray balance and tone – This is very important in controlling and evaluating print consistency, although it’s more important in print processes that lay down individual CMYK inks like offset.
All the other patches (pastels, saturated colors, dirty colors, skin tones, CMYKRGB tints) are lumped into a single metric called ‘All’ and given a whopping average ∆E of 1.5 or 2.0 and a worst-patch ∆E of 5.0 (95th percentile). That’s huge: a virtual barn door that lets almost anything outside of the grays and CMYKRGB solids pass.
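To make the ‘All’ tolerance concrete, here is a rough sketch of how an average-∆E plus 95th-percentile worst-patch check might be computed; the per-patch ∆E values are invented for illustration:

```python
import math

def percentile(values, pct):
    """Nearest-rank percentile: smallest value at or above pct percent of the data."""
    ordered = sorted(values)
    rank = min(len(ordered), max(1, math.ceil(pct / 100.0 * len(ordered))))
    return ordered[rank - 1]

# 20 hypothetical per-patch dE values from one daily verification run
des = [0.3, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9, 1.0, 1.1, 1.2,
       1.2, 1.3, 1.4, 1.5, 1.6, 1.8, 2.0, 2.3, 2.9, 4.1]

avg = sum(des) / len(des)
p95 = percentile(des, 95)
passed = avg <= 1.5 and p95 <= 5.0  # the 'All' tolerances under discussion
print(f"avg {avg:.2f}, 95th percentile {p95}, pass: {passed}")
```

Note that this run passes even though its worst patch is over 4 ∆E, which is exactly the barn-door effect of such loose tolerances.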
These are not very visually oriented metrics and tolerances. So the big question to ask is what are you evaluating with your chart, or more importantly, what metrics and tolerances are you using to evaluate your chart? For G7 you could just use a P2P and eliminate the gray finder patches (columns 6-12), because the metrics are really only focused on CMYKRGB solids and the gray patches.
Bottom line, if we are looking for print consistency, we need to look at establishing new metrics that truly help us determine how visually consistent a print is. After a great deal of research, I believe this should be based on a cumulative relative frequency model (CRF) that evaluates all colors in a chart. In a CRF model, each and every one of the patches is relevant to visual consistency and is being counted within the evaluation. I have found the 3-row control strip does an excellent job of evaluating visual print consistency when using CRF. I’ve also performed the experiment in live production many times and have continued to get feedback from users who say using CRF and the 3-row control strip is the best method they’ve found to evaluate visual consistency.
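The CRF idea is simple to sketch: sort every patch’s ∆E and report what fraction of the chart falls at or below each ∆E level. This is only an illustration of the general concept, not SpotOn!’s implementation, and the measured values are made up.

```python
def crf(delta_es, thresholds=(0.5, 1.0, 2.0, 3.0)):
    """Cumulative relative frequency: fraction of patches at or below each dE level."""
    n = len(delta_es)
    return {t: sum(d <= t for d in delta_es) / n for t in thresholds}

# A short hypothetical list of per-patch dE values; the 3-row strip would supply 84
measured = [0.2, 0.4, 0.4, 0.6, 0.7, 0.9, 1.1, 1.3, 1.8, 2.6]
for level, fraction in crf(measured).items():
    print(f"<= {level} dE: {fraction:.0%} of patches")
```

The closer the curve hugs 100% at low ∆E levels, the more visually consistent the print run, which is why every patch counts toward the result rather than just the worst one.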
If you would like to see the true power of CRF and real world metrics, try SpotOn! Verify. The trial is free, and our team will help you get started.
When it comes to printing consistent color there are two pieces of hardware that matter most:
A Consistent Printer
A Measuring Device
There is no such thing as a perfectly consistent printer; some variance is inevitable. A printer that varies very little over a week is much better than one that prints one way at 8am and quite differently at 5pm. The key is to monitor your printer’s consistency over time, both to understand how variable it is and to know when to take corrective action if it varies too much. Variation can be minimized by setting up a process control program in which you regularly measure the printer’s performance. Process control software then lets you track any variation and take corrective action should the printer’s performance drift outside defined tolerances.
Definition of Process Control: An engineering discipline that deals with architectures, mechanisms, and algorithms for maintaining the output of a specific process within a desired range.
Process Control in our Daily Lives
How do we translate this definition into something we can understand in our daily lives? When talking about process control, I’ve been asking people this question: “Can you drive down a straight stretch of road with your eyes closed?” I think we all know the answer. Even though the road is straight, the answer is no.
Our new SpotOn! Verify 2.5 software, explained plainly and simply using a real customer and their printers!
Bonnie and Clyde, two identical inkjet printers, were originally calibrated to match the GRACoL specification as proofers. Over many months they drifted from the GRACoL target, as inkjet printers will do over time. Eventually the two were no longer printing alike: the same file looked noticeably different on each printer.