August 21, 2019
The Lucky Break We Owe to a Competitor
Thirty years of running a company that tirelessly hones its technology, responds to market changes and customer wishes, and works hard to create solutions gives you plenty of stories to tell. Johannes Reilhofer, the boss at Reilhofer KG, has fond memories of a benchmark test he conducted against a competitor’s product using the eolANALYSER. The following article describes – in the words of our CEO – how this was converted into a genuine innovation for which the company pocketed a HENRY FORD AWARD.
“Many years ago we laid the foundation for our end-of-line ANALYSER, a diagnosis system that detects manufacturing errors in engines and transmission units. At the time we had integrated our latest system into an automaker’s test bed, and we noticed that a competitor’s system was hooked up to the same rig: the purpose was a benchmark test. The transmission production department gave us 30 transmission units with prior damage and 20 that were flawless, and asked us to practise on them. After two days we had reliably identified the ‘good’ ones and the ‘bad’ ones – but the competitor matched our performance across the board.
With this partial success under our belts, we were then asked to check the current production, again as a contest between us, the competitor – and the acoustic, human tester. This expert, an employee of the transmission manufacturer with a highly trained ear, was the adjudicator and the one who decided what was good – and what was bad. The measurement cycle lasted 12 days in total. Incredibly, the share of units we judged to be damaged rose by the day, reaching around 13% in the end. Our competitor arrived at almost the same results. The human tester, in contrast, recorded the usual damage quota of between 0.6 and 1.1% per day; nothing had changed in his view. So something was evidently amiss – there had to be a logical error somewhere. Computerised damage analysis wasn’t working, and we did not know why. Our competitor started packing up his equipment. He said they would reconfigure the collective.
But our feeling was that we were onto something sensational. Entirely different measurement hardware, entirely different diagnosis software – but precisely the same, disastrous findings. How could the results have missed the mark so completely? Where was the difference? It became apparent that our computer’s perfect memory was the decisive stumbling block. We soon realised: people become accustomed to the typical sound of a production series and recognise outliers relative to this.
In other words: every test method that works with rigid tolerances for production errors is destined to fail, as the combined effect of all the factors acting on production will, over time, inevitably drift beyond these limits. So we would have to simulate the ‘imperfect memory’ of a human being in order to obtain ‘human’ findings. And this is how we developed our ‘breathing collective’. All of the diagnosis technology we now use for end-of-line applications operates based on a ‘breathing collective’ that continuously adapts to mean behavioural patterns while simultaneously considering a variety of absolute values. The latter are defined by the virtual transmission behaviour between the test bed, the vehicle and the position of the driver’s head. We were therefore fortunate to experience the competition at precisely the right moment. It inspired us to engage with these ideas, which were fairly revolutionary at the time.
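The idea behind a ‘breathing collective’ can be sketched in a few lines of code. The Python below is purely illustrative – it is not Reilhofer’s actual algorithm, and the class name and all parameters (`alpha`, `k`, `abs_limit`, `warmup`) are assumptions – but it captures the principle described above: a reference that continuously adapts to the drifting mean of a production series, combined with a fixed absolute limit that never moves, so that only genuine outliers are flagged.

```python
import math


# Illustrative sketch only -- not Reilhofer's actual algorithm. It combines
# an adaptive ("breathing") reference with a fixed absolute ceiling: the
# relative tolerance drifts with production, the absolute one does not.
class BreathingCollective:
    """Adaptive good/bad classifier for a slowly drifting production series."""

    def __init__(self, alpha=0.05, k=4.0, abs_limit=10.0, warmup=30):
        self.alpha = alpha          # adaptation rate of the running statistics
        self.k = k                  # relative tolerance, in standard deviations
        self.abs_limit = abs_limit  # hard absolute ceiling that never adapts
        self.warmup = warmup        # samples accepted unconditionally at start
        self.n = 0
        self.mean = None            # the "breathing" reference
        self.var = 0.0

    def judge(self, value):
        self.n += 1
        if self.mean is None:
            self.mean = value
            return "good"
        std = math.sqrt(self.var)
        in_warmup = self.n <= self.warmup
        # Relative check against the breathing reference (skipped during warmup)
        relative_bad = (not in_warmup) and std > 0.0 and \
            abs(value - self.mean) > self.k * std
        # Absolute check against the fixed ceiling
        absolute_bad = value > self.abs_limit
        verdict = "bad" if (relative_bad or absolute_bad) else "good"
        # Only units judged 'good' reshape the collective, so a damaged
        # unit cannot drag the reference towards itself.
        if verdict == "good":
            delta = value - self.mean
            self.mean += self.alpha * delta
            self.var = (1.0 - self.alpha) * (self.var + self.alpha * delta * delta)
        return verdict


# Demo: a noise metric drifting slowly upwards with a small oscillation.
bc = BreathingCollective()
verdicts = [bc.judge(1.0 + 0.002 * i + 0.05 * math.sin(i)) for i in range(200)]
```

A check with rigid tolerances calibrated on the first units would eventually flag a rising share of perfectly healthy units once the series drifts past its fixed limits – exactly the failure mode of the benchmark test – whereas the breathing reference follows the drift and keeps reacting only to genuine outliers and to the absolute ceiling.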
We conducted meticulous testing to ensure that our method was reliable, because people tend to mistrust a diagnosis system that adjusts itself – for many, it was simply too much of a good thing. First we tested 52,000 transmission units, then 200,000 more, and the results were convincing. We received the 2001 HENRY FORD AWARD in recognition of our achievement and were immediately commissioned to equip all EoL test beds at the more or less eponymous company in Detroit.