Getting precise surface flow data from on-track wool tuft tests
This idea isn't exactly novel, as it already exists in research applications. It's likely been used on cars too, perhaps even beyond the wind tunnel during on-track sessions, but there's not much documentation for that. I figured I'd have a go at implementing some sort of tuft-tracking program anyway, as the end result could still be useful, even if it's just copying someone else. Trying to get usable tuft results from a single photo or video frame is both tedious and inaccurate. A full video is more representative, but getting quantitative results from simply watching a video is impossible.
I started by looking into current methods of tuft tracking as seen in lab settings, looking through recent (in terms of having access to modern technology) papers such as those by Ma and Chen (2022), and Omata and Shirayama (2017). Unfortunately, neither of the methods used is suitable for imperfect test scenarios (especially regarding lighting and contrast). The first paper uses luminescent tufts in a dark environment to significantly increase natural contrast, then bases tuft detection on bright pixels within the overall image. The second paper relies on intense external lighting to increase contrast, then employs an edge-detection method, which even in lab conditions was shown to be imperfect where the background was not simple/invariant.
My first idea was motion identification, so I set up an optical flow environment in Matlab and tested the default process on what I considered a video in realistic/non-ideal conditions (low lighting, shadows, low resolution). Considering I basically copied an existing template for the code and fed it the raw poor-quality footage, the results were impressive. There was promising tuft identification that suggested room for improvement and optimisation, but the fundamental issue was that tufts that weren't moving didn't get identified. Obvious, right? I considered combining this method with one of the more typical methods (edge detection, template matching, etc.), but there was no meaningful link between the two methods, and no clear way of reliably merging the data from multiple methods like this.
The image below shows the output, with red areas attempting to locate the base of the motion vectors (expanded, and merged into nearby areas if close enough), and green lines joining the two most distant points within each red area. Initially, the edges of the video were being detected as having motion independent of the overall frame, which was fixed by a no-detection buffer around the edge. This improved output beyond the results shown below, but was still unreliable.
Failing to find any suitable methods out of the box, I decided to try making my own algorithm. It was based on the method used by Ma and Chen, but more robust to poor-quality recording and poor environmental conditions (contrast, shadows, varying background brightness), and able to accommodate backwards-facing tufts resulting from reversed local flow direction. My first guess at an algorithm worked as well as I could have hoped (after lots of fine-tuning of parameters and threshold values), and is explained and tested over the rest of this page.
The first major difference was the requirement for the user to input tuft origin coordinates by manually clicking on each tuft in a UI. This adds an extra step, but the reward is significant freedom in how the algorithm then goes about detecting the rest of the tuft, which allows much more resilience to poor lighting. The hardest part of tuft tracking is the initial identification of individual tufts against a complex background of lines, shadows, and varying brightness, which this method avoids. Critically, the required accuracy of the user's input click is low, both regarding successful tuft identification and the accuracy of the resulting vector angle estimate. This means clicking rapidly through tufts is possible; I was able to comfortably select two tufts per second during testing, meaning even large grids can be selected in 1-2 minutes.
The meaning of the other inputs and the blue circles will be introduced through discussion of the subsequent algorithm.
After confirming the tuft selection and before running, the user must also manually select a rectangular region that represents a stationary reference. This allows the program to determine global image shift due to camera vibration (critical for on-track vehicle use), and adjust the pixel coordinates for the tufts accordingly. In the example below, I've selected a region towards the top of the image that contains no tufts. A more robust alternative would be to place a small contrasting sticker somewhere on the surface of interest, but away from any tufts.
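The post doesn't specify exactly how the global shift is computed; one standard approach that suits a rectangular stationary reference patch is FFT phase correlation, sketched below in Python/NumPy as an illustration (the actual tool is written in Matlab, and this version resolves only whole-pixel shifts).

```python
import numpy as np

def estimate_frame_shift(ref_patch, cur_patch):
    """Estimate the (dy, dx) translation of cur_patch relative to ref_patch
    using FFT phase correlation. Both patches are equal-shape 2-D grayscale
    arrays cut from the user-selected stationary reference region."""
    F1 = np.fft.fft2(ref_patch)
    F2 = np.fft.fft2(cur_patch)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-9          # keep only the phase information
    corr = np.fft.ifft2(cross).real        # sharp peak at the shift offset
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the patch size into negative values
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

The returned (dy, dx) can then be added to every tuft origin coordinate for that frame before the path search runs.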
The algorithm then loops through each tuft in each frame using the steps below.
Create a vector of length = Search Radius (as per user input above, and shown by the outer blue circles), anchored at the user-selected origin point.
Sweep the vector through a range of 45 degrees at 15 degree intervals either side of the initial tuft direction (right-most user input value). This initial tuft direction value reduces the accuracy requirement of the initial tuft selection clicks, and relies on clear tape being used, guaranteeing that even if the flow is reversed, the first section of the tuft will always be in the expected direction.
For each 15 degree interval, sample each pixel that the vector passes over, and record their average brightness.
Using a weighting of the average brightness of each 15 degree interval, determine the direction with lowest overall brightness (alternatively, highest overall brightness if using light tufts on a dark background).
Move the active path point to the end of this vector (length = Search Radius).
After the first step, reduce Search Radius automatically to two thirds of the input value, and allow the search vector to sweep through 60 degrees either side of the final vector angle from the previous step. Initially, this logic worked, but resulted in significant overshoots. Paths sometimes (albeit rarely) got lost as a result, and there was increased uncertainty in the final angle estimate from increased variance in the position of each path point. The solution was to dynamically adjust the path step length depending on the best angle identified (smaller step if the path curves more). This remained independent of the search radius.
Adjust the current pixel coordinates to account for global frame shift due to camera vibration.
After travelling the minimum number of path steps (another adjustable user input), the average brightness of all pixels within a radius of Search Radius * 5 from the current path point is calculated, and used to dynamically adjust a threshold for identifying the end of a tuft. When the darkest 5 pixels in the entire vector search window are not at least 20% darker than the average of the larger surrounding area (or 20% lighter for light tufts on a dark background), the path stops and the current point is designated as the end point of the tuft.
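Put together, the sweep-and-step loop above might look roughly like the following Python sketch (the real implementation is in Matlab; the end-of-tuft brightness test is omitted here, and the linear step-shrink rule standing in for the dynamic step length is my own assumption, not the tuned rule from the program).

```python
import numpy as np

def mean_brightness_along(img, origin, angle, length, n_samples=15):
    """Average grayscale value of the pixels under a vector of the given
    length and angle, anchored at origin (y, x)."""
    lengths = np.linspace(1, length, n_samples)
    ys = np.clip(np.round(origin[0] + np.sin(angle) * lengths).astype(int),
                 0, img.shape[0] - 1)
    xs = np.clip(np.round(origin[1] + np.cos(angle) * lengths).astype(int),
                 0, img.shape[1] - 1)
    return img[ys, xs].mean()

def trace_tuft(img, origin, init_angle, search_radius, max_steps=40):
    """Walk along a dark tuft on a light background one search step at a time."""
    path = [np.asarray(origin, dtype=float)]
    angle, radius = init_angle, float(search_radius)
    sweep = np.deg2rad(45)      # first step: +/-45 deg around the initial direction
    for step in range(max_steps):
        candidates = angle + np.arange(-sweep, sweep + 1e-9, np.deg2rad(15))
        brightness = [mean_brightness_along(img, path[-1], a, radius)
                      for a in candidates]
        best = candidates[int(np.argmin(brightness))]   # darkest direction wins
        # shorten the step when the path curves sharply (simplified rule)
        step_len = radius * (1.0 - 0.5 * abs(best - angle) / sweep)
        path.append(path[-1] + step_len * np.array([np.sin(best), np.cos(best)]))
        angle = best
        if step == 0:           # after the first step: shrink radius, widen sweep
            radius *= 2.0 / 3.0
            sweep = np.deg2rad(60)
        # the brightness-threshold end-of-tuft test from the text goes here
    return np.array(path)
```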
The final tuft angle for each frame is taken as the angle of the vector between the tuft endpoint, and the point 40% of the way along the path. By taking the vector across the latter 60% of the tuft, effects of tuft stiffness are greatly reduced (and the section of tuft "trapped" under the clear tape is avoided).
A tuft "curvature" metric is calculated as the angle difference between two vectors extending from the 70% position (one back to the 40% mark, and one to the tuft endpoint/100% mark). Even in extreme cases such as reversed airflow, the tufts should be close to straight beyond 40% of their total length, so this curvature metric quickly identifies tufts with questionably high curvature (suggesting the search algorithm may have become lost and wandered off the tuft), and the frame/s they occur on, for quick verification/debugging of the results.
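The angle and curvature calculations can be sketched as follows (a Python illustration of my reading of the text, with the curvature expressed as deviation from straight, i.e. zero for a perfectly straight tuft):

```python
import numpy as np

def point_at_fraction(path, frac):
    """Interpolate the point a given fraction of the way along the path
    (an (N, 2) array of (y, x) points from the search algorithm)."""
    seg = np.diff(path, axis=0)
    cum = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    s = frac * cum[-1]
    return np.array([np.interp(s, cum, path[:, 0]),
                     np.interp(s, cum, path[:, 1])])

def tuft_angle_and_curvature(path):
    """Final angle: vector from the 40% point to the endpoint, skipping the
    stiff root trapped under the tape.  Curvature: angle between the 40%->70%
    leg and the 70%->endpoint leg, wrapped to [0, pi]."""
    p40, p70, p100 = (point_at_fraction(path, f) for f in (0.4, 0.7, 1.0))
    angle = np.arctan2(*(p100 - p40))       # unpacks (dy, dx) -> atan2(dy, dx)
    a_out = np.arctan2(*(p100 - p70))       # outgoing leg direction
    a_in = np.arctan2(*(p70 - p40))         # incoming leg direction
    curvature = np.abs((a_out - a_in + np.pi) % (2 * np.pi) - np.pi)
    return angle, curvature
```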
Initial unoptimised path algorithm showing significant oscillation/overshoot, some paths getting lost or incorrectly reversing, and the effects of lacking a dynamically adjustable brightness-based end-point identification method (i.e. some paths terminating too soon, or continuing beyond the end of the tuft).
Above are example results from the final optimised algorithm. 22 tuft paths were calculated across approximately 200 frames in about 2 seconds on a single core. The algorithm here is slightly less efficient than the one above, but errors were noticed less than 0.005% of the time during normal-use testing. The rare errors that did occur were consistently due to a poor choice of initial search radius combined with an excessively inaccurate initial tuft origin selection. The review UI as shown above allows the user to browse through the paths in all video frames, which, in combination with the tuft curvature metric, allows quick identification of any errors in the results.
Illustrated algorithm steps showing how a low accuracy origin selection finds and centres on the tuft.
Illustration of how starting at 40% path distance reduces the effect of tuft stiffness on angle estimate accuracy. Starting the measurement later than 40% increases the sensitivity of angle estimates to slight imperfections in the search path, for negligible other benefit.
Example of global frame motion tracking in action as it adjusts the user-selected origin point coordinates to match image translation
After tuning the tracking algorithm, it was time to turn the results into usable outputs. The first output is a vector field showing average tuft angles over the duration of the input video. Interpolation between each tuft provides increased resolution, making the results more intuitive and readable. The risk of interpolation is that areas of complex flow may be over-simplified and thus inaccurate, but this is acceptable as long as the user is aware of the limitation. Such awareness is increased through various other outputs included below.
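One detail worth noting for the interpolation step: averaging or interpolating raw angle values breaks down near the ±180° wrap, so it is safer to interpolate the sin/cos components. A small inverse-distance-weighting sketch in Python (just an illustration of that idea; the weighting scheme and function names are my own, not the program's):

```python
import numpy as np

def interpolate_angle_field(points, angles, grid_y, grid_x, power=2.0):
    """Inverse-distance-weighted interpolation of tuft angles (radians) onto a
    regular grid.  Interpolating sin/cos components avoids corruption at the
    +/-180 degree wrap.  points is an (N, 2) array of (y, x) tuft positions."""
    gy, gx = np.meshgrid(grid_y, grid_x, indexing="ij")
    d2 = ((gy[..., None] - points[:, 0]) ** 2 +
          (gx[..., None] - points[:, 1]) ** 2)
    w = 1.0 / np.maximum(d2, 1e-12) ** (power / 2)   # clamp avoids divide-by-zero
    w /= w.sum(axis=-1, keepdims=True)
    s = (w * np.sin(angles)).sum(axis=-1)
    c = (w * np.cos(angles)).sum(axis=-1)
    return np.arctan2(s, c)
```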
For quantitative comparison of angles, the individual tuft vectors can be printed/exported, along with their coordinates. To improve usability of the outputs (both numerical and visual), it is possible to apply a distortion shift to the results to eliminate the effects of camera perspective and fish-eye. Using a consistently-spaced grid of tufts helps provide suitable anchor points for such a transformation. Additionally, a custom origin point (such as the upper-left-most tuft) can be used to shift all result coordinates for easier numerical comparison to CFD data via a common reference.
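For the perspective part of that correction, four (or more) grid anchor points are enough to fit a planar homography. A self-contained Python sketch using the standard direct linear transform (my illustration, not necessarily how the program does it; fish-eye distortion would need a separate radial model):

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct linear transform: solve the 3x3 homography mapping four (or
    more) image points src to their undistorted grid positions dst."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)        # null-space vector = homography entries
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map (x, y) tuft coordinates through H, normalising projective scale."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]
```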
The tufts I used in my couple of tests were not ideal (high stiffness, not straight), and neither was the rather rough 3D printed surface that I had taped them to, so I didn't go as far as looking at exact quantitative accuracy (despite it being one of the driving factors behind me attempting this project...). Instead I just visually compared surface flow results in CFD with the vector field. Below are the two results superimposed, taken on the pressure side of a simple aerofoil shape with varying angle of attack between the two halves (greater on the left) and small endplates.
Generally the results are pretty good. There are some discrepancies at the top right and top left, but from looking through the search paths on each frame, they were doing a fine job of tracking the tuft, so the offender here is simply the poor physical setup I used. I didn't plan on trying to find the perfect wool or string for these initial tests; for now I just want the algorithm to work. Later, I'll give the program to the FSAE team and they can test it for real.
The next test used the same part/surfaces, but with an obstruction moved from left to right, then back again. This was to test the capability of the algorithm to track more chaotic tufts (particularly an issue when the tuft motion and frame rate result in a large area of blur), and also as a good showcase of the animated vector field output (both below).
The path tracking fell short of the ends of the tuft in a few places, but typically only where there was significant blur. These tests utilised only a 60 Hz frame rate, which was increased to 240 Hz for following tests. The main error was the top-most tuft latching onto a background cable, however this was a consequence of the testing environment, which was specifically left un-lit and with no uniform background to create a worst-case scenario. Tufts extending beyond the trailing edge of a surface were not a good idea anyway, not least because of this issue.
Next was a separation test, which utilised the suction side of the same aerofoil shape (see first image below). Separation can be identified visually with reasonable confidence using the average vector field. To increase the reliability of separation identification, a separate formula is used that combines the rate of change of average angle between neighbouring tufts with a check for flow that at least occasionally (>1 frame) has a negative stream-wise velocity. If both checks return positive, the affected tufts are flagged as being in separated flow. In this test, no endplate was used, hence the flow re-attaches as the tip vortex forms near the edge of the suction surface. The small area of detected separation was a false detection from before I had fully fine-tuned the formula: it picked up the difference in angle relative to the tuft to the left, but without also checking for occasionally reversed flow. The separation detection and average vector angles are superimposed in the second image below.
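As a sketch of that two-check logic in Python (the 30° neighbour-gap threshold here is a placeholder of mine, not the tuned value from the program):

```python
import numpy as np

def flag_separation(angles_by_frame, neighbour_gap_deg=30.0):
    """Flag tufts sitting in separated flow.  angles_by_frame is a
    (frames, tufts) array of per-frame angles in degrees, measured relative
    to the local free-stream direction (0 deg = streamwise).
    Check 1: large jump in time-averaged angle versus the neighbouring tuft.
    Check 2: reversed streamwise flow (|angle| > 90 deg) in more than one frame."""
    mean_angles = angles_by_frame.mean(axis=0)
    gap = np.abs(np.diff(mean_angles, prepend=mean_angles[0]))
    big_gap = gap > neighbour_gap_deg
    reversed_frames = (np.abs(angles_by_frame) > 90.0).sum(axis=0)
    return big_gap & (reversed_frames > 1)
```

Requiring both checks is what suppresses the false positive described above: a tuft can differ sharply from its neighbour without ever seeing reversed flow.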
The last main output I wanted was some sort of turbulence/wake estimate. Originally, the goal was to have a quantitative measure of turbulence/unsteadiness in the flow at each tuft point, but through testing it became clear that tufts were simply too big to capture true small-scale turbulence accurately.
Next best was a qualitative measure to give a rough idea of transience in different parts of the flow. By producing a map of these measures, it is still possible to compare different tufts quantitatively; however, the values themselves carry no physical meaning. In total, I tested 7 different independent formulas for "turbulence":
1. Total sum of angular travel of each tuft over all frames
2. Total sum of angular change in curvature (the two vectors as mentioned earlier) of each tuft over all frames
3. Angular width of a window encompassing the median 50% of all angles calculated for each tuft
4. RMS of the fluctuations in tuft tip velocity (thinking more quantitatively in terms of the equation for turbulent kinetic energy)
5. Count of oscillations of each tuft across its mean angle
6. Average tuft oscillation frequency about its mean angle
7. Sum of the power of all frequencies experienced by each tuft, from 1 Hz to the Nyquist frequency
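A few of these formulas are simple enough to sketch directly. Below are rough Python versions of formulas 1, 5, and 7 for a single tuft's angle time-series; these are my own interpretations of the wording above, not the program's exact implementations.

```python
import numpy as np

def total_angular_travel(angles):
    """Formula 1: total summed |angle change| over all frames (radians)."""
    return np.abs(np.diff(angles)).sum()

def oscillation_count(angles):
    """Formula 5: number of crossings of the tuft's mean angle (a sample
    landing exactly on the mean would be counted twice by this sketch)."""
    centred = angles - angles.mean()
    return int(np.count_nonzero(np.diff(np.sign(centred)) != 0))

def spectral_power(angles, fps):
    """Formula 7: summed power of all frequency bins from 1 Hz to Nyquist
    (the DC component is removed by subtracting the mean)."""
    spectrum = np.fft.rfft(angles - angles.mean())
    freqs = np.fft.rfftfreq(len(angles), d=1.0 / fps)
    return (np.abs(spectrum[freqs >= 1.0]) ** 2).sum()
```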
The results were tested on the separation video (as used above), and a cylinder wake scenario (11 cm diameter, 5 cm height, Re ≈ 50,000). The low height was a mistake in the end, as it meant there were no real oscillations in the wake region, and certainly no recirculation, thanks to the influence of the top/tip vortices. Unfortunately I didn't have a taller cylinder to use at the time. CFD results for the tested scenario are shown below, first showing the average surface flow comparison, and then contours of TKE.
"Turbulence" formulas 1 and 2 (total motion):
"Turbulence" formulas 3 and 4 (other methods):
"Turbulence" formulas 5, 6, and 7 (frequency-based):
There are two dominant trends among the seven results, with one trend seemingly approximating a typical "wake" shape such as that defined by total pressure, and the other (formulas 5 and 6), when inverted, appearing to be a reasonable approximation of TKE. I'm dubious of the merits of this though, given there's no direct mathematical connection. The oscillation frequency methods instead rely on more physical properties of the tufts (primarily flow disturbance and momentum), which result in high-frequency but small-scale oscillations even in otherwise undisturbed, steady flow. The total pressure similarity is also somewhat coincidental/situational; there's no physical correlation between local available energy magnitude and the rate of change of angular velocity.
Of all the results, formula 6 (average oscillation frequency) gives the best result for a TKE approximation (which at this point I'm not ruling out the possibility of coincidence). For a more intuitive "wake" visualisation, formulas 4 (tip velocity fluctuation) and 7 (frequency power) are both similar, but I prefer formula 4 for its slightly higher relevance to the mathematical definition of turbulence. There's also the possibility of combining formulas to fine-tune an estimate.
The separation case offered another chance to try the formulas out, but I didn't have CFD to compare against, so unfortunately there was no validation data. I would also not be particularly happy with any tuft data from these tests anyway, thanks to the tufts' stiffness and the plastic surface's roughness. The seven formulas will need to be tested much more rigorously at some point, ideally using surfaces on the actual car. For now, here are the two "best" formulas (4 and 7) for the separation case (rotation is the same as in the rest of the separation test media):
I kept the legend for the frequency output (in Hz) as it has meaning that is both physical and intuitive, where most of the other outputs don't.
These results make it clear that while the tip velocity fluctuations appeared to approximate the inverse of total pressure in the previous wake case, it was just a coincidence. Here, the region covered by the vortex (which should have near the lowest total pressure in the whole flow field) has some of the lowest tip velocity fluctuation values on the whole surface, which is not consistent with an inverse correlation.
I'm not sure what to make of the oscillation results... The dark blue area is consistent with the area of separation, but it is unclear if there is physical meaning to the red and cyan areas that correlates to a particular real flow metric. The only conclusion I can confidently make for now is that the tip velocity fluctuation formula is a good qualitative visualisation of flow transience/large-scale turbulence.
I'll add to this once some better testing has been done using a more suitable tuft material and with higher tuft resolution (which lower stiffness tufts would allow), and some proper CFD comparisons have been made to see if any of the above formulas really are useful, or perhaps to inspire new formulas that may be better.