Extracting precise data from in-the-field wool tuft tests
Optical tuft tracking isn't exactly novel, as it already exists in research applications. It's likely been used on cars too, perhaps even beyond the wind tunnel during on-track sessions, but there's not much documentation for that. I figured I'd have a go at implementing some sort of tuft-tracking program anyway, as the end result could still be useful, even if it's just copying someone else. Trying to get usable tuft results from a single photo or video frame is both tedious and inaccurate. A full video is more representative, but getting quantitative results from simply watching a video is impossible.
I started by looking into current methods of tuft tracking as seen in lab settings, looking through recent (in terms of having access to modern technology) papers such as those by Ma and Chen (2022), and Omata and Shirayama (2017). Unfortunately, neither of the methods used is suitable for imperfect test scenarios (especially regarding lighting and contrast). The first paper uses luminescent tufts in a dark environment to significantly increase natural contrast, then bases tuft detection on bright pixels within the overall image. The second paper relies on intense external lighting to increase contrast, then employs an edge-detection method which, even in lab conditions, was shown to be imperfect where the background was not simple/invariant.
My first idea was motion identification, so I set up an optical flow environment in Matlab and gave the default process a test on what I considered to be a video in realistic/non-ideal conditions (low lighting, shadows, low resolution). Considering I basically copied an existing template for the code and fed it the raw poor-quality footage, the results were impressive. There was promising tuft identification that suggested room for improvement and optimisation, but the fundamental issue was that tufts that weren't moving much didn't get identified well. Obvious, right? I considered combining this method with one of the more typical methods (edge detection, template matching, etc.), but there was no clear way of reliably linking the data from multiple methods like this.
The image below shows the output, with red areas attempting to locate the base of the motion vectors (expanded, and merged into nearby areas if close enough), and green lines joining the two most distant points within each red area. Initially, the edges of the video were being detected as having motion independent of the overall frame, which was fixed by a no-detection buffer around the edge. This improved output beyond the results shown below but was still unreliable, especially where tufts became blurred.
Failing to find any suitable methods out of the box, I decided to try making my own algorithm. It was based on the method used by Ma and Chen, but more robust to poor-quality recording and poor environmental conditions (contrast, shadows, varying background brightness), and able to accommodate backwards-facing tufts resulting from reversed local flow direction. My first guess at an algorithm worked as well as I could hope for (after lots of fine-tuning of parameters and threshold values), and is explained and tested over the rest of this page.
The first major difference was the requirement for the user to input tuft origin coordinates by manually clicking on each tuft in a UI. This adds an extra step, but the reward is significant freedom in how the algorithm then goes about detecting the rest of the tuft, which allows much more resilience to poor lighting. The hardest part of tuft tracking is the initial identification of individual tufts against a complex background of lines, shadows, and varying brightness, which this method avoids. Critically, the required accuracy of the user's click is low, both for successful tuft identification and for the accuracy of the resulting vector angle estimate. This means clicking rapidly through tufts is possible; I was able to select two tufts per second during testing with adequate accuracy, meaning even large grids of hundreds of tufts can be selected in 1-2 minutes.
The meaning of the other inputs and the blue circles will be introduced through discussion of the subsequent algorithm.
After confirming the tuft selection and before running, the user must also manually select a rectangular region that represents a stationary reference. This allows the program to determine global image shift due to camera vibration (critical for on-track vehicle use), and adjust the pixel coordinates for the tufts accordingly. In the example below, I've selected a region towards the top of the image that contains no tufts. A more robust alternative would be to place a small contrasting sticker somewhere on the surface of interest, but away from any tufts.
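To illustrate how this stabilisation step can work (my implementation is in Matlab; this is an illustrative Python sketch, with the search window size an assumed parameter, and the patch assumed to sit away from the frame borders), the user-selected reference region can be matched against each frame by minimising the sum of squared differences over a small search window:

```python
import numpy as np

def estimate_frame_shift(ref_patch, frame, top_left, search=5):
    """Estimate the global (dy, dx) shift of a frame relative to the
    reference frame by sliding the stationary reference patch over a
    small search window and minimising the sum of squared differences."""
    h, w = ref_patch.shape
    y0, x0 = top_left
    best_err, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame[y0 + dy:y0 + dy + h, x0 + dx:x0 + dx + w]
            err = np.sum((cand.astype(float) - ref_patch) ** 2)
            if err < best_err:
                best_err, best_shift = err, (dy, dx)
    return best_shift
```

The recovered (dy, dx) is then subtracted from every tuft coordinate in that frame. Normalised cross-correlation (e.g. Matlab's normxcorr2) does the same job with more robustness to brightness changes between frames.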
The algorithm then loops through each tuft in each frame using the steps below.
Create a vector of length = Search Radius (as per user input above, and shown by the outer blue circles), anchored at the user-selected origin point.
Sweep the vector through a range of 45 degrees at 15 degree intervals either side of the initial tuft direction (right-most user input value). This initial tuft direction value reduces the accuracy requirement of the initial tuft selection clicks, and relies on clear tape being used, guaranteeing that even if the flow is reversed, the first section of the tuft will always be in the expected direction.
For each 15 degree interval, sample each pixel that the vector passes over, and record their average brightness.
Using a weighting of the average brightness of each 15 degree interval, determine the direction with lowest overall brightness (alternatively, highest overall brightness if using light tufts on a dark background).
Move the active path point to the end of this vector (length = Search Radius).
After the first step, reduce Search Radius automatically to two thirds of the input value, and allow the search vector to sweep through 60 degrees either side of the final vector angle from the previous step. Initially, this logic worked, but resulted in significant overshoots. Paths sometimes (albeit rarely) got lost as a result, and there was increased uncertainty in the final angle estimate from increased variance in the position of each path point. The solution was to dynamically adjust the path step length depending on the best angle identified (smaller step if the path curves more). This remained independent of the search radius.
Adjust the current pixel coordinates to account for global frame shift due to camera vibration.
After travelling the minimum number of path steps (another adjustable user input), the average brightness of all pixels within a radius of Search Radius * 5 from the current path point is calculated, and used to dynamically adjust a threshold for identification of the end of a tuft. When the darkest 5 pixels in the entire vector search window are not at least 20% darker than the average of the larger surrounding area (or 20% lighter for light tufts/dark background), the path stops and the current point is designated as the end point of the tuft.
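The per-frame path search described in the steps above can be sketched as follows. This is illustrative Python, not my actual Matlab code: the surrounding-brightness average is crudely approximated here by the whole-frame mean, the end-of-tuft test uses the ray average rather than the darkest 5 pixels, and the minimum-step-count and dynamic-step-length logic are omitted for brevity.

```python
import numpy as np

def sample_ray(img, origin, angle_deg, length, n=10):
    """Average brightness of pixels along a ray from origin, plus the
    ray's end point. Angles in degrees, image indexed as (row, col)."""
    t = np.linspace(1, length, n)
    ys = np.clip((origin[0] + t * np.sin(np.radians(angle_deg))).astype(int),
                 0, img.shape[0] - 1)
    xs = np.clip((origin[1] + t * np.cos(np.radians(angle_deg))).astype(int),
                 0, img.shape[1] - 1)
    return img[ys, xs].mean(), (ys[-1], xs[-1])

def trace_tuft(img, origin, init_angle, radius, max_steps=30):
    """Follow a dark tuft from its user-clicked origin: sweep candidate
    directions at 15 degree intervals, step towards the darkest ray, then
    repeat with a wider sweep (+/-60 deg) and a reduced step (2/3 radius).
    Stops when the best ray is no longer clearly darker (20% margin) than
    the surroundings, here crudely taken as the whole-frame average."""
    path = [origin]
    angle = init_angle
    sweep, step = 45, radius              # first step: +/-45 deg, full radius
    surround = img.mean()                 # stand-in for Search Radius * 5 area
    for _ in range(max_steps):
        cands = np.arange(angle - sweep, angle + sweep + 1, 15)
        scores = [sample_ray(img, path[-1], a, step) for a in cands]
        best = int(np.argmin([s[0] for s in scores]))
        if scores[best][0] > 0.8 * surround:   # end-of-tuft test
            break
        path.append(scores[best][1])
        angle = cands[best]
        sweep, step = 60, max(2, int(radius * 2 / 3))  # subsequent steps
    return path
```

On a synthetic image with a single dark line, the traced path follows the line from the origin and terminates near its far end.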
The final tuft angle for each frame is taken as the angle of the vector between the tuft endpoint, and the point 40% of the way along the path. By taking the vector across the latter 60% of the tuft, effects of tuft stiffness are greatly reduced (and the section of tuft "trapped" under the clear tape is avoided).
A tuft "curvature" metric is calculated as the angle difference between two vectors extending from the 70% position (one back to the 40% mark, and one to the tuft endpoint/100% mark). Even in extreme cases such as reversed airflow, the tufts should be close to straight beyond 40% of their total length, so this curvature metric is quickly able to identify tufts with questionably high curvature (suggesting the search algorithm may have become lost and wandered off the tuft), and the frame/s they occur on, for quick verification/debugging of the results.
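A minimal sketch of these two metrics (again illustrative Python rather than my Matlab code), assuming the traced path is a list of (row, col) points; the 40% and 70% marks are located by cumulative arc length, and I report curvature as 180 degrees minus the included angle so that a perfectly straight tuft scores 0:

```python
import numpy as np

def tuft_metrics(path):
    """Final angle (40% mark to tip) and a curvature metric: 180 degrees
    minus the angle between the vectors 70%->40% and 70%->tip, so a
    perfectly straight tuft scores 0. Path points are (row, col)."""
    pts = np.asarray(path, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    def point_at(frac):
        return pts[np.searchsorted(s, frac * s[-1])]
    p40, p70, tip = point_at(0.4), point_at(0.7), pts[-1]
    angle = np.degrees(np.arctan2(tip[0] - p40[0], tip[1] - p40[1]))
    v1, v2 = p40 - p70, tip - p70
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    curvature = 180.0 - np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return angle, curvature
```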
Initial unoptimised path algorithm showing significant oscillation/overshoot, some paths getting lost or incorrectly reversing, and the effects of lacking a dynamically adjustable brightness-based end-point identification method (i.e. some paths terminating too soon, or continuing beyond the end of the tuft).
Above are example results from the final optimised algorithm. 22 tuft paths were calculated across approximately 200 frames in about 2 seconds on a single core. The algorithm here is slightly less efficient than the one above, but errors were noticed less than 0.005% of the time during normal-use testing. The rare errors that did occur were consistently due to a poor choice of initial search radius combined with an excessively inaccurate initial tuft origin selection. The review UI as shown above allows the user to browse through the paths in all video frames, which, in combination with the tuft curvature metric, allows quick identification of any errors in the results.
Illustrated algorithm steps showing how a low accuracy origin selection finds and centres on the tuft.
Illustration of how starting at 40% path distance reduces the effect of tuft stiffness in angle estimate accuracy. Starting measurement even further along the tuft length increases the sensitivity of angle estimates to slight imperfections in the search path for negligible additional benefit.
Example of global frame motion tracking in action as it adjusts the user-selected origin point coordinate/s to match image translation.
After tuning the tracking algorithm, it was time to turn the results into usable outputs. The first output is a vector field showing average tuft angles over the duration of the input video. Interpolation between each tuft provides increased resolution, making the results more intuitive and readable. The risk of interpolation is that areas of complex flow may be oversimplified and thus inaccurate, but this is acceptable as long as the user is aware of the limitation. Such awareness is increased through various other outputs included below.
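The interpolation can be done many ways; here's an illustrative Python sketch using inverse-distance weighting (my choice for this sketch, not necessarily what the final program uses). One detail worth showing: angles should be averaged as unit vectors, so that, for example, 350 and 10 degrees blend to 0 rather than 180.

```python
import numpy as np

def interpolate_angles(xy, angles_deg, grid_x, grid_y, power=2):
    """Inverse-distance-weighted interpolation of per-tuft angles onto a
    finer grid. Angles are averaged as unit vectors to avoid wraparound
    artefacts. xy is an (n, 2) array of tuft coordinates."""
    ux = np.cos(np.radians(angles_deg))
    uy = np.sin(np.radians(angles_deg))
    gx, gy = np.meshgrid(grid_x, grid_y)
    out = np.zeros_like(gx, dtype=float)
    for i in range(gx.shape[0]):
        for j in range(gx.shape[1]):
            d = np.hypot(xy[:, 0] - gx[i, j], xy[:, 1] - gy[i, j])
            if d.min() < 1e-9:          # grid point sits exactly on a tuft
                out[i, j] = angles_deg[int(np.argmin(d))]
                continue
            w = 1.0 / d ** power
            out[i, j] = np.degrees(np.arctan2(np.sum(w * uy),
                                              np.sum(w * ux)))
    return out
```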
For quantitative comparison of angles, the individual tuft vectors can be printed/exported, along with their coordinates. To improve usability of the outputs (both numerical and visual), it is possible to apply a distortion shift to the results to eliminate the effects of camera perspective and fish-eye. The use of a consistently-spaced grid of tufts helps provide suitable anchor points for such a transformation. Additionally, a custom origin point (such as the upper-left-most tuft) can be used to shift all result coordinates for easier numerical comparison to CFD data via a common reference.
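Perspective correction of this kind is typically a projective transform fitted to known anchor points (e.g. four corner tufts of a regular grid, matched to their known physical spacing). A minimal direct-linear-transform sketch, to show the idea rather than a production-quality implementation (lens fish-eye would need an additional radial model, which this doesn't attempt):

```python
import numpy as np

def fit_homography(src, dst):
    """Solve the 3x3 projective transform mapping src points to dst via
    the direct linear transform; four or more anchor points needed."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)   # null vector = flattened H

def apply_homography(H, pts):
    """Map (x, y) points through H using homogeneous coordinates."""
    p = np.hstack([np.asarray(pts, dtype=float), np.ones((len(pts), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]
```

Once fitted from the anchor tufts, the same H maps every tuft origin and endpoint into the undistorted coordinate frame for comparison against CFD.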
The tufts I used in my couple of tests were not ideal (high stiffness, not straight), and neither was the rather rough 3D printed surface that I had taped them to, so I didn't go as far as looking at exact quantitative accuracy (despite it being one of the driving factors behind me attempting this project...). Instead I just visually compared surface flow results in CFD with the vector field. Below are the two results superimposed, taken on the pressure side of a simple aerofoil shape with varying angle of attack between the two halves (greater on the left), small endplates (not shown), and small Gurneys to create some more subtle spanwise flow components.
Generally the results are pretty good. There are some discrepancies at the top right and top left, but from looking through the search paths on each frame, they were doing a fine job of tracking the tuft, so the offender here is simply the poor physical setup I used. I didn't plan on trying to find the perfect wool or string for these initial tests; for now I just want the algorithm to work. Later, I'll give the program to the FSAE team and they can test it for real.
The next test used the same part/surfaces, but with an obstruction moved from left to right, then back again. This was to test the capability of the algorithm to track more chaotic tufts (particularly an issue when the tuft motion and frame rate result in a large area of blur), and also as a good showcase of the animated vector field output (both below).
The path tracking fell short of the ends of the tuft in a few places, but typically only where there was significant blur. These tests utilised only a 60 Hz frame rate, which was increased to 240 Hz for following tests to reduce motion blur. The main error was the tracking for the top-most tuft latching onto a background cable, however this was a consequence of the testing environment, which was specifically left un-lit and with no uniform background to create a worst-case scenario. Tufts extending beyond the trailing edge of a surface were not a good idea anyway, not least because of this issue.
Next was a separation test, which utilised the suction side of the same aerofoil shape (see first image below). Separation can be identified visually with reasonable confidence using the average vector field. To increase reliability and automate the separation identification, a separate formula is used that combines spatial rate of change of average angle between neighbouring upstream tufts with a check for flow that at least occasionally (>5% of frames) has a negative stream-wise velocity component. If these two checks both return positive (or for the front-most tufts, only the negative velocity criteria is assessed), then the affected tufts are flagged as exhibiting separated flow.
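A sketch of that two-criteria check (illustrative Python; the 5% reversed-flow fraction is as stated above, but the angular-gradient threshold and the use of angle magnitude beyond 90 degrees as the negative-streamwise-velocity proxy are assumptions of this sketch):

```python
import numpy as np

def flag_separation(mean_angles, inst_angles, grad_thresh=30.0,
                    reversed_frac=0.05):
    """Flag tufts as separated when (a) the mean angle changes sharply
    relative to the upstream neighbour AND (b) the tuft points upstream
    (|angle| > 90 deg) in more than reversed_frac of frames. The
    front-most tuft (index 0) uses only criterion (b).
    mean_angles: per-tuft averages along one streamwise row of tufts;
    inst_angles: per-tuft arrays of per-frame angles."""
    n = len(mean_angles)
    flags = np.zeros(n, dtype=bool)
    for i in range(n):
        rev = np.mean(np.abs(inst_angles[i]) > 90.0) > reversed_frac
        if i == 0:
            flags[i] = rev
        else:
            grad = abs(mean_angles[i] - mean_angles[i - 1])
            flags[i] = rev and grad > grad_thresh
    return flags
```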
In this test, no endplate was used, hence the flow re-attaches as the tip vortex forms near the edge of the suction surface. Prior to adding the rate of change criteria, the top right-most tuft was flagged as displaying flow separation in some tests due to its extreme angle.
The separation detection and average vector angles are superimposed in the second image below.
At this point I'd achieved the original goal of making a reliable tuft-tracking algorithm, and some results visualisations. I wanted to experiment further though and see how much information I could extract from these tufts.
I started looking at whether I could get a rough idea of relative transience/turbulence in different parts of the flow. Obviously this would have to be qualitative in terms of magnitude, but if the results are reliable enough, they may allow a quantitative relative comparison between tufts. In total, I tested 7 different independent formulas for "turbulence":
Total sum of angular travel of each tuft over all frames
Total sum of angular change in curvature (the two vectors as mentioned earlier) of each tuft over all frames
Angular width of a window encompassing the middle 50% of all instantaneous angles for each tuft (i.e. the interquartile range of angles)
RMS of the fluctuations in tuft tip velocity (thinking more quantitatively in terms of the equation for turbulent kinetic energy)
Count of oscillations of each tuft across its mean angle
Average tuft oscillation frequency about its mean angle
Sum of the power of all frequencies experienced by each tuft in the range of 1 to 10 Hz
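Three of these formulas are simple enough to sketch directly from a tuft's per-frame angle trace (illustrative Python; formula 4 here uses angular velocity fluctuation as a stand-in for tip velocity fluctuation, since the two are directly related for a near-straight tuft of fixed length):

```python
import numpy as np

def turbulence_metrics(angles_deg, fps):
    """Three of the candidate 'turbulence' formulas for one tuft:
    (1) total angular travel over all frames,
    (5) count of crossings of the mean angle, and
    (4, approximated) RMS fluctuation of angular velocity in deg/s."""
    a = np.asarray(angles_deg, dtype=float)
    travel = np.sum(np.abs(np.diff(a)))                  # formula 1
    centred = a - a.mean()
    crossings = np.sum(np.diff(np.sign(centred)) != 0)   # formula 5
    vel = np.diff(a) * fps                               # deg/s
    rms = np.sqrt(np.mean((vel - vel.mean()) ** 2))      # formula 4 analogue
    return travel, crossings, rms
```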
The results were tested on the separation video (as used above), and a cylinder wake scenario (11cm diameter, 5cm height, Re = 50000 approx.). The low height was a mistake in the end, as it meant there were no real oscillations in the wake region, and no recirculation thanks to the influence of the top/tip vortices. Unfortunately I didn't have a taller cylinder to use at the time. CFD results for the tested scenario are shown below, first showing the average surface flow comparison, and then contours of TKE.
"Turbulence" formulas 1 and 2 (total motion):
"Turbulence" formulas 3 and 4 (other methods):
"Turbulence" formulas 5, 6, and 7 (frequency-based):
There are two dominant trends among the seven results. One trend seemingly approximates a typical "wake" shape such as that defined by total pressure (although it's obviously not total pressure influencing this tuft behaviour...how could it?), while the other (formulas 5 and 6), when inverted, appears to be a reasonable approximation of TKE. I'm dubious of the merits of this though, given there's no direct mathematical connection. The oscillation frequency methods instead rely on more physical properties of the tufts (primarily flow disturbance and momentum), which result in high-frequency but small-scale oscillations even in otherwise undisturbed, steady flow. The total pressure similarity is also somewhat coincidental/situational; there's no physical correlation between local available energy magnitude and the rate of change of angular velocity.
Of all the results, formula 6 (average oscillation frequency) gives the best result for a TKE approximation (although at this point I'm not ruling out the possibility of coincidence). For a more intuitive "wake" visualisation, formulas 4 (tip velocity fluctuation) and 7 (frequency power) are both similar, but I prefer formula 4 for its slightly higher relevance to the mathematical definition of turbulence. There's also the possibility of combining formulas to fine-tune an estimate.
The separation case offered another chance to try the formulas out, but I didn't have CFD to compare against, so unfortunately there was no validation data. I also wouldn't be particularly happy with any tuft data from these tests anyway, due to the tuft stiffness and the surface roughness; the seven formulas will need to be tested much more rigorously at some point. For now, here are the two "best" formulas (4 (left) and 7 (right)) for the separation case (image rotation is the same as the rest of the separation tests above):
The tip velocity results show larger "turbulence" in the region I'd expect, with the left half of the surface seeing much lower values, as well as lower values on the right where the tip vortex is strongest.
I'm not sure what to make of the oscillation results. The dark blue area is consistent with the area of separation, but it is unclear if there is physical meaning to the red and cyan areas that correlates to a particular real flow metric. The only conclusion I can confidently make until further testing is conducted is that the tip velocity fluctuation formula seems to be a good qualitative visualisation of flow transience/large-scale turbulence.
Later, I had a look around some shops selling sewing and textile supplies to examine different thread and wool types that might improve the reliability of the results: cotton, polyester, polyethylene and silk, all with different strand weights and counts. The most promising was an elastic thread, which didn't hold bends or curves and was very flexible.
Elastic thread has good flexibility (left) and does not maintain bends or curvature (right), while remaining lightweight, low-profile, and easily visible.
I ran another test, this time with a larger and taller square cylinder and using the elastic thread. The results were not what I expected, and suggested that the pure cotton actually worked better. While the elastic thread was softer and more compliant, it suffered from three undesirable characteristics:
It was slightly grippy and occasionally got stuck at lower velocities, even on smooth surfaces
The elastic nature meant there was a significant spring/inertia effect in rapidly oscillating flow, and in high-angle flow
The cotton coating shifted around easily over the elastic core, resulting in inconsistent stiffness along the length of the tufts, and between different tufts, throwing off the turbulence estimations. The bare elastic core had far too much friction to be usable without the cotton coating.
As a result, angle estimations were less accurate, and turbulence estimations were too noisy to be interpretable.
The main drawback of the pure cotton that led to my search for an alternative was the bends, kinks, and curvature that it held. This caused inconsistencies between tufts in response to the airflow (although not as bad as the elastic). I expect pre-treating the thread by hanging it up with weights and steaming it will yield much straighter tufts and thus better results.
At least by running the new elastic test, I was able to confirm the code still worked fine for light tufts on a dark background, even with very low contrast and some blue tape to throw it off. The 47 tufts tracked below over 300 frames took about 10 seconds to compute on a single core. Parallel-processed "for" loops could be substituted in to significantly speed this up, instead of computing every tuft and frame in series.
Example of light tufts on a dark background with low contrast and a variable background brightness around the edges introduced by the blue tape. The tufts in the recirculation region obviously need to be spaced further apart.
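The parallelisation itself is straightforward since each tuft's frames are processed independently; Matlab's parfor is the direct equivalent of swapping the outer loop. In Python terms the pattern looks like the sketch below (a thread pool shows the shape; a process pool would give true parallelism for CPU-bound tracking work; track_one is a stand-in for the real per-tuft tracking function):

```python
from concurrent.futures import ThreadPoolExecutor

def track_one(tuft_id):
    """Stand-in for the per-tuft tracking work across all frames.
    Returns (tuft_id, result) so outputs can be collected by tuft."""
    return tuft_id, tuft_id ** 2    # placeholder computation

def track_all(tuft_ids, workers=4):
    """Run per-tuft tracking concurrently; each tuft is independent,
    so the outer loop parallelises trivially."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return dict(ex.map(track_one, tuft_ids))
```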
I'll add future results here if I end up testing a better cotton implementation. I want to try something a little simpler that avoids recirculation but still has some variety in transience, using some longer tufts (~40mm instead of these 30mm tufts) to help reduce the influence of stiffness. At this point, the best option may be to simply run the tests on the FSAE car, which offers high velocities, and the perfect flow field complexity (given it's exactly the application this algorithm was conceived for).