In the first two blogs of this three-part series we began laying the groundwork for “how often” you should take a Sound Velocity Profile. We first explained what exactly sound velocity is, which oceanographic parameters impact its value, and what errors in the measurement of each (e.g., conductivity, temperature, pressure) will do to your multibeam data. We then laid out the oceanographic and regional factors that can lead to challenging conditions. Such conditions can be geographic in nature (proximity to freshwater and saltwater mixing) or purely physical in nature (e.g., Kelvin-Helmholtz waves, internal waves). Improper measurement of such oceanographic structures has been shown to lead to enormous errors, but the question still remains: How “little” sampling can I get away with?
Have you ever flipped that question around and asked, “What happens to my survey data if I don’t sample enough?” Or, “Is there a point of diminishing returns on collecting more sound velocity data?”
Thankfully, the work1 of some REALLY smart people, John Hughes Clarke and Semme Dijkstra, helps us get a good handle on this.
Why do surveyors want to know how little sampling they can get away with?
“Time is money” is a phrase that rings particularly true in the survey industry, making it a cold, hard fact that stopping to do a sound velocity cast is money lost. AML provides a tool for you to quantify that cost in our MVP calculator.
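As a back-of-envelope illustration of that cost (this is not AML’s actual MVP calculator; the function name and every number below are hypothetical), you can estimate the daily downtime spent on stationary casts like this:

```python
# Hypothetical back-of-envelope estimate; not AML's MVP calculator.
# All rates and durations are illustrative assumptions.
def cast_downtime_cost(casts_per_day, minutes_per_cast, day_rate_usd):
    """Cost of survey time lost to stationary sound velocity casts."""
    downtime_hours = casts_per_day * minutes_per_cast / 60
    return day_rate_usd * downtime_hours / 24

# e.g. 12 casts/day at 15 minutes each on a $40,000/day vessel:
cast_downtime_cost(12, 15, 40_000)  # 3 hours of downtime -> 5000.0 USD/day
```

Even with modest assumed numbers, the lost time adds up quickly, which is exactly why surveyors push to sample as little as they can get away with.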
About 20 years ago, John Hughes Clarke – then head of the Ocean Mapping Group at the University of New Brunswick (UNB) and now leading the Center for Coastal and Ocean Mapping/Joint Hydrographic Center (CCOM/JHC) at the University of New Hampshire (UNH) – conducted a multibeam survey in conjunction with the Canadian Hydrographic Service on the northeast corner of Georges Bank off the coast of Maine. As you may recall from a prior blog, Georges Bank is an area known for internal waves due to the presence of a strong thermocline that migrates back and forth in response to tidal variations. The result is a region with high spatial and temporal variability in sound velocity structure. A Moving Vessel Profiler (MVP) was deployed during the survey with profiles taken every 2.3 minutes along 45 km transects in 100 m water depth.
The approach that John Hughes Clarke took during the data analysis was as ingenious as it was simple. Rather than running separate surveys with fewer casts, he simply downsampled the full set of profiles to simulate sparser sampling, illustrating how “little” he could get away with in terms of the impact that SV data density has on multibeam data quality.
Moving from top to bottom on the set of plots above, we first see all the SV data from every profile, ranging from 1515 m/s in pink to 1480 m/s in blue. We can see areas with strong stratification or layering, clear evidence of tidal mixing, and entry of cold water at depth near the end of the survey. Below that plot, the data is downsampled by increasing the interval between profiles from 2.3 minutes to 17.5, 35, 70, and finally 140 minutes. It’s worth noting that an interval of 2 hours and 20 minutes between profiles is by no means extreme, and for many organisations may be considered too frequent! Regardless, it is evident that the 140-minute data is not representative of the regional oceanography.
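Conceptually, the downsampling step amounts to keeping every Nth profile from the dense log. A minimal sketch (the function and the synthetic data here are our illustration, not from the study):

```python
# Sketch: simulate sparser sampling by thinning a dense profile log.
# Function name, data structure, and values are illustrative assumptions.
def downsample(profiles, base_interval_min, target_interval_min):
    """Keep every Nth profile so spacing approximates the target interval."""
    step = max(1, round(target_interval_min / base_interval_min))
    return profiles[::step]

# e.g. synthetic profiles every 2.3 min, thinned toward 140-minute spacing
dense = [{"t": i * 2.3, "sv": 1490.0} for i in range(1200)]
sparse = downsample(dense, 2.3, 140)  # keeps every 61st profile
```

The thinned series can then be compared back against the full-density data, which is exactly the comparison the plots above make visually.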
Following the downsampling, the data was interpolated between SV profiles by blending the two adjacent profiles, weighted toward whichever profile is geographically closest at any particular point.
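That distance-weighted blending can be sketched in a few lines. This is our hypothetical rendering of the idea, assuming both profiles share the same depth bins; the function and names are not from the paper:

```python
# Sketch of distance-weighted blending between two SV profiles.
# Assumes both profiles share the same depth bins; names are illustrative.
def blend_profiles(sv_a, sv_b, dist_a, dist_b):
    """Blend two profiles, weighting the geographically nearer one more."""
    w_a = dist_b / (dist_a + dist_b)  # closer profile -> larger weight
    w_b = dist_a / (dist_a + dist_b)
    return [w_a * a + w_b * b for a, b in zip(sv_a, sv_b)]

# Midway between two casts (equal distances), the blend is an even average:
blend_profiles([1490.0, 1485.0], [1494.0, 1489.0], 5.0, 5.0)  # [1492.0, 1487.0]
```

At the location of a cast itself the blend reduces to that cast, and it transitions smoothly between casts in between.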
Even at 17.5-minute intervals we can see differences in the data, and by the 140-minute version the results are no longer representative of the oceanography: the internal waves and tidal mixing are nonexistent, while the cold water influences 50% of the area when it should be around just 10-15%. One quick conclusion from this part of the study was that interpolation is only useful and productive if profiles are taken more frequently than the oceanographic conditions change. Following this conclusion, the analysis focused on how this translates down to the multibeam data.
The interpolated and real-time sound velocity data were separately applied to the multibeam data for each time interval. Difference grids were then created comparing the 2.3-minute data against the 17.5-minute data, and the 2.3-minute data against the 140-minute data. Root-mean-square (RMS) errors were calculated from these grids, and errors exceeding 5% were observed. These errors were greatest during times when the sound velocity structure was complex.
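For readers unfamiliar with the metric, the RMS error over a difference grid is just the square root of the mean squared difference between corresponding depth values. A minimal sketch (our illustration, with flattened grids as plain lists and made-up values):

```python
import math

# Sketch: RMS difference between two depth grids (flattened to lists),
# quantifying the error introduced by sparser SV sampling. Illustrative only.
def rms_difference(grid_ref, grid_test):
    """Root-mean-square of cell-by-cell depth differences."""
    diffs = [(r - t) ** 2 for r, t in zip(grid_ref, grid_test)]
    return math.sqrt(sum(diffs) / len(diffs))

# e.g. three 100 m cells with small depth discrepancies:
rms_difference([100.0, 100.0, 100.0], [100.1, 99.9, 100.2])  # about 0.141 m
```

A single number per grid pair makes it easy to compare sampling intervals, which is how the thresholds above were evaluated.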
From this study it was then concluded that under typical survey conditions:
- There is no advance indication of how complex the oceanography may be.
- Attempts to improve results using interpolation are only useful if the sampling period is shorter than the time or distance scale over which the oceanography changes.
What's changed since 1999?
About 20 years later, AML was provided another data set and analysis from Semme Dijkstra, Research Scientist at UNH, to revisit this methodology in a slightly different location – roughly 90 km shoreward of Georges Bank, in the Gulf of Maine, in significantly shallower water of roughly 15 meters. As before, the area is known for a strong thermocline during the summer months, which results in internal waves that propagate slowly over Georges Bank onto the continental shelf. The demarcation between two water masses ebbs and flows over tens of kilometers, leading to a complex sound velocity structure. As before, the survey utilized an MVP operating in continuous mode, with profiles taken every 2.3 minutes over a 45 km transect.
The profiles shown above are representative of how much the SV structure changes over time. By focusing on just an hour of the data, interesting patterns emerge.
1 Profile / Hour
If you take just a single profile every hour, the data seems acceptable, with errors increasing as you move from nadir to the outer beams. The interpolation appears to improve the data, but is this real? Well… let’s add in another profile.
2 Profiles / Hour
With two profiles, differences start to emerge between the observed and interpolated data, with a clear break in both data sets a few minutes after the second profile is applied. Again the interpolation helps: the addition of a second profile reduces the maximum observed error from over 3.5 cm to 3.0 cm. But could it just be “smoothing the error” across the length of the transect?
4 Profiles / Hour
With four profiles, the complex SV structure is very apparent and the errors show a further decrease.
8 Profiles / Hour
Finally, we double the number of profiles again to 8 per hour; the errors are further reduced and, once interpolated, are nearly absent.
Much has changed since 1999, but the direct correlation between accurate sound velocity data and quality survey data remains the same.
From both of these studies, we learned that there is no smoking gun to indicate when to do a profile, but there are consequences to not sampling enough.
I know this seems rather convenient, since AML just happens to sell a reliable piece of deck equipment to remove all uncertainty, but the data doesn’t lie. The improvements have been studied, measured, and presented by well-respected academics. If you wish to access any of the academic papers used to construct these blogs, please don’t hesitate to contact AML!
1 Hughes Clarke, J.E., Lamplugh, M., Kammerer, E. Integration of near-continuous sound speed profile information. Paper presented at the Canadian Hydrographic Conference 2000, Montreal, QC, 15-19 May. Retrieved from http://www.omg.unb.ca/omg/papers/HUGHES.PDF