Sonar savvy: using echo sounders to characterize zooplankton swarms

By Natalie Chazal, PhD student, OSU Department of Fisheries, Wildlife, & Conservation Sciences, Geospatial Ecology of Marine Megafauna Lab

I’m Natalie Chazal, the GEMM Lab’s newest PhD student! This past spring I received my MS in Biological and Agricultural Engineering with Dr. Natalie Nelson’s Biosystems Analytics Lab at North Carolina State University. My thesis focused on using shellfish sanitation datasets to look at water quality trends in North Carolina and to forecast water quality for shellfish farmers in Florida. Now, I’m excited to be studying gray whales in the GEMM Lab!

Since the beginning of the Fall term, I’ve jumped into a project that will use our past 8 years of sonar data, collected using a Garmin echo sounder during GRANITE project work with gray whales off the Newport, OR coast. Echo sounders are commonly used recreationally to detect bottom depth and to find fish. My goal is to use these data to assess relative prey abundance at gray whale sightings over time and space.

There are also scientific grade echo sounders that are built to be incredibly precise and exact in the projection and reception of sonar pulses. Both types of echo sounders can be used to determine the depth of the ocean floor, structures within the water column, and organisms swimming within the sonar’s “cone” of acoustic sensing. The precision and stability of scientific grade equipment allow us to answer questions about the specific species of organisms present, the substrate type at the sea floor, and even animal behavior. However, scientific grade echo sounders can be expensive, too large for our small research vessel, and require expertise to operate. For generalist foragers like gray whales, we can answer questions about relative prey abundance without such exact equipment (Benoit-Bird 2016; Brough 2019).

While there are many variations of echo sounders specific to their purpose, commercially available single beam echo sounders generally function in the same way (Fig. 1). First, a “ping,” or short burst of sound at a specific frequency, is produced by a transducer. The ping travels downward and, once it hits an object, some of the sound energy bounces off the object and some moves into it. The sound that bounces off the object is either reflected or scattered. Sound energy that is reflected or scattered back in the direction of the source is then received by the transducer. We can calculate the depth of the object from the ping’s two-way travel time (SeaBeam Instruments 2000).

Figure 1. Diagram of how sound is scattered, reflected, and transmitted in marine environments (SeaBeam Instruments, 2000).
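The depth calculation itself is simple. As a minimal sketch (the function name and nominal sound speed are my own illustrative choices, not values from the Garmin unit): the transducer measures the two-way travel time of the ping, and we halve the distance sound covers in that time.

```python
# Illustrative sketch: convert a ping's two-way travel time to depth.
# Assumes a nominal speed of sound in seawater of ~1500 m/s (the true
# value varies with temperature, salinity, and pressure).

SOUND_SPEED_SEAWATER = 1500.0  # m/s, typical nominal value

def depth_from_travel_time(two_way_time_s: float) -> float:
    """Convert a ping's two-way travel time (seconds) to depth (meters)."""
    # The ping travels down to the target and back, so halve the distance.
    return SOUND_SPEED_SEAWATER * two_way_time_s / 2.0

# A ping that returns after 0.04 s corresponds to a target ~30 m down.
print(depth_from_travel_time(0.04))  # → 30.0
```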

The data produced by this process are then displayed in real time on a screen on board the boat. Figure 2 is an example of the display that we see while on board RUBY (the GEMM Lab’s rigid-hull inflatable research boat):

Figure 2. Photo of the echo sounder display on board RUBY. On the left is a map used for navigation. On the right is the real-time feed, where the ocean bottom appears as the bright yellow area with a distinct boundary toward the lower portion of the screen. The more orange, “cloudy” layer above it is a mysid swarm.

Once off the boat, we can download the echo sounder data and process it in the lab to recreate echograms similar to those seen on board. Echograms are plotted with time on the x-axis and depth on the y-axis, and are colored by the intensity of the returned sound (Fig. 3). Echograms give us a sort of picture of what is in the water column. When we look at these images, we can infer what the objects are, given that we know what habitat we were in. Below (Fig. 3) are some example classifications of different fish and zooplankton swarms and what they look like in an echogram (Kaltenberg 2010).

Figure 3. Panel of echogram examples, from Kaltenberg 2010, for different fish and zooplankton aggregations that have been classified both visually (like we do in real time on the boat) as well as statistically (which we hope to do with the mysid aggregations). 
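Conceptually, an echogram is just a grid: each ping yields a column of return intensities sampled down the water column, and stacking successive pings side by side gives the time-by-depth image. A toy sketch (the sampling rate and intensity values here are made up for illustration, not taken from our actual pipeline):

```python
# Toy illustration of echogram assembly: each ping is a list of return
# intensities (one per receiver sample), and pings stack along the time
# axis. Assumed constants below are illustrative, not Garmin settings.

SOUND_SPEED = 1500.0   # m/s, assumed speed of sound in seawater
SAMPLE_RATE = 1000.0   # Hz, assumed receiver sampling rate

def sample_depth(sample_index: int) -> float:
    """Depth (m) represented by one sample in a ping's return series."""
    two_way_time = sample_index / SAMPLE_RATE
    return SOUND_SPEED * two_way_time / 2.0

# Three fake pings, each a list of return intensities (one per sample).
pings = [
    [0.1, 0.2, 0.9, 0.8],
    [0.1, 0.3, 0.9, 0.7],
    [0.2, 0.2, 0.8, 0.9],
]

# The echogram grid: rows = depth samples, columns = pings (time).
echogram = [list(col) for col in zip(*pings)]

print(sample_depth(2))  # depth of the third sample in each ping (~1.5 m)
print(echogram[2])      # intensities at that depth across time
```

Coloring this grid by intensity produces the kind of image shown in Figure 3.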

For our specific application, we are going to focus on characterizing mysid swarms, which are considered the main prey target of PCFG whales in our study area. From the echograms generated during GRANITE fieldwork, we can estimate relative mysid swarm densities, giving us an idea of how much prey is available to foraging gray whales. Because we have 8 years of GRANITE echo sounder data, with 2,662 km of tracklines at gray whale sightings, we are going to need an automated process. This is where image segmentation comes in! If we treat our echograms like photographs, we can train models to identify mysid swarms within them, reducing our echogram processing load. Automating and standardizing the process also helps reduce error.

We are planning to use U-Nets, a method of image segmentation in which the image goes through a series of compressions (encoders) and expansions (decoders), as is common when using convolutional neural networks (CNNs) for image segmentation. The encoder is generally a pre-trained classification network (CNNs work very well for this) that compresses the image into lower-resolution feature representations. The decoder then takes these low-resolution features and projects them back up to produce a segmented mask at full resolution. What makes U-Nets unique is that they re-introduce the higher-resolution encoder information into the decoder through skip connections. This allows the segmentation to generalize without sacrificing fine-scale detail (Brautaset 2020; Ordoñez 2022; Slonimer 2023; Vohra 2023).

Figure 4. Diagram of the encoder-decoder architecture for U-Nets used in biomedical image segmentation. Note the skip connections, illustrated by the gray lines connecting the higher resolution image information on the left with the decoder process on the right (Ronneberger 2015).
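The U-Net data flow can be sketched at a shape level in plain Python. This is only a conceptual illustration under simplifying assumptions: a real U-Net uses learned convolutions at every step, while here average pooling and nearest-neighbor upsampling stand in so the encoder, decoder, and skip connection are easy to follow.

```python
# Shape-level sketch of U-Net data flow (no deep learning framework).
# encode() halves resolution, decode() doubles it back, and
# skip_connect() re-attaches the encoder's high-resolution features.

def encode(image):
    """Downsample a 2D grid by 2 via average pooling (stand-in for conv + pool)."""
    h, w = len(image), len(image[0])
    return [
        [
            (image[r][c] + image[r][c + 1] +
             image[r + 1][c] + image[r + 1][c + 1]) / 4.0
            for c in range(0, w, 2)
        ]
        for r in range(0, h, 2)
    ]

def decode(features):
    """Upsample a 2D grid by 2 via nearest-neighbor repeats (stand-in for up-conv)."""
    out = []
    for row in features:
        doubled = [v for v in row for _ in (0, 1)]
        out.append(doubled)
        out.append(list(doubled))
    return out

def skip_connect(encoder_features, decoder_features):
    """Pair the two maps as 'channels', as a skip-connection concatenation does."""
    return [encoder_features, decoder_features]

echogram = [[float(r + c) for c in range(8)] for r in range(8)]  # fake 8x8 image
bottleneck = encode(echogram)               # 4x4: coarse, fine detail lost
upsampled = decode(bottleneck)              # back to 8x8, but still coarse
merged = skip_connect(echogram, upsampled)  # fine detail re-introduced

print(len(bottleneck), len(upsampled), len(merged))  # → 4 8 2
```

The `merged` output carries both the coarse, generalized features and the original fine-scale detail, which is exactly what the skip connections in Figure 4 provide.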

What we hope to get from this analysis is an output image containing only the parts of the echogram with mysid swarms. Once the mysid swarms are found within the echograms, we can use both the intensity and the size of each swarm as a proxy for the relative abundance of gray whale prey. We plan to quantify these estimates across multiple spatial and temporal scales, to link prey availability to changing environmental conditions and to gray whale health and distribution metrics. This application is what makes our study particularly unique! By leveraging the GRANITE project’s extensive datasets, this study will be one of the first to quantify prey variability in the Oregon coastal system and use those results to directly assess the effect of prey availability on gray whale body condition.

However, I have a little while to go before the data will be ready for any analysis. So far, I’ve been reading as much as I can about how sonar works in the marine environment, how sonar data structures work, and how others are using recreational sonar for robust analyses. There have been a few bumps in the road while starting this project (especially with disentangling the data structures produced by our particular Garmin echo sounder), but my new teammates in the GEMM Lab have been incredibly generous with their time and knowledge to help me set up a strong foundation for this project, and beyond.

References

  1. Kaltenberg A. (2010) Bio-physical interactions of small pelagic fish schools and zooplankton prey in the California Current System over multiple scales. Oregon State University, Dissertation. https://ir.library.oregonstate.edu/concern/graduate_thesis_or_dissertations/z890rz74t
  2. SeaBeam Instruments. (2000) Multibeam Sonar Theory of Operation. L-3 Communications, East Walpole MA. https://www3.mbari.org/data/mbsystem/sonarfunction/SeaBeamMultibeamTheoryOperation.pdf
  3. Benoit-Bird K., Lawson G. (2016) Ecological insights from pelagic habitats acquired using active acoustic techniques. Annual Review of Marine Science. https://doi.org/10.1146/annurev-marine-122414-034001
  4. Brough T., Rayment W., Dawson S. (2019) Using a recreational grade echosounder to quantify the potential prey field of coastal predators. PLoS One. https://doi.org/10.1371/journal.pone.0217013
  5. Brautaset O., Waldeland A., Johnsen E., Malde K., Eikvil L., Salberg A., Handegard N. (2020) Acoustic classification in multifrequency echosounder data using deep convolutional neural networks. ICES Journal of Marine Science 77, 1391–1400. https://doi.org/10.1093/icesjms/fsz235
  6. Ordoñez A., Utseth I., Brautaset O., Korneliussen R., Handegard N. (2022) Evaluation of echosounder data preparation strategies for modern machine learning models. Fisheries Research 254, 106411. https://doi.org/10.1016/j.fishres.2022.106411
  7. Slonimer A., Dosso S., Albu A., Cote M., Marques T., Rezvanifar A., Ersahin K., Mudge T., Gauthier S. (2023) Classification of Herring, Salmon, and Bubbles in Multifrequency Echograms Using U-Net Neural Networks. IEEE Journal of Oceanic Engineering 48, 1236–1254. https://doi.org/10.1109/JOE.2023.3272393
  8. Ronneberger O., Fischer P., Brox T. (2015) U-Net: Convolutional Networks for Biomedical Image Segmentation. https://doi.org/10.48550/arXiv.1505.04597