Classifying cetacean behavior

Clara Bird, Master's Student, OSU Department of Fisheries and Wildlife, Geospatial Ecology of Marine Megafauna Lab

The GEMM lab recently completed its fourth field season studying gray whales along the Oregon coast. The 2019 field season was an especially exciting one: we collected rare footage of several interesting gray whale behaviors, including GoPro footage of a gray whale feeding on the seafloor, drone footage of a gray whale breaching, and drone footage of surface feeding (check out our recently released highlight video here). For my master's thesis, I'll use the drone footage to analyze gray whale behavior and how it varies across space, time, and individuals. But before I can ask how behavior relates to other variables, I need to understand how best to classify the behaviors.

How do we collect data on behavior?

One of the most important tools in behavioral ecology is an 'ethogram': a list of defined behaviors that the researcher expects to see based on prior knowledge. It is important because it provides a standardized list of behaviors so the data can be properly analyzed. For example, without an ethogram, someone observing human behavior could record their subject as "walking" on one occasion but "strolling" on another when they meant the same thing. Pre-determining how behaviors will be recorded keeps data classification consistent throughout the study. Table 1 provides a sample from the ethogram I use to analyze gray whale behavior. The specificity of the behaviors depends on how the data is collected.

Table 1. Sample from gray whale ethogram. Based on ethogram from Torres et al. (2018).
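In code, an ethogram works like a lookup table: every observed behavior maps to a standardized definition and its broader category. The sketch below is only an illustration; the behavior codes and entries are made up, not taken from our actual ethogram.

```python
# Hypothetical slice of an ethogram as a lookup table.
# Codes, behaviors, and states here are invented for illustration;
# see Table 1 for real examples from our gray whale ethogram.
ethogram = {
    "TR": {"behavior": "travel", "state": "traveling"},
    "HD": {"behavior": "headstand", "state": "foraging"},
    "SS": {"behavior": "side swim", "state": "foraging"},
    "RE": {"behavior": "rest at surface", "state": "resting"},
}

# Every observer who logs "HD" means exactly the same thing,
# and analysis can roll specific behaviors up to primary states.
print(ethogram["HD"]["state"])  # foraging
```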

In marine mammal ecology, it is challenging to define specific behaviors because, from the traditional viewpoint of a boat, we can only see what individuals are doing at the surface. The most common method of collecting behavioral data is the 'focal follow': an individual, or group, is followed for a set period of time and its behavioral state is recorded at set intervals. For example, a researcher might follow an animal for an hour and record its behavioral state at each minute (Mann 1999). Some studies also record the location of the whale at each time point. When we use drones our methods are a little different: we collect behavioral data in the form of continuous 15-minute videos of the whale. While we collect data for a shorter time than a typical focal follow, we can analyze the whole video and record what the whale was doing at each second, with the added benefit of being able to review the video to ensure accuracy. Additionally, from the drone's perspective we can see what the whales are doing below the surface, which can dramatically improve our ability to identify and describe behaviors (Torres et al. 2018).

Categorizing Behaviors

In our ethogram, the behaviors are already categorized into primary states. Primary states are the broadest behavioral states; in my study, they are foraging, traveling, socializing, and resting. We group the specific behaviors we observe in the drone videos into these categories according to the function of each behavior. While our categorization is based on prior knowledge and critical evaluation, the process can still be somewhat subjective. Quantitative methods provide an objective interpretation of the behaviors that can confirm our broad categorization and provide insight into relationships between categories. These methods include path characterization, cluster analysis, and sequence analysis.

Path characterization classifies behaviors using characteristics of their track line. This method is similar to the RST method that fellow GEMM lab graduate student Lisa Hildebrand described in a recent blog. Mayo and Marx (1990) analyzed the paths of surface-foraging North Atlantic right whales and were able to classify the paths into primary states; they found that the path of a traveling whale was more linear than the paths of foraging or socializing whales, which were more convoluted (Fig. 1). I plan to analyze the drone's GPS track line as a proxy for the whale's track line to help distinguish between traveling and foraging in cases where the 15-minute snapshot does not provide enough context.

Figure 1. Figure from Mayo and Marx (1990) showing different track lines symbolized by behavior category.
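One simple way to capture the linear-versus-convoluted distinction is a straightness index: net displacement divided by the total distance traveled along the path. This sketch is just an illustration of the general idea, not the method Mayo and Marx actually used; the track coordinates are made up.

```python
import math

def straightness(track):
    """Straightness index of a track: net displacement divided by total
    path length. Values near 1 suggest linear (traveling-like) movement;
    values near 0 suggest convoluted (foraging-like) movement."""
    def dist(p, q):
        return math.hypot(q[0] - p[0], q[1] - p[1])
    path_length = sum(dist(a, b) for a, b in zip(track, track[1:]))
    if path_length == 0:
        return 0.0
    return dist(track[0], track[-1]) / path_length

# Hypothetical x/y positions (e.g., meters from a local origin)
linear_track = [(0, 0), (10, 1), (20, -1), (30, 0)]
looping_track = [(0, 0), (5, 5), (0, 10), (-5, 5), (0, 1)]
print(straightness(linear_track))   # close to 1
print(straightness(looping_track))  # much lower
```

A real analysis would work on projected GPS coordinates and likely combine straightness with other track metrics (speed, turning angle), but the core contrast between the two behavioral states is already visible in this one number.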

Cluster analysis looks for natural groupings in behavior. For example, Hastie et al. (2004) used cluster analysis to find that there were four natural groupings of bottlenose dolphin surface behaviors (Fig. 2). I am considering using this method to see if there are natural groupings of behaviors within the foraging primary state that might relate to different prey types or habitat. This process is analogous to breaking human foraging down into sub-categories like fishing or farming by looking for different foraging behaviors that typically occur together.

Figure 2. Figure from Hastie et al. (2004) showing the results of a hierarchical cluster analysis.
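The logic behind hierarchical clustering, like that in Hastie et al. (2004), can be shown with a toy example: start with every behavior in its own cluster and repeatedly merge the two closest clusters. The behavior names and frequency profiles below are invented for illustration, and this minimal single-linkage implementation stands in for the full method used in the paper.

```python
import math

# Hypothetical profiles: how often each surface behavior occurred in
# four observation sessions (made-up numbers for illustration only).
profiles = {
    "tail slap": [9, 8, 1, 0],
    "breach":    [8, 9, 0, 1],
    "arch":      [9, 9, 1, 1],
    "lunge":     [1, 0, 9, 8],
    "roll":      [0, 1, 8, 9],
    "dive":      [0, 1, 9, 9],
}

# Single-linkage agglomerative clustering: begin with each behavior in
# its own cluster and merge the two closest clusters until two remain.
clusters = [{name} for name in profiles]
while len(clusters) > 2:
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            d = min(math.dist(profiles[a], profiles[b])
                    for a in clusters[i] for b in clusters[j])
            if best is None or d < best[0]:
                best = (d, i, j)
    _, i, j = best
    clusters[i] |= clusters.pop(j)

print(clusters)  # two natural groupings of behaviors
```

Behaviors that tend to occur together in the same sessions end up in the same group, without anyone telling the algorithm what the groups mean; interpreting them (e.g., as different foraging tactics) is the ecologist's job.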

Lastly, sequence analysis also looks for groupings of behaviors but, unlike cluster analysis, it also uses the order in which behaviors occur. Slooten (1994) used this method to classify Hector's dolphin surface behaviors and found five classes of behaviors, with certain behaviors connecting the different categories (Fig. 3). This method is interesting because if certain behaviors consistently occur in the same order, that indicates the order of events is important. What function does a specific sequence of behaviors provide that the same behaviors out of that order do not?

Figure 3. Figure from Slooten (1994) showing the results of sequence analysis.
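The raw material for a sequence analysis is a transition table: how often each behavior is immediately followed by each other behavior. The short behavior sequence below is invented, and this counting step is only the first ingredient of the fuller statistical treatment in Slooten (1994).

```python
from collections import Counter, defaultdict

# Hypothetical per-second behavior labels from one drone video
sequence = ["travel", "travel", "dive", "forage", "forage",
            "surface", "dive", "forage", "surface", "travel"]

# Count consecutive pairs: (behavior, next behavior)
counts = Counter(zip(sequence, sequence[1:]))
totals = defaultdict(int)
for (a, _), n in counts.items():
    totals[a] += n

# Probability that behavior a is immediately followed by behavior b
probs = {(a, b): n / totals[a] for (a, b), n in counts.items()}
print(probs[("dive", "forage")])  # in this toy sequence, dives always lead to foraging
```

Transitions that occur far more often than chance would predict are the ones suggesting that order matters, which is exactly the question posed above.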

Think about harvesting fruits and vegetables from a garden: the order of how things are done matters and you might use different methods to harvest different kinds of produce. Without knowing what food was being harvested, these methods could detect that there were different harvesting methods for different fruits or veggies. By then studying when and where the different methods were used and by whom, we could gain insight into the different functions and patterns associated with the different behaviors. We might be able to detect that some methods were always used in certain habitat types or that different methods were consistently used at different times of the year.

Behavior classification methods such as those described here provide a more refined and detailed analysis of categories that can then be used to identify patterns in gray whale behavior. While our ultimate goal is to understand how gray whales will be affected by a changing environment, a comprehensive understanding of their current behavior serves as a baseline for that future study.

References

Burnett, J. D., Lemos, L., Barlow, D., Wing, M. G., Chandler, T., & Torres, L. G. (2019). Estimating morphometric attributes of baleen whales with photogrammetry from small UASs: A case study with blue and gray whales. Marine Mammal Science, 35(1), 108–139. https://doi.org/10.1111/mms.12527

Darling, J. D., Keogh, K. E., & Steeves, T. E. (1998). Gray whale (Eschrichtius robustus) habitat utilization and prey species off Vancouver Island, B.C. Marine Mammal Science, 14(4), 692–720. https://doi.org/10.1111/j.1748-7692.1998.tb00757.x

Hastie, G. D., Wilson, B., Wilson, L. J., Parsons, K. M., & Thompson, P. M. (2004). Functional mechanisms underlying cetacean distribution patterns: Hotspots for bottlenose dolphins are linked to foraging. Marine Biology, 144(2), 397–403. https://doi.org/10.1007/s00227-003-1195-4

Mann, J. (1999). Behavioral sampling methods for cetaceans: A review and critique. Marine Mammal Science, 15(1), 102–122. https://doi.org/10.1111/j.1748-7692.1999.tb00784.x

Mayo, C. A., & Marx, M. K. (1990). Surface foraging behaviour of the North Atlantic right whale, Eubalaena glacialis, and associated zooplankton characteristics. Canadian Journal of Zoology, 68(10), 2214–2220. https://doi.org/10.1139/z90-308

Slooten, E. (1994). Behavior of Hector's dolphin: Classifying behavior by sequence analysis. Journal of Mammalogy, 75(4), 956–964. https://doi.org/10.2307/1382477

Torres, L. G., Nieukirk, S. L., Lemos, L., & Chandler, T. E. (2018). Drone up! Quantifying whale behavior from a new perspective improves observational capacity. Frontiers in Marine Science, 5. https://doi.org/10.3389/fmars.2018.00319

Demystifying the algorithm

By Clara Bird, Master's Student, OSU Department of Fisheries and Wildlife, Geospatial Ecology of Marine Megafauna Lab

Hi everyone! My name is Clara Bird and I am the newest graduate student in the GEMM lab. For my master's thesis I will be using drone footage of gray whales to study their foraging ecology. I promise to talk about how cool gray whales are in a following blog post, but for my first effort I am choosing to write about something that I have wanted to explain for a while: algorithms. As part of previous research projects, I developed a few semi-automated image analysis algorithms, and I have always struggled with that jargon-filled phrase. I remember being intimidated by the term 'algorithm' and thinking that I would never be able to develop one. So, for my first blog I thought I would break down what goes into image analysis algorithms and demystify a term that is often thrown around but not well explained.

What is an algorithm?

The dictionary broadly defines an algorithm as "a step-by-step procedure for solving a problem or accomplishing some end" (Merriam-Webster). Imagine an algorithm as a flow chart (Fig. 1), where each step is some process applied to the input(s) to get the desired output. In image analysis the output is usually isolated sections of the image that represent a specific feature; for example, isolating and counting the number of penguins in an image. Algorithm development involves figuring out which processes to use in order to consistently get the desired results. In my previous image analysis work, these processes typically involved figuring out how to find a certain cutoff value. But before I go too far down that road, let's break down an image and the characteristics that are important for image analysis.

Figure 1. An example of a basic algorithm flow chart. There are two inputs: variables A and B. The process is the calculation of the mean of the two variables.
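The flow chart in Figure 1 translates almost directly into code: two inputs go in, one process runs, one output comes out. The function name here is just my own label for the chart's process step.

```python
def mean_of_two(a, b):
    """The flow chart's single process step: average the two inputs A and B."""
    return (a + b) / 2

print(mean_of_two(4, 10))  # 7.0
```

Real image analysis algorithms are just longer chains of steps like this one, where the output of each process becomes the input of the next.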

What is an image?

Think of an image as a spreadsheet, where each cell is a pixel and each pixel is assigned a value (Fig. 2). Each value is associated with a color, and when the sheet is zoomed out and viewed as a whole, the image comes together. In color imagery, also referred to as RGB, each pixel is associated with the values of the three color bands (red, green, and blue) that make up its color. In a thermal image, each pixel's value is a temperature. Thinking about an image as a grid of values helps in understanding the challenge of translating the larger patterns we see into something the computer can interpret. In image analysis this process can involve using the values of the pixels themselves or the relationships between the values of neighboring pixels.

Figure 2. A diagram illustrating how pixels make up an image. Each pixel is a grid cell associated with certain values. Image Source: https://web.stanford.edu/class/cs101/image-1-introduction.html
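The grid-of-values idea is easy to see with a tiny made-up image. In this sketch the array dimensions are rows × columns × color bands, exactly the spreadsheet analogy above with three stacked sheets for red, green, and blue.

```python
import numpy as np

# A tiny 2x2 RGB image: each pixel holds three values (red, green, blue)
image = np.array([
    [[255, 255, 255], [0, 0, 0]],      # white pixel, black pixel
    [[255, 0, 0], [0, 0, 255]],        # red pixel, blue pixel
], dtype=np.uint8)

print(image.shape)   # (2, 2, 3): rows, columns, color bands
print(image[0, 0])   # [255 255 255] -> a white pixel

# A grayscale version collapses the three bands to one value per pixel
gray = image.mean(axis=2)
print(gray[0, 0], gray[0, 1])  # 255.0 (white) vs 0.0 (black)
```

A thermal image has the same structure but only one band, with each cell holding a temperature instead of a color intensity.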

Our brains take in the whole picture at once, and we are good at identifying the objects and patterns in an image. Take Figure 3 for example: an astute human eye and brain can isolate and identify all the different markings and scars on the fluke. Yet this process would be very time consuming. The trick to building an algorithm to conduct this work is figuring out what processes or tools are needed to get a computer to recognize what is a marking and what is not. This iterative process is algorithm development.

Figure 3. Photo ID image of a gray whale fluke.

Development

An image analysis algorithm will typically involve some sort of thresholding. Thresholds are used to classify an image into groups of pixels that represent different characteristics. A threshold could be applied to the image in Figure 3 to separate the white color of the markings on the fluke from the darker colors in the rest of the image. However, this is an oversimplification, because while it would be pretty simple to examine the pixel values of this image and pick a threshold by hand, this threshold would not be applicable to other images. If a whale in another image is a lighter color or the image is brighter, the pixel values would be different enough from those in the previous image for the threshold to inaccurately classify the image. This problem is why a lot of image analysis algorithm development involves creating parameterized processes that can calculate the appropriate threshold for each image.

One successful method for determining thresholds is to first calculate the frequency of pixel values in each image, and then derive the appropriate threshold from those frequencies. Fletcher et al. (2009) developed a semi-automated algorithm to detect scars in seagrass beds from aerial imagery by applying an equation to a histogram of the values in each image to calculate the threshold. A histogram is a plot of the frequency of values binned into groups (Fig. 4); essentially, it shows how many times each value appears in an image. This information can be used to define breaks between groups of values. If the image of the fluke were transformed to grayscale, the values of the marking pixels would group around the value for white and the other pixels would group closer to black, similar to what is shown in Figure 4. An equation can then calculate where the break between the groups falls. Since this method calculates an individualized threshold for each image, it is a more reliable method for image analysis. Other characteristics, such as shape or area, could also be used to further filter the image.
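To make the histogram-to-threshold step concrete, here is a sketch using the well-known Otsu method, which picks the threshold that best separates two groups of pixel values. This is a stand-in illustration, not the specific equation used by Fletcher et al. (2009), and the synthetic "fluke" pixel values are made up.

```python
import numpy as np

def histogram_threshold(pixels):
    """Pick a threshold from the histogram of grayscale values (0-255)
    by maximizing between-class variance (Otsu's method). A stand-in
    for the histogram equation used by Fletcher et al. (2009)."""
    hist = np.bincount(pixels.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)                          # pixels below each value
    cum_mean = np.cumsum(hist * np.arange(256))    # running weighted sum
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0 = cum[t - 1] / total                    # weight of "dark" class
        w1 = 1 - w0                                # weight of "bright" class
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t - 1] / cum[t - 1]
        m1 = (cum_mean[-1] - cum_mean[t - 1]) / (total - cum[t - 1])
        var = w0 * w1 * (m0 - m1) ** 2             # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic "fluke": mostly dark pixels plus a small bright marking
rng = np.random.default_rng(0)
dark = rng.integers(20, 60, size=900)
bright = rng.integers(200, 240, size=100)
img = np.concatenate([dark, bright])
t = histogram_threshold(img)
print(t)  # lands between the dark and bright clusters
```

Because the threshold is recomputed from each image's own histogram, a brighter photo or a lighter-colored whale simply shifts the histogram, and the calculated break shifts with it.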

However, that approach is not the only way to make an algorithm applicable to different images; semi-automation can also be helpful. Semi-automation involves some kind of user input. After uploading the image for analysis, the user could also provide the threshold, or the user could crop the image so that only the important components were maintained. Keeping with the fluke example, the user could crop the image so that it was only of the fluke. This would help reduce the variety of colors in the image and make it easier to distinguish between dark whale and light marking.

Figure 4. Example histogram of pixel values. Source: Moallem et al. 2012

Why algorithms are important

Algorithms are helpful because they make our lives easier. While it would be possible for an analyst to identify and digitize each individual marking from a picture of a gray whale, it would be extremely time consuming and tedious. Image analysis algorithms significantly reduce the time it takes to process imagery. A semi-automated algorithm that I developed to count penguins from still drone imagery can count all the penguins on a 1 km² island in about 30 minutes, while it took me 24 long hours to count them by hand (Bird et al. in prep). Furthermore, the process can be repeated with different imagery and analysts as part of a time series without bias, because the algorithm eliminates the human error introduced by different analysts.

Whether it's a simple combination of a few processes or a complex series of equations, creating an algorithm requires breaking a task down to its most basic components. Development involves translating those components, step by step, into an automated process, which after many trials and errors achieves the desired result. My first algorithm project took two years of revising, improving, and countless trials and errors. So, whether you are creating an algorithm or working to understand one, don't let the jargon or the endless trial and error stop you. Like most things in life, the key is to have patience and take it one step at a time.

References

Bird, C. N., Johnston, D.W., Dale, J. (in prep). Automated counting of Adelie penguins (Pygoscelis adeliae) on Avian and Torgersen Island off the Western Antarctic Peninsula using Thermal and Multispectral Imagery. Manuscript in preparation

Fletcher, R. S., Pulich, W., & Hardegree, B. (2009). A semiautomated approach for monitoring landscape changes in Texas seagrass beds from aerial photography. https://doi.org/10.2112/07-0882.1

Moallem, P., & Razmjooy, N. (2012). Optimal threshold computing in automatic image thresholding using adaptive particle swarm optimization. Journal of Applied Research and Technology, 703.