Category Archives: Exercise 1

Exercise 1: Topographic Influence on GPS Collar Success

Ultimately I am interested in how topography hinders GPS collar fixes from being obtained, but for the purposes of this exercise I asked: How does sky availability change with scale?

A close-up of 6 test sites (red triangles): the raw sky availability raster (10x10 m resolution) prior to focal analysis (left) and the same spatial extent after performing mean focal statistics with a 25×25 moving window (right).

I already had a raster of sky availability for my study area, which is the proportion of sky unobstructed by topography (it does not take vegetation into account). I loaded the raster into R and used the function “focal” from the “raster” package to perform a mean moving window analysis. Focal analysis computes an output value for each pixel using a neighborhood of surrounding pixels. The moving window is a rectangular neighborhood over which the function is applied for each pixel in the raster, and it is specified in the “w” (weights) argument in the R code. The neighborhood can vary in size: the larger the neighborhood, the smoother the output values become. Alternatively, I could have used ArcGIS’s Focal Statistics tool. Below is an example of the R code I used for a moving window size of 3 pixels by 3 pixels (one pixel on either side of the focal pixel):

library(raster)  # the focal() function comes from the raster package
f3 <- focal(skyavail.rast, w = matrix(1, nrow = 3, ncol = 3), fun = mean, pad = TRUE, na.rm = TRUE)

I compared 7 different sizes of focal windows to the original sky availability values at 54 locations within part of my study area (the western slopes of the Cascade Mountain Range). The neighborhood sizes were 3, 5, 9, 11, 17, 21, and 25 pixels. The locations I used were the same test sites where I will be assessing GPS collar accuracy and fix success rate in the future.

After exporting the new focal window rasters to ArcGIS, I extracted values from each of the 7 focal rasters and from the raw sky availability data to assess how these values varied with different smoothing parameters. To accomplish this I used the Extract Multi-Values to Points tool in ArcGIS. With the moving window values added to the test site locations shapefile, I exported the attribute table as a .csv file and made graphs to compare and visualize the shift in values.
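The same comparison could also be done entirely in R. The sketch below is only an illustration: it loops over the seven window sizes and extracts the smoothed values at the test sites, assuming “skyavail.rast” is the raster from above and “test_sites” is a SpatialPointsDataFrame of the 54 locations (both object names are placeholders):

library(raster)
sizes <- c(3, 5, 9, 11, 17, 21, 25)
for (s in sizes) {
  f <- focal(skyavail.rast, w = matrix(1, nrow = s, ncol = s), fun = mean, pad = TRUE, na.rm = TRUE)
  # extract() pulls the smoothed value under each test site, analogous to Extract Multi-Values to Points
  test_sites[[paste0("focal_", s)]] <- extract(f, test_sites)
}
test_sites$raw <- extract(skyavail.rast, test_sites)
write.csv(as.data.frame(test_sites), "skyavail_focal_comparison.csv", row.names = FALSE)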

Sky availability values were extracted from 54 locations, testing 7 differently sized moving windows.

Looking at sky availability values across study sites made comparisons difficult, so I averaged the values for each neighborhood size across test sites and produced a line graph. This graphic shows a threshold relationship between sky availability and neighborhood size: the proportion of unobstructed sky is high for smaller window sizes and then decreases sharply after a neighborhood size of 9.

When sky availability values were averaged over the 54 sites, a clearer relationship among moving window sizes can be seen. The proportion of sky unobstructed by topography decreases with moving window size, possibly due to “over-smoothing.”

Since the resolution of these rasters is 10x10 m, my interpretation is that sky availability seems stable within 45 meters of a site (a 90x90 m window), but larger windows may smooth the values too much; this is the point where we lose detail that may influence GPS collar data. This was a useful exercise in learning how to perform focal statistics, but I’m not sure how to proceed from here to apply these results to GPS fix success.

Oregon State Landslide analysis

  1. Research Question

Oregon’s west coast has important highways that connect Washington and California, and the impact of landslides on traffic is huge. Landslides often occur in certain areas, and some of those areas contain national highways or other frequently used highways. Once a landslide occurs, travelers spend a lot of time detouring, which is very inefficient. In my research, I hope to find the most sensitive areas by analyzing the spatial pattern of a landslide susceptibility map together with a heat map, mark the dangerous areas, and make a high-risk area map. Based on that, the cost, severity, and timing of landslides can be analyzed to obtain more data on landslide impacts. In Exercise 1, a hot spot analysis of annual cost is generated.

2. Tools

I think hot spot analysis is most suitable for the Oregon landslide point data, so the Hot Spot Analysis tool in ArcGIS Pro was used.

3. Methodology

The steps for generating a hot spot analysis in ArcGIS Pro are simple. First, add the Oregon landslide point data to ArcGIS Pro. Then open the toolbox and search for Hot Spot Analysis; the tool appears as shown below.

Figure 1. Hot Spot Analysis Tool

For Input Feature Class, the point data are added. For Input Field, I chose Annual Cost for the analysis.

Then, clicking the Run button generates the hot spots based on annual cost.
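For readers working outside ArcGIS Pro, a comparable Getis-Ord Gi* calculation can be sketched in R with the spdep package. This is only an illustration: the file name, field name, and 50 km distance band below are assumptions, not the parameters actually used in this exercise:

library(sf)
library(spdep)
pts    <- st_read("landslides.shp")                    # assumed point shapefile of landslides
coords <- st_coordinates(pts)
nb     <- include.self(dnearneigh(coords, 0, 50000))   # fixed distance band of 50 km (assumed)
lw     <- nb2listw(nb, style = "B", zero.policy = TRUE)
pts$gi_star <- as.numeric(localG(pts$AnnualCost, lw))  # Gi* z-scores: strongly positive = hot spot, strongly negative = cold spot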

4. Result

Figure 2. Oregon State Landslide Hot Spot Analysis By Annual Cost Map
Figure 2. Oregon State Landslide Susceptibility Map
Figure 3. NW Oregon State Landslide Hot Spot Analysis Map

The map records the landslides that have occurred in Oregon since 1990. The hot spot analysis tool was used to analyze the map, with annual cost as the analysis parameter. Through observation, it is found that landslides occur most often in NW Oregon, but SW Oregon spends the most on landslides.
The susceptibility map shows that the entire west coast is highly susceptible to landslides. However, because most highways and important traffic corridors are on the west coast, landslide protection there needs a larger budget. As shown in Figure 2, NW Oregon has lower costs than SW Oregon, so the Oregon government should add budget in the northwest area to prevent landslides.

In the hot spot analysis map, the red points are hot spots and the blue points are cold spots, which represent clusters of similarly high or low annual cost values.

As shown in Figure 3, I chose NW Oregon for further analysis by location. In Figure 3, the lower right has many landslides, and the hot spot is shown in orange in this area, which means the landslides in this area are relatively similar, or the survey only occurred in this area. These areas surround Corvallis, Newport, and Monroe. Further analysis will be generated for this area because it has many roads, highways, and residential areas.

5. Critique

In the analysis process, the tools in ArcGIS Pro are very efficient, so it does not take much time to generate hot spots. Hot spot analysis is useful here because the research is about landslide distributions, and further information can also be generated by hot spot analysis, for example on annual cost, landslide volume, or landslide area. So I think the tools in ArcGIS are very effective. However, before using the Hot Spot Analysis tool, I failed to use the Create Space Time Cube tool, so there is no more detailed hot spot generation. This can be improved in later exercises.

In Exercise 1, due to the different soil characteristics of the various regions in Oregon, landslides mainly occur in certain specific areas. In the point data, these points represent the places where landslides occurred, and in high-risk areas these points are attracted to each other. Furthermore, the point data and the landslide susceptibility map are also associated: the red and blue high-risk areas are mutually attractive with the point data, while the green low-risk areas are mutually exclusive with the point data.

Hongyu Lu

04/17/2020

Unmanned aerial vehicle (UAV)-based photogrammetric approach to delineate tree seedlings

Exercise 1: Image classification for detecting seedlings based on UAV derived imagery

  1. The question that is being addressed in this study

Detection of trees is a primary requirement for most forest management practices and forestry research applications. Modern remote sensing techniques provide time- and cost-effective tree detection capabilities compared to conventional tree detection methods. Notably, the image classification approach is considered one of the vital tools associated with remote sensing applications (Lu and Weng, 2007). During the last few decades, scientists and practitioners developed various image classification methods to detect landcover and land-use features using remotely sensed data (Franklin et al., 2002; Gong and Howarth, 1992; Pal and Mather, 2003). However, the desired output of image classification may be affected by various factors, such as terrain conditions and their complexity, the remote sensing platform used and its performance, the image processing tools, and the classification approaches used (Lu and Weng, 2007). Therefore, in this part of the study, I am interested in evaluating “how feasible is detecting individual seedling crowns using unmanned aerial vehicle (UAV)-based RGB imagery by applying supervised and unsupervised classification techniques?”

Tree seedlings are considered individual entities, and the attraction or repulsion of seedlings may depend on how they are arranged or planted spatially. Additionally, attraction or repulsion of seedlings can be described based on their spatial arrangement, which changes according to their regeneration process. On a broader scale, both natural and artificial regeneration processes can be observed in the main study area. Systematically arranged, artificially regenerated seedlings represent attraction, while natural regeneration of seedlings represents some kind of repulsion. However, for this particular study, the northwest part of the study area was selected, where artificially regenerated seedlings were prominent. In this part of the study, two plots/areas of approximately 450 m2 were selected for image classification (Fig. 1). Plot 1 was used for collecting training data, and Plot 2 was used for assessing the accuracy of the image classification performed in this study.

Figure 1. Spatial arrangement of the plots used in this study. Plot 1 was used as the training site, and Plot 2 as the testing site for image classification.

2. Name of the tool or approach used and steps followed.

Image classification tools available in ArcMap 10.7:

  • Supervised image classification: Maximum Likelihood Algorithm
  • Unsupervised image classification:  Iso Clustering approach  

2.1 Agisoft Metashape

The Agisoft Metashape software was used to produce the orthomosaic image at the pre-processing stage of the UAV imagery. As shown in Figure 2, the collected UAV images (287 images) were aligned and georeferenced, with an additional step performed to attach GPS coordinates to each UAV image (Link 1: Appendix 1). After georeferencing the images, the dense point cloud was produced. Depending on the desired image quality and the available time and disk space, building the dense cloud can be carried out in five different ways (i.e., lowest, low, medium, high, highest); generally, the required computing power and processing time increase from lowest to highest. Finally, the orthomosaic image was produced and saved as a .tif file for use in image classification in the next stage.

Figure 2. Flow diagram of processing the orthomosaic image from the collected UAV imagery.

2.2 ArcMap

2.2.1 Supervised image classification

  • Collecting training data

The produced orthomosaic image was added to ArcMap 10.7 for image classification. Next, two different blocks were selected from the raw image for training and testing, Plot 1 and Plot 2, respectively (Fig. 1). The training data were collected using the image classification tool available in ArcMap 10.7. The draw polygon option was used to collect the training data, and data were collected covering the entire area (Fig. 3). The training sample manager table was used to merge the training data for each class, and a total of five classes were defined for this classification (Fig. 3). The total number of polygons per class depends on the total number of classes of interest; generally, for better classification, the number of polygons for each class should be about 10 times the total number of classes (i.e., number of polygons for class 1 = 10 x 5 = 50). Finally, the polygons were saved as a signature file, which is required for the image classification in the next stage.

Figure 3. RGB image of the training site (left) and the distribution of polygons used to collect training data (right), and the image of the attribute table detailing the classes and their respective outputs (bottom).
  • Collecting testing data and image classification

To evaluate the performance of the supervised image classification, a set of testing data points was collected over the testing area (Plot 2) (Fig. 4). The collected reference points were then converted into pixels using the conversion tools (Point to Raster) option available in the ArcMap toolbox. Next, the raster file of Plot 2 (Fig. 4) and the signature file from the training data set (from Plot 1) were used as input files to perform the Maximum Likelihood classification. Finally, a Spatial Analyst tool was used to combine the produced classified image and the reference raster data (from Plot 2) to assess the image accuracy parameters. The output of this process was used to create the confusion matrix and evaluate the accuracy of the classified image.
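For comparison, the same kind of supervised (maximum likelihood) classification can be sketched in R with the RStoolbox package. This is only an illustration under assumed names, not the ArcMap workflow actually used here: “ortho.tif” stands for the orthomosaic and “training.shp” for the digitized training polygons with a “class” attribute.

library(raster)
library(sf)
library(RStoolbox)
ortho <- brick("ortho.tif")                               # RGB orthomosaic (assumed file name)
train <- st_read("training.shp")                          # training polygons for the five classes (assumed)
fit   <- superClass(ortho, trainData = as(train, "Spatial"), responseCol = "class", model = "mlc")  # "mlc" = maximum likelihood
plot(fit$map)                                             # classified image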

Figure 4. RGB image of the testing site (left) and the distribution of testing data points used to assess the accuracy of the classified image (right).

2.2.2 Unsupervised image classification

For the unsupervised classification approach, the Iso Cluster method was used, which is available under the classification tool category. For this approach, we can define the number of classes we expect to see from the unsupervised classification, as well as the minimum class size and sample interval of interest. A total of 8 classes were used to get a better classification scheme; however, the initial eight classes were reduced to three major classes by comparing the orthomosaic and classified image features (Fig. 5). The Reclassify tool was used to merge similar classes to get the final classified image.
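A comparable unsupervised run can be sketched in R with RStoolbox::unsuperClass (a k-means clustering, so only an approximation of the Iso Cluster approach); the file name and the cluster-to-class mapping below are assumptions:

library(raster)
library(RStoolbox)
ortho  <- brick("ortho.tif")
uc     <- unsuperClass(ortho, nClasses = 8)                       # start with 8 clusters, as in the ArcMap run
lookup <- data.frame(from = 1:8, to = c(1, 1, 2, 2, 2, 3, 3, 3))  # example merge into 3 classes (assumed mapping)
final  <- reclassify(uc$map, as.matrix(lookup))                   # merge similar clusters, like the Reclassify tool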

Figure 5. The initial stage of the unsupervised classified image (left) and the reclassified image after matching the features of the RGB image (right).

The Spatial Analyst tool was again used to combine the produced classified image and the reference raster data (from Plot 2) to assess the image accuracy parameters. The output of this process was used to create the confusion matrix and evaluate the accuracy of the classified image.
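Once the classified and reference pixel values are paired up, the accuracy measures reported in the next section follow directly from the confusion matrix. The short R sketch below assumes a data frame “combined” with columns “reference” and “classified” (both names are placeholders):

cm <- table(reference = combined$reference, classified = combined$classified)
overall_accuracy   <- sum(diag(cm)) / sum(cm)    # proportion of all pixels classified correctly
users_accuracy     <- diag(cm) / colSums(cm)     # per class; commission error = 1 - user's accuracy
producers_accuracy <- diag(cm) / rowSums(cm)     # per class; omission error = 1 - producer's accuracy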

3. Results

Generally, the error matrix approach is widely used for assessing the performance/accuracy of image classification (Foody, 2002). Further, the error matrix can be used to produce additional accuracy assessment parameters such as overall accuracy, omission error, and commission error (Lu and Weng, 2007). In this study, the supervised image classification approach showed higher overall accuracy (94%) compared to unsupervised classification (93%). Detection of tree seedlings in the supervised classification approach showed 94% user accuracy and 10% commission error, whereas tree seedling detection based on unsupervised image classification showed 4% commission error and 94% user accuracy. Similarly, the supervised classification approach showed 97% producer accuracy and 3% omission error for the seedling class, while the unsupervised approach showed 94% producer accuracy and 2% omission error for the seedling class (Fig. 6).

Figure 6. The output of the supervised classification (top) and unsupervised classification (bottom).

4. Critique

Overall, the estimated accuracies show promising results for identifying tree seedlings using both supervised and unsupervised classification. However, estimated classification accuracy depends on several essential parameters, including sampling design, estimation, and analysis procedures (Stehman and Czaplewski, 1998). Additionally, the sampling strategy also plays an important role, especially in determining sample size, sample unit (either pixels or polygons), and sample design (Muller et al., 1998). Therefore, the observed results are highly dependent on the parameters described above. For this study, a random sampling approach was used to collect data covering the study area so as to represent most of the objects in the orthomosaic imagery.

Spatial resolution is another important factor that can affect image classification performance. One of the main advantages of UAV imagery is its fine resolution. Fine-resolution images reduce mixed-pixel problems while providing more landcover information compared to medium- and coarser-resolution imagery (Lu and Weng, 2007). Hence, fine-resolution images have a higher potential to provide better image classification results. However, there are a few drawbacks associated with fine-resolution imagery, such as the effects of shadows and high spatial variation within the same landcover class; these problems tend to reduce the accuracy of image classification (Cushnie, 1987). For example, the classifier may detect dark objects as tree seedlings because shadows may have similar pixel values to seedlings (Fig. 7).

Figure 7. RGB image showing the shadow of a cut-down tree stem (left) and unsupervised classified image for the same area (right).

Another potential problem associated with UAV-based orthomosaic imagery is errors associated with data processing. In particular, in this study I found low detection of seedlings in some areas of both the supervised and unsupervised classified images (Fig. 8). Figure 8 shows a coarse-scale pixelated area (circled) in the RGB image and the corresponding low detection of tree seedlings in the unsupervised and supervised classifications, respectively.

Figure 8. Figure showing the effect of coarse pixels for both the supervised (bottom) and unsupervised (top) classified images.

Considering the spatial aspects of the classified images, we can notice that some tree seedlings are missing compared to the RGB image (Fig. 9). This type of issue may occur due to poor health conditions of the seedlings or due to drawbacks associated with the image classification settings (i.e., the pixel values of these trees may be lower compared to those of healthy trees).

Figure 9. RGB image showing the spatial distribution of seedlings (left) and the distribution of the same seedlings detected by unsupervised classification (right).

Overall, this image classification exercise showed the potential of both supervised and unsupervised classification to detect tree seedlings. The observed minor drawbacks can be addressed by improving the quality of the image processing and classification.

Appendix 1

Link1 https://sfec.cfans.umn.edu/sites/sfec.cfans.umn.edu/files/photoscan-1.2-ortho-dem-tutorial.pdf

Future works:

In addition to the above classification methods, I tried the random forest classification approach. By changing the input band values for the random forest algorithm, the detection of tree seedlings can be enhanced (Fig. A1). However, further research is required to perform a better classification using the random forest approach.
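A minimal sketch of such a random forest run, again in R and under the same assumed file and field names as the earlier supervised example (RStoolbox fits the model via the randomForest package):

library(raster)
library(sf)
library(RStoolbox)
ortho  <- brick("ortho.tif")
train  <- st_read("training.shp")
rf_fit <- superClass(ortho, trainData = as(train, "Spatial"), responseCol = "class", model = "rf")  # random forest classifier
plot(rf_fit$map)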

Figure A1. Image showing the outputs derived from the random forest classification approach.

References

Cushnie, J.L., 1987. The interactive effect of spatial resolution and degree of internal variability within land-cover types on classification accuracies. Int. J. Remote Sens. 8, 15–29. https://doi.org/10.1080/01431168708948612

Foody, G.M., 2002. Status of land cover classification accuracy assessment. Remote Sens. Environ. 80, 185–201. https://doi.org/10.1016/S0034-4257(01)00295-4

Franklin, S.E., Peddle, D.R., Dechka, J.A., Stenhouse, G.B., 2002. Evidential reasoning with Landsat TM, DEM and GIS data for landcover classification in support of grizzly bear habitat mapping. Int. J. Remote Sens. 23, 4633–4652. https://doi.org/10.1080/01431160110113971

Gong, P., Howarth, P.J., 1992. Frequency-based contextual classification and gray-level vector reduction for land-use identification. Photogramm. Eng. Remote Sens. 58, 423–437.

Lu, D., Weng, Q., 2007. A survey of image classification methods and techniques for improving classification performance. Int. J. Remote Sens. 28, 823–870. https://doi.org/10.1080/01431160600746456

Muller, S.V., et al., 1998. Accuracy Assessment of a Land-Cover Map of the Kuparuk River Basin, Alaska: Considerations for Remote Regions. Photogramm. Eng. Remote Sens.

Pal, M., Mather, P.M., 2003. An assessment of the effectiveness of decision tree methods for land cover classification. Remote Sens. Environ. 86, 554–565. https://doi.org/10.1016/S0034-4257(03)00132-9

Stehman, S.V., Czaplewski, R.L., 1998. Design and Analysis for Thematic Map Accuracy Assessment: Fundamental Principles. Remote Sens. Environ. 64, 331–344. https://doi.org/10.1016/S0034-4257(98)00010-8

Exercise 1

Question:

Are there nonrandom spatial patterns in reported illness across the country?

Using data from Pakistan for the years 2011, 2013, and 2014, I explore rates of respiratory illness for districts across the country. I further examine whether these patterns vary by age (children under 5) and gender. Intuitively, I expect that underlying mechanisms such as poverty and pollution exposure will influence spatial clustering, i.e., poorer areas may have a greater incidence of illness, just as areas exposed to a greater pollution source would also have more people falling ill.

Attraction and Repulsion: One “attraction” mechanism is exposure to crop fires. Areas that have greater exposure may have higher rates of reported illness.  

Name of the tool or approach that you used

I used the Getis-Ord Gi* (hotspot) statistic to assess whether there is statistically significant clustering of high/low rates of reported respiratory illness across districts in the country. This initial analysis can be used to gauge whether areas are more or less susceptible, in order to later develop a causal understanding of the mechanisms that explain these illnesses.

Sources and Sinks: A univariate analysis of the count of reported illnesses across districts provides limited information, as less populated districts will have a lower count than more congested ones. To account for spatial patterns in population, I use the total number of people interviewed in each district as the base for calculating the rate of illness in each district.

Steps to complete the analysis

I used survey data from Pakistan on individual responses about illness and created a binary response variable: 1 if illness was experienced and 0 otherwise. The steps for analyzing the spatial distribution of this variable were as follows:

  1. Treating all family members in a household as independent respondents, I aggregated the count of illnesses across districts. I use this for choropleth maps (as in Figure 2) and also to calculate district-level rates for the hotspot analysis.
  2. For each district, I also aggregate the number of individuals interviewed and use this as the denominator to calculate the rate of illness across districts.
  3. I used the collapse command in Stata to get one observation per district from my entire dataset (an equivalent aggregation in R is sketched after this list).
  4. I then merged the above data with a district-level shapefile and subsequently carried out the hotspot analysis.
  5. The input field was the rates I had calculated, and I used the fixed distance conceptualization of spatial relationships to compute the Getis-Ord Gi* statistic.
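For readers not using Stata, the aggregation in steps 1–3 could be sketched in R as follows; “survey” is an assumed data frame with a “district” column and a 0/1 “ill” indicator (both names are placeholders):

library(dplyr)
district_rates <- survey %>%
  group_by(district) %>%
  summarise(n_interviewed = n(),                     # denominator: individuals interviewed in the district
            n_ill         = sum(ill),                # count of reported respiratory illness
            rate_ill      = n_ill / n_interviewed)   # district-level rate used in the hotspot analysis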

Results

At this stage, I am only looking at the distribution of my outcome variable. It is not possible to draw inferences, since the sampling is not representative of the district population. My purpose at this point is to explore the data and then move on to an analysis of exposure to see if there is an association between this outcome variable (respiratory illness) and a household’s proximity to fire locations.

The results of the hotspot analysis (Figure 1) show that significant clusters exist in Punjab. These results make intuitive sense, as the province has the largest share of the country’s population. The results also indicate that agricultural crop residue burning (occurring primarily in Punjab) may be a worthwhile cause to explore further in the analysis.

Figure 2 is a mapping of point data using household addresses that may later be used to explore household-level explanations of the causes of illness in a community. Figures 3–5 are spatial distributions based on aggregate as well as yearly data. It is evident that the patterns exhibited by the simple count of reported illnesses do not predict the presence of the statistically significant clusters presented in Figure 1. This is likely because the districts with higher counts also have higher population density.

Figure 1: Spatial Clustering of Respiratory illnesses across districts
Fig 2: Point locations of households interviewed
Figure 3: Count of individuals with respiratory illness across districts
Figure 4: Individuals interviewed across districts
Figure 5: Rate of illness across districts

Critique

The analysis at present is at the district level, which does not reflect household-level variation. Such broad aggregation does not tell us much about the neighborhood-level factors (access to sanitation, type of energy source, exposure to pollution, etc.) that are likely to be strongly associated with the current outcome variable. Another limitation of the present analysis is that the district level of aggregation limits the use of spatial regression due to small n: there are only 77 districts covered in the survey.

However, the hotspot analysis is useful to understand the distribution of respiratory illnesses. It may be more useful for my research if I am able to do a hotspot analysis of the residuals of my regression model.

Exercise 1: Comparison of Interpolation Methods to Estimate Zooplankton Density

Question asked

A key to understanding the foraging patterns of predators such as gray whales is having a grasp of the patterns of their prey, which in the case of gray whales is zooplankton. The distribution and abundance of zooplankton are often described as patchy and variable. Therefore, my question for Exercise 1 was “What is the spatial and temporal pattern of zooplankton density in the inshore waters off Port Orford, OR?”. To address this question, I tested different types of interpolation on my point data of prey density sampled at 12 stations during the month of August 2018.

Approaches/tools used and methods

To address the question, I wanted to explore various types of interpolation to see how the density of prey collected at my sampling stations mapped onto my entire study site, filling the gaps where we do not have known prey densities. I had planned on doing this in ArcGIS (a program I am not too familiar with); however, due to the slowness of the remote connection and my lack of knowledge/skills, I decided to teach myself how to do interpolations in R instead, since I am more savvy in R. I quite enjoyed the process of trying out new things and figuring out kinks in lines of code that I found online. Since interpolations in R were also new to me, I followed a very helpful tutorial. However, because of this, I was limited to the types of interpolation presented in said tutorial, namely Inverse Distance Weighting (IDW), 1st- and 2nd-order polynomials, and kriging. The other main type of interpolation not covered in this tutorial was splines. However, after some research, I discovered that splines tend not to do well when there is a lot of variation in the data being interpolated. After some initial data exploration, I found that there was significant variation in my data, such that on most days 1-2 stations had a very high density while the other 4-5 had a relatively low density. Therefore, I decided it was not too important that I was unable to run a spline interpolation.

Raw relative density values of zooplankton at sampling stations in Mill Rocks (left) and Tichenor Cove (right) on August 2nd in 2018.

I will be sharing code for how to make interpolations in R during my tutorial, but I will outline the rough steps here. First, I imported coordinates (latitude, longitude) for the boundary of my study sites and converted them into a spatial polygon. Then, after splitting my data into daily data frames, I converted them into spatial points. I used several packages that contain the functions to run the different steps of data preparation and interpolation, including ‘spatstat’, ‘gstat’, ‘tmap’, and ‘sp’. All of these packages have other dependencies that also need to be installed and loaded before these packages and functions will run.
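As a rough illustration of what the IDW step can look like (not my actual project code), the sketch below assumes “daily_pts” is a SpatialPointsDataFrame with a “density” column and “site_poly” is the study-site boundary polygon:

library(sp)
library(gstat)
library(raster)
grd <- as.data.frame(spsample(site_poly, type = "regular", n = 10000))   # regular prediction grid inside the site
names(grd) <- c("x", "y")
coordinates(grd) <- ~x + y
gridded(grd) <- TRUE
proj4string(grd) <- proj4string(daily_pts)
idw_out <- gstat::idw(density ~ 1, daily_pts, newdata = grd, idp = 2)    # inverse distance power of 2
idw_r   <- mask(raster(idw_out), site_poly)                              # clip the surface to the site boundary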

Before I move on to some preliminary results, I want to explain some of the core differences between the different kinds of interpolations I employed.

IDW is a deterministic interpolation method that relies on surrounding values to populate the resulting surface. One of the major assumptions is that the influence of the variable being mapped decreases as you move away from a given measured value. In IDW, the weight given to any known point is proportional to the inverse of its distance from the prediction location (raised to a power), so it decreases as you move farther away from that point.

1st- and 2nd-order polynomials fit a smooth surface, defined by a mathematical function, to the data points. Polynomial surfaces change gradually and are therefore good at capturing coarse changes in the data. It is a bit like fitting a flat sheet of paper to a surface of points and trying to find the best straight fit; that is what a 1st-order polynomial interpolation does. In a 2nd-order polynomial, the ‘sheet of paper’ is allowed one fold. Polynomials tend to be very good at representing a gradual change in the variable of interest.

Kriging is different from IDW and polynomials in that it is not a deterministic interpolation but a geostatistical approach. Kriging is a little more advanced than the other interpolations because it not only considers the actual value of the variable at a spatial location but also involves an exploration of the spatial autocorrelation between points. Therefore, kriging not only produces a predicted surface but also provides a measure of the certainty or accuracy of the predictions.
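For reference, an ordinary kriging version of the same prediction (simpler than the universal kriging shown in the figures below) could be sketched with gstat as follows, reusing the assumed object names from the IDW sketch; the spherical variogram model is only an illustrative choice:

library(gstat)
vg     <- variogram(density ~ 1, daily_pts)              # empirical semivariogram
vg_fit <- fit.variogram(vg, model = vgm("Sph"))          # fit a spherical model (assumed choice)
kr_out <- krige(density ~ 1, daily_pts, newdata = grd, model = vg_fit)
# kr_out$var1.pred holds the predictions; kr_out$var1.var is the kriging variance,
# i.e., the measure of prediction uncertainty mentioned above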

Results

The resulting interpolations for my two field sites can be found below. I have selected the interpolations from the same date at both sites so that some initial visual comparisons can be made.

IDW interpolations for Mill Rocks (left) and Tichenor Cove (right).
1st order polynomial interpolations for Mill Rocks (left) and Tichenor Cove (right).
2nd order polynomial interpolations of Mill Rocks (left) and Tichenor Cove (right). The interpolation for Mill Rocks is labeled 1st order polynomial because the 1st- and 2nd-order polynomials for Mill Rocks were identical for every day, so the same interpolation is presented here.
Universal kriging interpolations of Mill Rocks (left) and Tichenor Cove (right).

Critiques of the method

Based on my research on the different interpolations, it seems that my data are best suited to IDW interpolation. This is because I have very few data points (only 6 stations per site) and there is quite a lot of variation in zooplankton density within a site on the same day. Since polynomials and kriging apply functions and equations, they will inevitably smooth the data to some extent. However, I am precisely trying to capture this variability in the data, as my next step will be to see whether gray whales prefer sites that have high densities of prey. Therefore, if I select an interpolation that smooths out this variability, I may not find the right relationships between predator and prey that I am investigating.

Even though the polynomials may make sense, as they seem to imply there is a gradient of density across the field sites and the raw data may in fact reflect this, the polynomials (and the kriging too) create very unrealistic values. If you look at the density scales next to the plots, you will see that both the polynomial and the kriging interpolations end up with negative density values, which, ecologically, does not make any sense.

I still have a lot of refinement to do with these interpolations, including calculating root mean square errors for each of the interpolations to quantitatively compare them to one another to see which one performs best. Furthermore, since I think the IDW interpolations are the most suitable based on my data, I think it would be good to continue exploring the different interpolation variables, like grid cell size or buffers around the points, to see what variations will look best and most ecologically relevant.

Assessing the incremental and spatial autocorrelation of Particulate Organic Phosphorus (POP) in the ocean near Bermuda (Exercise 1)

1. Research Question
In order to run the right HYSPLIT back-trajectory analyses, the first thing I must do is assess the spatial and temporal pattern of POP. The initial POP data look clustered; however, more spatial statistical analysis is needed to determine where the strongest clustering occurs, with exact coordinates and times. Hence, the questions I would like to ask for Exercise 1 are: what is the cluster pattern of POP, and when does the highest value of POP occur?

2. Tools and Approach
I decided to use incremental and spatial autocorrelation, comparing Hot Spot Analysis (with a fixed distance band) and Local Moran’s I. By looking at the results of those tools, I can decide which one is appropriate for my data.
I used the Spatial Statistics tools in ArcMap 10.7.1:
a) Analyzing patterns using Incremental Spatial Autocorrelation and Spatial Autocorrelation (Moran’s I).
b) Mapping clusters using hot spot analysis (Getis-Ord General G and Optimized Hot Spot) and Cluster and Outlier Analysis (Anselin Local Moran’s I).

3. Step of Analysis
a) My POP data consist of many values at different depths for each coordinate, so in order to start analyzing the spatial and temporal pattern I needed to aggregate the POP data by averaging the values. This gave one POP value for each X and Y coordinate. I used Matlab for this step, with code as follows (an R equivalent of steps a and b is sketched after this list):

b) The next step was determining the rank of POP for clustering purposes. The rank is defined using the mean value: POP values higher than the mean are considered high, and POP values lower than the mean are considered low. I also used Matlab to perform this step, with code as follows:

c) Since the Matlab output is in txt format, I loaded this data into ArcMap to display the X and Y data and converted it to a shapefile.
d) I analyzed the pattern using spatial autocorrelation (Moran’s I) with “fixed distance band,” “Euclidean distance,” and no standardization. The second pattern analysis used Incremental Spatial Autocorrelation with “row standardization.”
e) The mapping clusters were generated as follows:
1. Cluster and Outlier Analysis (Anselin Local Moran’s I) with “fixed distance band,” “Euclidean distance,” and no standardization.
2. Hot spot analysis using Getis-Ord General G with “fixed distance band” and “Euclidean distance,” and Optimized Hot Spot with the “Snap nearby incidents to create weighted points” aggregation method.
f) Join the attribute table of the initial data to the attribute table of the spatial autocorrelation result to get the highest clustered POP values by coordinate and time.
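As referenced in step (a), since the Matlab code itself is not reproduced in the text here, below is a hypothetical R equivalent of steps (a) and (b); “pop_df” is an assumed data frame with columns x, y, and pop:

library(dplyr)
pop_xy <- pop_df %>%
  group_by(x, y) %>%
  summarise(mean_pop = mean(pop, na.rm = TRUE), .groups = "drop") %>%   # step (a): one averaged POP value per coordinate
  mutate(rank = ifelse(mean_pop > mean(mean_pop), "high", "low"))       # step (b): rank relative to the overall mean
write.table(pop_xy, "pop_mean_rank.txt", row.names = FALSE)             # txt file that can then be loaded into ArcMap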

4. Result
a) The data were reduced from 929 points to 82 points after averaging.

Viewed in a three-dimensional graph, the pattern of POP appears clustered over the five-year time interval; however, I need a more robust spatial autocorrelation analysis to identify the most strongly clustered values.

b) The pattern analysis was performed based on the rank values, and the result is as I expected: the highest values are clustered in one area.

To strengthen the certainty of the result, I also performed the incremental spatial autocorrelation, and the result is similar to that of Moran’s I.

c) The Mapping clusters

The maps show the results of the different cluster mapping methods. Both Local Moran’s I and the hot spot analysis indicate a similar result, with the hot spot and high values clustered at the black dots. Based on this result, the HYSPLIT back trajectories will be run for the black dots in the Moran’s I cluster.

5. Critique of the method – what was useful, what was not?
My data are in fact already clustered in one area; however, it would consume a lot of effort and time to analyze HYSPLIT back-trajectory data for every date of the year within that clustered area. This spatial autocorrelation analysis gives me a defensible basis for choosing the spatial and temporal window for the HYSPLIT back-trajectory analysis.
However, I am not sure whether I chose the right conceptualization of spatial relationships; I need to deepen my knowledge of how to determine the conceptual spatial relationship from the initial data, because once the conceptualization of spatial relationships changes, the resulting z-scores change as well.

Looking for Ancient Surfaces using Kernel Density Interpolations

Primary Question

The main question I sought to answer within this first exercise is whether the artifacts in an assemblage are dispersed or aggregated. Using this assemblage as a point pattern, I implemented an interpolation method based upon the density of physical points in space. The first hypothesis I am seeking to assess is whether my assemblage is clustered within physical space, regardless of artifact attributes. I am moving forward with the assumption that artifact attributes do not produce an internal source or sink in my dataset. The main reason it may appear that similar artifact attributes are attracted to one another in space is that the external force behind their creation produces very consistent patterns in this regard. Human behavior is the external factor primarily responsible for the perceived sources and sinks in typical artifact assemblages. For this reason, I have set aside artifact attributes (e.g., length, width, material, type) in this initial hypothesis.

In addition to the previous assumption, since the dataset is confined within a depth of 1 meter, identifying clustered behavior in this artifact assemblage is not sufficient in 2 dimensions. I must consider how the artifacts within this assemblage are clustered in 3-dimensional space involving longitude, latitude, and depth. Where many may attempt to cluster within this expanded dimensionality, I have chosen to segregate the dataset into horizontal segments based upon observable characteristics that come from visualizations of the artifacts in 2-D and 3-D space. As a result of this approach, there is potential for a higher amount of error, because I have made visual judgments based upon what I can see in the assemblage, with little quantification present in this step.

In conclusion, I seek to identify dispersion within the artifact assemblage in 2-Dimensional segments varying in vertical extent using interpolation methods based upon the kernel density function.

Proceeding with Interpolation

A tool that I used to assess the dispersion within the horizontal segments of the artifact assemblage is the kernel density function ‘kde3d’, located within the R package ‘misc3d’. Since this tool has already been created and established in R, the first step I undertook was passing the proper coordinates for each artifact to ‘kde3d’. One of the main parameters in this function is the number of grid points to use; ultimately, this number relates to the accuracy and “smoothness” of the resulting image. A density value is then assigned to each grid cell within the cubic extent of the data frame. When the resulting values are rendered as contour surfaces using ‘contour3d’, located in the same package, I am able to draw these contours in order to visualize the 3-D kernel density of the artifact assemblage (Figure 1). The residual points not included in the contoured features are outliers and do not constitute a highly dense area. Ultimately, Figure 1 shows areas that contain a significant cluster of artifacts. I then used the resulting 3-D kernel density image (Figure 1) in conjunction with a vertical window into the artifact assemblage (Figure 2) in order to identify the depths that I chose as my horizontal segments.
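A condensed sketch of this 3-D kernel density step is shown below; “artifacts” is an assumed data frame with easting, northing, and elevation columns, and the grid size and contour level are illustrative choices:

library(misc3d)
d3 <- kde3d(artifacts$easting, artifacts$northing, artifacts$elevation, n = 40)   # n controls grid resolution and smoothness
contour3d(d3$d, level = quantile(d3$d, 0.95), x = d3$x, y = d3$y, z = d3$z)       # isosurface around the densest areas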

Figure 1 3-D Kernel Density

Figure 2 represents one part of the process that was undertaken in order to judgmentally determine the most significant surfaces to analyze within 2-D horizontal segments, with the x-axis assigned to the Easting and the y-axis assigned to the Elevation. The areas, or levels, chosen here will be referred to as surfaces of interest. These have been determined to be significant because the artifacts appear to follow an increasing trend along a similar angle as the recorded topographic surface, there is a gap between artifacts and these intervals representing a possible period with no occupation, and there is an aggregation of artifacts within each surface of interest. This is just a single example of a window cut into the dataset; similar procedures were done with multiple other windows. Figure 3 represents an example of what I refer to as a “window” cut into the dataset within a 1-meter interval spanning from West to East across the entire site extent.

I have chosen five initial horizontal segments to analyze further. These horizontal segments have been analyzed for clustering using the ‘density’ function in the R package ‘spatstat’ (Figure 4). This density function uses a similar kernel density calculation, but instead of 3-D space, I created an interpolated surface within 2-D space based upon the density of artifacts contained in each surface interval. Using the ‘spatstat’ package, I separated the artifact types at each surface in order to visualize spatial patterns among all the artifacts and then among each artifact type at every 5 cm interval in my dataset. For the purpose of this exercise, I have chosen to only include the density plots representing the artifacts at the five identified surfaces in the dataset (Figure 4).
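A minimal sketch of this 2-D step for one horizontal segment, under assumed names (“seg” for the artifacts in that segment and “xr”/“yr” for the site extent; the bandwidth value is illustrative):

library(spatstat)
pp   <- ppp(seg$easting, seg$northing, window = owin(xrange = xr, yrange = yr))  # point pattern for this segment
dmap <- density(pp, sigma = 0.5)                                                 # kernel density surface; sigma = bandwidth
plot(dmap)                                                                       # interpolated density for this surface of interest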

Figure 2 Vertical windows cut into the artifact assemblage. Moving from left to right mirrors moving from West to East.
Figure 3 Plan view of archaeological site representing the method used to determine surfaces of interest.

Results

Resulting from this procedure, I was able to visually identify clustered areas of artifacts within both 2-dimensional and 3-dimensional space. Figure 1 identified high-density areas of artifacts in 3-D space and facilitated the identification of the five primary horizontal segments of the dataset that I will focus on. Within these five segments, I have analyzed the interpolated density surfaces containing all artifacts and the surfaces containing a single type of artifact (Figure 4). There appear to be clustered areas within the artifact assemblage that may be representative of different human activities at this site. Figure 4 shows highly dense areas of artifacts in specific areas across the entire site. As we move between the different surfaces in Figure 4, the dispersion and density of artifacts change as well. Thus, there appears to be some kind of variable that is shifting the spatial distributions of artifacts throughout time in this assemblage. The next step is to analyze the type of variable that may be affecting artifact distribution throughout each horizontal segment.

The primary critique of this interpolation approach has to do with the scale of investigation. For a specific example, my dataset has one entry labeled as Fire Cracked Rock (FCR). Even though there is only one count of FCR in my data, when I interpolate the surface containing this object, the result shows a very prominent density in the surrounding area. If one looks closer, they will notice that the scale of this particular frame is adjusted to indicate a density value less than 0.5. This value occurs because, in the dataset, there is only one object identified as FCR. Thus, visually, there appears to be a significant amount of FCR in the identified area, but in reality there is only a single object in that location. So, in light of this critique, I must adjust my interpretation of which artifacts present a clustered area and which ones could simply be outliers within the dataset. Visualizing point patterns in 2-D and 3-D space comes into play here in order to mitigate the effects of this misrepresented data.

Figure 4 Density maps of the five surfaces of interest

Ex. 1 Spatial-Temporal Patterns of Flood Insurance Claims

  1. Research Question 
    My research question that I continue to work toward answering is: How is the temporal pattern of population flow by county related to the temporal pattern of flood insurance claims by county via risk compensation? I was particularly interested in the temporal pattern of insurance claims by county for this week. I examined single-year as well as whole-study-period extents. From the Exercise 1 lab I was also drawn to the question of possible source/sink locations, which, in the case of my data, could be vulnerable areas located near the coasts acting as sources of hazard, while the sinks are areas of higher population adjacent to these more hazardous source locations. Down the line, this source/sink concept could be interesting once I include population flow data in the mix. See below for a bubble map denoting total flood insurance claims over the study period (1990-2010) by county.
Total NFIP Claims by county visualized by proportional bubbles

2. Approach

I was interested in and attempted a few different approaches to explore my data. I started by attempting spatial autocorrelation using Moran’s I on the county-level vector data to examine the significance of spatial dependence. Next I attempted an Inverse Distance Weighted interpolation. Finally, I simply examined and plotted the temporal variation in my insurance claims data.

3. Steps and Results

Using the Monte Carlo simulation of Moran’s I in R, I observed a significant pseudo p-value (less than 0.01), which indicates that there is significant spatial autocorrelation between contiguous counties; we would not expect to see this relationship if the claims data were randomly distributed across counties.
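A minimal sketch of this kind of Monte Carlo Moran’s I test with the spdep package is shown below; the shapefile name and the “claims” attribute are assumptions, not the actual data used:

library(sf)
library(spdep)
counties <- st_read("counties.shp")                             # assumed county polygons with a "claims" attribute
nb <- poly2nb(counties)                                         # contiguity-based neighbors
lw <- nb2listw(nb, style = "W", zero.policy = TRUE)
moran.mc(counties$claims, lw, nsim = 999, zero.policy = TRUE)   # pseudo p-value from 999 permutations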

Inverse Distance Weighted interpolation predicts values at unsampled locations from nearby known values. Since my data are attached to county polygons, I converted them to point data and then ran an IDW function over my entire claims dataset (total claims from 1990-2010) to show interpolated values across the country (see picture below). I also tried this for single years of data (see below for 1998 as an example year). This map, along with other maps I have created from this data, helped inform where some “hotspot” regions exist for future work.

Finally, I summarized and plotted the data to examine the temporal variation. I plotted the counts of claim frequency at the national scale and also selected 3 hotspot counties (Houston, New Orleans, and Miami) to plot as well (see charts below). These plots are helpful in showing the volatile nature of flood claims. Since I have been working with this data for some time, I had some preconceived ideas of the data’s temporal characteristics, so I understand the up-and-down nature of flood claim frequency that responds to the acute nature of natural hazards. I also created plots for mean annual claims, which I included below. I also attempted to create a mean difference map to show anomalous deviations in the data, but so far it has not depicted the data in a new way compared to previous mapping efforts.

4. Critiques of Methods

Of the three approaches, the IDW interpolation was the least helpful. As I now understand, this type of analysis is not particularly helpful for my type of data, and I could essentially see the same results from mapping the data by county. I found the spatial autocorrelation approach to be interesting; however, the results were not surprising given the known spatial distribution of disaster claims from the visualized maps. Finally, the temporal plot focus yielded helpful information regarding the temporal variation in the data and within hotspot areas.

Determining Patterns of Spatial Distribution of Nearshore Benthic Organism Abundance from Point Conception to Canada

Exercise #1: Determining Patterns of Spatial Distribution of Nearshore Benthic Organism Abundance from Point Conception to Canada

Research Question: Using ~80,000 benthic box core samples along the California, Oregon, and Washington coasts, I investigated methods to determine patterns of spatial distribution of benthic organism abundance (both infauna and epifauna). This process was the first step toward examining the correlation between total benthic organism count and total organic carbon (TOC). I conducted this investigation at three spatial scales: (1) the entire study area (Pt. Conception, CA to the Canadian border), (2) an approximately 60 km2 area off Yaquina Head near Newport, OR, and (3) an approximately 760 km2 area off Newport, OR from Beverly Beach to Seal Rock.

Hypothesis: I hypothesize that total organism count will have a clustered spatial distribution due to both internal and external processes, including attraction/repulsion and sources/sinks. Predator/prey interactions, mating, territorialism, and colonization are all internal processes of attraction/repulsion that impact the spatial distribution of total organism count. For instance, if a large Dungeness crab megalopae recruitment event occurs in Yaquina Bay, predators will congregate in that area to take advantage of the abundant food source, resulting in a concentrated area of high organism abundance. Salinity, upwelling, sediment grain size, pH, temperature, turbidity, wave intensity, current, light availability, depth, total nitrogen, total organic carbon, chemical pollution, and noise pollution could all serve as either external sources or sinks of organism abundance, depending on the adaptations of particular species. I hypothesize that total organism count, especially of detritivores, will be higher in areas with relatively high levels of total organic carbon.

Tools/Approaches Used & Steps: I focused on three Spatial Analyst tools in ArcMap 10.7.1 to conduct my analysis: Hotspot, Kriging (Interpolation), and Inverse Distance Weighting (IDW, Interpolation). Before I could begin my analysis, I first merged various tables from an Access database and created pivot tables in Excel so that I could have data points with total organism count, TOC, lat/long, and sampling date information, making the “next steps” in my analysis easier. After that I began playing around with various tools at my three spatial scales, a process I’ll summarize below:

  1. Total Study Area: In order to generate as accurate an interpolation of total organism count as possible, I loaded all my data into ArcMap, opened the attribute table, and then used the “Select by Attribute” function to select only box core samples that were sifted using a 1 mm mesh screen. Different mesh sizes would of course impact the number of organisms counted (smaller critters fall through bigger holes). I then used the Inverse Distance Weighting (IDW) (Spatial Analyst) tool to generate an interpolation raster for the entire study area. The raster output was a large rectangle that included all the points. Because my data points are distributed over a large area along the coast, the raster covered significant areas of land and sea. I felt that the interpolations could only be “trusted” near the actual data points, so I decided to clip the raster to a smaller area. In order to do this, I buffered all the points and dissolved the buffer to generate a polygon. I then used the Extract by Mask tool to create a raster within the bounds of the buffer polygon. Some of the raster area was still over land, so I imported a polygon of California, Oregon, and Washington and then used the Erase function to delete the portion of the raster that intersected with the state areas. I ended up with an IDW interpolation raster for the coastal area. Next, I conducted a Hotspot analysis on the 1mm mesh organism count data. Hotspot analysis doesn’t do well with unevenly distributed data points (which I have), so I selected data points with values near mean organism counts within highly sampled areas. I ultimately determined that Hotspot analysis is not the most appropriate method for determining spatial distribution given my data, a conclusion I will discuss in more detail below.  
  2. Newport Limited Region: Looking towards the future, I plan to examine if/how the relationship between total organism count and TOC changes seasonally. With that in mind, I generated interpolations for five months in 2011 (April, June, August, October, & December) of a ~60km2 area off Yaquina Head near Newport, OR. I once again used the IDW (Spatial Analyst) tool in ArcMap 10.7.1. I then ran a sixth interpolation using all my data for that area to compare to the five monthly interpolations. Next, I used the “Minus” function to subtract the mean pixel values of all the data from the pixel values of the monthly rasters. These difference rasters allowed me to more easily compare the monthly organism count data.
  3. Newport Extended Region: In the Newport Limited Region, I only looked at data from 2011. In the Newport Extended Region, I looked at all the monthly data I have between 2010 and 2016 from a ~760km2 area off Newport between Beverly Beach and Reedsport, OR. I determined that I have sufficient data from April, June, August, & October. I also had some September and December data, but not enough to make accurate interpolations for the whole area. For these interpolations, I decided to compare results from the Kriging and IDW methods.

Results:

  1. Total Study Area: The IDW interpolation for the entire study area suggests that the areas with the highest overall numbers of benthic organisms occur near San Francisco Bay, Monterey Bay, and off the coast of Washington State north of Puget Sound (see Figures 1 and 2). Generally speaking, the Oregon coast seems to have lower total organism counts than the other two states. The Hotspot analysis showed similar results. 
  2. Newport Limited Area: The difference rasters helped me to see some temporal change in total organism count, but I will save a discussion of those findings for a later exercise. I included an example of how I generated a difference raster in Figure 3 below.
  3. Newport Extended Area: For the larger Newport region, I decided to compare the IDW interpolations to Kriging interpolations for four different months. I observed some clustering of total organism count (see Figure 4).
Figure 4: Visual comparison of IDW and Kriging interpolation methods.

Methods Critique: Broadly speaking, this exercise provided me with an excellent learning opportunity. I found it useful to practice working through a large dataset, switching between different forms of software analysis (ArcMap, Access, and Excel), and using various spatial analysis tools in ArcMap that I haven’t used recently. I also learned about the advantages and disadvantages of various methods for determining spatial distribution patterns, namely IDW, Kriging, and Hotspot analyses. 

  • Inverse Distance Weighting: The assumption behind this method is that points that are closer together will be more similar than points that are farther apart. Unknown values are determined through the weighted average of known values within a certain radius. Spatially closer values are given more weight than those that are farther away within the set radius. Weights are proportional to the inverse of the distance, raised to an assigned power p; I used p = 2 for my analyses. IDW has the advantage of relative simplicity, which makes the results easy to interpret, but it does have some disadvantages. For instance, interpolated IDW rasters can be skewed when data points are highly clustered. Additionally, IDW cannot interpolate values above or below the maximum and minimum values in the dataset. IDW also does not generate variance rasters to help the viewer determine the “trustworthiness” of the results.

  • Kriging: Like IDW, the Kriging interpolation method estimates values for unsampled areas using known data, either a set number of points or the points within a certain radius of the estimated location. This technique uses both the distance between points and the degree of variation between points when interpolating values. One difference between Kriging and IDW is that Kriging can interpolate values outside of the range of the known dataset. However, the interpolation does not necessarily pass through known points, so the interpolated raster values could be higher or lower than the known values at a sampled location. The results of the Kriging and IDW interpolations showed similar patterns of total organism abundance in this instance.
  • Hotspot: I ran the Getis-Ord Gi* Hotspot analysis tool in ArcMap 10.7.1, which identifies statistically significant hot spots and cold spots in spatial data. The tool generates an Arc feature class that contains the statistical p-value and z-value for each feature (in this case, point) in a designated feature class. After consultation with Dr. Jones, I learned that hotspot analysis is not an appropriate tool when sample sites are determined by researchers. My data come from multiple projects, and as a result some of the sample sites are spread out while others are clustered closely together. These clusters can influence the results of hotspot analysis.
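To make the IDW weighting concrete, here is a minimal numpy sketch of an inverse-distance-weighted estimate with p = 2; the coordinates and counts below are invented purely for illustration:

# Minimal illustration of inverse-distance weighting (p = 2); sample values are made up.
import numpy as np

known_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # sampled locations
known_z = np.array([12.0, 30.0, 18.0])                       # total organism counts
target = np.array([3.0, 4.0])                                # unsampled location
p = 2                                                        # power parameter

d = np.linalg.norm(known_xy - target, axis=1)  # distance to each known point
w = 1.0 / d**p                                 # weights proportional to 1/d^p
z_hat = np.sum(w * known_z) / np.sum(w)        # weighted-average estimate
print(round(z_hat, 2))

Because the estimate is a weighted average of the known values, it can never fall outside their range, which is the limitation noted above.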

Exercise 1 – Spatial patterns of population change

My dataset A is population by county in Nebraska, Kansas, Oklahoma, and Texas in 1992 and 2017. My research question asks what the spatial patterns in population were in 1992 and 2017. I also wanted to identify the spatial patterns in the change in population between 1992 and 2017.

My hypotheses for this analysis are that there will be much higher populations in cities than in rural places, and, more importantly, that population increases will occur in urban and suburban counties. I also suspect that there will be decreasing populations in rural counties. I hypothesize this shift towards cities because of the attractions of cities (and repulsions of rural areas). People frequently move in search of more job opportunities, which are typically in or around cities (PRB, 2008). A major industry in rural areas is agriculture, which is a shrinking industry, especially due to the concentration of farms. Fewer job opportunities in rural areas act as a repulsion process, and the larger economies in cities act as an attraction process.

In order to identify spatial patterns in the population data I created maps of the two years of data as well as a map of the difference in population. To help highlight the areas with higher rates of increase and decrease in populations, I conducted a hotspot analysis.

Since I have continuous county-level data, I determined that a hotspot analysis would be the best way to highlight the patterns. I used an approach similar to the Getis-Ord Gi hotspot analysis, but since I was working in Python, I used the Exploratory Analysis of Spatial Data package to conduct spatial autocorrelation and identify hot spots, cold spots, and spatial outliers.

In order to conduct a hotspot analysis, I had to first join the population datasets from 1992 and 2017 with the shapefile of counties. This took some reformatting of the datasets and their county identifier codes to match them up correctly. I then selected out the four states that I am working with, but found that the populations of cities in eastern Texas are significantly larger than anywhere else in those four states, so I separated eastern Texas from the rest.
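For reference, the join itself only takes a few lines of Python; the file paths and column names below are hypothetical, but the key detail is zero-padding the county identifier (FIPS) codes so they match the shapefile:

# Hedged sketch of joining the population tables to the county shapefile; names are placeholders.
import pandas as pd
import geopandas as gpd

counties = gpd.read_file("counties.shp")  # assumed to carry a 5-digit "GEOID" column
pop = pd.read_csv("population_1992_2017.csv", dtype={"fips": str})

# Zero-pad the county identifier so it matches the shapefile (e.g., "8001" -> "08001")
pop["fips"] = pop["fips"].str.zfill(5)

merged = counties.merge(pop, left_on="GEOID", right_on="fips", how="inner")

# Keep the four study states by state FIPS code (NE = 31, KS = 20, OK = 40, TX = 48)
study = merged[merged["GEOID"].str[:2].isin(["31", "20", "40", "48"])].copy()
study["pop_change"] = study["pop_2017"] - study["pop_1992"]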

In order to calculate hotspots, I had to first calculate the spatial lag, which returns the average value of each observation's neighbors. Then, using the spatial weights, the spatial lag, and a few other inputs, the package identifies counties that are hot spots and cold spots.
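Below is a minimal sketch of how that sequence fits together using PySAL's libpysal and esda packages, continuing from the merged GeoDataFrame in the previous sketch; it uses Local Moran's I, which flags hot spots, cold spots, and spatial outliers, and may differ in detail from the exact calls in my script:

# Hedged sketch of the weights -> spatial lag -> local statistic sequence with PySAL.
import libpysal
from esda.moran import Moran_Local

# Queen contiguity: counties sharing an edge or corner are neighbors
w = libpysal.weights.Queen.from_dataframe(study)
w.transform = "r"  # row-standardize the weights

y = study["pop_change"].values

# Spatial lag: the average pop_change of each county's neighbors
study["lag_pop_change"] = libpysal.weights.lag_spatial(w, y)

# Local Moran's I classifies hot spots (HH), cold spots (LL), and spatial outliers (LH/HL)
lisa = Moran_Local(y, w, permutations=999)
study["lisa_quadrant"] = lisa.q   # 1 = HH, 2 = LH, 3 = LL, 4 = HL
study["lisa_p"] = lisa.p_sim      # pseudo p-values from the permutations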

My results show some obvious hotspots over cities, although not all cities are hotspots. It is possible that some have populations that are not as large as others, and therefore do not stand out as hotspots. I also found the analysis helpful in identifying cold spots, or areas with dramatic decreases in population. There is a large cold spot between Nebraska and Kansas on the eastern side of the states. There are some others that I will keep an eye on as well when I bring dataset B into the picture.

On the left is the hot and cold spot map of population change between 1992 and 2017. On the right is the raw change in population (2017 – 1992).

I would say that this method has been useful for highlighting the hot and cold spots in a busy dataset, but it has not created new insights. The tool that I used in Python is also not documented in much detail, so there was a lot of up-front effort required to get it to work.

Citation:

Population Reference Bureau. 2008. Population Losses Mount in U.S. Rural Areas. Retrieved from https://www.prb.org/populationlosses/

Assessing Greenland Outlet Glacier Retreat from Terminus Lines (Exercise 1)

Introduction:

My work for Exercise 1 was geared towards developing a method to calculate the magnitude of retreat of Greenland outlet glaciers, based on their terminus lines. Because there is no obvious tool or set of tools to do this in ArcGIS Pro, I developed two distinct workflows to produce relatively accurate datasets representing the spatial retreat of glacier boundaries. Once these datasets were constructed, I was able to conduct some very preliminary spatial statistics to determine their distribution.

Question:

1. How can I calculate and represent glacier retreat solely from polylines of glacier termini? How is this (constructed) variable of glacial retreat distributed across Greenland?

What kinds of attraction/repulsion processes might occur in your variable? 

Outlet glaciers tend to be connected (or their qualities may be "spatially attracted") to those nearest to them because they are all fed by a single ice mass. For example, if the southeast portion of the Greenland ice sheet experiences higher temperatures, the outlet glaciers that it feeds will all be more likely to experience retreat.

Furthermore, temperature and precipitation in and of themselves tend not to affect entire regions universally. For example, certain regions warm up more than others, which could lead to a “clustered” distribution of glacier mass balances. Greenland as a whole is not warming at a constant rate across its geographic extent.

Local geography (valley geometry, rainshadow effects, etc.) has a certain element of randomness that can contribute to repulsion behavior (adjacent glaciers experiencing similar forcings do not necessarily respond the same way).

What kinds of source/sink processes might occur in your variable? 

I think that, in general, the primary "sources" of material to outlet glaciers are ice from the interior of the ice sheet and snow that falls directly in the accumulation zone of each glacier (locally). The primary sinks, which contribute to ablation, are surface melt (a function of temperature), subaerial melt (within the glacier), and ice-ocean interaction (calving and subaqueous melting). Together, these competing forces determine the mass balance of any given glacier. Obviously, all of these forces have both local and global trends, which could lead to different spatial patterns at different scales.

Approach #1- Drawing Polygons:

ArcGIS Pro Tools Used:

Merge, Feature Vertices to Points, Points to Lines, Feature to Polygon, Hotspot Analysis

Steps:

To experiment with techniques, I limited my datasets to just 2 years, 2005 and 2017, to calculate the magnitude of retreat over the entire study period. Termini were distributed as such (There is an unfortunate absence of data in the Southwest of Greenland, a region considered to be melting the fastest based on gravitational anomaly data):


Datasets were manipulated as such (zoomed to view individual glacier example):

1. Polylines from each dataset were merged into one dataset using the MERGE tool.

2. Endpoints of the polylines were isolated using the FEATURE VERTICES TO POINTS tool with the "both start and end vertices" parameter selected.

3. Endpoints of the polylines were connected using the POINTS TO LINES tool.

4. Outlines were converted to polygons using the FEATURE TO POLYGON tool. The polygon dataset is now complete (a hedged arcpy sketch of steps 1-4 follows this list).

5. A map was generated from the polygon attributes (Area) to represent "area of ice retreated" across Greenland:

6. The HOTSPOT ANALYSIS tool was used to understand spatial trends in glacier retreat, yielding a predictable result:
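For reference, steps 1-4 can be chained in a short arcpy script. The sketch below is a hedged reconstruction with placeholder feature class names and a hypothetical per-glacier "glacier_id" field, not my exact geoprocessing history:

# Hedged arcpy sketch of Approach #1, steps 1-4; dataset and field names are placeholders.
import arcpy

arcpy.env.workspace = r"C:\data\greenland.gdb"

# 1. Merge the 2005 and 2017 terminus polylines into one feature class
arcpy.management.Merge(["termini_2005", "termini_2017"], "termini_merged")

# 2. Isolate the start and end vertices of every terminus line
arcpy.management.FeatureVerticesToPoints("termini_merged", "termini_ends", "BOTH_ENDS")

# 3. Connect the endpoints, grouping by glacier ID so each glacier closes its own outline
arcpy.management.PointsToLine("termini_ends", "retreat_outlines", "glacier_id")

# 4. Build polygons from the merged termini plus the connecting lines
arcpy.management.FeatureToPolygon(["termini_merged", "retreat_outlines"], "retreat_polygons")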

Approach #1 Results and Critique:

I was not happy with this approach for several reasons. First, it required several parameters to be just right in order for a constructed polygon to accurately represent an area of retreat. The two terminus lines must be entirely separate (aka the lines from T1 and T2 can never cross); otherwise they will create multiple polygons and overestimate the area retreated. As a result, glaciers that did not retreat significantly always received a larger estimate than they should have. Also, glaciers in unique geographic locations (like curved valleys) would not retreat straight back, but perhaps around a corner, causing termini at T1 and T2 to be ~90 degrees from one another. For these glaciers, straight lines between terminus edges would clearly cut off parts of the retreated area, and the estimated area clearly undershot the true area retreated. The screenshots above were cherry-picked from a cooperative glacier, but in reality about 30-40% of my calculations seemed to significantly overshoot or undershoot what I believed to be the true area lost.

Second, the AREA lost in and of itself seemed to be a substandard parameter to calculate. Wider glaciers, which create longer termini, would always produce larger polygons. On a relative scale, this makes it seem like only the widest glaciers retreated significantly, whereas smaller, thinner glaciers only lost a miniscule amount. This makes the two maps above deceiving: glaciers in the north of Greenland were much larger and had much longer terminus lines, so their retreat polygons came out massive. Along the east and west of Greenland, there appeared to be many smaller glaciers, but their polygons came out very tiny. Comparing raw data from a few huge retreats and hundreds of small retreats made the hotspot analysis entirely skewed to the north. Now, I could normalize for this if I wanted to, but in Exercises 2 and 3 I want to work with all of these glaciers individually when compared to local glacier velocity, so I want to keep their "raw data" as intact as possible. (It is worth noting that glacial geologists usually use the distance of a glacier's retreat, in conjunction with the movement of the equilibrium line, to estimate temperature changes. They do not care about area.)

Finally, I don’t like this approach because the hotspot analysis makes it look like the ONLY location with significantly retreating glaciers is northern Greenland. While this could indeed be the region experiencing the most calving and ablation at the terminus, it does not even remotely resemble the Greenland mass anomalies detected with the GRACE tandem satellites (shown below). Of course, this brings up the question: how exactly do glaciers retreat? While one obvious and dramatic mechanism is calving, which occurs at the terminus, there are very significant mass fluxes from surface melt and subaerial melt, which don’t necessarily occur at the terminus. The discrepancy between these two datasets makes me wonder if (climatically) short-term changes in terminus position are dominated by calving and subaqueous frontal melt, while the total mass flux of a glacier (including surface and subaerial melt) takes slightly longer timescales to kick in and be "felt" at the terminus line. In other words: the equilibrium line of a glacier, and thus its terminus position, is entirely controlled by glacier mass balance; however, glaciers that experience particularly vigorous calving might appear to retreat faster than their surface-melting counterparts. Surface and subaerial melt might take longer to be expressed at the terminus line, and might manifest more immediately as glacier THINNING. Perhaps glaciers in the north of Greenland experience more calving, whereas glaciers in central and south Greenland might experience more surface and subaerial melt. It is not a question I can tackle with the current project and datasets, but a very interesting one nonetheless.


Approach #2- Average Distance Retreated:

ArcGIS Pro Tools Used:

Feature Vertices to Points, Near, Summary Statistics, Join Field, Hotspot Analysis, Spatial Autocorrelation (Global Moran’s I)

Steps:

Data was manipulated as such:


1. The FEATURE VERTICES TO POINTS tool was used once again on a single dataset (in this case 2005 rather than 2017). This time, the "All Vertices" parameter was selected, to convert virtually the entire polyline into points.

2. The NEAR tool was used to calculate the closest distance between each individual point on the 2005 line and the 2017 line. The blue hand-drawn lines in the figure above visually represent this process. NOTE: the distance calculated is NOT perpendicular to the 2005 line, merely the distance between a given point on that line and the closest point on the 2017 line. I attempted to visualize this concept above as well.

3. The SUMMARY STATISTICS tool was used to generate a table isolating the mean and median distance of these points for each glacier. The "Case Field" parameter was set to each glacier's unique "glacier ID," to ensure individual values were produced for each glacier rather than for each separate point.

4. The JOIN FIELD tool was used to merge the newly created statistics table with a separate dataset containing only point locations of each glacier (created previously). An arcpy sketch of this workflow follows the list.

5. Maps were constructed representing the newly constructed variable, "mean distance retreated," across Greenland (NOTE: a different symbology was used than in the previous method, because graduated symbol size became quite confusing to view):

6. The HOTSPOT ANALYSIS tool was used to understand spatial trends in glacier retreat, based on length:

7. The SPATIAL AUTOCORRELATION (GLOBAL MORAN'S I) tool was used to determine the clustering behavior of glacier distance retreated across the island:
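As with Approach #1, this workflow can be strung together in arcpy. The sketch below is a hedged reconstruction with placeholder dataset names, a hypothetical "glacier_id" field, and illustrative tool settings rather than my exact parameters:

# Hedged arcpy sketch of Approach #2; dataset and field names are placeholders.
import arcpy

arcpy.env.workspace = r"C:\data\greenland.gdb"

# 1. Convert the 2005 terminus lines to points at every vertex
arcpy.management.FeatureVerticesToPoints("termini_2005", "pts_2005", "ALL")

# 2. Distance from each 2005 point to the closest location on the 2017 termini (adds NEAR_DIST)
arcpy.analysis.Near("pts_2005", "termini_2017")

# 3. Mean and median NEAR_DIST per glacier, with the glacier ID as the case field
arcpy.analysis.Statistics("pts_2005", "retreat_stats",
                          [["NEAR_DIST", "MEAN"], ["NEAR_DIST", "MEDIAN"]], "glacier_id")

# 4. Join the summary table back to a point layer with one location per glacier
arcpy.management.JoinField("glacier_points", "glacier_id", "retreat_stats", "glacier_id",
                           ["MEAN_NEAR_DIST", "MEDIAN_NEAR_DIST"])

# 6-7. Hot spot analysis and Global Moran's I on the mean distance retreated
arcpy.stats.HotSpots("glacier_points", "MEAN_NEAR_DIST", "retreat_hotspots",
                     "FIXED_DISTANCE_BAND", "EUCLIDEAN_DISTANCE", "NONE")
arcpy.stats.SpatialAutocorrelation("glacier_points", "MEAN_NEAR_DIST", "NO_REPORT",
                                   "FIXED_DISTANCE_BAND", "EUCLIDEAN_DISTANCE", "NONE")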

Approach #2 Results and Critique:

I consider this approach far superior to the previous one. The Near tool gives me a distance between termini at very close intervals that can then be averaged with summary statistics, and when going back to manually check each average “distance” it seemed far more accurate than the constructed polygons. In this approach, the map produced by the eye test alone seems to have a more even distribution, not heavily weighting everything to the north. The hotspot analysis picks up more hotspots in different locations spread across the island. While there are clearly still 3 hotspots in the north, there are also 3 in central Greenland. I am surprised a hotspot didn’t appear in NW Greenland, where there is a clear abundance of high retreat values. Moving forward, I think I will use this type of dataset to compare glacier retreat to glacier velocity in Exercise 2 and 3. This process can be replicated relatively simply on various time series throughout the study period, to get annual resolution of terminus retreat.

Does your variable A have different spatial patterns at different scales?

From the eye test, it seems there are some groups of high-retreat glaciers around the hotspots, interspersed with glaciers that retreated less, which indicates a somewhat clustered distribution. Furthermore, in both the distance and area datasets, high retreat values appear to be clustered in northern Greenland. We can assess this further with a spatial autocorrelation tool (Global Moran's I).

The Moran’s I index, calculated with a default neighborhood search distance of 162 km, came back as 0.018, which reflects very slight clustering. But with an index value so close to zero, a p-value of 0.5, and a z-score of 0.6, this is nowhere near enough to reject the null hypothesis, and it indicates the distribution may simply be random. This is not an exhaustive analysis of the distribution of these values, just something I performed quickly to try to include last minute in Exercise 1. I hope to perform more spatial autocorrelations and more closely assess the distribution of these values, as well as their relatedness to glacier velocity, in Exercise 2.

A Century of Oregon Salt Marsh Expansion and Contraction (Part I)

Background

My main goal for this course is to determine the methods needed to quantify salt marsh volumetric change from 1939 to the present within Oregon estuaries. I have historical aerial photographs (HAPs) and sediment cores from five estuaries along the Oregon coast – Nehalem, Netarts, Salmon, Alsea, and Coquille. Ultimately, by combining rates of area change measured from the HAPs with rates of vertical accretion measured using excess 210Pb within my sediment cores, I think I will be able to develop a history of salt marsh volumetric change. (See my first blog post for more background on the significance of this research.)

Based on my goals outlined in my first blog post, Julia made a number of helpful suggestions for how to best frame my first exercise. First, she suggested that I need to begin by mapping salt marsh horizontal change from the HAPs by creating outlines of my marshes for each year I have photos. Second, Julia suggested that rather than just focusing on net area change, I also investigate portions of the salt marsh that exhibit growth or contraction as these areas might relate to larger changes within the estuary and watershed. Third, Julia suggested I first focus on one estuary to nail down my methods as a proof of concept. I can then apply my method to each individual salt marsh.  

Throughout this blog post I will reference many of the topics we have discussed in class. For instance, I will analyze hot spots of accumulation and contraction through analysis of digitized areas from georeferenced HAPs. I will briefly discuss how I hypothesize that the scale at which I digitize a salt marsh area will also change the overall area. Additionally, though I am not directly using an interpolation tool in my geospatial analysis yet, I will use interpolation between years in my initial analysis of salt marsh area change. Ultimately, I will be characterizing whether the salt marsh is a source or sink of sediment through analysis of the volumetric change.

Specific Questions

  1. What is the best method to measure change in salt marsh area from HAPs?
  2. How have different regions of the salt marsh within Alsea Bay changed in area since 1939?
  3. How do horizontal changes relate to vertical changes?

Method

Step 1: Scan images

HAPs, which are panchromatic and roughly decadal, spanning 1939 to 1991, were scanned at the University of Oregon’s Map & Aerial Photography Library. No information was provided for each photo besides the year it was taken (and sometimes the day). This is frustrating because when photographs were taken at high tide, areas of the vegetated salt marsh are inundated and edges are less easily distinguished – especially within tidal creeks. Before bringing these images into ArcGIS Pro, they were cropped to remove borders and converted to TIFF files in Adobe Photoshop.

Step 2: Georeference images   

I played around with stitching the images together in Fiji (ImageJ), which is a powerful, open-source image processing program that I highly recommend. However, since the HAPs were taken from different angles, the photographs stitch together very strangely, especially for tall features such as trees, buildings, and bridges.

All images were georeferenced in ArcGIS Pro using ≥ 10 control points. In an effort to select areas evenly across the photographs, control points were preferentially placed at road intersections; however, creek intersections were also used as control points in portions of the images without roads. Comparison between HAPs indicates that tidal creeks are surprisingly stable throughout the last ~80 years in Alsea Bay. Historical aerial photographs combined with control points were fit using second-order polynomial transformations. The root mean square error (RMSE) was calculated for each georeferenced image; no map had an RMSE >7 m, and the mean RMSE was 3 m.
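For reference, the RMSE reported for each georeferenced image is essentially the root mean square of the control-point residual distances; a minimal check of that calculation, with invented residuals, looks like this:

# RMSE of georeferencing control points; the (dx, dy) residuals in metres are invented.
import numpy as np

residuals = np.array([[1.2, -0.8], [0.5, 2.1], [-1.7, 0.9], [2.4, -1.1]])

rmse = np.sqrt(np.mean(np.sum(residuals**2, axis=1)))  # sqrt of mean squared residual distance
print(f"RMSE = {rmse:.2f} m")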

Step 2.5: Try automated image classification

Initially I tried classifying images using the ArcGIS Pro Classification Wizard. To reduce complexity and file size, I first cropped the georeferenced images. Object-based supervised classification was used. Trying different degrees of spectral and spatial detail, I trained the Classification Wizard with examples of forest, vegetated salt marsh, and mudflat/open water. One major issue observable in the classifications was that the vegetated surfaces were sometimes white, causing the software to confuse them with the white produced by light reflected off the water’s surface. Thus, the salt marsh edge is seemingly too complex for supervised image classification when images are only black and white. After receiving these poor results with every adjustment and speaking with colleagues about automated/manual classification, I decided manual digitization is my best option. (A review of the literature also indicates that others must have found that manual digitization is preferable to supervised/unsupervised image classification when analyzing panchromatic photographs of salt marshes.)

Figure 1. Example of classified salt marsh surface (green area) compared to heads-up digitized salt marsh (blue line). Classification does not capture low marsh surface in gray.

Step 3: Heads-up digitization

Digitized areas were initially limited to regions of the estuary present within the majority of photographs. These areas were further limited to least-disturbed tidal saline wetlands from which vertical sediment accumulation rates were available. I then performed heads-up digitization of the edges between unvegetated mudflat and vegetated marsh using the create polygon tool… Because trees and their shadows obscure landward edges, these boundaries were digitized using a combination of PMEP’s (Pacific Marine & Estuarine Fish Habitat Partnership) “West Coast USA Estuarine Biotic Habitat” maps and the georeferenced aerial photography.

Step 4: Observe changes

Preliminary analysis of horizontal changes was observational. I additionally plotted net changes in salt marsh area over time for each major salt marsh complex (a sketch of that calculation follows Figure 2).

Figure 2. Digitized Alsea Bay salt marshes. Red is 1939, orange is 1952, yellow is 1959, green is 1972, blue is 1982, and navy is 1991.
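The net-area series behind these comparisons can be pulled directly from the digitized layers. Below is a minimal sketch, assuming one polygon shapefile per photo year (hypothetical file names) in a projected, metre-based coordinate system:

# Hedged sketch: net salt marsh area per digitized photo year; file names are placeholders.
import geopandas as gpd
import matplotlib.pyplot as plt

years = [1939, 1952, 1959, 1972, 1982, 1991]
areas_ha = []

for yr in years:
    marsh = gpd.read_file(f"alsea_marsh_{yr}.shp")
    areas_ha.append(marsh.geometry.area.sum() / 10_000)  # m^2 to hectares

plt.plot(years, areas_ha, marker="o")
plt.xlabel("Photo year")
plt.ylabel("Digitized salt marsh area (ha)")
plt.title("Alsea Bay net salt marsh area")
plt.show()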

Results

In response to question 1, heads-up digitization is much better suited to producing salt marsh outlines than an automated image classification method, such as those available within the ArcGIS Pro Classification Wizard. This result is unfortunate because heads-up digitization is obviously very time consuming and strenuous on your eyes (my right eye started to twitch) and mouse hand. Supervised image classification may allow for an initial distinction between unvegetated mudflat and vegetated marsh surface. Classified images and georeferenced aerial photography could then be referenced in combination to digitize the marsh boundaries.

Figure 3. Changes in salt marsh area based on changes between HAPs and net changes since 1939 for the three islands and the total area.

A preliminary analysis of the digitized layers shows hot spots of salt marsh growth and contraction. For instance, the upstream, easternmost salt marsh complex has shown very little change over the observed record, though over the total record its area has decreased by ~4%. Comparatively, after first experiencing a ~5% decrease in size in the 1940s, the large, westernmost island increased in area by ~10% from 1952 to 1959, then, after modest growth in the 1960s to 80s, declined to only ~3% of its original 1939 area by the early 1990s. The salt marsh fringing the NE portion of the bay additionally experienced a modest 3% increase in area in the 1950s, but then contracted in size by ~13% in the mid-1960s, and expanded again by ~10% and ~5% between the 1970s and 1980s, respectively. This area experienced a net growth of ~4% from 1939 to 1991.

The wet phase of the Pacific Decadal Oscillation (PDO) from 1944 to 1977 has been observed to increase peak river flows (e.g., Wheatcroft et al. 2013). Coincidentally, logging intensified in Lincoln County during the same period (Andrews & Kutura 2005). These events may have contributed to the increased area of the large, westernmost island in that time frame. The cause of the contraction of the salt marsh fringing the NE portion of the bay in the mid-1960s to 70s is unclear, but it is perhaps related to increased wake from boats in that area.

A preliminary comparison between histories of sediment accumulation and marsh growth indicates very similar patterns. For instance, the sediment core collected from the small island shows very little change in accumulation over the last century. Additionally, the sediment core collected from the large island exhibits a period of rapid accumulation in the 1960s. Vertical accumulation increased in the 1950s in the core collected from the fringing marsh, also similar to its horizontal accumulation record.

Figure 4. Example comparison between history of discharge and timber harvest, sediment core data (mass accumulation rate – brown, dry bulk density – navy), and horizontal change.

Based on the net changes in area for the three salt marsh complexes of interest and the net vertical sediment accumulation from three cores collected within each marsh, the volumetric changes for the small island, large island, and fringing marsh are -160, 220, and 250 m3, respectively. With an estimated dry bulk density of ~0.5 g cm-3, these values equate to ~ -80, ~110, and ~125 t of sediment trapped from 1939 to 1991 on the small island, large island, and fringing marsh, respectively. The Alsea Bay salt marshes have thus remained a net sink of sediment during this timeframe. I will improve these volumetric estimates by incorporating more sediment core data and by comparing the history of volumetric change with the sediment discharge record for the Alsea River, thereby creating estimates of trapping efficiencies for the estuary.
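As a sanity check on those numbers, the volume-to-mass conversion is simple arithmetic: volumetric change multiplied by dry bulk density, where 0.5 g cm-3 is 500 kg m-3.

# Volume change (m^3) times dry bulk density gives sediment mass trapped (negative = lost).
volumes_m3 = {"small island": -160, "large island": 220, "fringing marsh": 250}
dry_bulk_density = 500  # 0.5 g cm^-3 expressed in kg m^-3

for marsh, v in volumes_m3.items():
    tonnes = v * dry_bulk_density / 1000  # kg to metric tonnes
    print(f"{marsh}: {tonnes:+.0f} t")    # about -80, +110, and +125 t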

Critique

I am foremost interested in assessing changes within the high marsh extent of my salt marshes because these are the areas considered in quasi-equilibrium with relative sea level rise. As a bit of background, intertidal areas are typically divided between unvegetated mud flats (inundated most frequently and exposed only at low tide), low marsh (inundated less frequently with high tides), and high marsh (inundated the least frequently with only high higher high tides). However, in Oregon, high marsh and low marsh are both densely vegetated and look very similar in panchromatic aerial photographs. This represents one of the first major downsides to analyzing salt marsh change from HAPs – low marsh and high marsh must be considered together regardless of the research question. Fortunately, Oregon salt marshes typically exhibit very little low marsh area so my estimates are hopefully not too different from what they would be if I could distinguish between the two habitat classes.

Another major issue with analyzing HAPs is that the landward edge between high marsh and forest is difficult to distinguish due to the trees obscuring the edge. Depending on the location of the plane when taking the photographs and the time of day, the edge is either obscured by the trees themselves or by the shadows they cast. Without ground truthing, it is difficult to speculate how much this impacts estimates of the landward edge; fortunately for me, however, Oregon salt marshes typically extend to the base of Coast Range hills and so it is unlikely that there has been much landward migration of salt marshes over the last century. (This process is called coastal squeeze, and it poses a serious threat to long-term survival of Oregon salt marshes under accelerated sea level rise.)    

The method of heads-up digitization, while time consuming, is straightforward and seems to be the method best suited to digitizing panchromatic images. Unfortunately, I have not yet figured out the best method of assessing the error associated with heads-up digitization. Additionally, I have not assessed the impact of scale on the digitized area. I have digitized my salt marshes at the smallest spatial scale possible for each photo; I hypothesize that digitizing at smaller spatial scales would result in decreased area (and thus volume) as more and more tidal creeks are incorporated. The highest quality HAPs are from 1939 and 1952. This is something I will have to think more about in the future.

A serious issue in sediment stratigraphy is the inability to easily observe reductions in accumulation, hiatuses, and periods of erosion. Comparison between the HAPs and core data is therefore much clearer for periods of increased accumulation, which is what I have focused on here. Though not much can be done to remedy this issue (think of the impact factor if I could!), I will continue to acknowledge it.

Remaining questions and future directions

My immediate questions are:

  1. How should I include a modern-day layer? Just outline the satellite basemap in ArcGIS Pro? (Hand digitization would be more precise than PMEP’s layers.)
  2. How should I measure horizontal change more robustly – i.e. with better statistics?
  3. How should I estimate error associated with heads-up digitization?
  4. How should I be digitizing channels?

Going forward I plan to answer these questions; depending on how tricky they prove (simple questions always seem to be the hardest to answer), I plan to focus on at least questions 1, 2, and 3 for my next exercise. To answer question 1, the USGS Digital Shoreline Analysis System (DSAS; Himmelstoss et al. 2018) seems very promising, so I will focus on getting my Alsea data into this program. DSAS requires error estimates for each shoreline layer and suggests a number of resources, including Ruggiero et al. (2013), that I will investigate.

Literature

Andrews, A., & Kutura, K. (2005). Oregon’s timber harvests: 1949–2004. Oregon Department of Forestry, Salem, OR.

Himmelstoss, E.A., Henderson, R.E., Kratzmann, M.G., & Farris, A.S. (2018). Digital Shoreline Analysis System (DSAS) version 5.0 user guide: U.S. Geological Survey Open-File Report 2018–1179, 110 p. https://doi.org/10.3133/ofr20181179

Ruggiero, P., Kratzmann, M. G., Himmelstoss, E. A., Reid, D., Allan, J., & Kaminsky, G. (2013). National assessment of shoreline change: historical shoreline change along the Pacific Northwest coast. US Geological Survey.

Wheatcroft, R. A., Goñi, M. A., Richardson, K. N., & Borgeld, J. C. (2013). Natural and human impacts on centennial sediment accumulation patterns on the Umpqua River margin, Oregon. Marine Geology, 339, 44-56.

Ex1. Wk2 Lab

What kinds of attraction/repulsion processes might occur in your variable? 

Because soil characteristics differ among regions of Oregon, landslides mainly occur in certain specific areas. Each point in the dataset represents a place where a landslide occurred, and in Exercise 1 these points are attracted to one another within high-risk areas.

Furthermore, the point data and the landslide susceptibility map are also associated: the red and blue high-risk areas are mutually attractive with the point data, while the green low-risk areas are mutually exclusive with the point data.

What kinds of source/sink processes might occur in your variable? 

After generating the hot spot map, it appears that differing soil characteristics act as the source process for this variable: some soils are more prone to landslides than others. In Exercise 2, the spectral differences of the high-risk areas will be analyzed, since the high-risk areas are where the forces that generate landslides occur.

Does your variable A have different spatial patterns at different scales?

At the original spatial scale, the point data appear scattered; after hot spot analysis, the point data appear clustered.