Mapping the Southern Ocean?

“Can we map the interannual variability of the whole upper Southern Ocean with the current database of hydrographic observations?”
That’s the (too long) title of a paper I wrote with Frédéric Vivier from LOCEAN (Paris), which is now available online. I don’t know how I managed it, but I ended up with some spare time towards the end of my PhD, so Frédéric and I went back to my Master’s thesis work and extracted a paper out of it. Rather than concentrating on the science, this time we decided to first study the impact of the method we chose for said science. And we found that if you are not careful enough, you can be in for a few bad surprises…

1. Scattered data vs regular grid

An Argo float chilling out before its deployment (image taken from http://arctic.cbl.umces.edu/Laurier2010/)

This is an Argo float. Apparently this particular one was deployed during an Arctic cruise in July 2010. Thousands of them have been deployed worldwide since the early 2000s to improve ocean coverage. They are basically autonomous CTDs. They are “parked” at 2000 m depth, where they gently drift with the currents. Every 10 days they rise to the surface, measuring temperature and salinity on the way up, and once at the surface they transmit these measurements via satellite. Once the data are sent, they sink back down to 2000 m and wait 10 days before rising again…

There are thousands of them currently active worldwide. In remote, harsh areas like the Southern Ocean, they are a very precious tool for getting good coverage in all seasons (normally, no one goes there in winter). The problem is that no one can control where a float is unless they go there with a ship, recover it, and redeploy it: the float simply drifts away from its original location with time, which is quite annoying when you want to compare the same location year after year. This is why most people who work with the Argo database first bring these scattered measurements onto a regular horizontal grid.

This was the first step of my own data analysis during my Master’s degree. I did some brief sensitivity tests, but honestly we had more exciting things to work on, so instead I used the same method as another paper that had just been published on a similar topic. That was four years ago, and since then more papers have been published on the topic, but no two of them use the same mapping method. Or rather, they all use the same interpolation method, an objective analysis, but with different settings. In practice, you have to choose how far the method will look for data, how the data points are going to be weighted, and what grid size you want for your final product.
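To make these settings concrete, here is a minimal sketch (Python/NumPy, and emphatically not the code of the paper) of the kind of distance-weighted mapping an objective analysis performs. All names here (map_to_grid, search_radius, length_scale) are mine, and a real objective analysis additionally accounts for data and noise covariances and for the temporal distance of each measurement.

```python
import numpy as np

def map_to_grid(lon_obs, lat_obs, val_obs,
                grid_res=1.0, search_radius=300.0, length_scale=150.0):
    """Map scattered observations onto a regular lon/lat grid with a
    Gaussian distance weighting -- a simplified stand-in for a full
    objective analysis (no error covariances, no temporal weighting).

    grid_res       : grid spacing in degrees
    search_radius  : only use observations closer than this (km)
    length_scale   : e-folding scale of the Gaussian weight (km)
    """
    lon_g = np.arange(0.0, 360.0, grid_res)
    lat_g = np.arange(-75.0, -30.0 + grid_res, grid_res)   # Southern Ocean band
    mapped = np.full((lat_g.size, lon_g.size), np.nan)

    for j, lat0 in enumerate(lat_g):
        for i, lon0 in enumerate(lon_g):
            # great-circle distance (km) from the grid node to every observation
            dlon = np.radians(lon_obs - lon0)
            dlat = np.radians(lat_obs - lat0)
            a = (np.sin(dlat / 2) ** 2
                 + np.cos(np.radians(lat0)) * np.cos(np.radians(lat_obs))
                 * np.sin(dlon / 2) ** 2)
            dist = 6371.0 * 2 * np.arcsin(np.sqrt(a))

            close = dist < search_radius
            if not close.any():
                continue                      # gap in the map: no data nearby
            w = np.exp(-(dist[close] / length_scale) ** 2)
            mapped[j, i] = np.sum(w * val_obs[close]) / np.sum(w)
    return lon_g, lat_g, mapped
```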

We decided to test whether differences arise when mapping the same dataset with these various settings. We cheated slightly and did not use the real Southern Ocean data. We instead used ten years of a DRAKKAR 1/12° run (a), provided by our three coauthors from Grenoble – that way we knew what the whole area looks like – from which we extracted and remapped only the points corresponding to actual Southern Ocean measurements (b). See for example the mixed layer depth in January 2014:
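For illustration, that subsampling step could look like the sketch below, assuming for simplicity a regular latitude/longitude model grid (the actual DRAKKAR grid is not regular, and the function name subsample_like_argo is hypothetical):

```python
import numpy as np

def subsample_like_argo(model_lon, model_lat, model_field, obs_lon, obs_lat):
    """Pick, for each observation position, the nearest model grid point --
    i.e. 'what the Argo array would have seen' if the model were the ocean.
    model_lon, model_lat : 1-D coordinate vectors of the (regular) model grid
    model_field          : 2-D field on that grid (lat x lon), e.g. mixed layer depth
    obs_lon, obs_lat     : 1-D arrays of real observation positions
    (longitude wrap-around at 0/360 is ignored for brevity)
    """
    i = np.abs(model_lon[None, :] - obs_lon[:, None]).argmin(axis=1)
    j = np.abs(model_lat[None, :] - obs_lat[:, None]).argmin(axis=1)
    return model_field[j, i]
```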

Reference 1/12° model field (left) and model value at the location of Argo measurements (right). After Fig. 3, Heuzé et al. (2015)

2. Accuracy vs coverage

Long story short: yes, they do. And, as is often the case, you will have to compromise.

If you go and look for data further and further away, you are obviously more likely to find some. That allows you to produce maps with hardly any gaps, which is convenient if you are studying a large area that has some “holes” in the measurements and you need data everywhere all the time (e.g. to study the variability of a whole sector of the Southern Ocean). The downside is that you are using more points for each grid cell, and are likely to average out and smooth your signal.

The resolution of your final product has a similar effect: the more points you are ready to use (i.e. the coarser the resolution), the better the coverage. Especially when looking at data-poor regions or times (winter), decreasing the resolution slightly increased the accuracy.
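To see this trade-off in numbers, one could sweep both settings with the map_to_grid sketch from above on some synthetic scattered “observations” and count how often the mapping fails. The positions and values below are random placeholders, not real Argo data:

```python
import numpy as np

# Synthetic scattered "observations" standing in for one month of Argo profiles.
rng = np.random.default_rng(0)
obs_lon = rng.uniform(0.0, 360.0, 800)
obs_lat = rng.uniform(-75.0, -30.0, 800)
obs_val = rng.normal(100.0, 30.0, 800)            # e.g. mixed layer depths in metres

for grid_res in (0.5, 1.0, 2.0):                  # degrees: fine to coarse grid
    for search_radius in (150.0, 300.0, 600.0):   # km: short to long search
        _, _, mapped = map_to_grid(obs_lon, obs_lat, obs_val,
                                   grid_res=grid_res,
                                   search_radius=search_radius)
        gaps = 100.0 * np.isnan(mapped).mean()
        print(f"res={grid_res:.1f}deg  radius={search_radius:.0f}km  "
              f"unmapped cells: {gaps:.1f}%")
```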

How often does the mapping fail? From the worst (left, short search distance and high resolution) to the best (right, long search distance and coarse resolution). After Fig. 8, Heuzé et al. (2015)

The last setting, how to weight the data, we did not really explore, as it turned out to be quite complex and time-consuming. The “basic” objective analysis that we use weights the data points depending on how far in space/time they are from the target location. Schmidtko et al. (2013) also have a front criterion, while Boehme et al. (2008) use a potential vorticity criterion, both of which give more weight to points that belong to the same water mass as the target. This should increase the accuracy when scanning large areas to obtain better coverage. Maybe a project for a future paper?
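The general idea of such water-mass-aware weighting could be sketched as below; this toy combined_weight function is my own illustration, not the actual criterion used by either of those studies:

```python
import numpy as np

def combined_weight(dist_km, pv_obs, pv_target,
                    length_scale=150.0, pv_scale=1e-11):
    """Toy weighting scheme: a Gaussian in distance multiplied by a factor
    that penalises observations whose potential vorticity differs from the
    target's, so that points from the same water mass count more. Only meant
    to illustrate the idea; the real criteria of Boehme et al. (2008) and
    Schmidtko et al. (2013) are more sophisticated.
    """
    w_dist = np.exp(-(dist_km / length_scale) ** 2)
    w_watermass = np.exp(-np.abs(pv_obs - pv_target) / pv_scale)
    return w_dist * w_watermass
```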

3. Decadal trends?

Out of curiosity, since we had ten years of data, we wondered what happens to the decadal trends in this remapping exercise. That was a bad idea: any setting that detects the trend correctly also creates spurious ones somewhere else. Note, however, that we were working with the trends in the model, not in the real ocean; there were very few significant trends, which is good for the model (it was close to equilibrium) but not ideal for our study. Also, we were working with the mixed layer depth, a peculiar field in that its seasonal and interannual variability is very large compared to any potential trend. That was probably too hard for the mapping methods to deal with. We could imagine that the methods would work much better for the heat or freshwater content… but again, that will be for another paper!
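For reference, trend detection at each grid cell can be as simple as a linear least-squares fit with a significance test, something like the sketch below (a generic example, not necessarily the exact test used in the paper):

```python
import numpy as np
from scipy.stats import linregress

def decadal_trend(years, mld_series):
    """Fit a linear trend to ~10 years of mapped mixed layer depth at one
    grid cell and flag it as significant at the 95% level.
    years      : 1-D array of years (e.g. 2004..2013)
    mld_series : mapped value at this grid cell for each year (NaN = gap)
    """
    ok = np.isfinite(mld_series)
    if ok.sum() < 3:
        return np.nan, False          # too many gaps to fit a trend
    fit = linregress(years[ok], mld_series[ok])
    return fit.slope, fit.pvalue < 0.05
```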

In conclusion: yes, we can map the interannual variability of the Southern Ocean mixed layer depth using ten years of Argo floats. But before choosing the method to do so, you have to think for five minutes about what matters most: accuracy over a small area, or OK-ish values over most of the Southern Ocean? That will define which method is most appropriate.
If you have more questions, feel free to contact me!

Next on Polarfever: the Wednesday 25th November meeting at the Royal Swedish Academy of Sciences on “public engagement in science”, and the Polarforum on Thursday 26th.
