-
It is a good question and something that we are "working on". I gave a talk on this exact topic at the 2021 National Cooperative Soil Survey meetings. You are correct that the tapestry of survey vintages can lead to odd edge-matching errors at county and state boundaries. This has been actively addressed since ~2013, but there are still many joins that aren't yet harmonized. I'd suggest contacting your state soil scientist if you'd like to see something specific corrected. Seriously, without detailed feedback from the public it could take many more years before the data are fixed. Note that I haven't yet updated the ISSR-800 maps to reflect the current (FY24) SSURGO. Chances are that some of these issues have been corrected since last year.
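If you want to see how different in vintage the survey areas on either side of a join are, you can ask Soil Data Access for the "save" dates in the survey area catalog. Here is a minimal sketch; the endpoint, table, and column names are from memory (verify against the SDA web service documentation), and the survey area symbols are placeholders to substitute with the areas you care about:

```python
# Minimal sketch: query Soil Data Access (SDA) for the save/refresh dates of two
# survey areas, to compare vintages across a join. Endpoint, table, and column
# names (sacatalog.areasymbol, areaname, saverest) are from memory -- verify
# against the SDA documentation before relying on this.
import requests

SDA_URL = "https://sdmdataaccess.sc.egov.usda.gov/Tabular/post.rest"

query = """
SELECT areasymbol, areaname, saverest
FROM sacatalog
WHERE areasymbol IN ('PA059', 'WV069')  -- placeholder survey area symbols
ORDER BY areasymbol;
"""

resp = requests.post(SDA_URL, json={"query": query, "format": "JSON+COLUMNNAME"})
resp.raise_for_status()

# With JSON+COLUMNNAME, the first row of "Table" is the column names.
for row in resp.json()["Table"]:
    print(row)
```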
-
Answering the second part of your question: why would suites of soil properties vary across SSA boundaries? A couple short answers:
Contacting your state soil scientist is the best way to determine why these artifacts are present and when they are scheduled to be fixed.
-
I think it is important to recognize that any thematic map created from SSURGO requires making MANY decisions about aggregation and the relative importance of information. The map can be no more precise than what is portrayed by the pattern of MUKEYs and the chosen data/aggregation method. The thematic map is unaware of implicit relationships between similar soils or analogous concepts, and applies weighting/aggregation regardless of context. Maps like the ones you share really emphasize the differences because we can't see the full depth of the dataset, which in many cases requires going down to series- and component-level details. I think you can feel confident that the original maps sought to capture the interpretive behavior of the named soils and those similar to them. That met the needs of the time; we are now in a new world in terms of our ability to cross-reference properties and concepts for the whole country, yet we are still stuck with relatively simple aggregation methods.

Recent initiatives (Soil Data Join Recorrelation, SDJR) sought to develop harmonized concepts across political boundaries called "MLRA map units". In general, creating MLRA map units to combine similar concepts is the "solution" to the problem of sharp changes at political boundaries. The process requires investigation by an MLRA soil scientist into the historical concept(s) and the development of a "new" concept. Usually this new concept is based in large part on one of the historical map unit concepts and then brought to modern standards, adding other information and extending ranges as needed. The process of combining map units in this way, while improving consistency across surveys, has been argued to remove the "nuance" associated with the specific map unit concept from each county survey. That said, we often lack the data to "defend" that nuance, so in most cases similar map units can be combined and ranges expanded if needed.

Dylan brought up several good points about the "why," so I'll provide some thoughts on interpretation and "dealing" with it. Just because something is "different" doesn't mean it is a "significant" difference. In the case of your drainage class map, you have lots of MWD on the PA side and lots of WD on the WV side. Either there is a systematic difference in the soil concepts being used, or the soil concepts being used have a range that includes both WD and MWD. From poking around in the area in question, it appears that the most extensive map units at this state/SSA boundary have similar dominant components (e.g. Gilpin, Dormont, Peabody, Culleoka) but differ in their composition. This kind of area is not the type of thing that was targeted in SDJR, because the map unit names are quite different, but you clearly illustrate why these too are an issue. From the aggregate map of RV/dominant values alone you can't tell what the natural range of the soils involved is. From an interpretive standpoint, this difference is not as big of an error as, e.g., WD versus PD. Similarly, the difference between slightly acid and moderately acid, while clearly affecting interpretations, is not a huge difference if that is the only thing that differs. As you say, though, multiple properties tend to differ together and covary in how they differ: this is because even the numeric quantities are categorical values derived from aggregation of specific map unit concepts. I would say that, in general, the product that was designed with the intent of having "regular" behavior across political boundaries is STATSGO.
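To make the aggregation point concrete, here is a toy sketch (entirely made-up component percentages and drainage classes, not real SSURGO data) of "dominant condition" style aggregation: two map units with nearly identical mixtures of the same soils can land on opposite sides of a class boundary, and that is all the thematic map can show.

```python
# Toy illustration (made-up data) of dominant-condition aggregation: component
# percentages are summed within each drainage class per map unit, and the class
# with the largest total wins. Nearly identical compositions can still flip the
# mapped class across a survey boundary.
from collections import defaultdict

# Hypothetical rows: (mukey, component name, comppct_r, drainage class)
components = [
    ("100001", "Gilpin",   45, "well drained"),
    ("100001", "Dormont",  40, "moderately well drained"),
    ("100001", "Culleoka", 15, "well drained"),             # WD total = 60 -> mapped WD
    ("200002", "Gilpin",   40, "well drained"),
    ("200002", "Dormont",  45, "moderately well drained"),
    ("200002", "Peabody",  15, "moderately well drained"),  # MWD total = 60 -> mapped MWD
]

def dominant_condition(rows):
    """Return {mukey: drainage class with the largest summed component percentage}."""
    totals = defaultdict(lambda: defaultdict(float))
    for mukey, _name, pct, dclass in rows:
        totals[mukey][dclass] += pct
    return {mukey: max(classes, key=classes.get) for mukey, classes in totals.items()}

print(dominant_condition(components))
# {'100001': 'well drained', '200002': 'moderately well drained'}
```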
We have been working toward updating SSURGO so that it behaves as consistently across boundaries as STATSGO, but there are still lots of places in the country that are not mapped, let alone updated, so the continuity is not quite there. Further, because those updates focused on specific sets of map units, rather than on specific properties or on all map units that differ across, e.g., a state boundary, we are likely to see only partial fixes over time.

ISSR-800, used for the soil properties app, further aggregates map unit-level data to 800 m grid cell resolution, so each cell is a combination of multiple map unit values, which are themselves an aggregate of components and, in the case of horizon-level properties, weighted averages over fixed depth intervals. All three of these steps can introduce subtle changes in the thematic map even where the component data are otherwise "similar". Aggregation routines in general are also prone to error in how they are coded, especially when we can't be certain that the same standards for data population were used across the various surveys. For instance, components might only differ in RV horizon depths, but have similar historic SSIR data populated for each of their layers. The SSIR data were essentially series-level ranges used for components of the same name, and at the time were managed largely at the state level. So just changing horizon depths, such that different proportions of different layers are averaged, affects the thematic map of, e.g., 0-25 cm pH even when the property information is otherwise the same.
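That last point is easy to demonstrate with a toy sketch (made-up horizon depths and pH values): two components with identical layer pH but different RV horizon depths produce different 0-25 cm depth-weighted averages, enough to straddle roughly the "slightly acid" vs "moderately acid" break.

```python
# Toy illustration (made-up data) of a fixed-depth (0-25 cm) weighted average:
# the two components below have identical layer pH values (A = 6.4, B = 5.2),
# but different RV horizon depths, so the 0-25 cm aggregate differs even though
# the property data are "the same".

def weighted_mean_0_25(horizons, top=0, bottom=25):
    """Depth-weighted mean of a property over [top, bottom) cm.
    horizons: list of (hzdept_r, hzdepb_r, value)."""
    total, weight = 0.0, 0.0
    for hz_top, hz_bot, value in horizons:
        overlap = max(0, min(hz_bot, bottom) - max(hz_top, top))
        total += overlap * value
        weight += overlap
    return total / weight if weight else None

component_1 = [(0, 20, 6.4), (20, 100, 5.2)]   # A horizon 0-20 cm
component_2 = [(0, 10, 6.4), (10, 100, 5.2)]   # A horizon 0-10 cm

print(round(weighted_mean_0_25(component_1), 2))  # 6.16 -> "slightly acid"
print(round(weighted_mean_0_25(component_2), 2))  # 5.68 -> "moderately acid"
```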
-
When looking across the UC Davis CONUS Soil Properties map, one can easily find data irregularities between survey areas. Sometimes they can be quite stark. For example, the SW corner of PA and the Northern Panhandle of WV:
I totally get that these irregularities are bound to exist... the surveys were conducted by many different people over many years, and definitions/methods have evolved over time. What makes me more unsettled is that it seems to me that, when a region has any irregularities at all, it tends to have them across many variables. In other words, it makes sense to me that two surveys over here calibrated their pH meters differently and two surveys over there had different protocols for measuring soil depth... but why is it that, like in the example above, so many of those irregularities tend to occur together and even across unrelated variables?
Sorry if this is more of a history question than a science question. I'm just feeling a bit unsettled about the trustworthiness of data in certain regions at the moment...
I doubt that there is anything I can do to "deal" with this on my end, but I'm curious if there is any broader effort underway to rectify any of these irregularities.