Package 'FedData'

Title: Download Geospatial Data Available from Several Federated Data Sources
Description: Download geospatial data available from several federated data sources (mainly sources maintained by the US Federal government). Currently, the package enables extraction from nine datasets: The National Elevation Dataset digital elevation models (<https://www.usgs.gov/3d-elevation-program> 1 and 1/3 arc-second; USGS); The National Hydrography Dataset (<https://www.usgs.gov/national-hydrography/national-hydrography-dataset>; USGS); The Soil Survey Geographic (SSURGO) database from the National Cooperative Soil Survey (<https://websoilsurvey.sc.egov.usda.gov/>; NCSS), which is led by the Natural Resources Conservation Service (NRCS) under the USDA; the Global Historical Climatology Network (<https://www.ncei.noaa.gov/products/land-based-station/global-historical-climatology-network-daily>; GHCN), coordinated by National Climatic Data Center at NOAA; the Daymet gridded estimates of daily weather parameters for North America, version 4, available from the Oak Ridge National Laboratory's Distributed Active Archive Center (<https://daymet.ornl.gov/>; DAAC); the International Tree Ring Data Bank; the National Land Cover Database (<https://www.mrlc.gov/>; NLCD); the Cropland Data Layer from the National Agricultural Statistics Service (<https://www.nass.usda.gov/Research_and_Science/Cropland/SARS1a.php>; NASS); and the PAD-US dataset of protected area boundaries (<https://www.usgs.gov/programs/gap-analysis-project/science/pad-us-data-overview>; USGS).
Authors: R. Kyle Bocinsky [aut, cre, cph], Dylan Beaudette [ctb], Scott Chamberlain [ctb, rev], Jeffrey Hollister [ctb], Julia Gustavsen [rev]
Maintainer: R. Kyle Bocinsky <[email protected]>
License: MIT + file LICENSE
Version: 4.2.0
Built: 2024-12-10 23:24:44 UTC
Source: https://github.com/ropensci/FedData

Help Index


Download and crop the 1-km DAYMET v4 daily weather dataset.

Description

get_daymet returns a named list of SpatRasters of weather data cropped to a given template study area.

Usage

get_daymet(
  template,
  label,
  elements = c("dayl", "prcp", "srad", "swe", "tmax", "tmin", "vp"),
  years = 1980:(lubridate::year(Sys.time()) - 1),
  region = "na",
  tempo = "day",
  extraction.dir = file.path(tempdir(), "FedData", "extractions", "daymet", label),
  raster.options = c("COMPRESS=DEFLATE", "ZLEVEL=9", "INTERLEAVE=BAND"),
  force.redo = FALSE,
  progress = TRUE
)

Arguments

template

A Simple Feature or SpatRaster object to serve as a template for cropping.

label

A character string naming the study area.

elements

A character vector of elements to extract.
The available elements are:
dayl = Duration of the daylight period in seconds per day. This calculation is based on the period of the day during which the sun is above a hypothetical flat horizon.
prcp = Daily total precipitation in millimeters per day, sum of all forms converted to water-equivalent. Precipitation occurrence on any given day may be ascertained.
srad = Incident shortwave radiation flux density in watts per square meter, taken as an average over the daylight period of the day. NOTE: Daily total radiation (MJ/m2/day) can be calculated as ((srad (W/m2) * dayl (s/day)) / 1,000,000); see the sketch after this list.
swe = Snow water equivalent in kilograms per square meter. The amount of water contained within the snowpack.
tmax = Daily maximum 2-meter air temperature in degrees Celsius.
tmin = Daily minimum 2-meter air temperature in degrees Celsius.
vp = Water vapor pressure in pascals. Daily average partial pressure of water vapor.
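
As a worked version of the srad note above, a minimal sketch (assuming DAYMET is the named list returned by get_daymet() with both "srad" and "dayl" requested; terra performs the arithmetic cell- and layer-wise):

# Convert average shortwave radiation and daylength to daily total
# radiation in MJ/m2/day, following the formula in the srad note above
rad_total_MJ <- (DAYMET$srad * DAYMET$dayl) / 1e6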

years

A numeric vector of years to extract.

region

The name of a region. The available regions are:
na = North America
hi = Hawaii
pr = Puerto Rico

tempo

The frequency of the data. The available tempos are:
day = Daily data
mon = Monthly summary data
ann = Annual summary data
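
For example, monthly summaries (rather than daily data) could be requested for the same study area along these lines; this is only a sketch, using arguments documented above:

# Hypothetical request for monthly precipitation summaries
DAYMET_monthly <- get_daymet(
  template = FedData::meve,
  label = "meve_monthly",
  elements = c("prcp"),
  years = 1985,
  tempo = "mon"
)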

extraction.dir

A character string indicating where the extracted and cropped DAYMET data should be put. Defaults to a temporary directory.

raster.options

a vector of GDAL options passed to terra::writeRaster.

force.redo

If an extraction for this template and label already exists in extraction.dir, should a new one be created?

progress

Draw a progress bar when downloading?

Value

A named list of SpatRasters of weather data cropped to the extent of the template.

Examples

## Not run: 
library(terra)

# Get the DAYMET (North America only)
# Returns a named list of SpatRasters
DAYMET <- get_daymet(
  template = FedData::meve,
  label = "meve",
  elements = c("prcp", "tmin", "tmax"),
  years = 1985
)

# Plot with terra::plot
plot(DAYMET$tmin$`1985-10-23`)

## End(Not run)

Download and crop the Global Historical Climate Network-Daily data.

Description

get_ghcn_daily returns a named list of length 2:

  1. 'spatial': A Simple Feature of the locations of GHCN weather stations in the template, and

  2. 'tabular': A named list of data.frames containing the daily weather data for each station. The name of each list item is the station ID.

Usage

get_ghcn_daily(
  template = NULL,
  label = NULL,
  elements = NULL,
  years = NULL,
  raw.dir = file.path(tempdir(), "FedData", "raw", "ghcn"),
  extraction.dir = file.path(tempdir(), "FedData", "extractions", "ned", label),
  standardize = FALSE,
  force.redo = FALSE
)

Arguments

template

A Simple Feature or SpatRaster object to serve as a template for cropping. Alternatively, a character vector providing GHCN station IDs. If missing, all stations will be downloaded!
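
As a sketch of the station-ID alternative described above (the station identifier shown is a placeholder, not a real GHCN ID):

# Hypothetical: download precipitation records for a single station by ID
GHCN.station <-
  get_ghcn_daily(
    template = c("USW00000000"), # placeholder station ID
    label = "single_station",
    elements = c("PRCP")
  )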

label

A character string naming the study area.

elements

A character vector of elements to extract.
The five core elements are listed below; note that several are reported in tenths of their unit (a conversion sketch follows this list):
PRCP = Precipitation (tenths of mm)
SNOW = Snowfall (mm)
SNWD = Snow depth (mm)
TMAX = Maximum temperature (tenths of degrees C)
TMIN = Minimum temperature (tenths of degrees C)

The other elements are:

ACMC = Average cloudiness midnight to midnight from 30-second ceilometer data (percent)
ACMH = Average cloudiness midnight to midnight from manual observations (percent)
ACSC = Average cloudiness sunrise to sunset from 30-second ceilometer data (percent)
ACSH = Average cloudiness sunrise to sunset from manual observations (percent)
AWDR = Average daily wind direction (degrees)
AWND = Average daily wind speed (tenths of meters per second)
DAEV = Number of days included in the multiday evaporation total (MDEV)
DAPR = Number of days included in the multiday precipitation total (MDPR)
DASF = Number of days included in the multiday snowfall total (MDSF)
DATN = Number of days included in the multiday minimum temperature (MDTN)
DATX = Number of days included in the multiday maximum temperature (MDTX)
DAWM = Number of days included in the multiday wind movement (MDWM)
DWPR = Number of days with non-zero precipitation included in multiday precipitation total (MDPR)
EVAP = Evaporation of water from evaporation pan (tenths of mm)
FMTM = Time of fastest mile or fastest 1-minute wind (hours and minutes, i.e., HHMM)
FRGB = Base of frozen ground layer (cm)
FRGT = Top of frozen ground layer (cm)
FRTH = Thickness of frozen ground layer (cm)
GAHT = Difference between river and gauge height (cm)
MDEV = Multiday evaporation total (tenths of mm; use with DAEV)
MDPR = Multiday precipitation total (tenths of mm; use with DAPR and DWPR, if available)
MDSF = Multiday snowfall total
MDTN = Multiday minimum temperature (tenths of degrees C; use with DATN)
MDTX = Multiday maximum temperature (tenths of degrees C; use with DATX)
MDWM = Multiday wind movement (km)
MNPN = Daily minimum temperature of water in an evaporation pan (tenths of degrees C)
MXPN = Daily maximum temperature of water in an evaporation pan (tenths of degrees C)
PGTM = Peak gust time (hours and minutes, i.e., HHMM)
PSUN = Daily percent of possible sunshine (percent)
SN*# = Minimum soil temperature (tenths of degrees C) where * corresponds to a code for ground cover and # corresponds to a code for soil depth.

Ground cover codes include the following:
0 = unknown
1 = grass
2 = fallow
3 = bare ground
4 = brome grass
5 = sod
6 = straw mulch
7 = grass muck
8 = bare muck

Depth codes include the following:
1 = 5 cm
2 = 10 cm
3 = 20 cm
4 = 50 cm
5 = 100 cm
6 = 150 cm
7 = 180 cm

SX*# = Maximum soil temperature (tenths of degrees C) where * corresponds to a code for ground cover and # corresponds to a code for soil depth.
See SN*# for ground cover and depth codes.
TAVG = Average temperature (tenths of degrees C) (Note that TAVG from source 'S' corresponds to an average for the period ending at 2400 UTC rather than local midnight)
THIC = Thickness of ice on water (tenths of mm)
TOBS = Temperature at the time of observation (tenths of degrees C)
TSUN = Daily total sunshine (minutes)
WDF1 = Direction of fastest 1-minute wind (degrees)
WDF2 = Direction of fastest 2-minute wind (degrees)
WDF5 = Direction of fastest 5-second wind (degrees)
WDFG = Direction of peak wind gust (degrees)
WDFI = Direction of highest instantaneous wind (degrees)
WDFM = Fastest mile wind direction (degrees)
WDMV = 24-hour wind movement (km)
WESD = Water equivalent of snow on the ground (tenths of mm)
WESF = Water equivalent of snowfall (tenths of mm)
WSF1 = Fastest 1-minute wind speed (tenths of meters per second)
WSF2 = Fastest 2-minute wind speed (tenths of meters per second)
WSF5 = Fastest 5-second wind speed (tenths of meters per second)
WSFG = Peak gust wind speed (tenths of meters per second)
WSFI = Highest instantaneous wind speed (tenths of meters per second)
WSFM = Fastest mile wind speed (tenths of meters per second)
WT** = Weather Type where ** has one of the following values:

01 = Fog, ice fog, or freezing fog (may include heavy fog)
02 = Heavy fog or heavy freezing fog (not always distinguished from fog)
03 = Thunder
04 = Ice pellets, sleet, snow pellets, or small hail
05 = Hail (may include small hail)
06 = Glaze or rime
07 = Dust, volcanic ash, blowing dust, blowing sand, or blowing obstruction
08 = Smoke or haze
09 = Blowing or drifting snow
10 = Tornado, waterspout, or funnel cloud
11 = High or damaging winds
12 = Blowing spray
13 = Mist
14 = Drizzle
15 = Freezing drizzle
16 = Rain (may include freezing rain, drizzle, and freezing drizzle)
17 = Freezing rain
18 = Snow, snow pellets, snow grains, or ice crystals
19 = Unknown source of precipitation
21 = Ground fog
22 = Ice fog or freezing fog

WV** = Weather in the Vicinity where ** has one of the following values:
01 = Fog, ice fog, or freezing fog (may include heavy fog)
03 = Thunder
07 = Ash, dust, sand, or other blowing obstruction
18 = Snow or ice crystals
20 = Rain or snow shower
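
Because several elements are reported in tenths of their nominal unit, converting to conventional units is a matter of dividing by ten; a small sketch with hypothetical values:

# Hypothetical TMAX values reported in tenths of degrees C
tmax_tenths <- c(211, 189, NA, 245)
tmax_degC <- tmax_tenths / 10 # 21.1, 18.9, NA, 24.5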

years

A numeric vector indicating which years to get.

raw.dir

A character string indicating where raw downloaded files should be put. The directory will be created if missing. Defaults to a temporary directory.

extraction.dir

A character string indicating where the extracted and cropped GHCN data should be put. The directory will be created if missing. Defaults to a temporary directory.

standardize

Should the data be limited to the years/months/days common to all requested elements? Defaults to FALSE.

force.redo

If an extraction for this template and label already exists, should a new one be created? Defaults to FALSE.

Value

A named list containing the 'spatial' and 'tabular' data.

Examples

## Not run: 
# Get the daily GHCN data (GLOBAL)
# Returns a list: the first element is the spatial locations of stations,
# and the second is a list of the stations and their daily data
GHCN.prcp <-
  get_ghcn_daily(
    template = FedData::meve,
    label = "meve",
    elements = c("prcp")
  )

# Plot the Mesa Verde National Park polygon
plot(meve)

# Plot the spatial locations
plot(GHCN.prcp$spatial$geometry, pch = 1, add = TRUE)
legend("bottomleft", pch = 1, legend = "GHCN Precipitation Records")

# Elements for which you require the same data
# (i.e., minimum and maximum temperature for the same days)
# can be standardized using `standardize = TRUE`
GHCN.temp <- get_ghcn_daily(
  template = FedData::meve,
  label = "meve",
  elements = c("tmin", "tmax"),
  standardize = TRUE
)

# Plot the Mesa Verde National Park polygon
plot(meve)

# Plot the spatial locations
plot(GHCN.temp$spatial$geometry, pch = 1, add = TRUE)
legend("bottomleft", pch = 1, legend = "GHCN Temperature Records")

## End(Not run)

Download the latest version of the ITRDB, and extract given parameters.

Description

get_itrdb returns a named list of length 3:

  1. 'metadata': A data frame or Simple Feature (if makeSpatial==TRUE) of the locations and names of extracted ITRDB chronologies,

  2. 'widths': A matrix of tree-ring widths/densities given user selection, and

  3. 'depths': A matrix of tree-ring sample depths.

Usage

get_itrdb(
  template = NULL,
  label = NULL,
  recon.years = NULL,
  calib.years = NULL,
  species = NULL,
  measurement.type = NULL,
  chronology.type = NULL,
  raw.dir = paste0(tempdir(), "/FedData/raw/itrdb"),
  extraction.dir = ifelse(!is.null(label), paste0(tempdir(),
    "/FedData/extractions/itrdb/", label, "/"), paste0(tempdir(),
    "/FedData/extractions/itrdb")),
  force.redo = FALSE
)

Arguments

template

A Simple Feature or SpatRaster object to serve as a template for cropping. If missing, all available global chronologies are returned.

label

A character string naming the study area.

recon.years

A numeric vector of years over which reconstructions are needed; if missing, the union of all years in the available chronologies is given.

calib.years

A numeric vector of all required years—chronologies without these years will be discarded; if missing, all available chronologies are given.

species

A character vector of 4-letter tree species identifiers; if missing, all available chronologies are given.

measurement.type

A character vector of measurement type identifiers. Options include:

  • 'Total Ring Density'

  • 'Earlywood Width'

  • 'Earlywood Density'

  • 'Latewood Width'

  • 'Minimum Density'

  • 'Ring Width'

  • 'Latewood Density'

  • 'Maximum Density'

  • 'Latewood Percent'

If missing, all available chronologies are given.

chronology.type

A character vector of chronology type identifiers. Options include:

  • 'ARSTND'

  • 'Low Pass Filter'

  • 'Residual'

  • 'Standard'

  • 'Re-Whitened Residual'

  • 'Measurements Only'

If missing, all available chronologies are given.
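
These filters can be combined in a single call; a hedged sketch (the species code is illustrative and assumes matching chronologies exist for the study area):

# Hypothetical: restrict to Douglas-fir (PSME) standard ring-width chronologies
ITRDB_filtered <-
  get_itrdb(
    template = FedData::meve,
    label = "meve_psme",
    species = c("PSME"),
    measurement.type = "Ring Width",
    chronology.type = "Standard"
  )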

raw.dir

A character string indicating where raw downloaded files should be put. The directory will be created if missing.

extraction.dir

A character string indicating where the extracted and cropped ITRDB dataset should be put. The directory will be created if missing.

force.redo

If an extraction already exists, should a new one be created? Defaults to FALSE.

Value

A named list containing the 'metadata', 'widths', and 'depths' data.

Examples

## Not run: 
# Get the ITRDB records
ITRDB <- get_itrdb(
  template = FedData::meve,
  label = "meve"
)

# Plot the Mesa Verde National Park polygon
plot(meve)

# Map the locations of the tree ring chronologies
plot(ITRDB$metadata$geometry, pch = 1, add = TRUE)
legend("bottomleft", pch = 1, legend = "ITRDB chronologies")

## End(Not run)

Download and crop the NASS Cropland Data Layer.

Description

get_nass_cdl returns a SpatRaster of the NASS Cropland Data Layer cropped to a given template study area.

Usage

get_nass_cdl(
  template,
  label,
  year = 2019,
  extraction.dir = paste0(tempdir(), "/FedData/"),
  raster.options = c("COMPRESS=DEFLATE", "ZLEVEL=9", "INTERLEAVE=BAND"),
  force.redo = FALSE,
  progress = TRUE
)

get_nass(template, label, ...)

get_cdl(template, label, ...)

cdl_colors()

Arguments

template

A Simple Feature or SpatRaster object to serve as a template for cropping.

label

A character string naming the study area.

year

An integer representing the year of the desired NASS Cropland Data Layer product. Acceptable values are 2007 through the most recent available year.

extraction.dir

A character string indicating where the extracted and cropped NASS data should be put. The directory will be created if missing.

raster.options

a vector of options for terra::writeRaster.

force.redo

If an extraction for this template and label already exists, should a new one be created?

progress

Draw a progress bar when downloading?

...

Other parameters passed on to get_nass_cdl.

Value

A SpatRaster cropped to the bounding box of the template.

Examples

## Not run: 
# Extract data for the Mesa Verde National Park:

# Get the NASS CDL (USA ONLY)
# Returns a raster
NASS <-
  get_nass_cdl(
    template = FedData::meve,
    label = "meve",
    year = 2011
  )

# Plot with terra::plot
terra::plot(NASS)

## End(Not run)
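
cdl_colors(), listed in the Usage above, returns the Cropland Data Layer legend and color table; a minimal sketch for inspecting it (the exact column layout is not documented here):

# Inspect the CDL legend/color table
cdl_legend <- cdl_colors()
head(cdl_legend)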

Download and crop the 1 (~30 meter) or 1/3 (~10 meter) arc-second National Elevation Dataset.

Description

get_ned returns a SpatRaster of elevation data cropped to a given template study area.

Usage

get_ned(
  template,
  label,
  res = "1",
  extraction.dir = file.path(tempdir(), "FedData", "extractions", "ned", label),
  raster.options = c("COMPRESS=DEFLATE", "ZLEVEL=9"),
  force.redo = FALSE
)

Arguments

template

A Simple Feature or SpatRaster object to serve as a template for cropping.

label

A character string naming the study area.

res

A character string representing the desired resolution of the NED. '1' indicates the 1 arc-second NED (the default), while '13' indicates the 1/3 arc-second dataset.
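
A sketch requesting the finer 1/3 arc-second product instead (assuming 1/3 arc-second coverage exists for the study area):

# Hypothetical request for the 1/3 arc-second (~10 m) NED
NED_13 <-
  get_ned(
    template = FedData::meve,
    label = "meve_13",
    res = "13"
  )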

extraction.dir

A character string indicating where the extracted and cropped DEM should be put. The directory will be created if missing.

raster.options

a vector of GDAL options passed to terra::writeRaster.

force.redo

If an extraction for this template and label already exists, should a new one be created?

Value

A SpatRaster DEM cropped to the extent of the template.

Examples

## Not run: 
# Get the NED (USA ONLY)
# Returns a `SpatRaster`
NED <-
  get_ned(
    template = FedData::meve,
    label = "meve"
  )

# Plot with terra::plot
terra::plot(NED)

## End(Not run)

Download and crop the National Hydrography Dataset.

Description

get_nhd returns a list of Simple Feature objects extracted from the National Hydrography Dataset.

Usage

get_nhd(
  template,
  label,
  nhdplus = FALSE,
  extraction.dir = file.path(tempdir(), "FedData", "extractions", "nhd", label),
  force.redo = FALSE
)

Arguments

template

A Simple Feature or SpatRaster object to serve as a template for cropping.

label

A character string naming the study area.

nhdplus

Extract data from the USGS NHDPlus High Resolution service (experimental).
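
A sketch using the experimental NHDPlus High Resolution service (service availability may vary):

# Hypothetical: extract from the NHDPlus High Resolution service
NHD_plus <-
  get_nhd(
    template = FedData::meve,
    label = "meve_nhdplus",
    nhdplus = TRUE
  )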

extraction.dir

A character string indicating where the extracted and cropped NHD data should be put.

force.redo

If an extraction for this template and label already exists, should a new one be created?

Value

A list of sf collections extracted from the National Hydrography Dataset.

Examples

## Not run: 
# Get the NHD (USA ONLY)
NHD <- get_nhd(
  template = FedData::meve,
  label = "meve"
)
NHD
NHD %>%
  plot_nhd(template = FedData::meve)

## End(Not run)

Download and crop the National Land Cover Database.

Description

get_nlcd returns a SpatRaster of NLCD data cropped to a given template study area. nlcd_colors and pal_nlcd return the NLCD legend and color palette, as available through the MRLC website.

Usage

get_nlcd(
  template,
  label,
  year = 2021,
  dataset = "landcover",
  landmass = "L48",
  extraction.dir = file.path(tempdir(), "FedData", "extractions", "nlcd", label),
  raster.options = c("COMPRESS=DEFLATE", "ZLEVEL=9"),
  force.redo = FALSE
)

nlcd_colors()

pal_nlcd()

Arguments

template

A Simple Feature or terra object to serve as a template for cropping.

label

A character string naming the study area.

year

An integer representing the year of desired NLCD product. Acceptable values are 2021 (default), 2019, 2016, 2011, 2008, 2006, 2004, and 2001.

dataset

A character string representing the type of NLCD product. Acceptable values are 'landcover' (default), 'impervious', and 'canopy'.

landmass

A character string representing the landmass to be extracted. Acceptable values are 'L48' (lower 48 US states, the default), 'AK' (Alaska, 2011 and 2016 only), 'HI' (Hawaii, 2001 only), and 'PR' (Puerto Rico, 2001 only).
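
A sketch requesting a non-default product for the same study area (this assumes the 2016 impervious product is available from the MRLC web services):

# Hypothetical request for the 2016 fractional impervious surface product
NLCD_impervious <-
  get_nlcd(
    template = FedData::meve,
    label = "meve_impervious",
    year = 2016,
    dataset = "impervious"
  )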

extraction.dir

A character string indicating where the extracted and cropped NLCD data should be put. The directory will be created if missing.

raster.options

a vector of GDAL options passed to terra::writeRaster.

force.redo

If an extraction for this template and label already exists, should a new one be created?

Value

A SpatRaster cropped to the bounding box of the template.

Examples

## Not run: 
# Extract data for the Mesa Verde National Park:

# Get the NLCD (USA ONLY)
# Returns a raster
NLCD <-
  get_nlcd(
    template = FedData::meve,
    label = "meve",
    year = 2016
  )

# Plot with terra::plot
terra::plot(NLCD)

## End(Not run)
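
nlcd_colors() and pal_nlcd(), listed in the Usage above, return the NLCD legend and color palette mentioned in the Description; a minimal sketch for inspecting them:

# Inspect the NLCD legend and color palette tables
nlcd_legend <- nlcd_colors()
head(nlcd_legend)
pal_nlcd()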

Download and crop the Annual National Land Cover Database.

Description

get_nlcd_annual returns a SpatRaster of NLCD data cropped to a given template study area. The Annual NLCD is currently only available for the conterminous United States. More information about the Annual NLCD product is available on the Annual NLCD web page.

Usage

get_nlcd_annual(
  template,
  label,
  year = 2023,
  product = "LndCov",
  region = "CU",
  collection = 1,
  version = 0,
  extraction.dir = file.path(tempdir(), "FedData", "extractions", "nlcd_annual", label),
  raster.options = c("COMPRESS=DEFLATE", "ZLEVEL=9"),
  force.redo = FALSE
)

Arguments

template

A Simple Feature or terra object to serve as a template for cropping.

label

A character string naming the study area.

year

An integer vector representing the year(s) of the desired Annual NLCD product. Acceptable values are currently 1985 through 2023 (defaults to 2023).

product

A character vector representing the type of Annual NLCD product. Defaults to 'LndCov' (Land Cover).
LndCov = Land Cover
LndChg = Land Cover Change
LndCnf = Land Cover Confidence
FctImp = Fractional Impervious Surface
ImpDsc = Impervious Descriptor
SpcChg = Spectral Change Day of Year

region

A character string representing the region to be extracted. Acceptable values are 'CU' (Conterminous US, the default), 'AK' (Alaska), and 'HI' (Hawaii). Currently, only 'CU' is available.

collection

An integer representing the collection number. Currently, only '1' is available.

version

An integer representing the version number. Currently, only '0' is available.

extraction.dir

A character string indicating where the extracted and cropped NLCD data should be put. The directory will be created if missing.

raster.options

a vector of GDAL options passed to terra::writeRaster.

force.redo

If an extraction for this template and label already exists, should a new one be created?

Value

A SpatRaster cropped to the bounding box of the template.

Examples

## Not run: 
# Extract data for the Mesa Verde National Park:

# Get the NLCD (USA ONLY)
# Returns a raster
NLCD_ANNUAL <-
  get_nlcd_annual(
    template = FedData::meve,
    label = "meve",
    year = 2020,
    product =
      c(
        "LndCov",
        "LndChg",
        "LndCnf",
        "FctImp",
        "ImpDsc",
        "SpcChg"
      )
  )

NLCD_ANNUAL

## End(Not run)

Download and crop the PAD-US Dataset.

Description

get_padus returns a list of sf objects extracted from the PAD-US Dataset. Data are retrieved directly from PAD-US ArcGIS Web Services.

Usage

get_padus(
  template,
  label,
  layer = c("Manager_Name"),
  extraction.dir = file.path(tempdir(), "FedData", "extractions", "padus", label),
  force.redo = FALSE
)

Arguments

template

A Simple Feature or SpatRaster object to serve as a template for cropping. Optionally, a vector of unit names, e.g., c('Mesa Verde National Park','Ute Mountain Reservation'), may be provided.

label

A character string naming the study area.

layer

A character vector containing one or more PAD-US Layers. By default, the Manager_Name layer is downloaded; a sketch requesting multiple layers follows this list.

  • Protection_Status_by_GAP_Status_Code: PAD-US 3.0 Protection Status by GAP Status Code — Service representing a measure of management intent to permanently protect biodiversity. GAP 1 and 2 areas are primarily managed for biodiversity; GAP 3 areas are managed for multiple uses including conservation and extraction; GAP 4 areas have no known mandate for biodiversity protection. GAP Status Codes 1-3 are displayed; GAP 4 areas are included but not displayed.

  • Public_Access: PAD-US 3.0 Public Access — Service representing general level of public access permitted in the area - Open, Restricted (permit, seasonal), Closed. Public Access Unknown areas not displayed. Use to show general categories of public access (however, not all areas have been locally reviewed).

  • Fee_Manager: PAD-US 3.0 Fee Manager — Manager or administrative agency names standardized nationally. Use for categorization by manager name, with detailed federal managers and generic state/local/other managers. Where available this layer includes fee simple parcels from the Fee feature class plus DOD and Tribal areas from the Proclamation feature class.

  • Manager_Name: PAD-US 3.0 Manager Name — Service representing coarse level land manager description from "Agency Type" Domain, "Manager Type" Field (for example, Federal, Tribal, State, Local Gov, Private). Use for broad categorization of manager levels, for general depictions of who manages what areas.

  • Manager_Type: PAD-US 3.0 Manager Type — Service representing coarse level land manager description from "Agency Type" Domain, "Manager Type" Field (for example, Federal, Tribal, State, Local Gov, Private). Use for broad categorization of manager levels, for general depictions of who manages what areas.

  • Federal_Fee_Managers_Authoritative: PAD-US 3.0 Federal Fee Managers Authoritative — An ArcGIS WebService describing authoritative fee data for federal managers or administrative agencies by name. U.S. Department of Defense and Tribal areas shown from the Proclamation feature class. Use to depict authoritative fee data for individual federal management agencies (no state, local or private lands). This service does not include designations that often overlap state, private or other inholdings. U.S. Department of Defense internal land ownership is not represented but is implied Federal. See the Federal Management Agencies service for a combined view of fee ownership, designations, and easements.

  • Federal_Management_Agencies: PAD-US 3.0 Federal Management Agencies — Federal managers or administrative agencies by name. Use to depict individual federal management agencies (no state, local or private lands). This map is based on the Combined Proclamation, Marine, Fee, Designation, Easement feature class.

  • Protection_Mechanism_Category: PAD-US 3.0 Protection Mechanism Category — Service representing the protection mechanism category including fee simple, internal management designations, easements, leases and agreements, and Marine Areas. Use to show categories of land tenure for all protected areas, including marine areas.

  • Proclamation_and_Other_Planning_Boundaries: PAD-US 3.0 Proclamation and Other Planning Boundaries — Service representing boundaries that provide additional context. Administrative agency name standardized for the nation (DOD, FWS, NPS, USFS, Tribal). Boundaries shown with outline only, as proclamation data do not depict actual ownership or management. Use to show outline of agency proclamation, approved acquisition or other planning boundaries where internal ownership is not depicted.

  • Fee_Topology_Overlaps: PAD-US 3.0 Topology Overlaps — Topology assessment of the Fee feature class. Use to identify overlaps in Fee data between Federal agencies and between Federal/State lands.
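
Multiple layers can be requested in one call; a sketch using two of the layer names documented above:

# Hypothetical: request manager names and public-access status together
PADUS_layers <-
  get_padus(
    template = FedData::meve,
    label = "meve_access",
    layer = c("Manager_Name", "Public_Access")
  )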

extraction.dir

A character string indicating where the extracted and cropped PAD-US data should be put.

force.redo

If an extraction for this template and label already exists, should a new one be created?

Details

PAD-US is America’s official national inventory of U.S. terrestrial and marine protected areas that are dedicated to the preservation of biological diversity and to other natural, recreation and cultural uses, managed for these purposes through legal or other effective means. PAD-US also includes the best available aggregation of federal land and marine areas provided directly by managing agencies, coordinated through the Federal Geographic Data Committee Federal Lands Working Group.

Value

A list of sf::sf collections extracted from the PAD-US Dataset.

Examples

## Not run: 
# Get the PAD-US (USA ONLY)
PADUS <- get_padus(
  template = FedData::meve,
  label = "meve"
)
PADUS

## End(Not run)

Download and crop data from the NRCS SSURGO soils database.

Description

This is an efficient method for spatially merging several different soil survey areas as well as merging their tabular data.

Usage

get_ssurgo(
  template,
  label,
  raw.dir = paste0(tempdir(), "/FedData/raw/ssurgo"),
  extraction.dir = paste0(tempdir(), "/FedData/"),
  force.redo = FALSE
)

Arguments

template

A Simple Feature or SpatRaster object to serve as a template for cropping. Optionally, a vector of area names, e.g., c('IN087','IN088'), may be provided.

label

A character string naming the study area.

raw.dir

A character string indicating where raw downloaded files should be put. The directory will be created if missing. Defaults to a temporary directory.

extraction.dir

A character string indicating where the extracted and cropped SSURGO data should be put. The directory will be created if missing. Defaults to a temporary directory.

force.redo

If an extraction for this template and label already exists, should a new one be created? Defaults to FALSE.

Details

get_ssurgo returns a named list of length 2:

  1. 'spatial': A Simple Feature of soil mapunits in the template, and

  2. 'tabular': A named list of data.frames with the SSURGO tabular data.

Value

A named list containing the 'spatial' and 'tabular' data.

Examples

## Not run: 
# Get the NRCS SSURGO data (USA ONLY)
SSURGO.MEVE <-
  get_ssurgo(
    template = FedData::meve,
    label = "meve"
  )

# Plot the Mesa Verde National Park polygon
plot(meve)

# Plot the SSURGO mapunit polygons
plot(SSURGO.MEVE$spatial["MUKEY"],
  lwd = 0.1,
  add = TRUE
)

# Or, download by Soil Survey Area names
SSURGO.areas <-
  get_ssurgo(
    template = c("CO670", "CO075"),
    label = "CO_TEST"
  )

# Let's just look at spatial data for CO075
SSURGO.areas.CO075 <-
  SSURGO.areas$spatial[SSURGO.areas$spatial$AREASYMBOL == "CO075", ]

# And get the NED data under them for pretty plotting
NED.CO075 <-
  get_ned(
    template = SSURGO.areas.CO075,
    label = "SSURGO_CO075"
  )

# Plot the SSURGO mapunit polygons, but only for CO075
terra::plot(NED.CO075)
plot(
  SSURGO.areas.CO075$geom,
  lwd = 0.1,
  add = TRUE
)

## End(Not run)

Download and crop the Watershed Boundary Dataset.

Description

get_wbd returns a Simple Feature collection of the HUC 12 regions within the specified template.

Usage

get_wbd(
  template,
  label,
  extraction.dir = file.path(tempdir(), "FedData", "extractions", "nhd", label),
  force.redo = FALSE
)

Arguments

template

A Simple Feature or SpatRaster object to serve as a template for cropping.

label

A character string naming the study area.

extraction.dir

A character string indicating where the extracted and cropped WBD data should be put.

force.redo

If an extraction for this template and label already exists, should a new one be created?

Value

An sf collection of the HUC 12 regions within the specified template.
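
Examples

A minimal sketch in the style of the other help topics, using only the arguments documented above:

## Not run: 
# Get the WBD HUC12 boundaries for Mesa Verde National Park (USA ONLY)
WBD <- get_wbd(
  template = FedData::meve,
  label = "meve"
)
WBD

## End(Not run)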


The boundary of Mesa Verde National Park

Description

A dataset containing the spatial polygon defining the boundary of Mesa Verde National Park in southwestern Colorado.

Usage

meve

Format

Simple feature collection with 1 feature and a geometry field.


A basic plotting function for NHD data.

Description

This function is provided more as an example of plotting NHD output than as a full-featured plotting utility.

Usage

plot_nhd(x, template = NULL)

Arguments

x

The result of get_nhd.

template

A Simple Feature or SpatRaster object to serve as a template for cropping.

Value

A ggplot2 panel of plots

Examples

## Not run: 
# Get the NHD (USA ONLY)
NHD <- get_nhd(
  template = FedData::meve,
  label = "meve"
)
NHD
NHD %>%
  plot_nhd(template = FedData::meve)

## End(Not run)

Replace NULLs

Description

Replace all NULL values in a list with NA.

Usage

replace_null(x)

Arguments

x

A list

Value

A list with NULLs replaced by NA

Examples

list(a = NULL, b = 1, c = list(foo = NULL, bar = NULL)) %>% replace_null()