download_climate {clidatajp}    R Documentation
Download climate data of the world
Description
For polite scraping, a 5-second interval is set in download_climate(), so it takes over 5 hours to download the climate data of all stations. If you do not need to renew the climate data, please use the existing data set via "data(climate_world)". The source web pages are listed at https://www.data.jma.go.jp/gmd/cpd/monitor/nrmlist/
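If the bundled data set is enough, a minimal sketch like the following avoids any scraping; it assumes the character columns of climate_world are stored as escaped Unicode strings, as station_links is in the Examples below.

library(dplyr)
library(stringi)
data(climate_world)
# unescape only the character columns; other columns are left untouched
climate_world <-
  climate_world %>%
  dplyr::mutate(dplyr::across(where(is.character),
                              stringi::stri_unescape_unicode))
head(climate_world)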
Usage
download_climate(url)
Arguments
url
A string specifying the URL of the target HTML page (a station page, e.g. from data(station_links)).
Value
A tibble containing climate and station information, or NULL if the download fails.
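A minimal single-station sketch showing the return value and the NULL case (it assumes the url column of station_links holds escaped Unicode strings, as in the Examples below):

library(stringi)
data(station_links)
# take one station URL; the call waits about 5 sec for polite scraping
url <- stringi::stri_unescape_unicode(station_links$url[1])
one_station <- download_climate(url)
if(is.null(one_station)){
  message("download failed: NULL returned")
} else {
  print(one_station)  # a tibble with climate and station information
}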
Examples
# If you want all climate data, remove head().
# The code takes > 5 sec per station because of polite scraping.
library(magrittr)
library(stringi)
library(dplyr)
data(station_links)
# unescape Unicode strings and keep only the first 3 stations for a short demo
station_links <-
  station_links %>%
  dplyr::mutate_all(stringi::stri_unescape_unicode) %>%
  head(3)
continent <- station_links$continent
no <- station_links$no
urls <- station_links$url
# download each station's climate data (about 5 sec per station)
climate <- list()
for(i in seq_along(urls)){
  print(stringr::str_c(i, " / ", length(urls)))
  climate[[i]] <- download_climate(urls[i])
}
# run only if every download succeeded (download_climate() returns NULL on failure)
if(!any(vapply(climate, is.null, logical(1)))){
  month_per_year <- 12
  climate_world <-
    dplyr::bind_rows(climate) %>%
    dplyr::bind_cols(
      # each station contributes 12 monthly rows, so repeat its continent 12 times
      tibble::tibble(continent = rep(continent, each = month_per_year))) %>%
    dplyr::bind_cols(
      tibble::tibble(no = rep(no, each = month_per_year))) %>%
    dplyr::relocate(no, continent, country, station)
  climate_world
}
[Package clidatajp version 0.5.2]