---
title: "A Simple Workflow"
author:
- name: "Socorro Dominguez Vidaña"
institute: [uwiscgeog]
correspondence: true
email: dominguezvid@wisc.edu
orcid_id: 0000-0002-7926-4935
- name: "Simon Goring"
institute: [uwiscgeog]
correspondence: false
email: goring@wisc.edu
orcid_id: 0000-0002-2700-4605
institute:
- uwiscgeog:
name: "University of Wisconsin -- Madison: Department of Geography"
date: "`r Sys.Date()`"
output:
html_document:
code_folding: show
fig_caption: yes
keep_md: yes
self_contained: yes
theme: readable
toc: yes
toc_float: yes
css: "text.css"
pdf_document:
pandoc_args: "-V geometry:vmargin=1in -V geometry:hmargin=1in"
csl: 'https://bit.ly/3khj0ZL'
---
```{r setup, echo=FALSE}
options(warn = -1)
pacman::p_load(dplyr, tidyr, ggplot2, sf, geojsonsf, leaflet, DT, readr, stringr, rioja, neotoma2)
```
## Introduction
This document is intended to act as a primer for the use of the new Neotoma R package, `neotoma2` and is the companion to the [*Introduction to Neotoma* presentation](https://docs.google.com/presentation/d/1oouTvKMoWdEvUyjrKdn7qUKLNWF27KiM/edit?slide=id.p1#slide=id.p1).
Some users may be working with this document as part of a workshop for which there is a Binder instance. The Binder instance will run RStudio in your browser, with all the required packages installed.
If you are using this workflow on its own, or want to use the package directly, [the `neotoma2` package](https://github.com/NeotomaDB/neotoma2) is available on CRAN by running:
``` r
install.packages('neotoma2')
library(neotoma2)
```
Your version should be at or above `r packageVersion("neotoma2")`.
This workshop will also require other packages. To maintain the flow of this document we've placed instructions at the end of the document in the section labelled "[Installing packages on your own](#localinstall)". Please install these packages, and make sure they are at the latest version.
## Learning Goals
In this tutorial you will learn how to:
1. [Site Searches](#3-site-searches): Search for sites using site names and geographic parameters.
2. [Filter Results](#33-filter-records-tabset): Filter results using temporal and spatial parameters.
3. [Explore Data](#34-pulling-in-sample-data): Obtain sample information for the selected datasets.
4. [Visualize Data](#4-simple-analytics): Perform basic stratigraphic plotting.
## Background
### Getting Help with Neotoma
If you're planning on working with Neotoma, please join us on [Slack](https://join.slack.com/t/neotomadb/shared_invite/zt-cvsv53ep-wjGeCTkq7IhP6eUNA9NxYQ) where we manage a channel specifically for questions about the R package (the *#it_r* channel, or *#it_r_es* for R help in Spanish and *#it_r_jp* in Japanese). You may also wish to join the Neotoma community through our Google Groups mailing lists; please [see the information on our website](https://www.neotomadb.org/about/join-the-neotoma-community) to be added.
### Understanding Data Structures in Neotoma
Data in the Neotoma database itself is structured as a set of linked relationships to express different elements of paleoecological analysis:
- space and time
- Where is a sample located in latitude and longitude?
- Where is a sample along a depth profile?
- What is the estimated age of that sample?
- What is the recorded age of elements within or adjacent to the sample?
- observations
- What is being counted or measured?
- What units are being used?
- Who observed it?
- scientific methods
- What statistical model was used to calculate age?
- What uncertainty terms are used in describing an observation?
- conceptual data models
- How do observations in one sample relate to other samples within the same collection?
- How does an observation of a fossil relate to extant or extinct relatives?
These relationships can be complex because paleoecology is a broad and evolving discipline. As such, the database itself is highly structured, and normalized, to allow new relationships and facts to be added, while maintaining a stable central data model. If you want to better understand concepts within the database, you can read the [Neotoma Database Manual](https://open.neotomadb.org/manual), or take a look at [the database schema itself](https://open.neotomadb.org/dbschema).
In this workshop we want to highlight two key structural concepts:
1. The way data is structured conceptually within Neotoma (Sites, Collection Units and Datasets).
2. The way that this structure is adapted within the `neotoma2` R package.
#### Data Structure in the Neotoma Database
{width="50%"}
Data in Neotoma is associated with **sites** -- specific locations with latitude and longitude coordinates.
Within a **site**, there may be one or more [**collection units**](https://open.neotomadb.org/manual/dataset-collection-related-tables-1.html#CollectionUnits) -- locations at which samples are physically collected within the site:
- an archaeological **site** may have one or more **collection units**, pits within a broader dig site
- a pollen sampling **site** on a lake may have multiple **collection units** -- core sites within the lake basin.
- A bog sample **site** may have multiple **collection units** -- a transect of surface samples within the bog.
Collection units may have higher resolution GPS locations than the site location, but are considered to be part of the broader site.
Data within a **collection unit** is collected at various [**analysis units**](https://open.neotomadb.org/manual/sample-related-tables-1.html#AnalysisUnits).
- All sediment at 10cm depth in the depth profile of a cutbank (the collection unit) along an oxbow lake (the site) is one analysis unit.
- All material in a single surface sample (the collection unit) from a bog (the site) is an analysis unit.
- All fossil remains in a buried layer from a bone pile (the collection unit) in a cave (the site) is an analysis unit.
Any data sampled within an analysis unit is grouped by the dataset type (charcoal, diatom, dinoflagellate, etc.) and aggregated into a [**sample**](https://open.neotomadb.org/manual/sample-related-tables-1.html#Samples). The set of samples for a collection unit of a particular dataset type is then assigned to a [**dataset**](https://open.neotomadb.org/manual/dataset-collection-related-tables-1.html#Datasets).
- A sample would be all diatoms (the dataset type) extracted from sediment at 12cm (the analysis unit) in a core (the collection unit) obtained from a lake (the site).
- A sample would be the record of a single mammoth bone (sample and analysis unit; dataset type is vertebrate fauna) embedded in a riverbank (here both the site and the collection unit).
#### Data Structures in `neotoma2` {#222-data-structures-in-neotoma2}

If we look at the [UML diagram](https://en.wikipedia.org/wiki/Unified_Modeling_Language) for the objects in the `neotoma2` R package we can see that the data structure generally mimics the structure within the database itself. As we will see in the [Site Searches section](#3-site-searches), we can search for these objects, and begin to manipulate them (in the [Simple Analysis section](#4-simple-analytics)).
It is important to note: *within the `neotoma2` R package, most objects are `sites` objects, they just contain more or less data*. There are a set of functions that can operate on `sites`. As we add to `sites` objects, using `get_datasets()` or `get_downloads()`, we are able to use more of these helper functions.
## Site Searches
### `get_sites()`
There are several ways to find sites in `neotoma2`, but we think of `sites` as being spatial objects primarily. They have names, locations, and are found within the context of geopolitical units, but within the API and the package, the site itself does not have associated information about taxa, dataset types or ages. It is simply the container into which we add that information. So, when we search for sites we can search by:
| Parameter | Description |
|---------------------------------|---------------------------------------|
| sitename | A valid site name (case insensitive) using `%` as a wildcard. |
| siteid | A unique numeric site id from the Neotoma Database |
| loc | A bounding box vector, geoJSON or WKT string. |
| altmin | Lower altitude bound for sites. |
| altmax | Upper altitude bound for site locations. |
| database | The constituent database from which the records are pulled. |
| datasettype | The kind of dataset (see `get_tables(datasettypes)`) |
| datasetid | Unique numeric dataset identifier in Neotoma |
| doi | A valid dataset DOI in Neotoma |
| gpid | A unique numeric identifier, or text string identifying a geopolitical unit in Neotoma |
| keywords | Unique sample keywords for records in Neotoma. |
| contacts | A name or numeric id for individuals associated with sites. |
| taxa | Unique numeric identifiers or taxon names associated with sites. |
All sites in Neotoma contain one or more datasets. It's worth noting that the results of these search parameters may be slightly unexpected. For example, searching for sites by sitename, latitude, or altitude will return all of the datasets for the particular site. Searching for terms such as datasettype, datasetid or taxa will return the site, but the only datasets returned will be those matching the dataset-specific search terms. We'll see this later.
#### Dataset types: `datasettype="charcoal"` {.tabset}
The `datasettype` parameter allows us to search for sites that contain specific types of datasets. For example, if we want to find sites that contain charcoal records, we can use the following code:
##### Code
```{r datasettype, eval=FALSE}
charcoal_sites <- neotoma2::get_sites(datasettype = "charcoal")
plotLeaflet(charcoal_sites)
```
##### Results
```{r datasettypePlot, echo=FALSE}
charcoal_sites <- neotoma2::get_sites(datasettype = "charcoal")
plotLeaflet(charcoal_sites)
```
#### Site names: `sitename="%Beach"` {.tabset}
We may know exactly what site we're looking for ("Pearly Beach"), or have an approximate guess for the site name (for example, we know it's something like "Beach", but we're not sure how it was entered specifically), or we may want to search all sites that have a specific term, for example, *Beach*.
We use the general format: `get_sites(sitename="%Beach")` for searching by name.
PostgreSQL (and the API) uses the percent sign as a wildcard. So `"%Beach"` would pick up ["Pearly Beach"](https://data.neotomadb.org/49977) for us (and picks up "Wizards Beach" and others as well). Note that the search query is also case insensitive.
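As an aside, the wildcard matching happens on the server, but we can mimic its behaviour locally to see what a pattern like `%beach` will match. This is an illustrative sketch only (the `sql_like()` helper and the list of site names here are made up, not part of `neotoma2`):

``` r
# Illustration only: mimic the API's case-insensitive '%' wildcard in base R.
# '%beach' behaves like a regular expression matching any prefix before "beach".
sitenames <- c("Pearly Beach", "Wizards Beach", "Beach Lake", "Lake Mina")

sql_like <- function(pattern, x) {
  # translate SQL LIKE syntax ('%' = any run of characters) to a regex
  rx <- paste0("^", gsub("%", ".*", pattern), "$")
  grepl(rx, x, ignore.case = TRUE)
}

sitenames[sql_like("%beach", sitenames)]
# "Pearly Beach" "Wizards Beach"
```

Note that `"Beach Lake"` is not matched: `%beach` anchors the word at the end of the name, just as in the PostgreSQL `LIKE` syntax.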
##### Code
```{r sitename, eval=FALSE}
pearly_beach <- neotoma2::get_sites(sitename = "%Beach")
plotLeaflet(pearly_beach)
```
##### Result
```{r sitenamePlot, echo=FALSE}
pearly_beach <- neotoma2::get_sites(sitename = "%Beach")
plotLeaflet(pearly_beach)
```
#### Location: `loc=c()` {.tabset}
The original `neotoma` package used a bounding box for locations, structured as a vector of latitude and longitude values: `c(xmin, ymin, xmax, ymax)`. The `neotoma2` R package supports both this simple bounding box, but also more complex spatial objects, using the [`sf` package](https://r-spatial.github.io/sf/). Using the `sf` package allows us to more easily work with raster and polygon data in R, and to select sites from more complex spatial objects. The `loc` parameter works with the simple vector, [WKT](https://arthur-e.github.io/Wicket/sandbox-gmaps3.html), [geoJSON](http://geojson.io/#map=2/20.0/0.0) objects and native `sf` objects in R.
As an example of searching for sites using a location, we've created a rough representation of Africa as a polygon. To work with this spatial object in R we also transformed the `geoJSON` element to an object for the `sf` package. There are many other tools to work with spatial objects in R. Regardless of how you get the data into R, `neotoma2` works with almost all objects in the `sf` package.
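For reference, a simple bounding box can also be written out as a WKT string by tracing its four corners. The `bbox_to_wkt()` helper below is a hypothetical illustration (not part of `neotoma2`), and the coordinates are rough values spanning the African extent used below:

``` r
# Illustration: the same search region expressed as the simple bounding-box
# vector and as a WKT polygon string (both are accepted by `loc`).
bbox <- c(-19.25, -37.55, 58.19, 38.18)  # xmin, ymin, xmax, ymax

bbox_to_wkt <- function(b) {
  # hypothetical helper: trace the four corners, closing back at the start
  sprintf("POLYGON((%s %s, %s %s, %s %s, %s %s, %s %s))",
          b[1], b[2], b[3], b[2], b[3], b[4], b[1], b[4], b[1], b[2])
}
bbox_to_wkt(bbox)
```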
##### Code
```{r boundingBox, eval=FALSE}
geoJSON <- '{"type": "Polygon",
"coordinates": [[
[-7.030, 36.011],
[-18.807, 23.537],
[-19.247, 10.282],
[-9.139, -0.211],
[18.370, -37.546],
[35.069, -36.352],
[49.571, -27.097],
[58.185, 0.755],
[53.351, 13.807],
[43.946, 12.008],
[31.202, 33.629],
[18.897, 34.648],
[12.393, 35.583],
[11.075, 38.184],
[-7.030, 36.011]
]
]}'
africa_sf <- geojsonsf::geojson_sf(geoJSON)
# Note here we use the `all_data` flag to capture all the sites within the polygon.
# We're using `all_data` here because we know that the site information is relatively small
# for Africa. If we were working in a new area or with a new search we would limit the
# search size.
africa_sites <- neotoma2::get_sites(loc = africa_sf, all_data = TRUE)
```
##### Result
```{r boundingBox2, echo=FALSE}
geoJSON <- '{"type": "Polygon",
"coordinates": [[
[-7.030, 36.011],
[-18.807, 23.537],
[-19.247, 10.282],
[-9.139, -0.211],
[18.370, -37.546],
[35.069, -36.352],
[49.571, -27.097],
[58.185, 0.755],
[53.351, 13.807],
[43.946, 12.008],
[31.202, 33.629],
[18.897, 34.648],
[12.393, 35.583],
[11.075, 38.184],
[-7.030, 36.011]
]
]}'
africa_sf <- geojsonsf::geojson_sf(geoJSON)
africa_sites <- neotoma2::get_sites(loc = africa_sf, all_data = TRUE)
africa_sites %>%
as.data.frame() %>%
DT::datatable(data = ., rownames = FALSE,
options = list(scrollX = "100%", dom = "t"))
```
#### Plotting with `plotLeaflet()` {.tabset}
You can always simply `plot()` the `sites` objects, but you will lose some of the geographic context. The `plotLeaflet()` function returns a `leaflet()` map, and allows you to further customize it, or add additional spatial data (like our original bounding polygon, `africa_sf`, which works directly with the R `leaflet` package):
##### Code
```{r plotL, eval=FALSE}
neotoma2::plotLeaflet(africa_sites) %>%
leaflet::addPolygons(map = .,
data = africa_sf,
color = "green")
```
##### Result
```{r plotLeaf, echo=FALSE}
neotoma2::plotLeaflet(africa_sites) %>%
leaflet::addPolygons(map = .,
data = africa_sf,
color = "green")
```
###
Note the use of the `%>%` pipe here. If you are not familiar with this symbol, check our ["Piping in R" section](#piping-in-r) of the Appendix.
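If you have not used pipes before, a minimal sketch may help. A pipe passes the value on its left as the first argument of the call on its right, so nested calls read left to right. (The example uses base R's native `|>` operator, available in R >= 4.1, which behaves like the `%>%` used throughout this document for simple cases like these.)

``` r
# Nested calls read inside-out; piped calls read left-to-right.
x <- c(4, 9, 16)
nested <- sum(sqrt(x))           # sqrt first, then sum
piped  <- x |> sqrt() |> sum()   # same computation, written as a pipeline
nested == piped  # TRUE
```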
#### `site` Object Helpers {.tabset}
If we look at the [data structure diagram](#222-data-structures-in-neotoma2) for the objects in the `neotoma2` R package we can see that there are a set of functions that can operate on `sites`. As we retrieve more information for `sites` objects, using `get_datasets()` or `get_downloads()`, we are able to use more of these helper functions.
As it is, we can take advantage of functions like `summary()` to get a more complete sense of the types of data we have in `africa_sites`. The following code gives the summary table. We do some R magic here to change the way the data is displayed (turning it into a [`DT::datatable()`](https://rstudio.github.io/DT/) object), but the main piece is the `summary()` call.
##### Code
```{r summary_sites, eval=FALSE}
# Give information about the sites themselves, site names, etc.
neotoma2::summary(africa_sites)
# Give the unique identifiers for sites, collection units and datasets found at those sites.
neotoma2::getids(africa_sites)
```
##### Result
```{r summarySitesTable, eval=TRUE, echo=FALSE}
neotoma2::summary(africa_sites) %>%
DT::datatable(data = ., rownames = FALSE,
options = list(scrollX = "100%", dom = "t"))
```
###
In this document we list only the first 10 records (there are more; you can use `length(datasets(africa_sites))` to see how many datasets you've got). We can see that there are no chronologies associated with the `site` objects. This is because, at present, we have not pulled in the `dataset` information we need. In Neotoma, a chronology is associated with a collection unit (and that metadata is pulled by `get_datasets()` or `get_downloads()`). All we know from `get_sites()` are the kinds of datasets we have and the location of the sites that contain the datasets.
### `get_datasets()` {.tabset}
Within Neotoma, collection units and datasets are contained within sites. Similarly, a `sites` object contains `collectionunits` which contain `datasets`. From the table above (Result tab in Section 3.1.3.2) we can see that some of the sites we've looked at contain pollen records, some contain geochronologic data and some contain other dataset types. We could write something like this: `table(summary(africa_sites)$types)` to see the different datasettypes and their counts.
With a `sites` object we can directly call `get_datasets()` to pull in more metadata about the datasets. The `get_datasets()` method also supports any of the search terms listed above in the [Site Search](#3-site-searches) section. At any time we can use `datasets()` to get more information about any datasets that a `sites` object may contain. Compare the output of `datasets(africa_sites)` to the output of a similar call using the following:
#### Code
```{r datasetsFromSites, eval=FALSE}
# Depending on the number of sites, this could be slow.
africa_datasets <- neotoma2::get_datasets(loc = africa_sf,
datasettype = "charcoal",
all_data = TRUE)
datasets(africa_datasets)
```
#### Result
```{r datasetsFromSitesResult, echo=FALSE, message=FALSE}
africa_datasets <- neotoma2::get_datasets(loc = africa_sf,
datasettype = "charcoal",
all_data = TRUE)
datasets(africa_datasets) %>%
as.data.frame() %>%
DT::datatable(data = .,
options = list(scrollX = "100%", dom = "t"))
```
###
You can see that this provides information only about the specific dataset, not the site! For a more complete record we can join the site information from `summary()` to the dataset information from `datasets()` using the `getids()` function, which links sites to all the collection units and datasets they contain.
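A hedged sketch of what that join looks like, using toy tables and base R's `merge()` in place of the real `neotoma2` output (the column names and ids here are illustrative, not the exact `summary()`, `datasets()`, or `getids()` columns):

``` r
# Toy data: a site-level table, an id-linkage table (the role getids() plays),
# and a dataset-level table, merged into one row per dataset with site names.
site_tbl <- data.frame(siteid = c(1, 2),
                       sitename = c("Pearly Beach", "Lake Mina"))
ids <- data.frame(siteid = c(1, 1, 2),
                  datasetid = c(10, 11, 12))
ds_tbl <- data.frame(datasetid = c(10, 11, 12),
                     datasettype = c("pollen", "charcoal", "pollen"))

full_tbl <- merge(merge(site_tbl, ids, by = "siteid"), ds_tbl, by = "datasetid")
full_tbl  # one row per dataset, carrying both site and dataset information
```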
### `filter()` Records {.tabset}
If we choose to pull in information about only a single dataset type, or if there is additional filtering we want to do before we download the data, we can use the `filter()` function. For example, if we only wanted sedimentary pollen records (as opposed to pollen surface samples) with known chronologies, we could filter by `datasettype` and by the presence of an `age_range_young`, which would indicate that there is a chronology defining age bounds for the record. In the example below we filter more simply, to a single site by its `siteid`.
#### Code
```{r downloads, eval=FALSE}
africa_records <- africa_datasets %>%
neotoma2::filter(siteid==27699)
neotoma2::summary(africa_records)
# We've removed records, so the new object should be shorter than the original.
length(africa_records) < length(africa_datasets)
```
#### Result
```{r downloadsCode, echo = FALSE}
africa_records <- africa_datasets %>%
neotoma2::filter(siteid==27699)
neotoma2::summary(africa_records) %>% DT::datatable(data = .,
options = list(scrollX = "100%", dom = "t"))
```
###
We can see now that the data table looks different (comparing it to the [table above](#322-result)), and there are fewer total sites. Again, there is no explicit chronology for these records; we need to pull down the complete download for these records, but we begin to get a sense of what kind of data we have.
### Pulling in `sample()` data {.tabset}
Because sample data adds a lot of overhead (for this pollen data, the object that includes the dataset with samples is 20 times larger than the `dataset` alone), we try to call `get_downloads()` only after we've done our preliminary filtering. After `get_datasets()` you have enough information to filter based on location, time bounds and dataset type. When we move to `get_downloads()` we can do more fine-tuned filtering at the analysis unit or taxon level.
The following call can take some time, but we've frozen the object as an RDS data file. You can run this command on your own, and let it run for a bit, or you can just load the object in.
#### Code
```{r taxa, eval = FALSE}
africa_dl <- africa_records[[1]]@siteid %>%
get_sites() %>%
neotoma2::filter(datasettype %in% c("pollen", "charcoal")) %>%
get_downloads(all_data = TRUE)
```
#### Result
```{r taxa2, echo=FALSE}
# You can run the code above to get the data, but we've already run it and saved it as an RDS file, so you can just load it in.
africa_dl <- readRDS("data/Download.RDS")
africa_dl %>%
as.data.frame() %>%
DT::datatable(data = ., rownames = FALSE,
options = list(scrollX = "100%", dom = "t"))
```
### {.tabset}
Once we've downloaded the records, we have information for each site about all the associated collection units and datasets and, for each dataset, all the associated samples. To extract the samples from all downloads we can call:
#### Code
```{r allSamples, eval=FALSE}
allSamp <- samples(africa_dl)
```
#### Result
```{r allSamples2, echo=FALSE}
allSamp <- samples(africa_dl)
allSamp %>%
DT::datatable(data = ., rownames = FALSE,
options = list(scrollX = "100%", dom = "t"))
```
### {.tabset}
When we've done this, we get a `data.frame` that is `r nrow(allSamp)` rows long and `r ncol(allSamp)` columns wide. The reason the table is so wide is that we are returning data in a **long** format. Each row contains all the information you should need to properly interpret it:
#### Code
```{r colNamesAllSamp0, eval=FALSE}
colnames(allSamp)
```
#### Result
```{r colNamesAllSamp, echo = FALSE}
colnames(allSamp)
```
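To see why the long format is convenient, here is a toy sketch (fabricated counts, not Neotoma data) of long sample data and its wide equivalent, using base R's `reshape()` in place of the `tidyr` tools used later in this document:

``` r
# Long format: one row per depth/taxon observation. Every row carries the
# metadata needed to interpret it, at the cost of repeating depth values.
long <- data.frame(
  depth = c(10, 10, 20, 20),
  variablename = c("Pinus", "Quercus", "Pinus", "Quercus"),
  value = c(30, 5, 22, 12)
)

# Wide format: one row per depth, one column per taxon.
wide <- reshape(long, direction = "wide",
                idvar = "depth", timevar = "variablename")
wide  # columns: depth, value.Pinus, value.Quercus
```

The long format scales cleanly as new taxa, elements, or units appear in the data, which is why `samples()` returns it; we pivot to wide only when an analysis (like the stratigraphic plot below) requires a sample-by-taxon matrix.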
### Extracting Taxa {.tabset}
For some dataset types or analyses, some of these columns may not be needed, however, for other dataset types they may be critically important. To allow the `neotoma2` package to be as useful as possible for the community we've included as many as we can.
If you want to know what taxa we have in the record you can use the helper function `taxa()` on the sites object. The `taxa()` function gives us not only the unique taxa, but two additional columns -- `sites` and `samples` -- that tell us how many sites the taxa appear in, and how many samples the taxa appear in, to help us better understand how common individual taxa are.
#### Code
```{r taxa3, eval=FALSE}
neotomatx <- neotoma2::taxa(africa_dl)
```
#### Results
```{r taxaprint, echo=FALSE, message=FALSE}
neotomatx <- neotoma2::taxa(africa_dl)
neotomatx %>%
DT::datatable(data = head(neotomatx, n = 20), rownames = FALSE,
options = list(scrollX = "100%", dom = "t"))
```
## Simple Analytics
### Stratigraphic Plotting {.tabset}
A stratigraphic diagram shows how taxa — pollen percentages and charcoal counts in this case — change through time at a single site. Age is plotted on the Y-axis with the oldest ages at the bottom, and each taxon or proxy occupies its own column.
We start by splitting `allSamp` into two subsets: tree and shrub pollen records (ecological group `TRSH`, element type `pollen`) and charcoal records (ecological group `CHAR`). We rename and select only the columns needed for plotting.
#### Code
```{r charcoalPrep0, message=FALSE, eval=FALSE}
# Pollen:
pollenSamp <- allSamp %>%
filter(ecologicalgroup == "TRSH",
elementtype == "pollen") %>%
select(depth,
age,
taxon = variablename,
count = value)
# Charcoal
charSamp <- allSamp %>%
filter(ecologicalgroup == "CHAR",
elementtype == "concentration") %>%
select(depth,
age,
variablename,
elementtype,
value)
```
#### Results
```{r charcoalPrep, message=FALSE, echo=FALSE}
pollenSamp <- allSamp %>%
filter(ecologicalgroup == "TRSH",
elementtype == "pollen") %>%
select(depth,
age,
taxon = variablename,
count = value)
# Charcoal
charSamp <- allSamp %>%
filter(ecologicalgroup == "CHAR",
elementtype == "concentration") %>%
select(depth,
age,
variablename,
elementtype,
value)
charSamp %>%
DT::datatable(data = ., rownames = FALSE,
options = list(scrollX = "100%", dom = "t"))
pollenSamp %>%
DT::datatable(data = ., rownames = FALSE,
options = list(scrollX = "100%", dom = "t"))
```
### {.tabset}
Charcoal records in Neotoma are often recorded across multiple size fractions (e.g., 10–50 µm, 50–100 µm, >100 µm). Since we want a single charcoal value per sample depth, we group by `depth` and `age` and sum all size fractions together.
#### Code
```{r charcoalCode, message=FALSE, eval=FALSE}
charcoal <- charSamp %>%
group_by(depth, age) %>%
summarise(charcoal = sum(value, na.rm = TRUE), .groups = "drop")
```
#### Results
```{r charcoalRes, message=FALSE, echo=FALSE}
charcoal <- charSamp %>%
group_by(depth, age) %>%
summarise(charcoal = sum(value, na.rm = TRUE), .groups = "drop")
charcoal %>%
DT::datatable(data = ., rownames = FALSE,
options = list(scrollX = "100%", dom = "t"))
```
#### Code
```{r pollenCode, message=FALSE, eval=FALSE}
pollenWide <- pollenSamp %>%
group_by(depth, age, taxon) %>%
summarise(count = sum(count, na.rm = TRUE), .groups = "drop") %>%
pivot_wider(names_from = taxon, values_from = count, values_fill = 0)
# Separate the depth/age columns from the count matrix
meta <- pollenWide %>% select(depth, age)
counts <- pollenWide %>% select(-depth, -age)
# Convert to percentages (row sums = 100)
pollen_pct <- counts / rowSums(counts) * 100
```
#### Results
```{r pollenRes, message=FALSE, echo=FALSE}
pollenWide <- pollenSamp %>%
group_by(depth, age, taxon) %>%
summarise(count = sum(count, na.rm = TRUE), .groups = "drop") %>%
pivot_wider(names_from = taxon, values_from = count, values_fill = 0)
# Separate the depth/age columns from the count matrix
meta <- pollenWide %>% select(depth, age)
counts <- pollenWide %>% select(-depth, -age)
# Convert to percentages (row sums = 100)
pollen_pct <- counts / rowSums(counts) * 100
pollen_pct %>%
DT::datatable(data = ., rownames = FALSE,
options = list(scrollX = "100%", dom = "t"))
```
### {.tabset}
With many taxa in the record, a stratigraphic diagram quickly becomes unreadable. A common threshold is to retain only taxa that reach at least 5% in at least one sample — this removes consistently rare taxa while preserving the ecologically meaningful ones.
#### Code
```{r charDiagram0, message=FALSE, warning=FALSE, out.width='70%', eval=FALSE}
keep <- sapply(pollen_pct, max) >= 5
pollen_pct <- pollen_pct[, keep]
```
#### Results
```{r charDiagram, message=FALSE, warning=FALSE, out.width='70%', echo=FALSE}
keep <- sapply(pollen_pct, max) >= 5
pollen_pct <- pollen_pct[, keep]
pollen_pct %>%
DT::datatable(data = ., rownames = FALSE,
options = list(scrollX = "100%", dom = "t"))
```
### Assembling the Final Table {.tabset}
With pollen percentages and charcoal counts prepared separately, we now assemble them into a single table. We bind the depth/age metadata to the pollen percentage matrix using `bind_cols()`, then join the aggregated charcoal values by `depth` and `age` using `left_join()`. The result is one row per sample, with pollen taxa and charcoal all aligned on the same depth axis.
#### Code
```{r charStd0, message=FALSE, warning=FALSE, out.width='70%', eval=FALSE}
final <- meta %>%
bind_cols(pollen_pct) %>%
left_join(charcoal, by = c("depth", "age")) %>%
arrange(depth)
```
#### Results
```{r charStd, message=FALSE, warning=FALSE, out.width='70%', echo=FALSE}
final <- meta %>%
bind_cols(pollen_pct) %>%
left_join(charcoal, by = c("depth", "age")) %>%
arrange(depth)
final %>%
DT::datatable(data = ., rownames = FALSE,
options = list(scrollX = "100%", dom = "t"))
```
### Drawing the Stratigraphic Diagram {.tabset}
We use `strat.plot()` from the `rioja` package to draw the stratigraphic diagram. The function expects a data frame of values (pollen percentages and charcoal), a depth or age vector for the Y-axis, and optional graphical parameters such as column widths and axis labels. Because the charcoal values and pollen percentages are on very different scales, the charcoal column is rescaled to a 0–100 range before plotting so it sits comfortably alongside the pollen curves.
To add stratigraphic zones, we first compute a **CONISS** (Constrained Incremental Sum of Squares) cluster analysis on the pollen percentage matrix using `chclust()`. CONISS groups adjacent samples that are compositionally similar, allowing us to identify major transitions in the assemblage. We then pass the resulting cluster object to `strat.plot()`, which draws a dendrogram alongside the diagram, and use `addClustZone()` to overlay zone boundary lines.
#### Code
```{r charStd1, message=FALSE, warning=FALSE, out.width='70%', eval=FALSE}
y <- final$age # or final$depth
taxa_df <- final %>% select(-depth, -age, -charcoal)
char_df <- final %>% select(charcoal)
char_scaled <- char_df %>%
mutate(charcoal_scaled = charcoal / max(charcoal, na.rm = TRUE) * 100) %>%
select(charcoal_scaled)
plot_data <- cbind(taxa_df, char_scaled)
n_taxa <- ncol(taxa_df)
widths <- c(rep(1, n_taxa), 3) # charcoal column wider
pollen_for_clust <- plot_data %>% select(-charcoal_scaled)
diss <- dist(sqrt(pollen_for_clust / 100)) # chord distance
clust <- chclust(diss, method = "coniss")
p <- strat.plot(
d = plot_data,
yvar = y,
y.rev = TRUE,
scale.percent = TRUE,
graph.widths = widths,
ylabel = "Age (cal yr BP)",
title = "Pearly Beach — pollen & charcoal",
srt.xlabel = 45,
cex.xlabel = 0.6,
clust = clust
)
addClustZone(p, clust, nZone = 4, col = "red")
```
#### Results
```{r charStd2, message=FALSE, warning=FALSE, out.width='70%', echo=FALSE}
y <- final$age # or final$depth
taxa_df <- final %>% select(-depth, -age, -charcoal)
char_df <- final %>% select(charcoal)
char_scaled <- char_df %>%
mutate(charcoal_scaled = charcoal / max(charcoal, na.rm = TRUE) * 100) %>%
select(charcoal_scaled)
plot_data <- cbind(taxa_df, char_scaled)
n_taxa <- ncol(taxa_df)
widths <- c(rep(1, n_taxa), 3) # charcoal column wider
pollen_for_clust <- plot_data %>% select(-charcoal_scaled)
diss <- dist(sqrt(pollen_for_clust / 100)) # chord distance
clust <- chclust(diss, method = "coniss")
p <- strat.plot(
d = plot_data,
yvar = y,
y.rev = TRUE,
scale.percent = TRUE,
graph.widths = widths,
ylabel = "Age (cal yr BP)",
title = "Pearly Beach — pollen & charcoal",
srt.xlabel = 45,
cex.xlabel = 0.6,
clust = clust
)
addClustZone(p, clust, nZone = 4, col = "red")
```
## Conclusion
So, we've done a lot in this example. We've (1) searched for sites using site names and geographic parameters, (2) filtered results using temporal and spatial parameters, (3) obtained sample information for the selected datasets, (4) performed basic analyses, including the use of climate data from rasters, and (5) drawn a stratigraphic diagram with CONISS zonation. Hopefully you can use these examples as templates for your own future work, or as a building block for something new and cool!
## Appendix Sections
### Installing packages on your own {#localinstall}
We use several packages in this document, including `leaflet`, `sf`, and others. We load them with the `pacman` package, which will automatically install any package that is not already present in your library.
```{r setupFake, eval=FALSE}
options(warn = -1)
pacman::p_load(neotoma2, dplyr, ggplot2, sf, geojsonsf, leaflet, terra, DT, readr, stringr, rioja)
```
Note that R is sensitive to the order in which packages are loaded. Using `neotoma2::` tells R explicitly that you want to use the `neotoma2` package to run a particular function. So, for a function like `filter()`, which exists in other packages such as `dplyr`, you may see an error that looks like:
```
Error in UseMethod("filter") :
no applicable method for 'filter' applied to an object of class "sites"
```
In that case it's likely that the wrong package is trying to run `filter()`, so explicitly prefixing the function name with its package (e.g., `neotoma2::filter()`) is good practice.
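As a base-R sketch of the same masking problem (using `stats::filter()`, which is unrelated to `dplyr::filter()` or `neotoma2::filter()`), qualifying the call with `::` guarantees the intended function runs regardless of the order in which packages were attached:

```r
# Two unrelated functions share the name `filter()`:
# stats::filter() applies a linear filter (e.g. a moving average) to a series,
# while dplyr::filter() subsets rows of a data frame. The `::` prefix makes
# the intent explicit, whatever order packages were loaded in.
x <- c(2, 4, 6, 8, 10)
smoothed <- stats::filter(x, rep(1/3, 3))  # centred 3-point moving average
print(smoothed)  # NA 4 6 8 NA
```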
### Piping in `R` {#piping-in-r .tabset}
Piping is a technique that simplifies chaining multiple operations on a data object. It uses one of two operators: `|>` or `%>%`. `|>` is the base R pipe (available since R 4.1), while `%>%` comes from the `magrittr` package and is used throughout the `tidyverse` ecosystem. In `neotoma2` we use `%>%`.
The pipe operator works like a real-life pipe, which carries water from one location to another. In programming, the output of the function on the left-hand side of the pipe becomes the first argument of the function on the right-hand side. This makes code easier to write and read, and it reduces the number of intermediate objects created during data processing, which can make code more memory-efficient.
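As a minimal base-R illustration (no packages required; the native `|>` operator needs R >= 4.1), the nested and piped forms below compute the same thing:

```r
x <- c(1, 4, 9)
nested <- sqrt(sum(x))          # read inside-out: sum() first, then sqrt()
piped  <- x |> sum() |> sqrt()  # read left-to-right, like %>%
identical(nested, piped)        # TRUE; both equal sqrt(14)
```

One practical difference: with `%>%` the left-hand side can be inserted at an arbitrary argument position via the `.` placeholder, while `|>` uses `_` (from R 4.2) and always requires explicit parentheses on the right-hand call.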
Without using pipes you can use the `neotoma2` R package to retrieve a site and then plot it by doing:
``` r
# Retrieve the site
plot_site <- neotoma2::get_sites(sitename = "%catem%")
# Plot the site
neotoma2::plotLeaflet(object = plot_site)
```
This creates a variable `plot_site` that we will not need again; it exists only so that we can pass it to the `plotLeaflet()` function.
With the pipe (`%>%`) we do not need to create the variable; we can simply rewrite our code. Notice that `plotLeaflet()` doesn't need the `object` argument, because the result of `get_sites(sitename = "%catem%")` gets passed directly into the function.
#### Code
```{r piping code, eval=FALSE}
# get_sites and pipe. The `object` parameter for plotLeaflet will be the
# result of the `get_sites()` function.
get_sites(sitename = "%catem%") %>%
plotLeaflet()
```
#### Results
```{r piping result, echo=FALSE}
# get_sites and pipe
get_sites(sitename = "%catem%") %>%
plotLeaflet()
```