ohun is intended to facilitate the automated detection of sound events, providing functions to diagnose and optimize detection routines. Detections from other software can also be explored and optimized. This vignette provides a general overview of sound event optimization in ohun as well as basic concepts from signal detection theory.
The main features of the package are the use of reference annotations to diagnose and optimize detection routines and the use of signal detection theory indices to evaluate detection performance. The package offers functions for energy-based detection, template-based detection, and for diagnosing and optimizing detections, including those produced by other software.
All functions allow the parallelization of tasks, which distributes them among several processors to improve computational efficiency. The package works on sound files in ‘.wav’, ‘.mp3’, ‘.flac’ and ‘.wac’ format.
The package can be installed from CRAN as follows:
# install from CRAN
install.packages("ohun")
# load package
library(ohun)
To install the latest development version from GitHub, you will need the R package remotes:
# install package
remotes::install_github("maRce10/ohun")
# load packages
library(ohun)
library(tuneR)
library(warbleR)
Finding the position of sound events in a sound file is a challenging task. ohun offers two methods for automated sound event detection: template-based and energy-based detection. These methods are better suited for highly stereotyped sounds or sounds with a good signal-to-noise ratio (SNR), respectively. If the target sound events do not meet these requirements, more elaborate methods (e.g. machine learning approaches) are warranted.
Also note that the presence of other sounds overlapping the target sound events in time and frequency can strongly affect detection performance for the two methods in ohun.
Still, a detection run using other software can be optimized with the tools provided in ohun.
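As a preview, the sketch below shows how the two approaches are typically called. Function and argument names follow the ohun documentation as I understand it, and the parameter values are arbitrary placeholders rather than recommended settings:

# load the example data and save the sound files to a temporary directory
data("lbh1", "lbh2", "lbh_reference")
tuneR::writeWave(lbh1, file.path(tempdir(), "lbh1.wav"))
tuneR::writeWave(lbh2, file.path(tempdir(), "lbh2.wav"))

# energy-based detection: keep sounds whose amplitude envelope exceeds a
# threshold within a given frequency band and duration range
energy_detection <- energy_detector(
  files = c("lbh1.wav", "lbh2.wav"),
  path = tempdir(),
  bp = c(2, 9),       # frequency band (kHz)
  threshold = 50,     # amplitude threshold (%)
  min.duration = 90   # minimum duration (ms)
)

# template-based detection: cross-correlate an example sound event against
# the sound files and keep correlation peaks above a threshold
correlations <- template_correlator(
  templates = lbh_reference[1, ],
  files = c("lbh1.wav", "lbh2.wav"),
  path = tempdir()
)
template_detection <- template_detector(
  template.correlations = correlations,
  threshold = 0.4
)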
Broadly speaking, signal detection theory deals with the process of recovering signals (i.e. target signals) from background noise (not necessarily acoustic noise) and it is widely used for optimizing this decision-making process in the presence of uncertainty. During a detection routine, the detected ‘items’ can be classified into 4 classes: true positives (detections that match target sound events), false positives (detections that do not match any target sound event), false negatives (target sound events that were not detected) and true negatives (background noise correctly ignored).
Several additional indices derived from these classes are used to evaluate the performance of a detection routine. Three of them, included in ohun, are particularly useful in the context of sound event detection: recall (the proportion of target sound events that were detected), precision (the proportion of detections that correspond to target sound events) and the F score (the harmonic mean of recall and precision).
(Metrics that make use of ‘true negatives’ cannot be easily applied in the context of sound event detection, as background noise cannot always be partitioned into discrete units.)
A perfect detection will have no false positives or false negatives, which results in both recall and precision equal to 1. However, perfect detection cannot always be reached, so some compromise between detecting all target signals plus some noise (recall = 1 & precision < 1) and detecting only target signals but not all of them (recall < 1 & precision = 1) is often needed. The right balance between these two extremes is given by the relative costs of missing signals and of mistaking noise for signals. Hence, these indices provide a useful framework for diagnosing and optimizing the performance of a detection routine.
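To make these definitions concrete, the following base R snippet computes the three indices from hypothetical counts of true positives, false positives and false negatives (the counts are made up for illustration):

# hypothetical counts from a detection run
true.positives <- 18  # detections matching target sound events
false.positives <- 4  # detections not matching any target sound event
false.negatives <- 2  # target sound events that were missed

# recall: proportion of target sound events that were detected
recall <- true.positives / (true.positives + false.negatives)

# precision: proportion of detections that correspond to target sound events
precision <- true.positives / (true.positives + false.positives)

# F score: harmonic mean of recall and precision
f.score <- 2 * precision * recall / (precision + recall)

round(c(recall = recall, precision = precision, f.score = f.score), 2)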
The package ohun provides a set of tools to evaluate the performance of a sound event detection routine based on the indices described above. To accomplish this, the result of a detection routine is compared against a reference table containing the time position of all target sound events in the sound files. The package comes with an example reference table containing annotations of long-billed hermit hummingbird songs from two sound files (also supplied as example data: ‘lbh1’ and ‘lbh2’), which can be used to illustrate detection performance evaluation. The example data can be explored as follows:
# load example data
data("lbh1", "lbh2", "lbh_reference")
lbh_reference
Object of class 'selection_table'
* The output of the following call:
warbleR::selection_table(X = lbh_reference)
Contains:
* A selection table data frame with 19 rows and 6 columns:
|sound.files | selec| start| end| bottom.freq| top.freq|
|:-----------|-----:|------:|------:|-----------:|--------:|
|lbh2.wav | 1| 0.1092| 0.2482| 2.2954| 8.9382|
|lbh2.wav | 2| 0.6549| 0.7887| 2.2954| 9.0426|
|lbh2.wav | 3| 1.2658| 1.3856| 2.2606| 9.0774|
|lbh2.wav | 4| 1.8697| 2.0053| 2.1911| 8.9035|
|lbh2.wav | 5| 2.4418| 2.5809| 2.1563| 8.6600|
|lbh2.wav | 6| 3.0368| 3.1689| 2.2259| 8.9382|
... and 13 more row(s)
* A data frame (check.results) with 19 rows generated by check_sels() (as attribute)
created by warbleR 1.1.27
This is a ‘selection table’, an object class provided by the package warbleR (see selection_table() for details). Selection tables are basically data frames in which the contained information has been double-checked (using warbleR’s check_sels()). But they behave pretty much as data frames and can be easily converted to data frames:
# convert to data frame
as.data.frame(lbh_reference)
sound.files selec start end bottom.freq top.freq
1 lbh2.wav 1 0.109161 0.2482449 2.2954 8.9382
2 lbh2.wav 2 0.654921 0.7887232 2.2954 9.0426
3 lbh2.wav 3 1.265850 1.3855678 2.2606 9.0774
4 lbh2.wav 4 1.869705 2.0052678 2.1911 8.9035
5 lbh2.wav 5 2.441769 2.5808529 2.1563 8.6600
6 lbh2.wav 6 3.036825 3.1688667 2.2259 8.9382
7 lbh2.wav 7 3.628617 3.7465742 2.3302 8.6252
8 lbh2.wav 8 4.153288 4.2818085 2.2954 8.4861
9 lbh2.wav 9 4.723673 4.8609963 2.3650 8.6948
10 lbh1.wav 10 0.088118 0.2360047 1.9824 8.4861
11 lbh1.wav 11 0.572290 0.7201767 2.0520 9.5295
12 lbh1.wav 12 1.056417 1.1972614 2.0868 8.4861
13 lbh1.wav 13 1.711338 1.8680274 1.9824 8.5905
14 lbh1.wav 14 2.190249 2.3416568 2.0520 8.5209
15 lbh1.wav 15 2.697143 2.8538324 1.9824 9.2513
16 lbh1.wav 16 3.181315 3.3344833 1.9129 8.4861
17 lbh1.wav 17 3.663719 3.8133662 1.8781 8.6948
18 lbh1.wav 18 4.140816 4.3045477 1.8433 9.2165
19 lbh1.wav 19 4.626712 4.7851620 1.8085 8.9035
All ohun functions that work with this kind of data can take both selection tables and data frames. Spectrograms with highlighted sound events from a selection table can be plotted with the function label_spectro() (note that this function plots a single wave object at a time, so it is not very useful for long sound files):
# save sound file
tuneR::writeWave(lbh1, file.path(tempdir(), "lbh1.wav"))
# save sound file
tuneR::writeWave(lbh2, file.path(tempdir(), "lbh2.wav"))
# print spectrogram
label_spectro(wave = lbh1, reference = lbh_reference[lbh_reference$sound.files == "lbh1.wav", ], hop.size = 10, ovlp = 50, flim = c(1, 10))
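Once a detection has been produced (e.g. with the detectors sketched earlier), its performance can be quantified against this reference. As a quick sanity check, comparing the reference against itself should yield a perfect diagnosis; the call below assumes ohun's diagnose_detection(), which takes a reference and a detection table:

# sketch: a reference compared against itself should give recall and
# precision of 1
diagnose_detection(reference = lbh_reference, detection = lbh_reference)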