yasa.SpindlesResults.compare_detection
- SpindlesResults.compare_detection(other, max_distance_sec=0, other_is_groundtruth=True)
Compare the detected spindles against either another YASA detection or against custom annotations (e.g. ground-truth human scoring).
This function is a wrapper around the yasa.compare_detection function. Please refer to the documentation of that function for more details.
- Parameters:
- other : dataframe or detection results
This can be either (a) the output of another YASA detection, for example to test the impact of tweaking some parameters on the detected events, or (b) a pandas DataFrame with custom annotations, obtained with another detection method outside of YASA or with manual labelling. If (b), the dataframe must contain the “Start” and “Channel” columns, with the start of each event in seconds from the beginning of the recording and the channel name, respectively. The channel names should match the output of the summary() method.
- max_distance_sec : float
The maximum distance between two spindles, in seconds, for them to be considered the same event.
Warning
To reduce computation cost, YASA rounds the start time of each spindle to the nearest decisecond (= 100 ms). This means that the lowest possible resolution is 100 ms, regardless of the sampling frequency of the data.
- other_is_groundtruth : bool
If True (default), other will be considered the ground-truth scoring. If False, the current detection will be considered the ground-truth, and the precision and recall scores will be inverted. This parameter has no effect on the F1-score.
Note
When other is the ground-truth (default), the recall score is the fraction of events in other that were successfully detected by the current detection, and the precision score is the proportion of events detected by the current detection that are also present in other.
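As a minimal sketch of the custom-annotations format described above (the start times and channel names here are made up for illustration), a ground-truth dataframe could be built as:

```python
import pandas as pd

# Hypothetical manual scoring: one row per annotated spindle, with the
# event start in seconds from the beginning of the recording and the
# channel name. Channel names must match the output of summary().
ground_truth = pd.DataFrame({
    "Start": [12.3, 45.8, 102.5, 150.0],
    "Channel": ["Cz", "Cz", "Fz", "Fz"],
})

# The two columns required by compare_detection for custom annotations:
assert {"Start", "Channel"}.issubset(ground_truth.columns)

# With a detection result `sp` (e.g. the output of yasa.spindles_detect),
# the comparison would then be, for example:
# scores = sp.compare_detection(ground_truth, max_distance_sec=0.5)
```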
- Returns:
- scores : pandas.DataFrame
A pandas DataFrame with the channel names as index, and the following columns:
precision : Precision score, aka positive predictive value.
recall : Recall score, aka sensitivity.
f1 : F1-score.
n_self : Number of detected events in self (current method).
n_other : Number of detected events in other.
Notes
Some use cases of this function:
- How well does YASA's event detection perform against ground-truth human annotations?
- If I change the threshold(s) of the event detection, do the detected events match those obtained with the default parameters?
- Which detection thresholds give the highest agreement with the ground-truth scoring?
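To make the precision/recall/F1 semantics concrete, the following is a simplified, self-contained sketch of tolerance-based event matching for a single channel. This is not YASA's implementation (the actual logic lives in yasa.compare_detection); the start times and the greedy matching strategy are illustrative assumptions only.

```python
def match_events(detected, groundtruth, max_distance_sec=0.5):
    """Greedily match detected start times (s) against ground-truth start
    times (s); two events count as the same if they are within
    max_distance_sec of each other. Returns (precision, recall, f1)."""
    unmatched = list(groundtruth)
    tp = 0
    for start in detected:
        hits = [g for g in unmatched if abs(g - start) <= max_distance_sec]
        if hits:
            # Consume the closest ground-truth event so it cannot be
            # matched twice.
            unmatched.remove(min(hits, key=lambda g: abs(g - start)))
            tp += 1
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(groundtruth) if groundtruth else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

detected = [12.4, 46.0, 80.0, 200.0]   # starts from the current detection
groundtruth = [12.3, 45.8, 102.5]      # starts from the human scorer

precision, recall, f1 = match_events(detected, groundtruth)
# Swapping the two roles inverts precision and recall but leaves F1
# unchanged -- the effect of other_is_groundtruth=False.
p_inv, r_inv, f1_inv = match_events(groundtruth, detected)
```

Here two of the four detected events fall within 0.5 s of a ground-truth event, so precision is 2/4 and recall is 2/3; with the roles swapped, the two values trade places while the F1-score is identical.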