yasa.SleepStatsAgreement
- class yasa.SleepStatsAgreement(ref_data, obs_data, *, ref_scorer='Reference', obs_scorer='Observed', agreement=1.96, confidence=0.95, alpha=0.05, bootstrap_kwargs={}, log_transform=False)
Evaluate agreement between sleep statistics reported by two different scorers. Evaluation includes bias and limits of agreement (as well as both their confidence intervals), various plotting options, and calibration functions for correcting biased values from the observed scorer.
Features include:
Get summary calculations of bias, limits of agreement, and their confidence intervals.
Test statistical assumptions of bias, limits of agreement, and their confidence intervals, and apply corrective procedures when the assumptions are not met.
Get bias and limits of agreement in a string-formatted table.
Calibrate new data to correct for biases in observed data.
Return individual calibration functions.
Visualize discrepancies for outlier inspection.
Visualize Bland-Altman plots.
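The core quantities listed above (bias, limits of agreement, and a parametric confidence interval on the bias) can be sketched with plain NumPy/SciPy. This is an illustrative computation of the standard Bland-Altman statistics under a normality assumption, not YASA's exact implementation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
ref = rng.normal(400, 30, size=20)       # e.g. TST (min) from the reference scorer
obs = ref + rng.normal(5, 10, size=20)   # observed scorer with a small constant bias

diff = obs - ref
n = diff.size
bias = diff.mean()
sd = diff.std(ddof=1)

# Limits of agreement: bias +/- 1.96 * SD of the differences
loa_lower, loa_upper = bias - 1.96 * sd, bias + 1.96 * sd

# Parametric 95% CI on the bias, using the t-distribution
sem = sd / np.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (bias - t_crit * sem, bias + t_crit * sem)
```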
Warning
Experimental — the API of this class may change before the full release planned for v0.8.0. Use with caution.
Added in version 0.7.0.
- Parameters:
- ref_data : pandas.DataFrame
A pandas.DataFrame with sleep statistics from the reference scorer. Rows are unique observations and columns are unique sleep statistics.
- obs_data : pandas.DataFrame
A pandas.DataFrame with sleep statistics from the observed scorer. Rows are unique observations and columns are unique sleep statistics. Shape, index, and columns must be identical to ref_data.
- ref_scorer : str
Name of the reference scorer.
- obs_scorer : str
Name of the observed scorer.
- agreement : float
Multiple of the standard deviation to plot agreement limits. The default is 1.96, which corresponds to a 95% confidence interval if the differences are normally distributed.
Note
agreement gets adjusted for regression-modeled limits of agreement.
- confidence : float
Confidence level (between 0 and 1) for the confidence intervals applied to bias and limits of agreement. Default is 0.95 (i.e., 95%). The same level is used for both parametric and bootstrapped confidence intervals.
- alpha : float
Alpha cutoff used for all assumption tests.
- bootstrap_kwargs : dict
Optional keyword arguments passed to scipy.stats.bootstrap. Defaults use n_resamples=1000 and method='BCa'. The keys 'confidence_level', 'vectorized', and 'paired' cannot be overridden.
- log_transform : bool
If True, apply the Euser et al. (2008) log-transformation method to all sleep statistics. Limits of agreement are then expressed as bias ± slope × ref, where slope is derived from the standard deviation of log-ratio differences. This is appropriate when measurement variability is proportional to the measurement magnitude (heteroscedasticity), which is common for duration statistics such as TST, SOL, and WASO. When True, loa_method='auto' in report and plot_blandaltman will automatically select the Euser method for all statistics, bypassing the homoscedasticity assumption test. Passing loa_method='log' to those methods when log_transform=False raises a ValueError. Default is False.
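The log_transform option can be sketched as follows. Differences are taken on the log scale, and the log-scale bias and SD are converted back into proportional slopes on the original scale. The back-transformation 2·(eˣ − 1)/(eˣ + 1) follows Euser et al. (2008); treat both it and the bootstrap call as illustrative, not as YASA's exact implementation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
ref = rng.uniform(20, 120, size=20)             # e.g. WASO (min), reference scorer
obs = ref * rng.lognormal(0.05, 0.15, size=20)  # proportional (heteroscedastic) error

# Log-ratio differences (Euser et al., 2008)
d = np.log(obs) - np.log(ref)
bias_log, sd_log = d.mean(), d.std(ddof=1)

def back(x):
    # Convert a log-scale difference into a proportional difference
    # on the original scale (illustrative back-transformation)
    return 2 * (np.exp(x) - 1) / (np.exp(x) + 1)

bias_slope = back(bias_log)        # bias as a fraction of the reference value
loa_slope = back(1.96 * sd_log)    # LoA half-width as a fraction of ref
# Limits of agreement are then proportional to ref:
#   obs - ref falls within (bias_slope +/- loa_slope) * ref  (approximately)

# A bootstrapped CI on the log-scale bias, mirroring the documented defaults
res = stats.bootstrap((d,), np.mean, n_resamples=1000, method='BCa',
                      confidence_level=0.95)
```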
Notes
Sleep statistics that are identical between scorers are removed from analysis.
Many steps here are influenced by guidelines proposed in Menghini et al., 2021 [Menghini2021]. See https://sri-human-sleep.github.io/sleep-trackers-performance/AnalyticalPipeline_v1.0.0.html
References
[Menghini2021] Menghini, L., Cellini, N., Goldstone, A., Baker, F. C., & de Zambotti, M. (2021). A standardized framework for testing the performance of sleep-tracking technology: step-by-step guidelines and open-source code. SLEEP, 44(2), zsaa170. https://doi.org/10.1093/sleep/zsaa170
Examples
>>> import pandas as pd
>>> import yasa
>>>
>>> # Generate fake reference and observed datasets with similar sleep statistics
>>> ref_scorer = "Henri"
>>> obs_scorer = "Piéron"
>>> ref_hyps = [yasa.simulate_hypnogram(tib=600, scorer=ref_scorer, seed=i) for i in range(20)]
>>> obs_hyps = [h.simulate_similar(scorer=obs_scorer, seed=i) for i, h in enumerate(ref_hyps)]
>>> # Generate sleep statistics from hypnograms using EpochByEpochAgreement
>>> eea = yasa.EpochByEpochAgreement(ref_hyps, obs_hyps)
>>> sstats = eea.get_sleep_stats()
>>> ref_sstats = sstats.loc[ref_scorer]
>>> obs_sstats = sstats.loc[obs_scorer]
>>> # Create SleepStatsAgreement instance
>>> ssa = yasa.SleepStatsAgreement(ref_sstats, obs_sstats)
>>> ssa.summary().round(1).head(3)
variable   bias_intercept              bias_mean  ... loa_slope loa_upper
interval           center  lower upper    center  ...     upper    center lower upper
sleep_stat                                        ...
%N1                  -5.4  -13.9   3.2       0.3  ...       1.6       6.1   3.7   8.5
%N2                 -27.3  -49.1  -5.6      -0.2  ...       1.4      12.4   7.2  17.6
%N3                  -9.1  -23.8   5.5       1.4  ...       1.8      20.4  12.6  28.3

[3 rows x 21 columns]
>>> ssa.report(ci_method="param").head(3)[["Bias [95% CI]", "LoA [95% CI]"]]
>>> ssa.assumptions.head(3)
            unbiased  normal  constant_bias  homoscedastic
sleep_stat
%N1             True    True           True          False
%N2             True    True          False          False
%N3             True    True           True          False
>>> ssa.auto_methods.head(3)
             bias   loa     ci
sleep_stat
%N1         param  regr  param
%N2          regr  regr  param
%N3         param  regr  param
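The assumption flags drive the 'auto' method choices. As a hedged illustration of the idea (not YASA's exact tests), a non-constant bias can be probed by regressing the differences on the reference values, and heteroscedasticity by regressing absolute residuals on the reference values; a failed test switches that component to a regression-modeled ("regr") estimate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
ref = rng.uniform(200, 500, size=30)
obs = ref + 0.1 * ref + rng.normal(0, 5, size=30)  # bias grows with ref

diff = obs - ref
alpha = 0.05

# Constant-bias check: slope of diff ~ ref should be indistinguishable from 0
fit = stats.linregress(ref, diff)
constant_bias = fit.pvalue >= alpha   # fails here because the bias is proportional

# Homoscedasticity check (illustrative): |residuals| should not trend with ref
resid = diff - (fit.intercept + fit.slope * ref)
homoscedastic = stats.linregress(ref, np.abs(resid)).pvalue >= alpha

bias_method = "param" if constant_bias else "regr"
loa_method = "param" if homoscedastic else "regr"
```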
>>> new_hyps = [h.simulate_similar(scorer="Kelly", seed=i) for i, h in enumerate(obs_hyps)]
>>> new_sstats = pd.Series(new_hyps).map(lambda h: h.sleep_statistics()).apply(pd.Series)
>>> new_sstats[["N1", "TST", "WASO"]].round(1).head(5)
     N1    TST   WASO
0  42.5  439.5  147.5
1  84.0  550.0   38.5
2  53.5  489.0  103.0
3  57.0  469.5  120.0
4  71.0  531.0   69.0
>>> new_stats_calibrated = ssa.calibrate(new_sstats[ssa.sleep_statistics], bias_method="auto")
>>> new_stats_calibrated[["N1", "TST", "WASO"]].round(1).head(5)
     N1    TST   WASO
0  42.5  439.5  147.5
1  84.0  550.0   38.5
2  53.5  489.0  103.0
3  57.0  469.5  120.0
4  71.0  531.0   69.0
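Conceptually, calibration inverts the bias estimated from the paired scorers. A minimal sketch assuming a constant bias per sleep statistic (the actual calibrate method also supports regression-based corrections via bias_method):

```python
import pandas as pd

# Paired sleep statistics from the two scorers (toy values)
ref = pd.DataFrame({"TST": [430.0, 455.0, 470.0], "WASO": [50.0, 40.0, 35.0]})
obs = pd.DataFrame({"TST": [440.0, 466.0, 479.0], "WASO": [44.0, 33.0, 30.0]})

bias = (obs - ref).mean()  # estimated constant bias per sleep statistic

def calibrate(new_data: pd.DataFrame) -> pd.DataFrame:
    # Subtract the estimated bias to align new scores with the reference scorer
    return new_data - bias

new = pd.DataFrame({"TST": [500.0], "WASO": [60.0]})
calibrated = calibrate(new)  # TST corrected down, WASO corrected up
```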
Methods
__init__(ref_data, obs_data, *[, ...])
calibrate(data[, bias_method, adjust_all])
    Calibrate a DataFrame of sleep statistics from a new scorer based on observed biases in obs_data/obs_scorer.
get_calibration_func(sleep_stat)
    Return a function for calibrating a specific sleep statistic, based on observed biases in obs_data/obs_scorer.
plot_blandaltman([sleep_stats, bias_method, ...])
    Plot Bland-Altman agreement plots for one or more sleep statistics.
report([bias_method, loa_method, ci_method, ...])
    Return a human-readable DataFrame for reporting bias, limits of agreement, and statistical assumption results.
summary([ci_method])
    Return a DataFrame that includes all calculated metrics.
Attributes
assumptions
    A pandas.DataFrame containing boolean values indicating the pass/fail status of all statistical tests performed to test assumptions.
auto_methods
    A pandas.DataFrame containing the methods applied when 'auto' is selected.
data
    A long-format pandas.DataFrame containing all raw sleep statistics from ref_data and obs_data, with a MultiIndex with levels sleep_stat and session_id (or the original index name from the input data).
n_sessions
    The number of sessions.
obs_scorer
    The name of the observed scorer.
ref_scorer
    The name of the reference scorer.
sleep_statistics
    Return a list of all sleep statistics included in the agreement analyses.