Typical q-values are calculated by a p-value correction, such as Benjamini-Hochberg, so p-values and q-values are always in the same order. Nitecap instead orders all hypotheses by a statistic called "total delta," and its q-values follow that ordering. The Nitecap p-value of a hypothesis is not used in this ordering, so it is common to see p-values in a different order. When analyzing datasets with Nitecap, the q-values should be treated as the more meaningful value.

Nitecap avoids Benjamini-Hochberg correction because Nitecap is a permutation-based method, which has limited ability to produce extremely low p-values. For example, a dataset with 6 timepoints over one cycle has a minimum p-value of about 0.008. With a dataset containing tens of thousands of measurements, Benjamini-Hochberg correction usually leaves little at a significant q-value. The q-values produced by Nitecap can reach greater significance because they consider the exact test statistic rather than merely the p-value.
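To see where a floor like 0.008 comes from, note that a permutation test's smallest attainable p-value is one over the number of distinct orderings it considers. The sketch below shows one plausible accounting, treating the cyclic rotations of each ordering as equivalent; this is an assumption for illustration, not necessarily Nitecap's exact permutation scheme:

```python
from math import factorial

def min_perm_pvalue(n_timepoints):
    # A permutation test's smallest attainable p-value is
    # 1 / (number of distinct orderings considered).
    # If the n cyclic rotations of each ordering are treated as
    # equivalent, n! / n = (n - 1)! distinct orderings remain.
    return 1.0 / factorial(n_timepoints - 1)

print(min_perm_pvalue(6))  # 1/120, roughly 0.0083 -- "about 0.008"
```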

ANOVA p-values are computed with a one-way ANOVA in which the values are grouped by timepoint. If your dataset contains neither replicates at any timepoint nor multiple cycles' worth of data, then the ANOVA has zero degrees of freedom and cannot be run.
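The degrees-of-freedom constraint can be made concrete with a short sketch (the function here is illustrative, not part of Nitecap):

```python
def anova_dfs(n_samples, n_timepoints):
    # One-way ANOVA grouped by timepoint, with k groups and N samples:
    # between-group df = k - 1, within-group (error) df = N - k.
    return n_timepoints - 1, n_samples - n_timepoints

# 6 samples at 6 distinct timepoints (no replicates, one cycle):
# the error df is zero, so the ANOVA cannot be run.
print(anova_dfs(6, 6))   # (5, 0)

# A second cycle reuses the same 6 timepoints, giving 12 samples:
print(anova_dfs(12, 6))  # (5, 6)
```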

Nitecap accepts most spreadsheets as-is and supports Excel xls and xlsx spreadsheets (with your data in the first sheet of the file, if there are multiple sheets) as well as tab-separated and comma-separated files. The data should be formatted with one column per sample and one row per feature (e.g. gene, protein, or other measured value). Which column corresponds to which timepoint is set by the user after uploading the spreadsheet, so no specific column headers are required. For convenience, if sample column headers include ZT## or CT## (e.g. CT0, CT4, CT8, CT12, ...), then the numeric values will be used to infer timepoints when possible and to label the timepoints. Similarly, if the column headers contain ##:## time values (e.g. 12:04), those will be used.
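As a rough sketch of this convention (illustrative only; `infer_timepoint` is not Nitecap's actual parser), the header patterns could be recognized like this:

```python
import re

def infer_timepoint(header):
    # Hypothetical illustration of the header conventions above.
    # ZT##/CT## headers yield the numeric part as the timepoint.
    m = re.search(r'(?:ZT|CT)(\d+)', header, re.IGNORECASE)
    if m:
        return float(m.group(1))
    # ##:## clock times yield hours as a decimal.
    m = re.search(r'(\d{1,2}):(\d{2})', header)
    if m:
        return float(m.group(1)) + float(m.group(2)) / 60.0
    return None  # nothing inferred; the user sets the timepoint manually

print(infer_timepoint("liver_CT12"))   # 12.0
print(infer_timepoint("mouse1 12:04"))
print(infer_timepoint("replicate3"))   # None
```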

Additional columns are no problem: they can be selected as ID columns to label the values, and multiple columns may be used as ID columns.

Nitecap can accept spreadsheets up to 40 megabytes.

To compare two datasets, upload each one individually as a separate dataset. Then go to the Display Spreadsheets page, select both datasets, and hit the Compare button at the bottom.

Two datasets can be compared only if they have the same number of days and timepoints per day. Since values in the two datasets must be paired together, the IDs must be unique within each dataset and match across the datasets. Any ID that is duplicated within a dataset is dropped from the analysis, as is any ID that appears in one dataset but not the other.

The 'damping' statistic is available on comparison pages; a low q-value indicates that the change over time decreases in the secondary dataset compared to the primary dataset.

You can select any columns as ID columns, but you must choose at least one. These are used to label the rows.

If you are comparing datasets, then there are further considerations. IDs should be unique (any non-unique IDs are dropped from comparisons, though not from a single-dataset analysis). The IDs from the two datasets should match if you wish to compare them, as any IDs that do not exist in both spreadsheets will be dropped from the comparison. For these purposes, if you select multiple ID columns, the columns are joined together to form the IDs, and two IDs match only if they match in every ID column.
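The matching rule can be sketched as follows (a hypothetical illustration, not Nitecap's code). Multi-column IDs are represented as tuples, which match only when every component matches:

```python
from collections import Counter

def comparable_ids(ids_a, ids_b):
    # Sketch of the ID-pairing rule described above (illustrative only).
    # IDs duplicated within a dataset are dropped, as are IDs missing
    # from the other dataset; what remains can be paired for comparison.
    unique_a = {i for i, n in Counter(ids_a).items() if n == 1}
    unique_b = {i for i, n in Counter(ids_b).items() if n == 1}
    return unique_a & unique_b

# Two ID columns joined into tuple IDs, e.g. (gene, transcript):
a = [("Bmal1", "t1"), ("Per2", "t1"), ("Per2", "t1"), ("Cry1", "t1")]
b = [("Bmal1", "t1"), ("Cry1", "t1"), ("Nr1d1", "t1")]
# Per2 is duplicated in a, and Nr1d1 is missing from a; both are dropped.
print(sorted(comparable_ids(a, b)))  # [('Bmal1', 't1'), ('Cry1', 't1')]
```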

Nitecap is currently under development and has not yet been published, so no citation is yet possible. If you publish a work using Nitecap, please let us know.

If you use the JTK analysis results in your paper, the developers of JTK_cycle ask users to cite their paper.

If you use phase and amplitude difference statistics, you can cite Bingham, Arbogast, Cornelissen Guillaume, Lee, Halberg "Inferential Statistical Methods for Estimating and Comparing Cosinor Parameters," 1982.

If you have logged into your account, there is a share button on every dataset which gives a URL that can be given to your colleagues to access the dataset. Note that anyone with the URL can access it, so consider who you share it with.

When someone opens the link, they receive a copy of the dataset; none of the changes they make will be reflected in your copy. If you modify your dataset, anyone who clicks the link after the modification will receive the modified version, but anyone who clicked before will not.

Unfortunately, we do not yet support sharing comparisons between multiple spreadsheets. Instead, share all the individual spreadsheets; once your colleague opens each spreadsheet link, they can perform a comparison by visiting their spreadsheet list page.

These are derived from the methods in Bingham, Arbogast, Cornelissen Guillaume, Lee, Halberg "Inferential Statistical Methods for Estimating and Comparing Cosinor Parameters," 1982. In particular, see equations 49 and 50, using the t-test versions for the case of two datasets.

In short, these statistics are cosinor approaches based on linear least-squares fits of cosine and sine curves. For the phase (respectively, amplitude) difference p-value, the null hypothesis is that the phase (respectively, amplitude) parameters of the datasets are all equal. Hence a significant p-value allows rejection of the null hypothesis that these parameters are the same across all datasets.
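To make the fitting step concrete, here is a minimal sketch of a single-dataset cosinor fit (an illustration, not the Bingham et al. comparison test itself). It assumes timepoints evenly spaced over one full period, in which case the least-squares coefficients reduce to simple projections onto the cosine and sine basis:

```python
from math import cos, sin, sqrt, atan2, pi

def cosinor_fit(times, values, period=24.0):
    # Least-squares cosinor fit: y ~ M + A*cos(w*t) + B*sin(w*t).
    # With timepoints evenly spaced over one full period, the basis
    # functions are orthogonal and the coefficients are projections.
    w = 2 * pi / period
    n = len(values)
    mesor = sum(values) / n
    a = 2 / n * sum(y * cos(w * t) for t, y in zip(times, values))
    b = 2 / n * sum(y * sin(w * t) for t, y in zip(times, values))
    amplitude = sqrt(a * a + b * b)
    acrophase = atan2(b, a)  # radians; phase of the peak relative to t = 0
    return mesor, amplitude, acrophase

# Noiseless example: mesor 10, amplitude 3, peak at t = 6 h (phase pi/2).
times = [0, 4, 8, 12, 16, 20]
values = [10 + 3 * cos(2 * pi / 24 * (t - 6)) for t in times]
m, amp, ph = cosinor_fit(times, values)
print(round(m, 3), round(amp, 3), round(ph, 3))  # 10.0 3.0 1.571
```

The comparison tests then ask whether the fitted phase (or amplitude) parameters differ significantly between datasets.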

In particular, note that if any one of the datasets has non-significant rhythmicity, the p-value for the phase test will be non-significant. This is desirable, since the phase of a non-rhythmic feature is undefined. Hence this p-value is significant only when all datasets are significantly rhythmic and the difference in their phases is significant. The amplitude of a feature with non-significant rhythmicity should be low, so an amplitude difference is still meaningful and that test is more powerful.