-
The paper is incomplete: Sec. 3 (Results) and Sec. 4 (Conclusions) are essentially empty, and no figures/tables/quantitative analyses are provided to substantiate the headline numerical claims in the Abstract and Introduction (Sec. 1) (e.g., $\sim 12$ dB for $k$ and $\sim 18$ dB for $b$ with DKF, $>20$ dB for the numerical method, and “$k$ more observable than $b$”). As written, the central contribution cannot be verified or reviewed.
Recommendation: Fully populate Sec. 3 with quantitative results across the full SNR $\times$ temporal-resolution grid for both methods and both parameters. At minimum include: (i) MAPE vs SNR curves for each sampling rate, or heatmaps/surfaces over (SNR, resolution); (ii) summary statistics across the 20 oscillators (mean/median and variability such as standard deviation or IQR); (iii) an explicit procedure for extracting thresholds (e.g., interpolation, averaging across oscillators, handling variability); and (iv) representative time-series/fit examples at “above threshold” and “below threshold” conditions. Then rewrite Sec. 4 to synthesize the major trends and translate them into practical guidance.
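To make point (iii) concrete, here is a minimal sketch of one possible threshold-extraction rule: linearly interpolate each per-oscillator MAPE-vs-SNR curve against the paper's $5\%$ cutoff. The function name and the interpolation rule are illustrative suggestions, not taken from the manuscript; only the cutoff value comes from the paper.

```python
def snr_threshold(snr_grid, mape_curve, cutoff=5.0):
    """Return the interpolated SNR (dB) at which MAPE first drops below
    `cutoff` (%). Assumes snr_grid is sorted ascending and mape_curve is
    broadly decreasing in SNR; returns None if the cutoff is never met.
    """
    if mape_curve[0] < cutoff:
        return snr_grid[0]  # already below cutoff at the lowest tested SNR
    for i in range(len(snr_grid) - 1):
        s0, s1 = snr_grid[i], snr_grid[i + 1]
        m0, m1 = mape_curve[i], mape_curve[i + 1]
        if m0 >= cutoff > m1:  # crossing bracketed in [s0, s1]
            return s0 + (s1 - s0) * (m0 - cutoff) / (m0 - m1)
    return None
```

Per-oscillator thresholds obtained this way can then be summarized (mean/median, spread) rather than eyeballed from curves, which also makes the sensitivity check on the cutoff value straightforward.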
-
Terminology and theoretical positioning: the manuscript defines an “observability threshold” as an empirical estimator- and metric-dependent performance boundary (Sec. 2.3), which differs from standard control-theoretic observability (structural property) and from identifiability/Fisher-information perspectives. Without clarification, readers may misinterpret the reported thresholds as fundamental system properties rather than contingent on estimator choices, priors, tuning, metrics, and parameter regimes.
Recommendation: In Sec. 1 and Sec. 2.3, explicitly clarify that the paper studies an empirical performance/identifiability threshold under specific estimators and evaluation criteria. Either (a) rename to something like “empirical SNR requirement” / “practical identifiability threshold,” or (b) add a short formal bridge to classical notions (observability/identifiability, Fisher information/Cramér–Rao-type bounds) with appropriate citations, explaining what is and is not being claimed.
-
Synthetic data generation and parameter regime are under-specified (Sec. 2.1). Key choices that strongly affect thresholds are missing: distributions/ranges for $k$ and $b$ across the 20 oscillators; initial conditions $x(0)$, $\dot{x}(0)$; simulation duration relative to decay time; integrator and base time step; and confirmation/characterization of the underdamped regime (e.g., damping ratio $\zeta$ and proximity to critical damping). Without these, the reported thresholds are hard to interpret or generalize.
Recommendation: Expand Sec. 2.1 with full reproducible specifications: (i) numeric ranges and sampling distributions for $k$ and $b$ and derived ranges for $\omega_n$ and $\zeta$; (ii) initial conditions and whether they vary; (iii) integration method (e.g., RK4) and base $\Delta t$; (iv) observation duration and how many oscillation cycles are present across the parameter set; and (v) explicitly verify underdamping ($b^2 < 4mk$) for all cases and report how close some cases are to critical damping. Consider stratifying Sec. 3 results by $\zeta$ and $\omega_n$ to show how thresholds shift with regime.
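As a concrete version of point (v), the regime check is a few lines; reporting $\omega_n$, $\zeta$, and $\omega_d$ per oscillator would follow directly. This sketch assumes the standard free oscillator $m\ddot{x} + b\dot{x} + kx = 0$ with $m = 1$ as stated in the manuscript; the function name is illustrative.

```python
import math

def oscillator_regime(k, b, m=1.0):
    """Characterize m*x'' + b*x' + k*x = 0 and verify underdamping.

    Returns (wn, zeta, wd): natural frequency (rad/s), damping ratio,
    and damped frequency (rad/s). Raises if b^2 >= 4*m*k.
    """
    wn = math.sqrt(k / m)                 # natural frequency
    zeta = b / (2.0 * math.sqrt(m * k))   # damping ratio
    if zeta >= 1.0:
        raise ValueError("not underdamped: b^2 >= 4*m*k")
    wd = wn * math.sqrt(1.0 - zeta ** 2)  # damped oscillation frequency
    return wn, zeta, wd
```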
-
SNR is central to the claims but is not mathematically defined, and it is unclear how SNR is enforced when noise is injected into both $x$ and $\dot{x}$ (Sec. 2.1, Sec. 2.3). Different SNR conventions (power vs variance, per-channel vs joint, computed before/after downsampling) can materially change threshold values.
Recommendation: Provide an explicit SNR definition in Sec. 2.1/Sec. 2.3, including the formula in dB and how signal power/variance is computed (over what time window; with or without detrending; per trajectory or pooled). State whether SNR is matched separately for $x$ and $\dot{x}$ or jointly, whether noise variances are constant over time, and whether the $x$ and $\dot{x}$ noise are independent or correlated. Report the exact mapping from target SNR to injected Gaussian noise variance for each channel.
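One common convention the authors could adopt (stated here only as an example, since the manuscript's own convention is unknown): $\mathrm{SNR}_{\mathrm{dB}} = 10\log_{10}(P_{\mathrm{signal}}/P_{\mathrm{noise}})$ with $P_{\mathrm{signal}}$ the mean squared value of the mean-removed clean trajectory, applied per channel. Its mapping to injected noise standard deviation is then:

```python
import math

def noise_sigma_for_snr(signal, snr_db):
    """Per-channel Gaussian noise std for a target SNR in dB, under the
    convention SNR = 10*log10(P_signal / P_noise), with P_signal the mean
    squared value of the mean-removed clean samples. This convention is
    an assumption; the paper should state its own explicitly."""
    mean = sum(signal) / len(signal)
    p_signal = sum((s - mean) ** 2 for s in signal) / len(signal)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    return math.sqrt(p_noise)
```

Whatever convention is chosen, reporting this mapping removes the ambiguity between power- and amplitude-based definitions, which alone can shift dB thresholds by a factor of two.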
-
Downsampling and aliasing are not addressed (Sec. 2.1), yet temporal resolution is a core experimental axis. If some oscillators’ frequencies approach/exceed Nyquist at low-resolution settings, estimation failures may be due to aliasing rather than noise or estimator limitations, confounding interpretation of “temporal-resolution observability thresholds.”
Recommendation: In Sec. 2.1 and Sec. 3, report sampling rates for each downsampling condition and compare them to the oscillators’ frequency content (e.g., $\omega_d/2\pi$ range). Specify whether an anti-aliasing low-pass filter is applied prior to decimation; if not, justify or add it. In Sec. 3, separate or annotate failure modes attributable to aliasing vs noise, and consider restricting the sweep (or the oscillator set) so the lowest sampling rate remains meaningfully above Nyquist for the studied $\omega_d$ range.
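A simple per-condition screen would make the aliasing annotation in Sec. 3 mechanical. The sketch below flags (oscillator, decimation) pairs whose effective sampling rate falls below a safety factor times the damped frequency; the factor of 2.5 is an illustrative margin above the Nyquist minimum of 2, not a value from the paper.

```python
import math

def aliasing_risk(wd_rad_s, base_dt, decimation, margin=2.5):
    """Flag conditions where the decimated sampling rate falls below
    `margin` times the damped frequency (Nyquist requires margin > 2).
    `margin` is an illustrative safety factor."""
    fs = 1.0 / (base_dt * decimation)    # effective sampling rate (Hz)
    fd = wd_rad_s / (2.0 * math.pi)      # damped frequency (Hz)
    return fs < margin * fd
```

Running this over the full oscillator set $\times$ decimation grid would immediately show which cells of the results matrix need an aliasing caveat (or an anti-aliasing filter before decimation).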
-
The DKF (Sec. 2.2.2) is described only qualitatively and is not reproducible or assessable for fairness. Missing are: the discrete-time state transition model for $[x,\dot{x}]$ as a function of $(k, b, \Delta t)$ and whether discretization is exact or approximate; the parameter evolution model (constant vs random walk); the measurement model for observing $x$ and $\dot{x}$; full $Q/R$ covariance choices and tuning; coupling between the dual filters; initialization (means/covariances); handling of parameter constraints ($k > 0$, $b > 0$); and sensitivity to priors/tuning.
Recommendation: In Sec. 2.2.2, write down the full discrete-time model and DKF equations (or provide an appendix/pseudocode): (i) state vector, transition matrices (or nonlinear transition) and discretization method; (ii) parameter dynamics (e.g., random-walk variance for $k$ and $b$); (iii) measurement equation for $[x, \dot{x}]$ with explicit $H$ and $R$; (iv) $Q$ and $R$ numerical values (or rules) and how they were tuned; (v) initialization strategy (prior means/covariances relative to true parameter ranges); (vi) whether/how positivity is enforced or non-physical estimates are prevented; and (vii) a sensitivity analysis in Sec. 3 showing how thresholds change under reasonable variations of $Q/R$ and prior covariance.
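For point (i), the discretization need not be approximate: the free linear oscillator admits an exact closed-form transition matrix. The sketch below gives $F$ such that $[x, \dot{x}]_{t+\Delta t} = F\,[x, \dot{x}]_t$ in the underdamped case, i.e., the matrix exponential of the companion matrix $A = [[0, 1], [-k/m, -b/m]]$; variable names are illustrative, and whether the authors used this, Euler, or something else is exactly what Sec. 2.2.2 should state.

```python
import math

def exact_transition(k, b, dt, m=1.0):
    """Exact discrete transition F for the underdamped free oscillator
    m*x'' + b*x' + k*x = 0, so that [x, xdot]_{t+dt} = F @ [x, xdot]_t.
    Closed-form expm of the companion matrix; assumes b^2 < 4*m*k."""
    alpha = b / (2.0 * m)                 # decay rate
    wn2 = k / m                           # natural frequency squared
    wd = math.sqrt(wn2 - alpha ** 2)      # damped frequency
    e = math.exp(-alpha * dt)
    c, s = math.cos(wd * dt), math.sin(wd * dt)
    return [[e * (c + alpha / wd * s), e * s / wd],
            [-e * wn2 / wd * s,        e * (c - alpha / wd * s)]]
```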
-
The numerical differentiation + least-squares pipeline (Sec. 2.2.1) is under-specified and may not be a fair baseline without clear tuning. Savitzky–Golay window length/order, the rule for scaling with sampling resolution, the acceleration finite-difference scheme and boundary handling, and regression details (weighting/regularization/outlier handling, use of all points vs subsampling) are not given. These choices largely determine performance and thus the apparent gap vs DKF.
Recommendation: Expand Sec. 2.2.1 with complete implementation details: (i) Savitzky–Golay polynomial order and window length in samples for each resolution (and rationale/selection method); (ii) explicit acceleration estimation formula (central difference, higher-order, etc.) and endpoint handling; (iii) regression formulation (OLS vs weighted/regularized; any constraints like $k, b > 0$); and (iv) any preprocessing (discarding transients, normalization). In Sec. 3, include a tuning/fairness discussion: either tune both methods with a comparable protocol (e.g., cross-validated/held-out simulations) or show performance sensitivity to smoothing-window and filter-tuning choices.
-
Evaluation protocol is incomplete (Sec. 2.3). It is unclear how many independent noise realizations are used per (SNR, resolution) condition, whether reported errors are averaged over Monte Carlo runs or based on a single draw, and whether baseline sanity checks are reported. Using only one realization can make thresholds unstable; using only MAPE can also hide bias/variance structure and can be dominated by small ground-truth parameter values (especially for $b$).
Recommendation: In Sec. 2.3 and Sec. 3: (i) specify the number of Monte Carlo repeats per condition and how results are aggregated; (ii) report noise-free/high-resolution baselines to validate both implementations; (iii) complement MAPE with additional metrics (RMSE, median APE, bias $\pm$ variability across oscillators); and (iv) include a sensitivity analysis to the $5\%$ cutoff (e.g., $2\%$, $10\%$) to show whether “$\sim 12$ dB vs $\sim 18$ dB” conclusions are robust.
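A per-condition aggregation along the lines of point (iii) might look like the following (function and return values are illustrative): MAPE alongside median APE, which is robust to outlier runs, and signed bias, which MAPE hides.

```python
def error_metrics(estimates, true_val):
    """Aggregate Monte Carlo repeats for one (SNR, resolution) condition:
    returns (MAPE %, median APE %, signed bias). Illustrative sketch."""
    apes = sorted(abs(e - true_val) / abs(true_val) * 100.0
                  for e in estimates)
    n = len(apes)
    mape = sum(apes) / n
    med = apes[n // 2] if n % 2 else 0.5 * (apes[n // 2 - 1] + apes[n // 2])
    bias = sum(estimates) / n - true_val
    return mape, med, bias
```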
-
Assumptions and generalizability are not adequately discussed: the study appears to assume direct measurements of both $x$ and $\dot{x}$ with additive Gaussian noise (Sec. 2.1), whereas in many real settings velocity is derived from position (introducing correlated/colored noise) and process noise/model mismatch exists. The Introduction (Sec. 1) gestures at broad applicability, but without a limitations discussion the reported thresholds risk being overgeneralized.
Recommendation: Add a Limitations/Scope subsection (ideally in Sec. 4): clarify that simulations use a linear, unforced, single-DOF oscillator with known mass ($m=1$) and direct noisy access to both $x$ and $\dot{x}$. Discuss how thresholds might change if $\dot{x}$ is numerically differentiated from $x$, if noise is colored/correlated, if forcing or process noise is present, if mass is unknown, or if the model is mismatched/nonlinear. If feasible, add one additional experiment contrasting “direct velocity measurement” vs “velocity derived from noisy position” to quantify the impact.
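The suggested contrast experiment is cheap to add: derive $\dot{x}$ by central differencing the noisy position instead of measuring it directly. Note that white position noise of variance $\sigma^2$ then yields velocity noise of variance $\sigma^2/(2\Delta t^2)$, correlated across nearby samples, which is precisely the mismatch with the current i.i.d.-Gaussian assumption. A minimal sketch (function name illustrative):

```python
def velocity_from_position(x, dt):
    """Central-difference velocity from (noisy) position samples. The
    result is two samples shorter than the input; white position noise
    of variance sigma^2 becomes velocity noise of variance
    sigma^2 / (2*dt^2), correlated across nearby samples."""
    return [(x[i + 1] - x[i - 1]) / (2.0 * dt)
            for i in range(1, len(x) - 1)]
```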
-
Literature context is currently too thin to assess novelty and to help readers interpret results (Sec. 1). Related work on estimating damping/stiffness from free decay, Kalman-filter-based parameter estimation (dual vs augmented-state EKF/UKF), and identifiability/observability analyses is not cited or discussed.
Recommendation: Add a Related Work subsection (e.g., Sec. 1.1): cover classical damping estimation (log decrement, frequency-domain fitting), state-space identification, dual/augmented Kalman filter approaches, and identifiability/observability/Fisher-information analyses for second-order systems. Then clarify the paper’s distinct contribution (e.g., systematic SNR–resolution maps and a practical comparison between a simple numerical pipeline and DKF under controlled conditions).