-
Positioning/novelty is currently unclear: full-trajectory non-linear least squares fitting of a damped sinusoid and its equivalence to Gaussian-noise MLE are standard in signal processing/system identification, yet the manuscript frames the work as a new “framework” without clearly identifying what is novel beyond a straightforward application (Sec. 1, Sec. 4). The introduction also lacks concrete related-work grounding, making it hard to assess contribution relative to established estimators (log decrement, Prony/matrix pencil, subspace/AR methods, Bayesian approaches, CRLB results).
Recommendation: Strengthen Sec. 1 with a compact related-work subsection (e.g., Sec. 1.1) and citations to standard damped-sinusoid estimation methods. Then explicitly state the paper’s contribution: e.g., (i) a reproducible implementation recipe with bounded TRF + a specific initialization pipeline, (ii) a benchmark/validation study under controlled conditions, (iii) (if added) uncertainty quantification, robustness experiments, or comparisons. If the intent is didactic/benchmarking, reframe claims accordingly in Abstract/Sec. 1/Sec. 4.
-
Synthetic data generation is under-specified and the evaluation set is small ($N=20$), limiting interpretability and generality of the accuracy/robustness claims (Sec. 2.1, Sec. 3.2). Key missing details include distributions/ranges of $(A,\gamma,\omega,\phi)$, sampling rate/$\Delta t$, duration and number of samples, whether sampling is uniform, whether mean removal/detrending is applied, and how noise variance is chosen across oscillators.
Recommendation: Expand Sec. 2.1 with complete, numeric simulation settings: parameter ranges/distributions (or explicit per-oscillator table in appendix), sampling frequency, duration, sample count, and noise model/variance selection. Increase trials substantially (e.g., hundreds/thousands) to support claims with distributions rather than anecdotes, and report aggregate statistics (median/IQR or mean/SD, min/max) for $\omega$ and $\gamma$ errors (Sec. 3.2.1). Provide code/pseudocode sufficient for reproduction.
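For concreteness, a generation recipe of the kind we have in mind; every range and setting below is an illustrative placeholder of ours, not a value from the manuscript:

```python
import numpy as np

def make_trace(fs=100.0, duration=10.0, rng=None):
    """Draw one synthetic damped-sinusoid trace; every setting is explicit and numeric."""
    rng = rng or np.random.default_rng(0)
    t = np.arange(0.0, duration, 1.0 / fs)        # uniform sampling, fs in Hz
    A     = rng.uniform(0.5, 2.0)                 # amplitude (arbitrary units)
    gamma = rng.uniform(0.05, 0.5)                # decay rate [1/s]
    omega = rng.uniform(2.0, 20.0)                # angular frequency [rad/s]
    phi   = rng.uniform(-np.pi, np.pi)            # phase [rad]
    clean = A * np.exp(-gamma * t) * np.cos(omega * t + phi)
    sigma = 0.05 * A                              # noise std tied to amplitude
    noisy = clean + rng.normal(0.0, sigma, t.size)
    return t, noisy, clean, dict(A=A, gamma=gamma, omega=omega, phi=phi, sigma=sigma)

t, x, clean, truth = make_trace()
```

Whatever the actual choices, stating them at this level of explicitness (and seeding the generator) would make the study reproducible.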
-
No quantitative baseline comparisons are provided despite repeated motivation against “traditional” local feature-based estimators (Sec. 1, Sec. 3.2, Sec. 4). Without baselines, it is impossible to determine whether full-trajectory TRF fitting materially improves accuracy/robustness or merely matches standard practice, and at what computational cost.
Recommendation: In Sec. 3, add at least two baseline estimators under identical simulation conditions: (i) FFT/PSD peak (frequency) + logarithmic decrement from peaks/envelope (damping), and (ii) a Prony/matrix-pencil or simple AR/SSA-based method (or another common system-ID approach). Report relative/absolute error vs (true) SNR and damping ratio and include runtime comparisons (median fit time per trace). Summarize in an additional table/figure.
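A minimal sketch of baseline (i), assuming uniform sampling; the FFT settings and peak-picking choices here are ours and deliberately naive:

```python
import numpy as np
from scipy.signal import find_peaks

def baseline_estimates(t, x):
    """Textbook baselines: FFT magnitude peak for omega, log decrement over peaks for gamma."""
    dt = t[1] - t[0]
    # (i) frequency from the raw FFT peak (no windowing or peak interpolation here)
    X = np.fft.rfft(x - x.mean())
    freqs = np.fft.rfftfreq(x.size, dt)
    omega_hat = 2.0 * np.pi * freqs[np.argmax(np.abs(X))]
    # (ii) damping from a log-linear fit to the successive positive peaks
    peaks, _ = find_peaks(x)
    slope, _ = np.polyfit(t[peaks], np.log(np.abs(x[peaks])), 1)
    return omega_hat, -slope

# quick self-check on a clean trace (omega = 8 rad/s, gamma = 0.2 1/s)
t = np.arange(0.0, 10.0, 0.01)
x = 1.5 * np.exp(-0.2 * t) * np.cos(8.0 * t + 0.3)
omega_hat, gamma_hat = baseline_estimates(t, x)
```

Even this crude estimator is limited mainly by FFT bin resolution; reporting how far the TRF fit improves on it, and at what runtime cost, would substantiate the paper's motivation.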
-
SNR definition is a post-fit statistic derived from the same fit used to compute parameter errors ($\mathrm{SNR}=\mathrm{Var}(\hat{x})/\mathrm{Var}(r)$, Sec. 2.3, Eq. (3)), creating potential circularity: the optimizer can trade parameter bias against residual reduction, and the “SNR” will generally increase as fit quality increases (including possible overfitting). As used in Sec. 3.2.2, this risks overstating explanatory power of SNR–error relationships.
Recommendation: For synthetic experiments, compute and plot error versus *true/generative* SNR defined from known clean signal and injected noise variance (or signal power/noise power prior to fitting). Keep Eq. (3) if desired, but rename it (e.g., “fit-based SNR proxy” or “variance-explained ratio”) and explicitly discuss limitations. Also report injected $\sigma^2$ vs estimated residual variance $\hat{\sigma}^2$ to validate the Gaussian-noise assumption (Sec. 3.2).
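The distinction is easy to make operational in simulation; in the sketch below the known clean signal stands in for a fitted model, so `snr_proxy` tracks `snr_true` here, whereas with a real fit it would depend on fit quality:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 10.0, 0.01)
clean = 1.5 * np.exp(-0.2 * t) * np.cos(8.0 * t + 0.3)   # known generative signal
sigma = 0.05                                              # injected noise std
x = clean + rng.normal(0.0, sigma, t.size)

# Generative SNR: known clean-signal power over injected noise variance;
# fixed before any fitting, so it cannot be inflated by the optimizer.
snr_true = np.var(clean) / sigma**2

# Fit-based proxy in the spirit of Eq. (3), with the clean signal in place
# of the fitted model.
residual = x - clean
snr_proxy = np.var(clean) / np.var(residual)
```

Comparing `np.var(residual)` against the injected `sigma**2` across trials is also the cheapest check of the Gaussian-noise assumption.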
-
Model/parameter interpretation is inconsistent: the manuscript refers to $\omega$ as “natural frequency/angular frequency” while fitting $\cos(\omega t+\phi)$, which corresponds to the *damped* oscillation frequency in the standard second-order ODE solution. The damping ratio definition $\zeta=\gamma/\omega$ used in analysis is non-standard relative to the classical $\zeta=\beta/\omega_0$ and becomes ambiguous if $\omega$ is the damped frequency (Sec. 1, Sec. 2.1–2.2, Sec. 3.2.2, Fig. 2 caption).
Recommendation: In Sec. 2.1/Sec. 2.2, explicitly define whether $\omega$ is the damped frequency $\omega_d$ or an undamped natural frequency $\omega_0$. If the latter, re-parameterize using the standard ODE form (e.g., $x(t)=A e^{-\beta t}\cos(\sqrt{\omega_0^2-\beta^2}\,t+\phi)$) or clearly state the mapping and adjust interpretation accordingly. Either adopt standard $\zeta$ or clearly label the current ratio as a heuristic and justify it; ensure consistency across text and figure labels (rad/s vs Hz).
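For reference, the standard underdamped second-order relations that would resolve the ambiguity are:

```latex
x(t) = A e^{-\beta t}\cos(\omega_d t + \phi), \qquad
\omega_d = \sqrt{\omega_0^2 - \beta^2}, \qquad
\zeta = \frac{\beta}{\omega_0} = \frac{\beta}{\sqrt{\omega_d^2 + \beta^2}}.
```

So if the fitted $\omega$ is $\omega_d$ and the fitted $\gamma$ plays the role of $\beta$, the classical ratio is $\zeta=\gamma/\sqrt{\omega^2+\gamma^2}$, which the manuscript's $\gamma/\omega$ approximates only in the light-damping limit $\gamma\ll\omega$.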
-
Key methodological details are insufficient for full reproducibility and for evaluating robustness of optimization (Sec. 2.2). In particular: (i) envelope computation for $A_0,\gamma_0$ is vague; (ii) PSD/FFT settings for $\omega_0$ are ambiguous (windowing, zero-padding, peak interpolation); (iii) TRF configuration (tolerances, max iterations), scaling/normalization, and convergence/failure handling are not described; (iv) treatment of $\phi$ (bounds/wrapping) is unclear; (v) many real signals have a DC offset not included in Eq. (1).
Recommendation: Expand Sec. 2.2 with an explicit algorithm description (or pseudocode/algorithm box): PSD computation details and peak selection; envelope method (Hilbert magnitude vs peak-picking; smoothing; log-linear regression) and bias considerations; TRF solver settings and stopping criteria; parameter vector ordering and bounds (including $\phi\in[-\pi,\pi]$ or equivalent). State whether data are detrended/mean-removed; consider adding an offset term $c$ to the model or explicitly justify zero-mean assumption in Sec. 2.1.
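To illustrate the level of detail we are asking for, here is one possible instantiation of the Sec. 2.2 pipeline; every setting, bound, and tolerance below is an assumption on our part, since the manuscript does not specify them:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.signal import hilbert

def fit_damped_sinusoid(t, x):
    """One concrete version of the initialization + bounded-TRF pipeline."""
    xc = x - x.mean()                                    # mean removal (initialization only)
    # omega_0 and phi_0 from the FFT peak (no window/zero-padding/interpolation here)
    X = np.fft.rfft(xc)
    k = np.argmax(np.abs(X[1:])) + 1                     # skip the DC bin
    w0 = 2.0 * np.pi * np.fft.rfftfreq(x.size, t[1] - t[0])[k]
    phi0 = np.angle(X[k])                                # already in (-pi, pi]
    # A_0, gamma_0 from a log-linear regression on the Hilbert-magnitude envelope
    env = np.abs(hilbert(xc))
    core = slice(x.size // 10, -(x.size // 10))          # trim envelope edge artifacts
    slope, logA = np.polyfit(t[core], np.log(env[core]), 1)
    p0 = [np.exp(logA), max(-slope, 1e-6), w0, phi0, x.mean()]
    # bounded TRF refinement, with an explicit offset term c added to the model
    def resid(p):
        A, g, w, phi, c = p
        return A * np.exp(-g * t) * np.cos(w * t + phi) + c - x
    lb = [0.0, 0.0, 0.0, -np.pi, -np.inf]
    ub = [np.inf, np.inf, np.inf, np.pi, np.inf]
    return least_squares(resid, p0, bounds=(lb, ub), method="trf",
                         xtol=1e-12, ftol=1e-12, max_nfev=2000)

t = np.arange(0.0, 10.0, 0.01)
x = 1.5 * np.exp(-0.2 * t) * np.cos(8.0 * t + 0.3)
fit = fit_damped_sinusoid(t, x)
```

An algorithm box at roughly this granularity (plus convergence/failure handling, which the sketch omits) would make the method reproducible.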
-
Uncertainty quantification is missing, despite the MLE framing (Sec. 2.2, Sec. 3, Fig. 1–2). Reporting only point estimates and relative errors makes it difficult to judge reliability, compare regimes, or support claims about robustness—especially in low-SNR/high-damping conditions where identifiability degrades.
Recommendation: Add parameter uncertainty estimates from the Jacobian/Hessian at the optimum (Gauss–Newton covariance) and report standard errors or 95% CIs for $\omega,\gamma$ (and optionally $A,\phi$). In simulation, perform a basic coverage check (e.g., do 95% intervals contain the truth $\sim$95% of the time across trials?). Consider adding uncertainty bands to representative fits in Fig. 1 and error bars/intervals in Fig. 2.
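This is cheap to add; a sketch, shown on a linear toy model where the answer is known analytically (with scipy's `least_squares`, the same `gn_covariance` call applies to the Jacobian `res.jac` returned at the optimum):

```python
import numpy as np

def gn_covariance(jac, residuals):
    """Gauss-Newton covariance at the optimum: sigma^2 * (J^T J)^{-1}."""
    n, p = jac.shape
    sigma2 = residuals @ residuals / (n - p)     # residual-variance estimate
    cov = sigma2 * np.linalg.inv(jac.T @ jac)
    return cov, np.sqrt(np.diag(cov))            # covariance matrix and standard errors

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
y = 2.0 * t + 1.0 + rng.normal(0.0, 0.1, t.size)
J = np.column_stack([t, np.ones_like(t)])        # Jacobian w.r.t. (slope, intercept)
theta = np.linalg.lstsq(J, y, rcond=None)[0]
cov, se = gn_covariance(J, y - J @ theta)
ci95 = np.column_stack([theta - 1.96 * se, theta + 1.96 * se])
```

In simulation, the coverage check then amounts to counting how often intervals built this way contain the generative parameters.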
-
Claims about robustness/efficiency/applicability are not fully substantiated. The paper does not quantify runtime/scaling (despite “computationally efficient” language), and it does not probe realistic failure modes: colored/non-Gaussian noise, outliers, short observation windows (few cycles), irregular sampling/missing data, strong damping, or model mismatch (Sec. 3.2.2, Sec. 4). Identifiability issues among $A,\gamma,\phi$ (and possible offset) are not discussed.
Recommendation: Add: (i) a runtime/iteration-count report (median and range) including hardware/software, and scaling with sample count; (ii) at least one robustness experiment (e.g., colored noise, impulsive outliers with robust loss, varying window length/number of cycles, varying sampling rate) to map breakdown points; and (iii) a limitations paragraph in Sec. 4 explicitly discussing noise-model dependence, identifiability, and expected real-data complications. If robustification is not added, temper wording in Sec. 4 accordingly.
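For the colored-noise experiment in (ii), even a first-order autoregressive perturbation of the injected noise would be informative; a minimal sketch:

```python
import numpy as np

def ar1_noise(n, rho=0.9, sigma=0.05, rng=None):
    """AR(1) colored noise e[k] = rho*e[k-1] + w[k]; rho=0 recovers white Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    w = rng.normal(0.0, sigma, n)
    e = np.empty(n)
    e[0] = w[0]
    for k in range(1, n):
        e[k] = rho * e[k - 1] + w[k]
    return e
```

Sweeping `rho` maps how quickly the Gaussian-white assumption breaks; for the impulsive-outlier variant, scipy's `least_squares` already accepts robust losses (e.g., `loss="soft_l1"`), so the robustification cost is low.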
-
Internal consistency issues between narrative claims and Table 1 examples weaken credibility: the manuscript states $\omega$ relative errors are “consistently below 0.5%,” but Table 1 includes an $\omega$ relative error of 0.0066 (0.66%) for oscillator 17; similarly, the claim that $\gamma$ error is $<1\%$ at SNR$>$500 is contradicted by oscillator 1 (SNR=630.2, $\gamma$ relative error=0.0106, i.e., 1.06%) (Sec. 3.2.1, Table 1, Sec. 4).
Recommendation: Audit all thresholded performance statements in Sec. 3.2 and Sec. 4 and align them with the full dataset maxima/quantiles. If Table 1 is illustrative, either choose examples consistent with the stated thresholds or explicitly state that examples include outliers; ideally provide full-dataset summary statistics (and/or append full table) to support the claims.
-
Figures 1–2 and Table 1 lack key context and uncertainty information needed to interpret results, and some labeling/definitions are incomplete (Sec. 3, Fig. 1–2, Table 1). Examples: “relative error” is not always clearly defined as fraction vs percent; axes/units (rad/s, s$^{-1}$) are incomplete; sample size is not stated on plots; log scaling is not explicitly indicated; and fit uncertainty is not shown.
Recommendation: Revise Fig. 1–2 captions and axes to include: parameter units, definition of relative error (and whether values are fractions or \%), sample size (N), and whether axes are log-scaled. Add panel labels (a–d), annotate representative traces with true/fitted parameters and (true) SNR, and include uncertainty (CIs/error bars or bands) where feasible. Ensure print legibility (font sizes, line widths) and accessibility (colorblind-safe palettes, not relying on color alone).