-
Velocity reconstruction is central to the paper’s feasibility conclusion, but the reconstruction method is only sketched and its failure is not yet diagnostic (Sec. 2.3, Sec. 3.2, Sec. 4). Key missing specifics include the exact $\delta \rightarrow v$ mapping (e.g., continuity-equation/Fourier-space form), grid/pixel and $k$-space resolution, smoothing/regularization, halo bias model and redshift treatment, boundary conditions on a $100~{\rm deg}^2$ patch, and how shot noise is handled. Without these details, it is unclear whether $r \approx -0.026$ reflects an unavoidable sparsity/geometry limitation or a particular (possibly suboptimal) implementation choice.
Recommendation: Expand Sec. 2.3 and Sec. 3.2 with a fully specified reconstruction recipe: the equation used to infer $v$ from $\delta$ (including $fH$ factors), how $\delta$ is built from halos (mass weighting? bias correction?), smoothing scale(s), grid size, and treatment of edges (apodization/zero-padding/periodic). Provide scale-dependent diagnostics (e.g., $r(k)$ or $r$ after low-pass filtering) in addition to a single Pearson $r$. Add at least two controlled tests: (i) an “oracle” reconstruction using the true matter density field (to isolate tracer sparsity vs. algorithmic issues), and (ii) a tracer-density scaling study (subsample/augment halos or include lower-mass tracers if available) showing how $r$ changes with number density for this geometry.
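For concreteness, the kind of fully specified recipe we have in mind could be stated in a few lines. The sketch below implements the standard linear continuity-equation mapping on a periodic grid (an assumption the paper would need to relax or handle via apodization on a $100~{\rm deg}^2$ patch), with an explicit scale-independent bias correction; all function names and parameter defaults here are illustrative, not taken from the manuscript:

```python
import numpy as np

def reconstruct_velocity_z(delta, box_size, f=0.8, H=100.0, a=1.0, bias=1.0):
    """Linear continuity-equation reconstruction of the z-axis velocity
    from a gridded halo overdensity:

        v(k) = i * f * a * H * (k_z / k^2) * delta_m(k),
        delta_m(k) = delta_h(k) / bias.

    Assumes a periodic box; real survey patches need apodization,
    zero-padding, or an equivalent boundary treatment.
    """
    n = delta.shape[0]
    kf = 2.0 * np.pi / box_size                      # fundamental mode
    k1d = np.fft.fftfreq(n, d=1.0 / n) * kf
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                                # avoid 0/0; mode zeroed below
    dk = np.fft.fftn(delta) / bias                   # bias-corrected matter field
    vk = 1j * f * a * H * kz / k2 * dk
    vk[0, 0, 0] = 0.0
    return np.fft.ifftn(vk).real

def cross_corr_coeff(a, b):
    """Pearson r between two fields (the scalar diagnostic used in Sec. 3.2);
    an r(k) version would bin this in Fourier shells."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```

Writing the recipe at this level of explicitness (including the $fH$ prefactors and the bias convention) would let readers distinguish algorithmic choices from fundamental sparsity limits.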
-
The headline constraints assume true halo velocities, but the paper does not quantify how kSZ SNR and $\tau$–$M$ uncertainties degrade under realistic velocity errors (Sec. 3.2–3.4, Sec. 4). As written, the reader learns that reconstruction fails, but not what reconstruction quality would be required for $\tau$–$M$ science, or how close the setup would be to feasibility if $r$ were modest (e.g., $0.3$–$0.7$).
Recommendation: Add a simple propagation model linking velocity-reconstruction fidelity to kSZ amplitude/SNR (e.g., signal suppression $\propto r$ between reconstructed and true pairwise velocities, or an equivalent multiplicative calibration factor). Using the measured $r \approx -0.026$ and a few representative literature values (e.g., $r = 0.3$, $0.5$, $0.7$), forecast the resulting pairwise SNR and the expected errors on $A$ and $\alpha$ (Sec. 3.3–3.4). Present this as a compact table/figure and summarize explicitly in Sec. 4 what reconstruction performance would be necessary for meaningful $\tau$–$M$ constraints under the assumed map noise/area.
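The propagation model we are requesting need not be elaborate. Under the simplest assumption, that the reconstructed-velocity estimator suppresses the pairwise kSZ amplitude by the factor $r$ at fixed noise, SNR scales linearly with $r$ and $\sigma_A$ inflates as $1/r$. A minimal sketch (the $\sigma_A$ value below is a placeholder, not a number from the paper, and the model ignores additional variance from reconstruction noise):

```python
def degraded_forecast(snr_true_v, sigma_A_true_v, r):
    """Propagate velocity-reconstruction fidelity r to pairwise-kSZ forecasts,
    assuming amplitude suppression by r at fixed noise:
    SNR -> r * SNR, sigma_A -> sigma_A / r.
    A simplified model; reconstruction noise would degrade things further."""
    if r <= 0:
        return 0.0, float("inf")
    return r * snr_true_v, sigma_A_true_v / r

# Example rows for the requested table, anchored to the paper's
# true-velocity SNR ~ 1.56 (sigma_A = 1 is illustrative only):
for r in (0.3, 0.5, 0.7, 1.0):
    snr, sig = degraded_forecast(1.56, 1.0, r)
    print(f"r = {r:.1f}: SNR = {snr:.2f}, sigma_A inflation = {sig:.2f}x")
```

Even this crude scaling would tell the reader whether, say, $r = 0.7$ rescues the measurement or whether the configuration is noise-limited regardless of reconstruction quality.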
-
The Wiener-filter/CMB-subtraction step is not fully specified in terms of conventions and its kSZ transfer function, yet it directly affects the recovered $\tau$ normalization and potentially the $\tau$–$M$ slope (Sec. 2.2, Sec. 3.1–3.4; Fig. 1). Eq. (1) defines $W(\ell)$ as a CMB Wiener filter, but the science signal is measured on the residual map, which effectively applies $(1-W)$ to all components; this can attenuate kSZ in a scale-dependent way. The manuscript states attenuation is “accounted for,” but does not show how (e.g., multiplicative calibration, profile-dependent correction, or simulation-based debiasing). Beam/noise conventions in the filter (whether $C_\ell^{\rm noise}$ is pre/post-beam; whether spectra are multiplied by $B(\ell)^2$ consistently) also remain ambiguous.
Recommendation: In Sec. 2.2, write the cleaned-map relation explicitly in Fourier space (e.g., $T_{\rm clean}(\ell) = [1-W(\ell)] T_{\rm obs}(\ell)$) and state the conventions for beam convolution and the noise power spectrum (pre/post-beam) unambiguously. Quantify the effective transfer function for a cluster kSZ signal by injecting either (i) a known kSZ-only map or (ii) a parametric cluster profile convolved with the beam, passing it through the same filtering/temperature-extraction step, and measuring the recovered amplitude. Use this to either (a) debias $\tau$ (reporting de-filtered $A$ and $\alpha$) or (b) clearly define that you are fitting a “filtered $\tau$” and provide the mapping to physical $\tau$. Update Fig. 1/caption to match the actual operation (CMB estimation via $W$ and residual via $1-W$) and state how $C_\ell^{\rm kSZ}$ in Eq. (1) is obtained (measured from the simulated kSZ-only map, modeled, or effectively negligible).
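To illustrate the conventions we are asking the authors to pin down, the following sketch writes one self-consistent choice explicitly: beam-convolved spectra (multiplied by $B_\ell^2$), post-beam white noise, and a kSZ transfer factor defined as the signal-weighted average of $1-W$. These are assumptions for illustration; the paper should state its own conventions with equal explicitness:

```python
import numpy as np

def wiener_filter(ell, C_cmb, noise_uK_arcmin, beam_fwhm_arcmin):
    """W(ell) for the CMB estimate under one explicit convention:
    sky spectra are beam-convolved (x B_ell^2); the white-noise power is
    post-beam (instrumental). The cleaned/residual map then responds to
    the sky as 1 - W(ell)."""
    fwhm_rad = np.radians(beam_fwhm_arcmin / 60.0)
    sigma = fwhm_rad / np.sqrt(8.0 * np.log(2.0))
    B2 = np.exp(-ell * (ell + 1.0) * sigma**2)       # beam power B_ell^2
    N = np.radians(noise_uK_arcmin / 60.0) ** 2      # white noise, uK^2 sr
    return B2 * C_cmb / (B2 * C_cmb + N)

def ksz_transfer(ell, W, C_ksz_shape):
    """Effective multiplicative bias on a kSZ amplitude measured on the
    residual map: (2l+1)- and signal-weighted average of (1 - W)."""
    w = C_ksz_shape * (2.0 * ell + 1.0)
    return float(np.sum((1.0 - W) * w) / np.sum(w))
```

Injecting a beam-convolved cluster profile through the same pipeline, as recommended above, would replace `C_ksz_shape` with the actual injected signal shape and yield the debiasing factor directly.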
-
The kSZ temperature statistic at halo positions is underspecified and may be far from optimal, which matters because the paper’s main conclusion is that instrumental noise dominates (Sec. 2.3, Sec. 3.3–3.4). It is unclear whether $T_i$ is a single pixel value, an interpolated sample, an aperture-photometry measurement, or a matched-filter output, and how this choice relates to the $1.4'$ beam and expected halo angular sizes.
Recommendation: Clarify in Sec. 2.3 exactly how $T_i$ is computed from the cleaned map (pixelization/projection; interpolation; aperture radius; any additional spatial filtering). Then add a targeted optimization/robustness check: compare the baseline choice to at least one more standard kSZ photometry option (e.g., aperture photometry with a few radii tied to beam FWHM and/or $\theta_{500}$, and/or an “oracle” matched filter using the injected profile if available). Report how the pairwise SNR in Sec. 3.3 changes; this will help determine whether the reported SNR $\approx 1.56$ is close to the best achievable for the stated map specs.
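As an example of a standard alternative the authors could compare against, compensated aperture photometry (disk mean minus equal-area annulus mean, with the radius tied to the beam FWHM or $\theta_{500}$) removes local large-scale CMB/foreground gradients around each halo. A minimal flat-sky sketch, assuming a pixelized map and pixel-unit radii (this is our illustration, not the paper's method):

```python
import numpy as np

def aperture_photometry(tmap, x0, y0, theta_pix):
    """Compensated aperture photometry at pixel position (x0, y0):
    mean within radius theta minus mean in the equal-area annulus
    [theta, sqrt(2)*theta]. The annulus subtraction cancels local
    gradients from residual CMB and foregrounds."""
    ny, nx = tmap.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - x0, y - y0)
    disk = r <= theta_pix
    ring = (r > theta_pix) & (r <= np.sqrt(2.0) * theta_pix)
    return float(tmap[disk].mean() - tmap[ring].mean())
```

Reporting how the pairwise SNR shifts between this and the baseline $T_i$ choice would directly address whether SNR $\approx 1.56$ is near the ceiling for these map specs.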
-
The estimator’s definition and units, and its connection to $\tau$, are currently ambiguous and may be dimensionally inconsistent (Sec. 2.3; Eq. (2)). If the map is in $\mu{\rm K}$ (Sec. 2.1), Eq. (2) yields $\tau$ only if $T_i$ denotes $\Delta T / T_{\rm CMB}$ (dimensionless); otherwise a missing $1/T_{\rm CMB}$ factor (and possibly other normalization details) makes the reported $\tau$–$M$ normalization hard to interpret and compare to literature. The pair-sum indexing ($i \neq j$ vs unique pairs) and pair selection (separation cuts, geometry factors) are also not fully specified, limiting interpretability and reproducibility.
Recommendation: In Sec. 2.3, define $T_i$ explicitly as either $\Delta T$ or $\Delta T/T_{\rm CMB}$ and revise Eq. (2) accordingly (include a $1/T_{\rm CMB}$ factor if using temperature units). State whether the sum is over ordered pairs ($i \neq j$) or unique unordered pairs ($i<j$) and ensure the text/equation agree. Describe pair selection choices: whether all separations are included, whether there is a maximum separation, and whether any geometric/projection factor enters the estimator (or justify why not). Add a brief statement connecting the estimator’s expectation value to an average $\tau$ under your filtering/photometry choices so that $A$ in Sec. 3.4 has a clear operational meaning.
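To make the requested conventions concrete, the sketch below writes out the standard pairwise estimator with one explicit set of choices: unique unordered pairs $i<j$, the usual geometric weight $c_{ij} = \hat r_{ij} \cdot (\hat r_i + \hat r_j)/2$, an optional maximum separation, and an internal $1/T_{\rm CMB}$ conversion so the output is dimensionless. We are not claiming these match the paper’s Eq. (2); the point is that each choice must be stated:

```python
import numpy as np

T_CMB_uK = 2.7255e6  # CMB monopole in micro-Kelvin

def pairwise_estimator(T_uK, pos, r_max=None):
    """Pairwise estimator over unique pairs i < j.

    T_uK : per-halo temperatures in uK (converted to dT/T_CMB internally,
           so the result is dimensionless, as required for tau)
    pos  : (N, 3) comoving halo positions (observer at the origin)
    """
    T = np.asarray(T_uK, float) / T_CMB_uK
    pos = np.asarray(pos, float)
    num = den = 0.0
    for i in range(len(T)):
        for j in range(i + 1, len(T)):
            dvec = pos[i] - pos[j]
            sep = np.linalg.norm(dvec)
            if sep == 0.0 or (r_max is not None and sep > r_max):
                continue
            ri = pos[i] / np.linalg.norm(pos[i])
            rj = pos[j] / np.linalg.norm(pos[j])
            c = dvec @ (ri + rj) / (2.0 * sep)       # geometric weight
            num += (T[i] - T[j]) * c
            den += c * c
    return -num / den
```

Note that each pair’s contribution is invariant under swapping $i$ and $j$ (both $T_i - T_j$ and $c_{ij}$ flip sign), which is exactly the kind of property the text should state so readers can verify the indexing convention.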
-
The $\tau$–$M$ regression methodology is not well-defined for a low-SNR measurement and may be unstable or biased (Sec. 2.4, Sec. 3.4; Fig. 2a). The text mentions a log–log linear regression, but binned $\tau$ estimates can be noise-dominated and may cross zero, making log transforms ill-defined and potentially biasing the slope and normalization. In addition, the stability of the jackknife covariance (and bin-to-bin correlations) is not demonstrated, yet $\alpha$ has an enormous uncertainty ($\pm 7.23$), raising the concern that the fit is ill-conditioned rather than merely information-starved.
Recommendation: Specify the exact fitting procedure in Sec. 2.4/Sec. 3.4: whether the fit is performed in linear space or log space; how bins with $\hat \tau \leq 0$ are treated; whether the full jackknife covariance is used (generalized least squares) and whether any regularization is applied. Provide at least one covariance diagnostic (correlation matrix heatmap or key summary statistics such as condition number/eigenvalue spectrum). Add robustness checks: repeat the fit with diagonal-only covariance, with fewer/more mass bins, and with different numbers of jackknife regions (e.g., 50/100/200) to show that $A$ and $\alpha$ are not artifacts of covariance noise or binning choices. If possible, include an end-to-end recovery test in a higher-SNR toy setup (e.g., lower noise or larger area) with a known injected $\tau$–$M$ law to verify the pipeline can recover an unbiased slope when information exists.
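For reference, a generalized-least-squares fit with the full covariance and a condition-number diagnostic takes only a few lines. The sketch below makes one explicit (and debatable) choice, dropping bins with $\hat\tau \le 0$ and propagating the covariance to log space via $\sigma_{\log\tau} \approx \sigma_\tau/\tau$, precisely the kind of choice the paper must state and stress-test:

```python
import numpy as np

def gls_loglog_fit(M, tau_hat, cov):
    """GLS fit of log tau = log A + alpha * log(M / Mpiv), using the full
    (e.g., jackknife) bin-bin covariance. Bins with tau_hat <= 0 are
    dropped here (one possible convention; its impact should be tested).
    Returns (A, alpha, cond) with cond the covariance condition number."""
    M, tau_hat = np.asarray(M, float), np.asarray(tau_hat, float)
    keep = tau_hat > 0
    Mk, y = M[keep], np.log(tau_hat[keep])
    J = np.diag(1.0 / tau_hat[keep])                 # d(log tau)/d tau
    C = J @ np.asarray(cov, float)[np.ix_(keep, keep)] @ J
    Mpiv = np.exp(np.mean(np.log(Mk)))               # geometric-mean pivot
    X = np.column_stack([np.ones_like(Mk), np.log(Mk / Mpiv)])
    Cinv = np.linalg.inv(C)
    beta = np.linalg.solve(X.T @ Cinv @ X, X.T @ Cinv @ y)
    return float(np.exp(beta[0])), float(beta[1]), float(np.linalg.cond(C))
```

Reporting the covariance condition number (or eigenvalue spectrum) alongside the fit, and repeating with a diagonal-only covariance, would show whether $\sigma_\alpha = 7.23$ reflects the data or covariance noise.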
-
The paper’s “fundamentally limited” conclusion is based on a single survey/catalog configuration, with limited scaling/forecast context and limited comparison to existing kSZ detections/forecasts (Abstract, Sec. 4). This makes it hard to generalize: is the limitation primarily (i) area ($100~{\rm deg}^2$), (ii) noise ($20~\mu{\rm K}$-arcmin), (iii) single-frequency temperature-only (no multifrequency tSZ cleaning), (iv) halo number density/mass threshold, or (v) the chosen photometry/estimator?
Recommendation: Either temper the language to explicitly restrict conclusions to the studied configuration or add a simple scaling/forecast section in Sec. 4: vary (even analytically) map noise, area, and halo density and report expected SNR and $\sigma_\alpha$ trends (e.g., SNR $\propto \sqrt{\rm area}$ and $\propto 1/{\rm noise}$ as a baseline, plus an empirical dependence on $N_{\rm halo}$ from resampling). Provide 1–2 comparison points to the literature (ACT/SPT/Planck pairwise kSZ detections/forecasts and typical map depths/areas/catalog densities) to position why this configuration underperforms and what improvements (larger area, lower noise, denser tracers, better velocity recon) would be required for meaningful $\tau$–$M$ science.
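The analytic part of such a scaling section can be a one-liner. The sketch below encodes the baseline SNR $\propto \sqrt{\rm area}/{\rm noise}$ scaling with an optional empirical halo-density exponent $\gamma$ (which should be measured by resampling, not assumed; the exponent and the example configuration below are illustrative):

```python
import numpy as np

def scaled_snr(snr0, area0, area, noise0, noise,
               nhalo0=None, nhalo=None, gamma=0.5):
    """Scale a reference pairwise-kSZ SNR to a new configuration:
    SNR ∝ sqrt(area) / noise, optionally times (N_halo/N_halo0)^gamma.
    gamma should be calibrated empirically (e.g., by subsampling halos)."""
    s = snr0 * np.sqrt(area / area0) * (noise0 / noise)
    if nhalo is not None and nhalo0 is not None:
        s *= (nhalo / nhalo0) ** gamma
    return float(s)

# Illustrative only: scale the paper's 100 deg^2, 20 uK-arcmin setup
# (true-velocity SNR ~ 1.56) to a wider, deeper survey.
print(scaled_snr(1.56, 100.0, 4000.0, 20.0, 10.0))
```

A short table built from this relation, bracketed by the empirical $N_{\rm halo}$ dependence, would make clear which of the five candidate limitations dominates.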
-
Figures 1–2 contain several ambiguities that directly affect interpretation of the main methodological claims: the Fig. 1 caption conflicts with the definition of the Wiener filter; the beam/noise treatment is unstated; and the Fig. 2 SNR and $\tau$–$M$ panels lack uncertainty visualization and context.
Recommendation: Revise Fig. 1 to (i) clearly state whether the plotted filter is $W(\ell)$ (CMB estimator) or $1-W(\ell)$ (residual map response), (ii) annotate the analysis multipole range used, and (iii) clarify beam and noise conventions (and optionally show $B(\ell)$). For Fig. 2, add/clarify: uncertainty visualization for $\tau$–$M$ fit (credible band or fit covariance), explicit handling/meaning of negative $\hat \tau$ points (sign conventions and whether they enter the fit), and uncertainty or definition for the SNR bars (including what exactly differs between bars: filtering on/off; true vs reconstructed velocities). Reduce overplotting in the velocity-scatter panel (e.g., density/hexbin) and add necessary plot metadata (axes units, colorbars where relevant).