Google's TurboQuant Paper Critiqued by Prior Algorithm Author

According to 1M AI News, Gao Jianyang, a postdoctoral researcher at ETH Zurich, has published an open letter accusing Google's ICLR 2026 paper, TurboQuant, of three serious inaccuracies in its description of his prior work, RaBitQ. Gao is the first author of RaBitQ, an algorithm published in 2024 at the top database conference SIGMOD that applies a random rotation (a Johnson-Lindenstrauss transform) before quantization. RaBitQ's error bound has been rigorously proven to be asymptotically optimal, and the work was invited for presentation at a workshop of FOCS, a top theoretical computer science conference.
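To make the "random rotation before quantization" idea concrete, here is a minimal sketch of the general rotate-then-quantize pattern. This is an illustration of the technique as a class, not the RaBitQ or TurboQuant algorithm itself; the choice of a QR-based orthogonal rotation, 1-bit sign quantization, and the angle-based dot-product estimator are all assumptions made for this example.

```python
import numpy as np

# Sketch of rotate-then-quantize: a random orthogonal rotation
# (a Johnson-Lindenstrauss-style transform) spreads each vector's
# energy evenly across coordinates, after which even a crude 1-bit
# quantizer preserves geometry well. NOT the actual RaBitQ code.

rng = np.random.default_rng(0)
d = 64

# Random orthogonal matrix via QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

def quantize(x):
    """Rotate, then keep only the sign of each coordinate (1 bit/dim)."""
    x = x / np.linalg.norm(x)   # work on the unit sphere
    return np.sign(Q @ x)       # codes in {-1, +1}^d

def estimate_dot(code_a, code_b):
    """Estimate <a, b> for unit vectors a, b from their 1-bit codes.

    After a random rotation, the fraction of sign agreements relates
    to the angle between the original vectors (as in sign random
    projections): P(agree) ~ 1 - angle/pi.
    """
    agree = np.mean(code_a == code_b)
    return np.cos(np.pi * (1.0 - agree))

# Two nearby unit vectors: the 1-bit estimate tracks the true dot product.
a = rng.standard_normal(d); a /= np.linalg.norm(a)
b = a + 0.05 * rng.standard_normal(d); b /= np.linalg.norm(b)
est = estimate_dot(quantize(a), quantize(b))
print(f"true={a @ b:.3f}  estimated={est:.3f}")
```

The rotation is the key step: without it, a vector whose mass is concentrated in a few coordinates would be destroyed by per-coordinate sign quantization, whereas after rotation every coordinate carries roughly equal information.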

The three accusations are:

  1. Obscured Methodological Similarity: TurboQuant’s core method also relies on random rotation, yet the paper categorizes RaBitQ as ‘grid-based PQ,’ omitting the direct methodological connection between the two. ICLR reviewers independently pointed out that both methods use random projection and requested a supplementary discussion; the TurboQuant team not only declined to provide one but also moved the description of RaBitQ from the main text to the appendix.
  2. Inaccurate Theoretical Claims: The paper labels RaBitQ’s theoretical guarantee as ‘suboptimal’ without justification, attributing this to ‘loose analysis.’ An extended version of the RaBitQ paper has already proven that its error bound matches the asymptotically optimal bound of Alon–Klartag (FOCS 2017).
  3. Unfair Experimental Comparison: TurboQuant benchmarked RaBitQ using its own Python re-implementation on a single-core CPU (with multi-threading disabled), while running its own algorithm on an NVIDIA A100 GPU. As a result, RaBitQ was reported as orders of magnitude slower. This experimental setup was not disclosed in the paper.

Gao Jianyang revealed that Majid Daliri, the second author of TurboQuant, proactively contacted the RaBitQ team in January 2025 to request assistance in debugging their Python version, which was translated from RaBitQ’s C++ code. In an email dated May 2025, Daliri personally confirmed the unfair experimental setup and stated that the theoretical clarifications from the RaBitQ team had been communicated to all co-authors. Despite this, the issues remained uncorrected throughout the entire process of TurboQuant’s submission, review, acceptance, and extensive promotion by Google.

The RaBitQ team has published a public comment on ICLR OpenReview and submitted a formal complaint to the ICLR Conference Chair and Ethics Committee.

Amir Zandieh, the first author of TurboQuant, responded by expressing willingness to correct the second and third issues but refusing to add any discussion of the methodological similarity, and he agreed to make the corrections only after the ICLR 2026 conference concludes. Third-party researcher Jonas Matthias Kübler also independently noted on OpenReview inconsistencies between the paper and Google’s blog regarding speed benchmarks (PyTorch vs. JAX) and the quantization baseline (FP32).

Following extensive official promotion by Google, TurboQuant’s release had earlier triggered a collective drop in storage-chip stocks, including those of Micron and Western Digital.
