Approximate k-nearest neighbor (AKNN) search is a fundamental problem with wide applications. To reduce memory and accelerate search, vector quantization is widely adopted. However, existing quantization methods either rely on codebooks – whose query speed is limited by costly table lookups – or adopt dimension-wise quantization, which maps each vector dimension to a small quantized code for fast search. The latter, however, suffers from a fixed compression ratio because the quantized code length is inherently tied to the original dimensionality. To overcome these limitations, we propose MRQ, a new approach that integrates projection with quantization. The key insight is that, after projection, high-dimensional vectors tend to concentrate most of their information in the leading dimensions. MRQ exploits this property by quantizing only the information-dense projected subspace – whose size is fully user-tunable – thereby decoupling the quantized code length from the original dimensionality. The remaining tail dimensions are captured using lightweight statistical summaries. By doing so, MRQ boosts the query efficiency of existing quantization methods while achieving arbitrary compression ratios enabled by the projection step. Extensive experiments show that MRQ substantially outperforms the state-of-the-art method, achieving up to 3x faster search while using only one-third of the quantization bits at comparable accuracy.
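
The pipeline sketched in the abstract – project, quantize only a leading user-tunable subspace, and summarize the tail with a cheap statistic – can be illustrated with a toy numpy sketch. This is an illustrative assumption, not the paper's actual algorithm: we use PCA as the projection, uniform 8-bit scalar quantization for the head, and the tail's squared norm as the statistical summary; the names (`approx_sq_dist`, `m`) are hypothetical.

```python
import numpy as np

# Hypothetical sketch of the projection-then-partial-quantization idea.
# Assumptions (not from the paper): PCA projection, uniform 8-bit scalar
# quantization of the head, squared tail norm as the tail summary.

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))            # toy dataset: 1000 vectors, 128 dims

# Projection: PCA via SVD; leading dimensions carry the most variance.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt.T                                    # 128 x 128 orthogonal projection
Z = Xc @ P                                  # projected vectors

m = 32                                      # user-tunable quantized subspace size
head, tail = Z[:, :m], Z[:, m:]

# Uniform per-dimension scalar quantization of the head to 8-bit codes.
lo, hi = head.min(axis=0), head.max(axis=0)
codes = np.round((head - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8)

# Lightweight statistical summary of the tail: one scalar per vector.
tail_sq_norm = np.sum(tail ** 2, axis=1)

def approx_sq_dist(q, i):
    """Estimate the squared distance from a centered query q to stored vector i."""
    zq = q @ P
    head_rec = lo + codes[i] / 255.0 * (hi - lo)   # de-quantized head
    d_head = np.sum((zq[:m] - head_rec) ** 2)
    # Tail term from stored and query tail norms; the cross term is dropped,
    # a crude simplification made only for this sketch.
    d_tail = np.sum(zq[m:] ** 2) + tail_sq_norm[i]
    return d_head + d_tail
```

Note how the code length (`m` dimensions times 8 bits, plus one scalar per vector) is independent of the original dimensionality, which is the decoupling the abstract describes.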