The Nearest Neighbor Search (NNS) problem asks to design a data structure that preprocesses an $n$-point dataset $X$ lying in a metric space $\mathcal{M}$, so that given a query point $q \in \mathcal{M}$, one can quickly return a point of $X$ minimizing the distance to $q$. The efficiency of such a data structure is measured primarily by the space it uses and the time required to answer a query. We focus on the fast query-time regime, often modeled as query time $\mathrm{poly}(d \log n)$, which is crucial for modern large-scale applications where datasets are massive and queries must be processed online. Our main result is a randomized data structure for NNS in $\ell_p$ spaces, $p > 2$, that achieves $p^{O(1) + \log\log p}$ approximation with fast query time and $\mathrm{poly}(dn)$ space. Our data structure improves upon, or is incomparable to, the state of the art for the fast query-time regime from [Bartal and Gottlieb, TCS 2019] and [Krauthgamer, Petruschka, and Sapir, FOCS 2025].
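To make the problem statement concrete, here is a minimal exact baseline: a brute-force linear scan under the $\ell_p$ distance. This is a sketch of the *problem* being solved, not of the data structure described above; the names `lp_distance` and `nearest_neighbor` are illustrative, and the point of the abstract's result is precisely to avoid this $O(nd)$ per-query cost (at the price of approximation) with $\mathrm{poly}(d \log n)$ query time and $\mathrm{poly}(dn)$ space.

```python
import math

def lp_distance(x, y, p):
    """l_p distance between two equal-length vectors, p >= 1."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def nearest_neighbor(X, q, p):
    """Exact NNS by linear scan: O(n * d) work per query,
    no preprocessing. Returns a point of X closest to q."""
    return min(X, key=lambda x: lp_distance(x, q, p))

# Example in l_3 (a p > 2 case, as in the abstract):
X = [(0.0, 0.0), (5.0, 5.0), (1.0, 2.0)]
q = (1.0, 1.0)
print(nearest_neighbor(X, q, p=3))  # (1.0, 2.0), at l_3 distance 1
```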