Leakage errors, the unwanted transfer of population outside a defined computational subspace, occur on almost every quantum computing platform. Although prevalent, leakage is often overlooked when gate fidelities are measured and reported with standard methods. In fact, when leakage is substantial, it can cause randomized benchmarking, the typical method for measuring fidelity, to significantly overestimate fidelity. We provide several methods for properly estimating fidelity in the presence of leakage errors, applicable in different error regimes with carefully chosen sequence lengths. We then numerically demonstrate these methods for two-qubit randomized benchmarking, where errors are often largest. Finally, we reanalyze previously shared data from Quantinuum systems using some of the methods provided.