In Phys. Rev. A 108, L060402 (2023), we introduced a Bayesian measurement error mitigation algorithm that leverages the complete information contained in the readout signal, and we validated the protocol on a quantum device with five superconducting qubits. Here, we present an improved implementation of the algorithm, tailored to multiqubit experiments on near-term superconducting qubit quantum devices. In particular, we provide a detailed algorithm workflow, from calibrating the detector response functions to postprocessing the measurement outcomes, offering a computationally efficient solution for the output sizes typical of current quantum computing devices. We show how the numerical representation of the noise function affects the performance of the error mitigation algorithm, and we test the convergence criteria. We benchmark our protocol on real quantum computers with superconducting qubits, where the readout signal encodes the measurement information as unprocessed analog data prior to qubit state assignment. Finally, we compare the performance of our algorithm against other measurement error mitigation methods, such as iterative Bayesian unfolding and the Mthree method, and show how our method can be integrated on top of other readout error mitigation protocols.