The space-based gravitational-wave (GW) observatory LISA will offer unparalleled science returns, including a view of massive black hole mergers out to high redshifts, precision tests of general relativity and black hole structure, a census of thousands of compact binaries in the Galaxy, and the possibility of detecting stochastic signals from the early Universe. While the Mock LISA Data Challenges (2006–2011) gave us confidence that LISA will be able to fulfill its scientific potential, we still have a rather incomplete idea of what the end-to-end LISA science analysis should look like. The task at hand is substantial.
It is tempting to assume that current algorithms and prototype codes will scale up to this challenge, thanks to the greatly increased computational power that will become available by the time of LISA’s launch in the early 2030s. In reality, harnessing that power will require very different methods, adapted to future high-performance computing architectures that we can only glimpse now. Thus we must begin this exploration today, seeking inspiration from other disciplines (e.g., big-data processing, computational biology, the most advanced applications in astroinformatics) and learning to pose the same physical questions in different, future-proof ways, or even daring to imagine questions that will be tractable only with future machines. The broad objective of this study was to imagine how evolved or rethought data analysis algorithms and source-modeling codes will perform the LISA science analysis on the computers of the future, with the hope of guiding LISA science and data analysis research and development in the years to come.
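To make the kind of reformulation we have in mind concrete, consider expressing an analysis as pure, array-oriented functions that a compiler can map onto whatever accelerators are available. The sketch below is our own illustration, not a method from this study: it assumes a toy monochromatic waveform and white Gaussian noise, with all function and variable names hypothetical, and shows how a Gaussian log-likelihood can be batched over thousands of candidate sources at once using JAX's vmap and jit.

    # Illustrative sketch only: toy sinusoidal signal model, white noise,
    # hypothetical names throughout. It demonstrates batching a likelihood
    # over a large bank of candidate sources, the hardware-agnostic style
    # of computation discussed in the text.
    import jax
    import jax.numpy as jnp

    def template(params, t):
        """Toy monochromatic waveform: amplitude, frequency, phase."""
        amp, freq, phase = params
        return amp * jnp.sin(2.0 * jnp.pi * freq * t + phase)

    def log_likelihood(params, t, data, sigma):
        """Gaussian log-likelihood of `data` given one template (white noise)."""
        residual = data - template(params, t)
        return -0.5 * jnp.sum((residual / sigma) ** 2)

    # Batch the same pure function over a bank of parameter sets; jit
    # compiles it once for whatever accelerator (CPU/GPU/TPU) is present.
    batched_loglike = jax.jit(jax.vmap(log_likelihood, in_axes=(0, None, None, None)))

    t = jnp.linspace(0.0, 1.0, 4096)
    key, noise_key, bank_key = jax.random.split(jax.random.PRNGKey(0), 3)
    true_params = jnp.array([1.0, 10.0, 0.3])
    data = template(true_params, t) + 0.1 * jax.random.normal(noise_key, t.shape)

    # 10,000 candidate sources evaluated in a single vectorized call.
    bank = jax.random.uniform(bank_key, (10_000, 3)) * jnp.array([2.0, 20.0, 2 * jnp.pi])
    logls = batched_loglike(bank, t, data, 0.1)
    print(logls.shape)  # (10000,)

Written this way, the same code runs unchanged on CPUs, GPUs, or TPUs: the hardware-specific tuning is delegated to the compiler rather than baked into the algorithm, which is one plausible route to the future-proofing argued for above.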