MPI_XSTAR: MPI-based Parallelization of the XSTAR Photoionization Program

Ashkbiz Danehkar,
Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA

Michael A. Nowak,
Massachusetts Institute of Technology, Kavli Institute for Astrophysics and Space Research, Cambridge, MA 02139, USA

Julia C. Lee,
Harvard John A. Paulson School of Engineering and Applied Sciences, 29 Oxford Street, Cambridge, MA 02138, USA

Randall K. Smith
Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA

Date: Received 2017 October 31; accepted 2017 November 21; published 2017 December 29


We describe a program for the parallel implementation of multiple runs of XSTAR, a photoionization code used to predict the physical properties of an ionized gas from its emission and/or absorption lines. The parallelization program, called MPI_XSTAR, has been developed and implemented in C++ using the Message Passing Interface (MPI) protocol, a conventional standard for parallel computing. We have benchmarked parallel multiprocessing executions of XSTAR, using MPI_XSTAR, against a serial execution of XSTAR, in terms of parallelization speedup and computing resource efficiency. Our experience indicates that the parallel execution runs significantly faster than the serial execution; however, the efficiency in terms of computing resource usage decreases as the number of processors used in the parallel computation increases.
Keywords: XSTAR - Message Passing Interface - Parallel Computing - High-Performance Computing - X-rays: galaxies - quasars: absorption lines - X-rays: binaries
Journal Reference: A. Danehkar, M. A. Nowak, J. C. Lee, and R. K. Smith. Publications of the Astronomical Society of the Pacific, 130:024501, 2018. doi:10.1088/1538-3873/aa9dff