Electronic catalog: Hayrapetyan, A. - Portable Acceleration of CMS Computing Workflows with Coprocessors as a Service
Article
Author: Hayrapetyan, A.
Computing and Software for Big Science [Electronic resource]: Portable Acceleration of CMS Computing Workflows with Coprocessors as a Service
n.d.
No ISBN
Article
Hayrapetyan, A.
Portable Acceleration of CMS Computing Workflows with Coprocessors as a Service / A. Hayrapetyan, S. Afanasiev, D. Budkouski, I. Golutvin, I. Gorbunov, V. Karjavine, V. Korenkov, N. Krasnikov, A. Lanev, A. Malakhov, V. Matveev, V. Palichik, V. Perelygin, M. Savina, V. Shalaev, S. Shmatov, S. Shulha, V. Smirnov, O. Teryaev, N. Voytishin, B. S. Yuldashev, A. Zarubin, I. Zhizhin, Z. Tsamalaidze, [CMS Collab.] // Computing and Software for Big Science [Electronic resource]. – 2024. – Vol. 8, No. 1. – P. 17. – URL: https://doi.org/10.1007/s41781-024-00124-1. – Bibliogr.: 120.
Computing demands for large scientific experiments, such as the CMS experiment at the CERN LHC, will increase dramatically in the next decades. To complement the future performance increases of software running on central processing units (CPUs), explorations of coprocessor usage in data processing hold great potential and interest. Coprocessors are a class of computer processors that supplement CPUs, often improving the execution of certain functions due to architectural design choices. We explore the approach of Services for Optimized Network Inference on Coprocessors (SONIC) and study the deployment of this as-a-service approach in large-scale data processing. In the studies, we take a data processing workflow of the CMS experiment and run the main workflow on CPUs, while offloading several machine learning (ML) inference tasks onto either remote or local coprocessors, specifically graphics processing units (GPUs). With experiments performed at Google Cloud, the Purdue Tier-2 computing center, and combinations of the two, we demonstrate the acceleration of these ML algorithms individually on coprocessors and the corresponding throughput improvement for the entire workflow. This approach can be easily generalized to different types of coprocessors and deployed on local CPUs without decreasing throughput. We emphasize that the SONIC approach enables high coprocessor usage and the portability to run workflows on different types of coprocessors.
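For illustration, a minimal sketch of the client side of the as-a-service pattern described in the abstract: the CPU-bound processing job ships ML input tensors to a remote or local GPU-backed inference server and receives the outputs back. The sketch assumes an NVIDIA Triton Inference Server endpoint and uses hypothetical model and tensor names ("particlenet", "input__0", "output__0"); it is not the production SONIC client used in CMSSW.

# Minimal client-side sketch of "inference as a service": the event-processing
# job keeps running on CPU and only ships ML inputs to a GPU-backed inference
# server. Assumes (hypothetically) a Triton server reachable at SERVER_URL that
# serves a model "particlenet" with one FP32 input "input__0" and one output
# "output__0".
import numpy as np
import tritonclient.grpc as grpcclient

SERVER_URL = "localhost:8001"   # hypothetical gRPC endpoint of the inference server
MODEL_NAME = "particlenet"      # hypothetical model name

def run_inference(batch: np.ndarray) -> np.ndarray:
    client = grpcclient.InferenceServerClient(url=SERVER_URL)

    # Describe the input tensor and attach the data prepared on the CPU.
    infer_input = grpcclient.InferInput("input__0", list(batch.shape), "FP32")
    infer_input.set_data_from_numpy(batch.astype(np.float32))

    # Request only the output tensor the workflow needs.
    requested = grpcclient.InferRequestedOutput("output__0")

    # Blocking call for simplicity; a SONIC-style client issues this
    # asynchronously so the CPU can run other modules while the GPU works.
    result = client.infer(model_name=MODEL_NAME,
                          inputs=[infer_input],
                          outputs=[requested])
    return result.as_numpy("output__0")

if __name__ == "__main__":
    dummy_batch = np.random.rand(4, 100, 16).astype(np.float32)
    print(run_inference(dummy_batch).shape)

Because the client only sends tensors over the network, the same workflow can be pointed at different coprocessor backends (GPUs, FPGAs, or a CPU fallback) by changing the server, which is the portability the abstract emphasizes.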
ОИЯИ = ОИЯИ (JINR), 2024
Subject (articles, preprints) = С 346.2в - Proton-proton interactions
Subject (articles, preprints) = Ц 840 в - Programs for experimental data processing and control of physics facilities