Software
Data Management
Paper Title Page
TUPDP071 Integrating Information for Assessment and Optimising
 
  • R.A. Spann
    SARAO, Cape Town, South Africa
  • S.N. Hulme
    SALT, Cape Town, South Africa
  • N.J. Koopman
    Self Employment, Private address, USA
 
  Data is ubiquitous; creating information and knowledge from that data is time consuming. In the operations environment, the requirement to access information from multiple data sources is critical for decision making across a diverse set of issues, from commissioning to stable operations to engineering upgrades. This has driven the need to access data from disparate data sources in a cohesive and coherent manner. We discuss the motivation for this novel way of packaging data; how it not only solves the integration of data sources but also allows improved traceability of changes and enables cross-schema/organisation information exploration and integration; and how it provides pre-wrangled data. We present an application- and technology-independent information structure and the framework for its implementation.
Poster TUPDP071 [31.128 MB]
 
WE3BCO01 Modular and Scalable Archiving for EPICS and Other Time Series Using ScyllaDB and Rust 1008
 
  • D. Werder, T. Humar
    PSI, Villigen PSI, Switzerland
 
  At PSI we currently run too many different products with the common goal of archiving timestamped data. These include the EPICS Channel Archiver as well as the Archiver Appliance for EPICS IOCs, a buffer storage for beam-synchronous data at SwissFEL, and more. This number of monolithic solutions is too large to maintain, and they overlap in functionality. Each solution brings its own storage engine, file format and centralized design, which is hard to scale. In this talk I report on how we factored the system into modular components with clean interfaces. At the core, the different storage engines and file formats have been replaced by ScyllaDB, an open-source product with enterprise support and remarkable adoption in the industry. We gain from its distributed, fault-tolerant and scalable design. The ingest of data into ScyllaDB is factored into components according to the different protocols of the sources, e.g. Channel Access. Here we build upon the Rust language and achieve robust, maintainable and performant services. One interface to access and process the recorded data is the HTTP retrieval service. This service offers, e.g., search among the channels by various criteria, as well as full event data and aggregated, binned data in either JSON or binary format. This service can also run user-defined data transformations and act as a source for Grafana for a first view into recorded channel data. Our setup for SwissFEL ingests ~370k EPICS updates/s from ~220k PVs (scalar and waveform) with rates between 0.1 and 100 Hz.
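
As an editorial illustration of how a client might query such an HTTP retrieval service for binned data, here is a minimal Python sketch; the base URL, endpoint path and parameter names are invented assumptions, not the actual PSI API.

```python
import requests

# Hypothetical retrieval endpoint and parameter names, for illustration only;
# the actual PSI service will differ.
BASE = "https://data-api.example.org/api"

def fetch_binned(channel: str, start: str, end: str, bins: int = 500) -> dict:
    """Ask the retrieval service for aggregated/binned data as JSON."""
    resp = requests.get(
        f"{BASE}/binned",
        params={"channel": channel, "start": start, "end": end, "bins": bins},
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = fetch_binned("S10BC01-DBPM010:Q1", "2023-10-01T00:00:00Z",
                        "2023-10-01T01:00:00Z")
    print(data)
```
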
Slides WE3BCO01 [1.179 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3BCO01  
About • Received ※ 04 October 2023 — Revised ※ 09 November 2023 — Accepted ※ 14 December 2023 — Issued ※ 14 December 2023
 
WE3BCO03 Data Management for Tracking Optic Lifetimes at the National Ignition Facility 1012
 
  • R.D. Clark, L.M. Kegelmeyer
    LLNL, Livermore, California, USA
 
  Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
The National Ignition Facility (NIF), the most energetic laser in the world, employs over 9000 optics to reshape, amplify, redirect, smooth, focus, and convert the wavelength of laser light as it travels along 192 beamlines. Underlying the management of these optics is an extensive Oracle database storing details of the entire life of each optic, from the time it leaves the vendor to the time it is retired. This journey includes testing and verification, preparing, installing, monitoring, removing, and in some cases repairing and re-using the optics. This talk will address the data structures and processes that enable storing information about each step, such as identifying where an optic is in its lifecycle and tracking damage through time. We will describe tools for reporting status and enabling key decisions, such as which damage sites should be blocked or repaired and which optics should be exchanged. Managing relational information and ensuring its integrity is key to managing the status and inventory of optics for NIF.
LLNL Release Number: LLNL-ABS-847598
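
To make the lifecycle-tracking idea concrete, here is a loose relational sketch in Python/SQLite; every table and column name is invented for illustration and does not reflect the actual NIF Oracle schema.

```python
import sqlite3

# Illustrative only: a minimal relational sketch of optic lifecycle tracking.
ddl = """
CREATE TABLE optic (
    optic_id     INTEGER PRIMARY KEY,
    vendor       TEXT,
    received_on  TEXT,
    status       TEXT CHECK (status IN ('testing','installed','removed',
                                        'in_repair','retired'))
);
CREATE TABLE installation (
    optic_id     INTEGER REFERENCES optic(optic_id),
    beamline     INTEGER,        -- 1..192
    installed_on TEXT,
    removed_on   TEXT
);
CREATE TABLE damage_site (
    site_id      INTEGER PRIMARY KEY,
    optic_id     INTEGER REFERENCES optic(optic_id),
    first_seen   TEXT,
    size_um      REAL,
    disposition  TEXT            -- e.g. 'monitor', 'block', 'repair'
);
"""
con = sqlite3.connect(":memory:")
con.executescript(ddl)
# A key-decision query: which optics carry damage sites marked for repair?
rows = con.execute(
    "SELECT DISTINCT o.optic_id FROM optic o "
    "JOIN damage_site d ON d.optic_id = o.optic_id "
    "WHERE d.disposition = 'repair'"
).fetchall()
print(rows)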
 
Slides WE3BCO03 [2.379 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3BCO03  
About • Received ※ 26 September 2023 — Revised ※ 09 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 24 October 2023
 
WE3BCO04 Improving Observability of the SCADA Systems Using Elastic APM, Reactive Streams and Asynchronous Communication 1016
 
  • I. Khokhriakov
    University of California, San Diego (UCSD), La Jolla, California, USA
  • V. Mazalova
    CFEL, Hamburg, Germany
  • O. Merkulova
    IK, Moscow, Russia
 
  As modern control systems grow in complexity, ensuring observability and traceability becomes more challenging. To meet this challenge, we present a novel solution that seamlessly integrates with multiple SCADA frameworks to provide end-to-end visibility into complex system interactions. Our solution utilizes Elastic APM to monitor and trace the performance of system components, allowing for real-time analysis and diagnosis of issues. In addition, our solution is built using reactive design principles and asynchronous communication, enabling it to scale to meet the demands of large, distributed systems. This presentation will describe our approach and discuss how it can be applied to various use cases, including particle accelerators and other scientific facilities. We will also discuss the benefits of our solution, such as improved system observability and traceability, reduced downtime, and better resource allocation. We believe that our approach represents a significant step forward in the development of modern control systems, and we look forward to sharing our work with the community at ICALEPCS 2023.
* I. Khokhriakov et al., 'A novel solution for controlling hardware components of accelerators and beamlines', Journal of Synchrotron Radiation, Apr. 2022.
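
For orientation, here is a hedged sketch of instrumenting one SCADA-side request with the Elastic APM Python agent; the service name, server URL and the body of read_attribute are placeholders, not the authors' actual integration.

```python
import elasticapm

# Placeholder service name and APM server URL.
client = elasticapm.Client(service_name="scada-gateway",
                           server_url="http://apm.example.org:8200")

@elasticapm.capture_span()
def read_attribute(device: str, attribute: str) -> float:
    # ... talk to the underlying control system here ...
    return 42.0

def handle_request(device: str, attribute: str) -> float:
    client.begin_transaction("request")
    try:
        value = read_attribute(device, attribute)
        client.end_transaction(f"GET {device}/{attribute}", "success")
        return value
    except Exception:
        client.end_transaction(f"GET {device}/{attribute}", "failure")
        raise

print(handle_request("sys/tg_test/1", "double_scalar"))
```
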
 
Slides WE3BCO04 [3.377 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3BCO04  
About • Received ※ 29 September 2023 — Revised ※ 14 November 2023 — Accepted ※ 19 December 2023 — Issued ※ 22 December 2023
 
WE3BCO05 The CMS Detector Control Systems Archiving Upgrade 1022
 
  • W. Karimeh
    CERN, Meyrin, Switzerland
 
  The CMS experiment relies on its Detector Control System (DCS) to monitor and control over 10 million channels, ensuring a safe and operable detector that is ready to take physics data. The data is archived in the CMS Oracle conditions database, which is accessed by operators, trigger and data acquisition systems. In the upcoming extended year-end technical stop of 2023/2024, the CMS DCS software will be upgraded to the latest WinCC-OA release, which will utilise the SQLite database and the Next Generation Archiver (NGA), replacing the current Raima database and RDB manager. Taking advantage of this opportunity, CMS has developed its own version of the NGA backend to improve its DCS database interface. This paper presents the CMS DCS NGA backend design and mechanism to improve the efficiency of the read-and-write data flow. This is achieved by simplifying the current Oracle conditions schema and introducing a new caching mechanism. The proposed backend will enable faster data access and retrieval, ultimately improving the overall performance of the CMS DCS.  
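
A toy sketch of the caching idea in the write path, assuming nothing about the actual CMS NGA backend: unchanged values are skipped and the remaining rows are buffered for bulk writes, which is one common way to cut database round trips.

```python
# Toy illustration of an archiver write path with caching; not CMS code.
class CachingArchiver:
    def __init__(self, flush_size: int = 100):
        self.last_value = {}    # channel -> last archived value
        self.pending = []       # rows waiting for a bulk INSERT
        self.flush_size = flush_size

    def archive(self, channel: str, timestamp: float, value) -> None:
        if self.last_value.get(channel) == value:
            return              # unchanged value: no database round trip
        self.last_value[channel] = value
        self.pending.append((channel, timestamp, value))
        if len(self.pending) >= self.flush_size:
            self.flush()

    def flush(self) -> None:
        # In a real backend this would be one bulk write to the
        # conditions database.
        print(f"bulk-writing {len(self.pending)} rows")
        self.pending.clear()

archiver = CachingArchiver(flush_size=2)
archiver.archive("HV:channel1", 1.0, 1500.0)
archiver.archive("HV:channel1", 2.0, 1500.0)   # skipped (unchanged)
archiver.archive("HV:channel2", 2.5, 42.0)     # triggers a flush
```
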
Slides WE3BCO05 [1.920 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3BCO05  
About • Received ※ 06 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 14 December 2023
 
WE3BCO06 Assonant: A Beamline-Agnostic Event Processing Engine for Data Collection and Standardization 1025
 
  • P.B. Mausbach, E.X. Miqueles, A. Pinto
    LNLS, Campinas, Brazil
 
  Synchrotron radiation facilities comprise beamlines designed to perform a wide range of X-ray experimental techniques, which require complex instruments to monitor thermodynamic variables, sample-related variables, and more. Thus, synchrotron beamlines can produce heterogeneous sets of data and metadata, hereafter referred to as data, which pose several challenges to standardization. For open science and the FAIR principles, such standardization is paramount for research reproducibility, besides accelerating the development of scalable and reusable data-driven solutions. To address this issue, Assonant was devised to collect and standardize the data produced at beamlines of Sirius, the Brazilian fourth-generation synchrotron light source. This solution enables a NeXus-compliant, technique-centric data standard at Sirius transparently for beamline teams by removing the burden of standardization tasks from them and providing a unified standardization solution for several techniques at Sirius. Assonant implements a software interface to abstract data format-related specificities and to send the produced data to an event-driven infrastructure composed of streaming processing and microservices, able to transform the data flow according to NeXus*. This paper presents the development process of Assonant, the strategy adopted to standardize beamlines at different operating stages, and challenges faced during the standardization process for macromolecular crystallography and imaging data at Sirius.
* M. Könnecke et al., 'The NeXus data format', Journal of Applied Crystallography, vol. 48, no. 1, pp. 301-305, 2015.
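
For readers unfamiliar with NeXus, here is a minimal sketch of a NeXus-style HDF5 layout written with h5py; the groups and fields shown are illustrative, not Assonant's actual output.

```python
import h5py

# Minimal NeXus-flavoured HDF5 file: groups carry NX_class attributes and
# datasets carry units. Field values are invented examples.
with h5py.File("scan.nxs", "w") as f:
    entry = f.create_group("entry")
    entry.attrs["NX_class"] = "NXentry"
    entry["definition"] = "NXmx"          # technique-centric application definition
    instrument = entry.create_group("instrument")
    instrument.attrs["NX_class"] = "NXinstrument"
    sample = entry.create_group("sample")
    sample.attrs["NX_class"] = "NXsample"
    sample["temperature"] = 293.15
    sample["temperature"].attrs["units"] = "K"
```
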
 
Slides WE3BCO06 [4.909 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3BCO06  
About • Received ※ 05 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 18 December 2023  
 
WE3BCO07 Extending the ICAT Metadata Catalogue to New Scientific Use Cases 1033
 
  • A. Götz, M. Bodin, A. De Maria Antolinos, M. Gaonach
    ESRF, Grenoble, France
  • M. AlMohammad, S.A. Matalgah
    SESAME, Allan, Jordan
  • P. Austin, V. Bozhinov, L.E. Davies, A. Gonzalez Beltran, K.S. Phipps
    STFC/RAL/SCD, Didcot, United Kingdom
  • R. Cabezas Quirós
    ALBA-CELLS, Cerdanyola del Vallès, Spain
  • R. Krahl
    HZB, Berlin, Germany
  • A. Pinto
    LNLS, Campinas, Brazil
  • K. Syder
    DLS, Oxfordshire, United Kingdom
 
  The ICAT metadata catalogue is a flexible solution for managing scientific metadata and data from a wide variety of domains following the FAIR data principles. This paper presents an update on recent developments of the ICAT metadata catalogue and the latest status of the ICAT collaboration. ICAT was originally developed by the UK Science and Technology Facilities Council (STFC) to manage the scientific data of the ISIS Neutron and Muon Source and Diamond Light Source. They have since been joined by a number of other institutes, including ESRF, HZB, SESAME, and ALBA, who together now form the ICAT Collaboration [1]. ICAT has been used to manage petabytes of scientific data for ISIS, DLS, ESRF, HZB, and in the future SESAME and ALBA, and to make these data FAIR. We will present the latest version of the ICAT core as well as the new user interfaces, DataGateway and DataHub, and extensions to ICAT developed by the community: free-text searching, a common search interface across photon and neutron catalogues, a protocol-based interface that makes the metadata available for findability, electronic logbooks, sample tracking, and web-based data and domain-specific viewers. Finally, recent developments using ICAT to build applications for processed data with rich metadata in the fields of small-angle scattering, macromolecular crystallography and cryo-electron microscopy will be described. [1] https://icatproject.org
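
As a pointer for readers, here is a hedged sketch of querying an ICAT server with the python-icat client; the server URL, authenticator plugin, credentials and investigation name are placeholders for a real installation.

```python
import icat

# Placeholders: WSDL URL, authenticator and credentials of a real ICAT server.
client = icat.Client("https://icat.example.org/ICATService/ICAT?wsdl")
client.login("simple", {"username": "reader", "password": "secret"})

# JPQL-style search: all datasets of one investigation.
datasets = client.search(
    "SELECT ds FROM Dataset ds JOIN ds.investigation inv "
    "WHERE inv.name = 'MX-2023-001'"
)
for ds in datasets:
    print(ds.name)
```
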
Slides WE3BCO07 [7.888 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3BCO07  
About • Received ※ 05 October 2023 — Revised ※ 23 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 14 December 2023
 
WE3BCO08 Efficient and Automated Metadata Recording and Viewing for Scientific Experiments at MAX IV 1041
 
  • D. van Dijken, V. Da Silva, M. Eguiraun, V. Hardion, J.M. Klingberg, M. Leorato, M. Lindberg
    MAX IV Laboratory, Lund University, Lund, Sweden
 
  With the advancements in beamline instrumentation, synchrotron research facilities have seen significant improvement: the detectors used today can generate thousands of frames within seconds. Consequently, an organized and adaptable framework is essential to facilitate efficient access to and assessment of the enormous volumes of data produced. Our contribution presents a metadata management solution recently implemented at MAX IV, which automatically retrieves and records metadata from the Tango devices relevant to the current experiment. The solution includes user-selected scientific metadata and predefined defaults related to the beamline setup, which are integrated into the Sardana control system and automatically recorded during each scan via the SciFish [1] library. The recorded metadata is stored in the SciCat [2] database, which can be accessed through a web-based interface called Scanlog [3]. The interface, built on ReactJS, allows users to easily sort, filter, and extract important information from the recorded metadata. The tool also provides real-time access to metadata, enabling users to monitor experiments and export data for post-processing. These new software tools ensure that recorded data is findable, accessible, interoperable and reusable (FAIR [4]) for many years to come. Collaborations are ongoing to develop these tools at other particle accelerator research facilities.
[1] https://gitlab.com/MaxIV/lib-maxiv-scifish
[2] https://scicatproject.github.io/
[3] https://gitlab.com/MaxIV/svc-maxiv-scanlog
[4] https://www.nature.com/articles/sdata201618
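
To illustrate the harvesting step, here is a hedged PyTango sketch of reading beamline metadata from Tango devices at scan time, in the spirit of what SciFish automates inside Sardana; the device and attribute names below are invented examples.

```python
import tango  # PyTango

# Invented example sources: metadata key -> (device, attribute).
SOURCES = {
    "ring_current": ("r3/dia/dcct-01", "Current"),
    "undulator_gap": ("b303a/ctl/undulator-01", "Gap"),
    "sample_temperature": ("b303a/ctl/cryo-01", "Temperature"),
}

def snapshot() -> dict:
    """Read one value per configured source and return a metadata dict."""
    meta = {}
    for key, (device, attribute) in SOURCES.items():
        reading = tango.DeviceProxy(device).read_attribute(attribute)
        meta[key] = {"value": reading.value, "time": reading.time.totime()}
    return meta

print(snapshot())
```
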
 
Slides WE3BCO08 [1.914 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3BCO08  
About • Received ※ 06 October 2023 — Revised ※ 23 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 16 December 2023
 
WE3BCO09 IR of FAIR - Principles at the Instrument Level 1046
 
  • G. Günther, O. Mannix, V. Serve
    HZB, Berlin, Germany
  • S. Baunack
    KPH, Mainz, Germany
  • L. Capozza, F. Maas, M.C. Wilfert
    HIM, Mainz, Germany
  • O. Freyermuth
    Uni Bonn, Bonn, Germany
  • P. Gonzalez-Caminal, S. Karstensen, A. Lindner, I. Oceano, C. Schneide, K. Schwarz, T. Schörner-Sadenius, L.-M. Stein
    DESY, Hamburg, Germany
  • B. Gou
    IMP/CAS, Lanzhou, People’s Republic of China
  • J. Isaak, S. Typel
    TU Darmstadt, Darmstadt, Germany
  • A.K. Mistry
    GSI, Darmstadt, Germany
 
  Awareness of the need for FAIR data management has increased in recent years, but examples of how to achieve it are often missing. Focusing on the large-scale instrument A4 at the MAMI accelerator, we transfer findings of the EMIL project at the BESSY synchrotron* to improve raw data, i.e. the primary output stored on a long-term basis, according to the FAIR principles. Here, the instrument control software plays a key role as the central authority that starts measurements and orchestrates connected (meta)data-taking processes. In regular discussions we incorporate the experiences of a wider community and work to optimize instrument output through various measures, ranging from conversion to machine-readable formats, through metadata enrichment, to additional files creating scientific context. The improvements have already been applied to next-generation instruments currently under construction and could serve as a general guideline for publishing data sets.
*G. Günther et al., 'FAIR meets EMIL: Principles in Practice', in Proc. ICALEPCS2021, https://doi.org/10.18429/JACoW-ICALEPCS2021-WEBL05
 
Slides WE3BCO09 [1.400 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3BCO09  
About • Received ※ 04 October 2023 — Revised ※ 24 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 15 December 2023
 
THMBCMO01 New Developments on HDB++, the High-performance Data Archiving for Tango Controls 1190
 
  • D. Lacoste, R. Bourtembourg
    ESRF, Grenoble, France
  • J. Forsberg
    MAX IV Laboratory, Lund University, Lund, Sweden
  • T. Juerges
    SKAO, Macclesfield, United Kingdom
  • J.J.D. Mol
    ASTRON, Dwingeloo, The Netherlands
  • L. Pivetta, G. Scalamera
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
  • S. Rubio-Manrique
    ALBA-CELLS, Cerdanyola del Vallès, Spain
 
  The Tango HDB++ project is a high-performance, event-driven archiving system which stores data with microsecond-resolution timestamps. HDB++ supports many different backends, including MySQL/MariaDB, TimescaleDB (a time-series PostgreSQL extension), and soon SQLite. Building on its flexible design, the latest developments have made supporting new backends even easier. HDB++ keeps improving with new features such as batch insertion, and it has become easier to install or set up in a testing environment, with ready-to-use Docker images striving to simplify all steps of deployment. The HDB++ project is not only a data storage installation but a full ecosystem to manage data, query it, and get the information needed. In this effort, many tools were developed to put a powerful backend to its proper use and get the best out of the stored data. In this paper we also present the latest developments in data extraction, from low-level libraries to web viewer integration such as Grafana, pointing out the strategies in use, such as data decimation and compression, to help deliver data as fast as possible.
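
As one possible extraction path, here is a hedged sketch of querying an attribute's history straight from a TimescaleDB backend; the table and column names loosely follow the hdbpp-timescale layout but may differ per installation, and host, credentials and attribute name are placeholders.

```python
import psycopg2

# Placeholders: connection details of an HDB++ TimescaleDB backend.
conn = psycopg2.connect(host="hdbpp.example.org", dbname="hdb",
                        user="reader", password="secret")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT d.data_time, d.value_r
        FROM att_scalar_devdouble d
        JOIN att_conf c ON c.att_conf_id = d.att_conf_id
        WHERE c.att_name LIKE %s
          AND d.data_time BETWEEN %s AND %s
        ORDER BY d.data_time
        """,
        ("%sr/d-ctrl/1/current", "2023-10-01", "2023-10-02"),
    )
    for ts, value in cur.fetchall():
        print(ts, value)
```
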
Slides THMBCMO01 [0.926 MB]
Poster THMBCMO01 [0.726 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THMBCMO01  
About • Received ※ 05 October 2023 — Revised ※ 24 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 16 December 2023
 
THMBCMO02 Enhancing Data Management with SciCat: A Comprehensive Overview of a Metadata Catalogue for Research Infrastructures 1195
 
  • C. Minotti, A. Ashton, S.E. Bliven, S. Egli
    PSI, Villigen PSI, Switzerland
  • F.B. Bolmsten, M. Novelli, T.S. Richter
    ESS, Copenhagen, Denmark
  • M. Leorato
    MAX IV Laboratory, Lund University, Lund, Sweden
  • D. McReynolds
    LBNL, Berkeley, California, USA
  • L.A. Shemilt
    RFI, Didcot, United Kingdom
 
  As the volume of data continues to increase, the role of data management becomes even more crucial, and tools that facilitate data management are essential to cope with the ever-growing amount of data. SciCat is a metadata catalogue that utilizes a NoSQL database, enabling it to accept heterogeneous data and customize it to meet the unique needs of scientists and facilities. With its API-centric architecture, SciCat simplifies the integration process with existing infrastructures, allowing for easy access to its capabilities and seamless integration into workflows, including cloud-based systems. This session aims to provide a comprehensive introduction to SciCat, a metadata catalogue started as a collaboration between PSI, ESS, and MAX IV which has since been adopted by numerous Research Infrastructures (RIs) worldwide. The presentation will delve into the guiding principles that underpin the project and the challenges it endeavours to address. Moreover, it will showcase the features that have been implemented, from the ingestion of data to its eventual publication. Given the growing importance of the FAIR (Findable, Accessible, Interoperable, and Reusable) principles, the presentation will touch upon how their uptake is facilitated and will also provide an overview of the work carried out under the Horizon 2020 EU grant for FAIR.
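
To illustrate the API-centric access pattern, here is a hedged sketch of listing datasets through SciCat's REST API; the host and token are placeholders, and the Loopback-style "filter" syntax shown may differ between SciCat backend versions.

```python
import requests

SCICAT = "https://scicat.example.org/api/v3"  # placeholder host

resp = requests.get(
    f"{SCICAT}/Datasets",
    params={"filter": '{"where": {"ownerGroup": "proposal-2023-042"}, "limit": 10}'},
    headers={"Authorization": "Bearer replace-me"},
    timeout=30,
)
resp.raise_for_status()
for dataset in resp.json():
    print(dataset["pid"], dataset["datasetName"])
```
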
Slides THMBCMO02 [5.158 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THMBCMO02  
About • Received ※ 05 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 20 December 2023
 
THPDP014 SECoP and SECoP@HMC - Metadata in the Sample Environment Communication Protocol 1322
 
  • K. Kiefer, B. Klemke, L. Rossa, P. Wegmann
    HZB, Berlin, Germany
  • G. Brandl, E. Faulhaber, A. Zaft
    MLZ, Garching, Germany
  • N. Ekström, A. Pettersson
    ESS, Lund, Sweden
  • J. Kotanski, T. Kracht
    DESY, Hamburg, Germany
  • M. Zolliker
    PSI, Villigen PSI, Switzerland
 
  Funding: The project SECoP@HMC receives funding by the Helmholtz Association’s Initiative and Networking Fund (IVF).
The integration of sample environment (SE) equipment in x-ray and neutron experiments is a complex challenge, both in the physical world and in the digital world. Different experiment control software offers different interfaces for the connection of SE equipment. Therefore, it is time-consuming to integrate new SE or to share SE equipment between facilities. To tackle this problem, the International Society for Sample Environment (ISSE) [1] developed the Sample Environment Communication Protocol (SECoP) to standardize the communication between instrument control software and SE equipment [2]. SECoP offers, on the one hand, a generalized way to control SE equipment. On the other hand, SECoP holds the possibility to transport SE metadata in a well-defined way. In addition, SECoP provides a machine-readable self-description of the SE equipment, which enables fully automated integration into the instrument control software and into the processes for data storage. Using SECoP as a common standard for controlling SE equipment and generating SE metadata will save resources and intrinsically give the opportunity to supply standardized, FAIR-compliant SE metadata. It will also supply a well-defined interface for user-provided SE equipment, for equipment shared by different research facilities and for industry. In this article we show how SECoP can help to provide a meaningful and complete set of metadata for SE equipment, and we present SECoP and the SECoP@HMC project supported by the Helmholtz Metadata Collaboration.
*K. Kiefer et al., 'An introduction to SECoP - the sample environment communication protocol', Journal of Neutron Research, vol. 21, no. 3-4, pp. 181-195, 2020.
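
For a feel of the protocol, here is a minimal client-side sketch of the SECoP wire exchange: request the machine-readable self-description, then read one parameter. Host, port and the module/parameter names are placeholders, and the framing (one message per line, "describe" answered by "describing <specifier> <json>") is sketched from the published protocol description.

```python
import json
import socket

def secop_request(sock: socket.socket, line: str) -> str:
    """Send one SECoP message and return the one-line reply."""
    sock.sendall((line + "\n").encode())
    buf = b""
    while not buf.endswith(b"\n"):
        buf += sock.recv(4096)
    return buf.decode().strip()

with socket.create_connection(("secop-node.example.org", 10767)) as s:
    # Reply has the form: "describing <specifier> <json-description>"
    action, specifier, payload = secop_request(s, "describe").split(" ", 2)
    print("modules:", list(json.loads(payload).get("modules", {})))
    print(secop_request(s, "read cryo:value"))
```
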
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP014  
About • Received ※ 06 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 22 December 2023
 
THPDP017 A Data Acquisition Middle Layer Server with Python Support for Linac Operation and Experiments Monitoring and Control 1330
 
  • V. Rybnikov, A. Sulc
    DESY, Hamburg, Germany
 
  This paper presents online anomaly detection for low-level radio frequency (LLRF) cavities running on the FLASH/XFEL DAQ system*. The code is run by a DAQ middle layer (ML) server, which has online access to all collected data. The ML server executes a Python script that runs a pre-trained machine learning model on every shot in the FLASH/XFEL machine. We discuss the challenges associated with real-time anomaly detection due to the high data rates generated by the RF cavities, and introduce the DAQ system pipeline and the algorithms used for online detection on arbitrary channels in our control system. The system’s performance is evaluated using real data from operational RF cavities. We also focus on the DAQ monitor server’s features and its implementation.
*A. Aghababyan et al., 'Multi-Processor Based Fast Data Acquisition for a Free Electron Laser and Experiments', IEEE Transactions on Nuclear Science, vol. 55, no. 1, pp. 256-260, February 2008.
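
A hedged sketch of the kind of per-shot Python hook such a middle layer server could execute: featurize an LLRF waveform and flag anomalies with a pre-trained model. The model file, channel name and event layout are invented; only the overall shape of the pipeline follows the paper.

```python
import numpy as np
from joblib import load

# Hypothetical pre-trained model, e.g. a scikit-learn IsolationForest.
model = load("llrf_anomaly_model.joblib")

def on_shot(event: dict) -> None:
    """Callback invoked once per machine shot with that shot's channel data."""
    trace = np.asarray(event["LLRF.AMPLITUDE"], dtype=float)
    features = np.array([[trace.mean(), trace.std(),
                          trace.max() - trace.min()]])
    if model.predict(features)[0] == -1:    # -1 marks an outlier
        print(f"anomaly at macropulse {event['macropulse']}")
```
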
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP017  
About • Received ※ 02 October 2023 — Revised ※ 25 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 20 December 2023
 
THPDP024 Automatic Configuration of Motors at the European XFEL 1358
 
  • F. Sohn, W. Ehsan, G. Giovanetti, D. Goeries, I. Karpics, K. Sukharnikov
    EuXFEL, Schenefeld, Germany
 
  The European XFEL (EuXFEL) scientific facility relies heavily on the SCADA control system Karabo* to configure and control a plethora of hardware devices. In this contribution a software solution for the automatic configuration of collections of like Karabo devices is presented. Parameter presets for the automatic configuration are stored in a central database. In particular, the tool is used to configure collections of single-axis motors, which is a recurring task at EuXFEL. To facilitate flexible experimental setups, motors are moved within the EuXFEL and reused at various locations in the operation of scientific instruments. A set of parameters has to be configured for each motor controller, depending on the controller and actuator model attached to a given programmable logic controller terminal and the location of the motor. Since manual configuration is time-consuming and error-prone for large numbers of devices, a database-driven configuration of motor parameters is desirable. The software tool allows users to assign and apply stored preset configurations to individual motors. Differences between the online configurations of the motors and the stored configurations are highlighted. Moreover, the software includes a "locking" feature to prevent motor usage after unintentional reconfigurations, which could lead to hardware damage.
* S. Hauf et al., 'The Karabo distributed control system', Journal of Synchrotron Radiation, vol. 26, no. 5, pp. 1448-1461, 2019.
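
An illustrative sketch of the database-driven configuration flow: look up the stored preset for a controller/actuator combination, highlight the differences against the online values, then apply them. All names and the motor interface below are invented stand-ins; the real tool operates on Karabo device collections.

```python
# Invented presets: (controller model, actuator model) -> parameters.
PRESETS = {
    ("el7041", "phytron-zss"): {"velocity": 2000, "acceleration": 500},
}

class FakeMotor:
    """Stand-in for an online motor device."""
    def __init__(self):
        self.config = {"velocity": 100, "acceleration": 500}
    def get(self, key):
        return self.config.get(key)
    def set(self, key, value):
        self.config[key] = value

def apply_preset(motor, controller: str, actuator: str) -> dict:
    preset = PRESETS[(controller, actuator)]
    diff = {k: v for k, v in preset.items() if motor.get(k) != v}
    for key, value in diff.items():
        print(f"{key}: {motor.get(key)} -> {value}")  # highlight the change
        motor.set(key, value)
    return diff

apply_preset(FakeMotor(), "el7041", "phytron-zss")
```
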
 
Poster THPDP024 [0.549 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP024  
About • Received ※ 05 October 2023 — Revised ※ 25 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 19 December 2023
 
THPDP036 Research on HALF Historical Data Archiver Technology 1394
 
  • X.K. Sun, D.D. Zhang
    USTC/NSRL, Hefei, Anhui, People’s Republic of China
  • H. Chen
    USTC, SNST, Anhui, People’s Republic of China
 
  The Hefei Advanced Light Facility (HALF) is a 2.2-GeV fourth-generation synchrotron radiation light source scheduled to start construction in Hefei, China in 2023. HALF comprises an injector, a 480-m diffraction-limited storage ring, and 10 beamlines for phase one. The HALF historical data archiver system is responsible for storing operation data for the entire facility, including the accelerator and beamlines. It is necessary to choose a high-performance database for the massive amounts of structured data generated by HALF. A fair test platform has been designed and built to test the performance of six databases commonly used in the accelerator field. The test metrics include read and write performance, availability, scalability, and software ecosystem. This paper introduces the design of the database test scheme, the construction of the test platform, and the future test plan in detail.
Poster THPDP036 [0.933 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP036  
About • Received ※ 28 September 2023 — Revised ※ 26 October 2023 — Accepted ※ 11 December 2023 — Issued ※ 12 December 2023
 
THPDP047 ELK Stack Deployment with Ansible 1411
 
  • T. Gatsi, X.P. Baloyi, J.L. Lekganyane, R.L. Schwartz
    SARAO, Cape Town, South Africa
 
  The 64-dish MeerKAT radio telescope, constructed in South Africa, is the largest and most sensitive radio telescope in the Southern Hemisphere until it is integrated into the Square Kilometre Array (SKA). The control and monitoring system for a radio astronomy project such as MeerKAT produces a large amount of data and logs that require proper handling. Viewing and analysing them to trace and track system issues, as well as to investigate technical software issues, requires going back in time to look for event occurrences. We therefore deployed an ELK software stack (Elasticsearch, Logstash, Kibana) using Ansible in order to have the capability to aggregate system process logs. We deploy the stack as a cluster comprising LXC containers running inside a Proxmox Virtual Environment, using Ansible as the software deployment tool. Each container in the cluster performs cluster duties such as deciding where to place index shards and when to move them. Each container is a data node, and together they make up the heart of the cluster. We deploy the stack as a cluster for load-balancing purposes. Logstash ingests, transforms and sends the data to Elasticsearch, which indexes and analyzes it and makes it searchable, and our operations team and other system users can visualize and analyze these logs on the Kibana GUI frontend.
Poster THPDP047 [0.503 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP047  
About • Received ※ 03 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 19 December 2023
 
THPDP048 SARAO Science Repository: Sustainable Use of MeerKAT Data 1415
 
  • Z. Kukuma, G. Coetzer, R.S. Kupa, C. Schollar
    SARAO, Cape Town, South Africa
 
  Funding: National Research Foundation (South Africa)
The South African Radio Astronomy Observatory (SARAO) is excited to announce the forthcoming release of its digital repository for managing and preserving astronomical data. The repository, built using the DSpace platform, will allow researchers to catalogue and discover research data in a standardised way, while Digital Object Identifiers (DOIs) minted through the DataCite service will ensure the unique identification and persistent citation of data. The data will be hosted on a Ceph archive, which provides reliable storage and efficient retrieval using the S3 protocol. We look forward to hosting science data from any scientist who has used SARAO instruments. Researchers will be able to apply to host their data on the SARAO digital repository service, which will be released in the coming month. This repository will serve as a critical resource for the astronomy community, providing easy access to valuable data for research and collaboration. With the increasing demand for digital preservation and data accessibility, we believe that the SARAO digital repository will set a standard for other astronomical institutions to follow. We are committed to ensuring that our data remains available and accessible for the long term, and we invite all interested researchers to participate in this exciting initiative.
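
For illustration, a hedged sketch of fetching archived data from a Ceph object store over the S3 protocol with boto3; the endpoint, credentials, bucket and object key are all placeholders.

```python
import boto3

# Placeholders: endpoint and credentials of the Ceph S3 gateway.
s3 = boto3.client(
    "s3",
    endpoint_url="https://archive.example.sarao.ac.za",
    aws_access_key_id="replace-me",
    aws_secret_access_key="replace-me",
)
# Download one archived object (invented bucket and key) to a local file.
s3.download_file("meerkat-science", "observation-1234/data.tar",
                 "observation-1234-data.tar")
```
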
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP048  
About • Received ※ 05 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 17 December 2023 — Issued ※ 22 December 2023
 
THPDP082 Teaching an Old Accelerator New Tricks 1545
 
  • D.J. Novak, K.J. Bunnell, C. Dickerson, D. Stanton
    ANL, Lemont, Illinois, USA
 
  Funding: This work was supported by the U.S. Department of Energy, under Contract No. DE-AC02-06CH11357. This research used resources of ANLs ATLAS facility, which is a DOE Office of Science User Facility.
The Argonne Tandem Linac Accelerator System (ATLAS) has been a National User Facility since 1985. In that time, many of the systems that help operators retrieve, modify, and store beamline parameters have not kept pace with the advancement of technology. Development of a new method of storing and retrieving beamline parameters resulted in the testing and installation of a time-series database as a potential replacement for the traditional relational database. InfluxDB was selected due to the availability of a self-hosted open-source version as well as its simple installation and setup. A program was written to periodically gather all accelerator parameters in the control system and store them in the time-series database. This resulted in over 13,000 distinct data points captured at 5-minute intervals. A second test captured 35 channels on a 1-minute cadence. Graphing of the captured data is done in Grafana, whose open-source version coexists well with InfluxDB as the back end. Grafana made visualizing the data simple and flexible. The testing has allowed the use of modern graphing tools to generate new insights into operating the accelerator, as well as opened the door to building large data sets suitable for Artificial Intelligence and Machine Learning applications.
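
A hedged sketch of the periodic gathering program's write side using the influxdb-client Python package: one point per control system channel. The URL, token, org, bucket and channel names are placeholders.

```python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholders: connection details of the InfluxDB instance.
client = InfluxDBClient(url="http://influx.example.org:8086",
                        token="replace-me", org="atlas")
write_api = client.write_api(write_options=SYNCHRONOUS)

readings = {"BM1:FIELD": 0.85321, "QD2:CURRENT": 112.4}   # invented channels
points = [Point("beamline_parameters").tag("channel", name).field("value", value)
          for name, value in readings.items()]
write_api.write(bucket="atlas-parameters", record=points)
```
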
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP082  
About • Received ※ 10 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 06 December 2023 — Issued ※ 13 December 2023
 
THPDP089 Centralized Logging and Alerts for EPICS-based Control Systems with Logstash and Grafana
 
  • K.R. Lauer
    SLAC, Menlo Park, California, USA
 
  Funding: This work is supported by Department of Energy contract DE-AC02-76SF00515.
Controls-focused centralized logging on the experimental side of the LCLS aims to bring together logging information from a variety of disparate sources into a single database for easy correlation and alerting. Our application of EPICS covers thousands of IOCs, dozens of Channel Access gateways, hundreds of PLCs and other physical devices, and numerous user-facing applications, all running simultaneously. Each of these elements has its own idiosyncrasies in terms of how log messages are generated, where they are stored (if at all), and what information they contain. Our centralized logging implementation routes messages from our most common sources to a Logstash instance which is configured to interpret each message and store the parsed information in a database. This system includes support for caput logs, Channel Access gateway put logs, messages generated by TwinCAT PLCs, user-facing Python applications, and the EPICS error logging facility. Aggregated logs can then be readily queried alongside EPICS Process Variable data in Grafana. Alerts can be easily configured by end users to provide notification of situations by way of Slack message and e-mail.
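
A hedged sketch of one way an application could route its logs to Logstash: one JSON document per line over TCP, matching a tcp input with a JSON-lines codec. The host, port and logger names are placeholders, not the actual LCLS configuration.

```python
import json
import logging
import socket
import time

class LogstashHandler(logging.Handler):
    """Send each log record as one JSON document per line over TCP."""
    def __init__(self, host: str, port: int):
        super().__init__()
        self.sock = socket.create_connection((host, port))

    def emit(self, record: logging.LogRecord) -> None:
        doc = {
            "@timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ",
                                        time.gmtime(record.created)),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        self.sock.sendall((json.dumps(doc) + "\n").encode())

log = logging.getLogger("example.app")
log.addHandler(LogstashHandler("logstash.example.org", 5000))
log.setLevel(logging.INFO)
log.info("caput PV=XPP:USR:MMS:01 value=1.25")
```
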
 
Poster THPDP089 [2.207 MB]
 
THPDP101 Creation of HDF5 Files as a Data Source for Analyses Using the Example of ALPS IIc and the DOOCS Control System 1570
 
  • S. Karstensen, P. Gonzalez-Caminal, A. Lindner, I. Oceano, V. Rybnikov, K. Schwarz, G. Sedov
    DESY, Hamburg, Germany
  • G. Günther, O. Mannix
    HZB, Berlin, Germany
 
  ALPS II is a light-shining-through-a-wall (LSW) experiment to search for WISPs (very Weakly Interacting Slim Particles). Potential WISP candidates are axion-like particles or hidden-sector photons. Axion-like particles may convert to light (and vice versa) in the presence of a magnetic field. Similarly, hidden-sector photons "mix" with light independent of any magnetic fields. This is exploited by ALPS II: light from a strong laser is shone into a magnetic field. Laser photons can be converted into WISPs in front of a light-blocking barrier and reconverted into photons behind that barrier. The experiment exploits optical resonators for laser power build-up in a large-scale optical cavity to boost the available power for WISP production as well as their reconversion probability to light. The Distributed Object-Oriented Control System - DOOCS - provides a versatile software framework for creating accelerator-based control system applications. These can range from monitoring simple temperature sensors up to high-level controls and feedbacks of beam parameters as required for complex accelerator operations. In order to enable data analysis by researchers who do not have access to the DOOCS-internal control system to read measured values, the measurement and control data are extracted from the control system and saved in the HDF5 file format. Through this process, the data is decoupled from the control system and can be analysed on the NAF computer system, among others. Node-RED acts here as a graphical tool for creating the HDF5 files.
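
A hedged sketch of the extraction step: read DOOCS properties and store them in an HDF5 file for offline analysis (the paper drives this via Node-RED rather than a plain script). pydoocs is a DESY-internal binding, and the address below is an invented example.

```python
import h5py
import pydoocs  # DESY-internal DOOCS Python bindings

ADDRESSES = ["ALPS.DIAG/PHOTODIODE/PD1/POWER"]   # invented example address

with h5py.File("alps_snapshot.h5", "w") as f:
    for address in ADDRESSES:
        reading = pydoocs.read(address)   # dict with 'data', 'timestamp', ...
        group = f.create_group(address.replace("/", "_"))
        group["data"] = reading["data"]
        group.attrs["timestamp"] = reading["timestamp"]
        group.attrs["address"] = address
```
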
Poster THPDP101 [50.659 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP101  
About • Received ※ 04 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 06 December 2023 — Issued ※ 18 December 2023