Paper | Title | Other Keywords | Page |
---|---|---|---|
MO2BCO07 | Continuous Integration and Debian Packaging for Rapidly Evolving Software | controls, software, framework, interface | 61 |
We describe our Jenkins-based continuous integration system and Debian packaging methods, and their application to the rapid development of the ChimeraTK framework. ChimeraTK is a C++ framework for control system applications and hardware access with a high level of abstraction and consists of more than 30 constantly changing interdependent libraries. Each component has its own release cycle for rapid development, yet API and ABI changes must be propagated to prevent problems in dependent libraries and over 60 applications. We present how we configured a Jenkins-based continuous integration system to detect problems quickly and systematically for the rapid development of ChimeraTK. The Debian packaging system is designed to ensure the compatibility of binary interfaces (ABI) and of development files (API). We present our approach using build scripts that allow the deployment of rapidly changing libraries and their dependent applications as Debian packages. These even permit applications to load runtime plugins that draw from the same core library, yet are compiled independently. | |||
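The ABI-propagation problem described above — one library changing its binary interface and forcing rebuilds of everything downstream — can be sketched as follows (the library names and dependency graph here are hypothetical, not ChimeraTK's actual tree):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependency graph: each library maps to the libraries it depends on.
deps = {
    "DeviceAccess": set(),
    "ControlSystemAdapter": {"DeviceAccess"},
    "ApplicationCore": {"ControlSystemAdapter", "DeviceAccess"},
    "SomeApplication": {"ApplicationCore"},
}

def rebuild_order(changed, deps):
    """Return the libraries that must be rebuilt, in a valid build order,
    after an ABI-incompatible change in `changed`."""
    # Invert the graph: who depends on whom.
    dependents = {lib: set() for lib in deps}
    for lib, ds in deps.items():
        for d in ds:
            dependents[d].add(lib)
    # Collect everything reachable downstream of the changed library.
    affected, stack = {changed}, [changed]
    while stack:
        for dep in dependents[stack.pop()]:
            if dep not in affected:
                affected.add(dep)
                stack.append(dep)
    # Order the affected subgraph topologically (dependencies first).
    sub = {lib: deps[lib] & affected for lib in affected}
    return list(TopologicalSorter(sub).static_order())

print(rebuild_order("DeviceAccess", deps))
```

A CI system can trigger package builds in exactly this order, so every dependent is compiled against the freshly packaged ABI of its dependencies.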
Slides MO2BCO07 [0.805 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-MO2BCO07 | ||
About • | Received ※ 06 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 26 October 2023 | ||
Cite • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
MO2AO03 | The Solid Sample Scanning Workflow at the European XFEL | target, FEL, experiment, controls | 78 |
The fast solid sample scanner (FSSS) used at the HED instrument of the European XFEL (EuXFEL) enables data collection from multiple samples mounted into standardized frames, which can be exchanged via a transfer system without breaking the interaction chamber vacuum. To maximize the effective target shot repetition rate, it is a key requirement to use sample holders containing pre-aligned targets measured to an accuracy of a few micrometers. This contribution describes the automated sample delivery workflow for performing solid sample scanning using the FSSS. The workflow covers the entire process, from automatically identifying target positions within the sample using machine learning algorithms to setting the parameters needed to perform the scans. The integration of this solution into the EuXFEL control system, Karabo, not only allows the scans to be controlled and performed with the existing scan tool but also provides tools for image annotation and data acquisition. The solution thus enables the storage of data and metadata for future correlation across a variety of beamline parameters set during the experiment. | |||
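The target-identification step can be caricatured with classical image processing — thresholding plus connected-component centroids standing in for the trained models used in the real workflow (a toy sketch, not the HED implementation):

```python
def find_targets(img, thresh):
    """Return centroids of connected pixel regions brighter than `thresh`
    in a small grayscale image given as a list of rows."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    targets = []
    for y in range(h):
        for x in range(w):
            if img[y][x] > thresh and not seen[y][x]:
                # Flood-fill one bright blob and average its coordinates.
                stack, pix = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pix.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] and img[ny][nx] > thresh:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*pix)
                targets.append((sum(ys) / len(pix), sum(xs) / len(pix)))
    return targets

# One 2x2 bright blob centered at (1.5, 1.5):
img = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]]
print(find_targets(img, 5))
```

The returned positions would then feed the scan-parameter setup step the abstract describes.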
Slides MO2AO03 [12.892 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-MO2AO03 | ||
About • | Received ※ 06 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 11 December 2023 — Issued ※ 20 December 2023 | ||
TU2BCO01 | Database’s Disaster Recovery Meets a Ransomware Attack | network, target, software, GUI | 280 |
Cyberattacks are a growing threat to organizations around the world, including observatories. These attacks can cause significant disruption to operations and can be costly to recover from. This paper provides an overview of the history of cyberattacks, the motivations of attackers, and the organization of cybercrime groups. It also discusses the steps that can be taken to quickly restore a key component of any organization, the database, and the lessons learned during the recovery process. The paper concludes by identifying some areas for improvement in cybersecurity, such as the need for better training for employees, more secure networks, and more robust data backup and recovery procedures. | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU2BCO01 | ||
About • | Received ※ 05 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 16 November 2023 — Issued ※ 16 December 2023 | ||
TUMBCMO08 | Extending Phoebus Data Browser to Alternative Data Sources | EPICS, controls, interface, experiment | 355 |
The Phoebus user interface to EPICS is an integral part of the new control system for the ISIS Neutron and Muon Source accelerators and targets. Phoebus can use the EPICS Archiver Appliance, which has been deployed as part of the transition to EPICS, to display the history of PVs. However, ISIS data has been, and continues to be, stored in the InfluxDB time-series database. To enable access to this data, a Python application to interface between Phoebus and other databases has been developed. Our implementation utilises Quart, an asynchronous web framework, to allow multiple simultaneous data requests. Google Protocol Buffers, natively supported by Phoebus, is used for communication between Phoebus and the database. By employing subclassing, our system can in principle adapt to different databases, allowing flexibility and extensibility. Our open-source approach enhances Phoebus’s capabilities, enabling the community to integrate it within a wider range of applications. | |||
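The subclassing approach described — one common interface, one subclass per database — can be sketched as follows (class and method names are hypothetical, and JSON stands in for the Protocol Buffers exchange for readability):

```python
import json
from abc import ABC, abstractmethod

class ArchiveSource(ABC):
    """Common interface; concrete subclasses adapt one database each."""

    @abstractmethod
    def fetch(self, pv, start, end):
        """Return (timestamp, value) samples for `pv` in [start, end]."""

class InMemorySource(ArchiveSource):
    """Stand-in for an InfluxDB-backed source, using a dict of samples."""
    def __init__(self, data):
        self.data = data
    def fetch(self, pv, start, end):
        return [(t, v) for t, v in self.data.get(pv, []) if start <= t <= end]

def handle_request(source, query):
    """Answer a history query of the kind a Phoebus datasource would issue."""
    q = json.loads(query)
    samples = source.fetch(q["pv"], q["start"], q["end"])
    return json.dumps({"pv": q["pv"], "samples": samples})

src = InMemorySource({"BEAM:CURRENT": [(0.0, 1.1), (1.0, 1.2), (2.0, 1.3)]})
print(handle_request(src, '{"pv": "BEAM:CURRENT", "start": 0.5, "end": 2.0}'))
```

Adding support for another backend then means writing one more `ArchiveSource` subclass, leaving the request-handling layer untouched.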
Slides TUMBCMO08 [0.799 MB] | |||
Poster TUMBCMO08 [0.431 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO08 | ||
About • | Received ※ 06 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 21 November 2023 — Issued ※ 14 December 2023 | ||
TUMBCMO15 | Enhancing Electronic Logbooks Using Machine Learning | controls, interface, electron, power-supply | 382 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704. The electronic logbook (elog) system used at Brookhaven National Laboratory’s Collider-Accelerator Department (C-AD) allows users to customize logbook settings, including specification of favorite logbooks. Using machine learning techniques, customizations can be further personalized to provide users with a view of entries that match their specific interests. We will utilize natural language processing (NLP), optical character recognition (OCR), and topic models to augment the elog system. NLP techniques will be used to process and classify text entries. To analyze entries that include images with text, such as screenshots of control system applications, we will apply OCR. Topic models will generate entry recommendations that will be compared to previously tested language processing models. We will develop a command-line interface tool to ease automation of NLP tasks in the controls system and create a web interface to test entry recommendations. This technique will create recommendations for each user, providing custom sets of entries and possibly eliminating the need for manual searching. |
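A minimal illustration of content-based entry recommendation, with term-frequency cosine similarity standing in for the NLP and topic models under evaluation (the entries and scoring are invented examples):

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts for one text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def recommend(user_history, entries, k=2):
    """Rank logbook entries by similarity to the user's past entries."""
    profile = vectorize(" ".join(user_history))
    return sorted(entries, key=lambda e: cosine(profile, vectorize(e)), reverse=True)[:k]

history = ["power supply trip on yellow ring", "replaced power supply controller"]
entries = [
    "power supply fault during ramp",
    "vacuum gauge calibration complete",
    "beam loss monitor threshold updated",
]
print(recommend(history, entries, k=1))
```

A topic model would replace the raw term counts with lower-dimensional topic weights, but the ranking step stays the same.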
Slides TUMBCMO15 [0.905 MB] | |||
Poster TUMBCMO15 [4.697 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO15 | ||
About • | Received ※ 04 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 24 November 2023 — Issued ※ 10 December 2023 | ||
TUPDP024 | Technical Design Concept and First Steps in the Development of the New Accelerator Control System for PETRA IV | controls, interface, operation, software | 552 |
At DESY, extensive technical planning and prototyping work is currently underway for the upgrade of the PETRA III synchrotron light source to PETRA IV, a fourth-generation low-emittance machine. As part of this planned project, the accelerator control system will also be modernized. This paper reports on the main decisions taken in this context and gives an overview of the scope of the development and implementation work. | |||
Poster TUPDP024 [0.766 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP024 | ||
About • | Received ※ 14 September 2023 — Revised ※ 08 October 2023 — Accepted ※ 12 October 2023 — Issued ※ 22 October 2023 | ||
TUPDP035 | New Developments for eGiga2m Historic Database Web Visualizer | controls, status, extraction, factory | 588 |
eGiga, a web visualizer for historic databases, has been in service since 2002. Initially it was connected to a proprietary database schema; support for other schemas, such as HDB and HDB++, was added later. eGiga was deeply refactored in 2015, becoming eGiga2m. Between 2022 and 2023 several improvements were made: optimization of large data extraction; improved image and PDF export; replacement of the 3D chart library with a touch-screen-enabled one; and the addition of logger status info, a new responsive canvas chart library, an adjustable splitter, support for TimescaleDB and the HDF5 data format, correlation and time-series analysis, and ARIMA (autoregressive integrated moving average) forecasting. | |||
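The ARIMA forecasting mentioned above would in practice come from a statistics library; as a minimal illustration of the underlying idea, here is a least-squares AR(1) fit and forecast (a deliberate simplification, not eGiga2m's implementation):

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = c + phi * x[t-1] on a list of floats."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    c = my - phi * mx
    return c, phi

def forecast_ar1(series, steps):
    """Iterate the fitted recurrence forward `steps` times."""
    c, phi = fit_ar1(series)
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

# A series that follows x[t] = 1 + 0.5 * x[t-1] exactly:
series = [0.0, 1.0, 1.5, 1.75, 1.875]
print(forecast_ar1(series, 2))
```

A full ARIMA(p, d, q) model adds differencing and moving-average terms on top of this autoregressive core.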
Poster TUPDP035 [0.821 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP035 | ||
About • | Received ※ 05 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 17 December 2023 | ||
TUPDP045 | Monitoring the SKA Infrastructure for CICD | monitoring, target, TANGO, distributed | 622 |
Funding: INAF. The Square Kilometre Array (SKA) is an international effort to build two radio interferometers, in South Africa and Australia, forming one Observatory monitored and controlled from the global headquarters (GHQ) in the United Kingdom at Jodrell Bank. The selected solution for monitoring the SKA CICD (continuous integration and continuous deployment) infrastructure is Prometheus, with the help of Thanos. Thanos is used for high availability, resilience, and long-term storage retention of monitoring data. For data visualisation, the Grafana project emerged as an important tool for displaying data, supporting reasoning about and debugging of particular aspects of the infrastructure in place. In this paper, the monitoring platform is presented while considering quality aspects such as performance, scalability, and data preservation. |
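Thanos's long-term retention works by downsampling raw samples into fixed-resolution aggregates; a minimal sketch of that idea follows (this mimics the count/sum/min/max aggregates Thanos keeps, not its actual storage format):

```python
def downsample(samples, window=300.0):
    """Aggregate raw (timestamp, value) samples into fixed time windows,
    keeping count, sum, min, and max per window (300 s ~ Thanos's 5m level)."""
    buckets = {}
    for t, v in samples:
        key = int(t // window)
        b = buckets.setdefault(key, {"count": 0, "sum": 0.0, "min": v, "max": v})
        b["count"] += 1
        b["sum"] += v
        b["min"] = min(b["min"], v)
        b["max"] = max(b["max"], v)
    # Return windows keyed by their start time, in order.
    return {k * window: b for k, b in sorted(buckets.items())}

print(downsample([(0, 1.0), (10, 3.0), (301, 5.0)]))
```

Averages remain recoverable as sum/count, so dashboards can query years of data at coarse resolution while raw samples age out.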
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP045 | ||
About • | Received ※ 27 September 2023 — Revised ※ 18 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 19 December 2023 | ||
TUPDP047 | Development of Operator Interface Using Angular at the KEK e⁻/e⁺ Injector Linac | operation, linac, electron, interface | 631 |
At the KEK e⁻/e⁺ injector linac, the first electronic operation logbook system was developed using a relational database in 1995. This logbook system has the capability to automatically record detailed operational status changes. In addition, operators can manually input detailed information about operational problems, which is helpful for future troubleshooting. In 2010, the logbook system was improved with the implementation of a redundant database, an Adobe Flash based frontend, and an image file handling feature. In 2011, the CSS archiver system with PostgreSQL and a new web-based archiver viewer utilizing Adobe Flash were introduced. However, with the discontinuation of Adobe Flash support at the end of 2020, it became necessary to develop a new frontend without Flash for both the operation logbook and archiver viewer systems. For this purpose, the authors adopted the Angular framework, which is widely used for building web applications using JavaScript. In this paper, we report the development of operator interfaces using Angular for the injector linac. | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP047 | ||
About • | Received ※ 05 October 2023 — Revised ※ 08 October 2023 — Accepted ※ 10 December 2023 — Issued ※ 19 December 2023 | ||
TUPDP120 | How Embracing a Common Tech Stack Can Improve the Legacy Software Migration Experience | software, framework, laser, experiment | 860 |
Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 Over the last several years, the National Ignition Facility (NIF), the world’s largest and most energetic laser, has regularly conducted approximately 400 shots per year. Each experiment is defined by up to 48 unique pulse shapes, with each pulse shape potentially having thousands of configurable data points. The importance of accurately representing small changes in pulse shape, illustrated by the historic ignition experiment in December 2022, highlights the necessity for pulse designers at NIF to have access to robust, easy to use, and accurate design software that can integrate with the existing and future ecosystem of software at NIF. To develop and maintain this type of complex software, the Shot Data Systems (SDS) group has recently embraced leveraging a common set of recommended technologies and frameworks for software development across their suite of applications. This paper will detail SDS’s experience migrating an existing legacy Java Swing-based pulse shape editor into a modern web application leveraging technologies recommended by the common tech stack, including Spring Boot, TypeScript, React and Docker with Kubernetes, as well as discuss how embracing a common set of technologies influenced the migration path, improved the developer experience, and how it will benefit the extensibility and maintainability of the application for years to come. LLNL Release Number: LLNL-ABS-848203 |
Poster TUPDP120 [0.611 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP120 | ||
About • | Received ※ 27 September 2023 — Revised ※ 09 October 2023 — Accepted ※ 04 December 2023 — Issued ※ 16 December 2023 | ||
TUPDP125 | Design and Implementation of the LCLS-II Machine Protection System | software, controls, interface, EPICS | 877 |
The linear accelerator complex at the SLAC National Accelerator Laboratory has been upgraded to include LCLS-II, a new linac capable of producing beam power as high as several hundred kW with CW beam rates up to 1 MHz while maintaining existing capabilities from the copper machine. Because of these high-power beams, a new Machine Protection System with a latency of less than 100 µs was designed and installed to prevent damage to the machine when a fault or beam loss is detected. The new LCLS-II MPS must work in parallel with the existing MPS from the respective sources all the way through the user hutches to provide a mechanism to reduce the beam rate or shut down operation in a beamline without impacting the neighboring beamline when a fault condition is detected. Because either beamline can use either accelerator as its source and each accelerator has different operating requirements, great care was taken in the overall system design to ensure the necessary operation can be achieved with a seamless experience for the accelerator operators. The overall system design of the LCLS-II MPS software, including the ability to interact with the existing systems and the tools developed for the control room to provide the user operation experience, will be described. | |||
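The rate-mitigation logic can be caricatured with a severity table (the names and rate values here are invented; the real MPS evaluates thousands of inputs in dedicated hardware within its 100 µs latency budget):

```python
# Hypothetical severity -> maximum permitted beam rate (Hz) table.
MITIGATION = {"OK": 1_000_000, "WARN": 10_000, "FAULT": 10, "SHUTDOWN": 0}
SEVERITY_ORDER = ["OK", "WARN", "FAULT", "SHUTDOWN"]

def allowed_rate(faults):
    """Return the beam rate permitted by the most severe active fault;
    an empty fault list permits the full machine rate."""
    worst = max(faults, key=SEVERITY_ORDER.index, default="OK")
    return MITIGATION[worst]

print(allowed_rate(["OK", "WARN"]))
```

The key property is monotonicity: adding a fault can only keep the permitted rate the same or lower it, never raise it.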
Poster TUPDP125 [1.360 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP125 | ||
About • | Received ※ 04 October 2023 — Revised ※ 30 November 2023 — Accepted ※ 04 December 2023 — Issued ※ 14 December 2023 | ||
WE3BCO01 | Modular and Scalable Archiving for EPICS and Other Time Series Using ScyllaDB and Rust | EPICS, FEL, MMI, operation | 1008 |
At PSI we currently run too many different products with the common goal of archiving timestamped data. These include the EPICS Channel Archiver as well as the Archiver Appliance for EPICS IOCs, a buffer storage for beam-synchronous data at SwissFEL, and more. This number of monolithic solutions is too large to maintain, and they overlap in functionality. Each solution brings its own storage engine, file format, and centralized design, which is hard to scale. In this talk I report on how we factored the system into modular components with clean interfaces. At the core, the different storage engines and file formats have been replaced by ScyllaDB, an open-source product with enterprise support and remarkable adoption in industry. We gain from its distributed, fault-tolerant, and scalable design. The ingest of data into ScyllaDB is factored into components according to the different types of protocols of the sources, e.g. Channel Access. Here we build upon the Rust language and achieve robust, maintainable, and performant services. One interface to access and process the recorded data is the HTTP retrieval service. This service offers, for example, search among the channels by various criteria, full event data, and aggregated and binned data in either JSON or binary formats. It can also run user-defined data transformations and act as a source for Grafana for a first view into recorded channel data. Our setup for SwissFEL ingests ~370k EPICS updates/s from ~220k PVs (scalar and waveform) with rates between 0.1 and 100 Hz. | |||
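The "aggregated and binned data" served by the retrieval service can be illustrated with a small sketch (the function name and bin layout are assumptions, not the PSI API):

```python
def binned(events, start, end, nbins):
    """Aggregate (timestamp, value) events into `nbins` equal bins over
    [start, end), returning per-bin count/min/max/avg — the kind of reduced
    form a retrieval service can hand to a viewer such as Grafana."""
    width = (end - start) / nbins
    bins = [None] * nbins
    for t, v in events:
        if not (start <= t < end):
            continue
        i = int((t - start) / width)
        if bins[i] is None:
            bins[i] = [0, v, v, 0.0]  # count, min, max, sum
        b = bins[i]
        b[0] += 1
        b[1] = min(b[1], v)
        b[2] = max(b[2], v)
        b[3] += v
    return [
        None if b is None else {"count": b[0], "min": b[1], "max": b[2], "avg": b[3] / b[0]}
        for b in bins
    ]

print(binned([(0, 1.0), (1, 3.0), (5, 10.0)], start=0, end=8, nbins=2))
```

Returning bins rather than raw events keeps response sizes bounded no matter how dense the archived data is.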
Slides WE3BCO01 [1.179 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3BCO01 | ||
About • | Received ※ 04 October 2023 — Revised ※ 09 November 2023 — Accepted ※ 14 December 2023 — Issued ※ 14 December 2023 | ||
WE3BCO03 | Data Management for Tracking Optic Lifetimes at the National Ignition Facility | optics, site, laser, status | 1012 |
Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. The National Ignition Facility (NIF), the most energetic laser in the world, employs over 9000 optics to reshape, amplify, redirect, smooth, focus, and convert the wavelength of laser light as it travels along 192 beamlines. Underlying the management of these optics is an extensive Oracle database storing details of the entire life of each optic, from the time it leaves the vendor to the time it is retired. This journey includes testing and verification, preparing, installing, monitoring, removing, and in some cases repairing and re-using the optics. This talk will address data structures and processes that enable storing information about each step, such as identifying where an optic is in its lifecycle and tracking damage through time. We will describe tools for reporting status and enabling key decisions, such as which damage sites should be blocked or repaired and which optics exchanged. Managing relational information and ensuring its integrity is key to managing the status and inventory of optics for NIF. LLNL Release Number: LLNL-ABS-847598 |
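The lifecycle tracking can be sketched with a small in-memory model — an ordered event history per optic plus damage sites keyed by identifier (a toy of the relational idea, not NIF's Oracle schema):

```python
from dataclasses import dataclass, field

# Hypothetical lifecycle stages, in rough order of occurrence.
LIFECYCLE = ["received", "tested", "prepared", "installed", "removed", "repaired", "retired"]

@dataclass
class Optic:
    serial: str
    events: list = field(default_factory=list)        # (stage, note) history
    damage_sites: dict = field(default_factory=dict)  # site_id -> status

    def record(self, stage, note=""):
        if stage not in LIFECYCLE:
            raise ValueError(f"unknown stage: {stage}")
        self.events.append((stage, note))

    @property
    def stage(self):
        """Current lifecycle stage, i.e. the most recent event."""
        return self.events[-1][0] if self.events else None

optic = Optic("OPT-0042")
optic.record("received")
optic.record("installed", "beamline 17, slot B")
optic.damage_sites["site-1"] = "blocked"
print(optic.stage, optic.damage_sites)
```

In a relational database each of these lists becomes a child table keyed by the optic's serial number, which is what keeps the history queryable across thousands of optics.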
Slides WE3BCO03 [2.379 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3BCO03 | ||
About • | Received ※ 26 September 2023 — Revised ※ 09 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 24 October 2023 | ||
WE3BCO05 | The CMS Detector Control Systems Archiving Upgrade | controls, operation, detector, software | 1022 |
The CMS experiment relies on its Detector Control System (DCS) to monitor and control over 10 million channels, ensuring a safe and operable detector that is ready to take physics data. The data is archived in the CMS Oracle conditions database, which is accessed by operators, trigger and data acquisition systems. In the upcoming extended year-end technical stop of 2023/2024, the CMS DCS software will be upgraded to the latest WinCC-OA release, which will utilise the SQLite database and the Next Generation Archiver (NGA), replacing the current Raima database and RDB manager. Taking advantage of this opportunity, CMS has developed its own version of the NGA backend to improve its DCS database interface. This paper presents the CMS DCS NGA backend design and mechanism to improve the efficiency of the read-and-write data flow. This is achieved by simplifying the current Oracle conditions schema and introducing a new caching mechanism. The proposed backend will enable faster data access and retrieval, ultimately improving the overall performance of the CMS DCS. | |||
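The caching mechanism above is specific to the CMS NGA backend; as a generic illustration of how a cache can reduce archive writes, here is a deadband filter sketch (all names and the policy are hypothetical, not the CMS design):

```python
class DeadbandCache:
    """Forward a (channel, value) update to the archive writer only when it
    differs from the last archived value by more than the deadband."""

    def __init__(self, writer, deadband=0.0):
        self.writer = writer      # callable(channel, value) doing the real write
        self.deadband = deadband
        self.last = {}            # channel -> last archived value

    def update(self, channel, value):
        prev = self.last.get(channel)
        if prev is None or abs(value - prev) > self.deadband:
            self.last[channel] = value
            self.writer(channel, value)
            return True           # written
        return False              # suppressed

written = []
cache = DeadbandCache(lambda ch, v: written.append((ch, v)), deadband=0.1)
for v in (1.00, 1.05, 1.20, 1.21):
    cache.update("HV:channel42", v)
print(written)
```

With 10 million channels, suppressing insignificant updates this way is one of the simplest levers for improving write-path throughput.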
Slides WE3BCO05 [1.920 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3BCO05 | ||
About • | Received ※ 06 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 14 December 2023 | ||
WE3BCO08 | Efficient and Automated Metadata Recording and Viewing for Scientific Experiments at MAX IV | experiment, TANGO, interface, controls | 1041 |
Advances in beamline instrumentation have brought significant improvements to synchrotron research facilities: the detectors used today can generate thousands of frames within seconds. Consequently, an organized and adaptable framework is essential to facilitate efficient access to and assessment of the enormous volumes of data produced. Our contribution presents a metadata management solution recently implemented at MAX IV, which automatically retrieves and records metadata from Tango devices relevant to the current experiment. The solution includes user-selected scientific metadata and predefined defaults related to the beamline setup, which are integrated into the Sardana control system and automatically recorded during each scan via the SciFish[1] library. The recorded metadata is stored in the SciCat[2] database, which can be accessed through a web-based interface called Scanlog[3]. The interface, built on ReactJS, allows users to easily sort, filter, and extract important information from the recorded metadata. The tool also provides real-time access to metadata, enabling users to monitor experiments and export data for post-processing. These new software tools ensure that recorded data is findable, accessible, interoperable and reusable (FAIR[4]) for many years to come. Collaborations are ongoing to deploy these tools at other particle accelerator research facilities.
[1] https://gitlab.com/MaxIV/lib-maxiv-scifish [2] https://scicatproject.github.io/ [3] https://gitlab.com/MaxIV/svc-maxiv-scanlog [4] https://www.nature.com/articles/sdata201618 |
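The per-scan metadata recording can be illustrated with a toy snapshot function (`FakeDevice` and the attribute names are stand-ins; the real solution reads live Tango device attributes via the SciFish library):

```python
import json

class FakeDevice:
    """Stand-in for a Tango device exposing readable attributes."""
    def __init__(self, name, attrs):
        self.name, self.attrs = name, attrs
    def read(self, attr):
        return self.attrs[attr]

def snapshot(devices, selection):
    """Collect the selected attributes from each device into one
    JSON-serializable metadata record for the current scan."""
    record = {}
    for dev, attrs in selection.items():
        record[dev] = {a: devices[dev].read(a) for a in attrs}
    return record

devices = {
    "mono": FakeDevice("mono", {"energy": 7.1, "grating": "1200"}),
    "det": FakeDevice("det", {"exposure": 0.05}),
}
# User-selected metadata plus beamline defaults, merged per scan:
meta = snapshot(devices, {"mono": ["energy"], "det": ["exposure"]})
print(json.dumps(meta, sort_keys=True))
```

A record like this, stamped with scan number and proposal ID, is what a catalogue such as SciCat can then index for FAIR retrieval.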
Slides WE3BCO08 [1.914 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3BCO08 | ||
About • | Received ※ 06 October 2023 — Revised ※ 23 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 16 December 2023 | ||
THMBCMO01 | New Developments on HDB++, the High-performance Data Archiving for Tango Controls | TANGO, controls, interface, extraction | 1190 |
The Tango HDB++ project is a high-performance, event-driven archiving system which stores data with microsecond-resolution timestamps. HDB++ supports many different backends, including MySQL/MariaDB, TimescaleDB (a time-series PostgreSQL extension), and soon SQLite. Building on its flexible design, the latest developments have made supporting new backends even easier. HDB++ keeps improving with new features such as batch insertion, and it has become easier to install or set up in a testing environment, with ready-to-use Docker images and simplified deployment steps. The HDB++ project is not only a data storage installation but a full ecosystem to manage data, query it, and extract the information needed. To this end, many tools were developed to put a powerful backend to proper use and get the best out of the stored data. In this paper we also present the latest developments in data extraction, from low-level libraries to web viewer integration such as Grafana, pointing out the strategies in use for data decimation, compression, and more, to help deliver data as fast as possible. | |||
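One decimation strategy of the kind mentioned — reducing the number of points returned while preserving spikes — can be sketched as follows (a generic min/max reduction, not the actual HDB++ extraction code):

```python
def decimate(points, bucket):
    """Reduce (timestamp, value) points by keeping only each bucket's
    minimum and maximum, so outliers survive the reduction."""
    out = []
    for i in range(0, len(points), bucket):
        chunk = points[i:i + bucket]
        lo = min(chunk, key=lambda p: p[1])
        hi = max(chunk, key=lambda p: p[1])
        out.extend(sorted({lo, hi}))  # one point if lo == hi
    return out

print(decimate([(0, 1), (1, 9), (2, 2), (3, 3)], bucket=4))
```

Compared with naive every-Nth-point sampling, this guarantees that a brief fault transient recorded between two kept samples is never silently dropped from the plot.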
Slides THMBCMO01 [0.926 MB] | |||
Poster THMBCMO01 [0.726 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THMBCMO01 | ||
About • | Received ※ 05 October 2023 — Revised ※ 24 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 16 December 2023 | ||
THMBCMO02 | Enhancing Data Management with SciCat: A Comprehensive Overview of a Metadata Catalogue for Research Infrastructures | experiment, neutron, controls, framework | 1195 |
As the volume and quantity of data continue to increase, the role of data management becomes even more crucial; tools that facilitate data management are essential to keep pace with the ever-growing amount of data. SciCat is a metadata catalogue that utilizes a NoSQL database, enabling it to accept heterogeneous data and customize it to meet the unique needs of scientists and facilities. With its API-centric architecture, SciCat simplifies the integration process with existing infrastructures, allowing for easy access to its capabilities and seamless integration into workflows, including cloud-based systems. The session aims to provide a comprehensive introduction to SciCat, a metadata catalogue started as a collaboration between PSI, ESS, and MAX IV, which has been adopted by numerous research infrastructures (RIs) worldwide. The presentation will delve into the guiding principles that underpin the project and the challenges it endeavours to address. Moreover, it will showcase the features that have been implemented, from the ingestion of data to its eventual publication. Given the growing importance of the FAIR (Findable, Accessible, Interoperable, and Reusable) principles, the presentation will touch upon how their uptake is facilitated and will also provide an overview of the work carried out under the Horizon 2020 EU grant for FAIR. | |||
Slides THMBCMO02 [5.158 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THMBCMO02 | ||
About • | Received ※ 05 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 20 December 2023 | ||
THMBCMO08 | whatrecord: A Python-Based EPICS File Format Tool | EPICS, controls, HOM, PLC | 1206 |
Funding: This work is supported by Department of Energy contract DE-AC02-76SF00515. whatrecord is a Python-based parsing tool for interacting with a variety of EPICS file formats, including R3 and R7 database files. The project aims for compliance with epics-base by using Lark grammars that closely reflect the original Lex/Yacc grammars. It offers a suite of tools for working with its supported file formats, with convenient Python-facing dataclass object representations and easy JSON serialization. A prototype backend web server for hosting IOC and record information is also included, along with a Vue.js-based frontend, an EPICS build system Makefile dependency inspector, a static analyzer-of-sorts for startup scripts, and a host of other things that the author added at whim to this side project. |
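whatrecord parses these files properly with Lark grammars; purely for illustration of what an EPICS database file contains, here is a toy regex-based reader for the simplest record/field shape (it handles no macros, comments, escapes, or nested structures):

```python
import re

# Matches: record(<type>, "<name>") { ...body... }
RECORD = re.compile(r'record\s*\(\s*(\w+)\s*,\s*"([^"]+)"\s*\)\s*\{([^}]*)\}')
# Matches: field(<NAME>, "<value>") inside a record body.
FIELD = re.compile(r'field\s*\(\s*(\w+)\s*,\s*"([^"]*)"\s*\)')

def parse_db(text):
    """Return {record_name: {"type": ..., "fields": {...}}} for a .db snippet."""
    records = {}
    for rtype, name, body in RECORD.findall(text):
        records[name] = {"type": rtype, "fields": dict(FIELD.findall(body))}
    return records

db = '''
record(ai, "TEMP:CHAMBER") {
    field(DESC, "Chamber temperature")
    field(EGU, "C")
}
'''
print(parse_db(db))
```

A grammar-based parser is what makes the difference between this toy and a tool that can faithfully round-trip real epics-base input.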
Slides THMBCMO08 [1.442 MB] | |||
Poster THMBCMO08 [1.440 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THMBCMO08 | ||
About • | Received ※ 03 October 2023 — Revised ※ 24 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 21 December 2023 | ||
THPDP007 | Rolling Out a New Platform for Information System Architecture at SOLEIL | MMI, operation, TANGO, software | 1301 |
SOLEIL's Information System is a 20-year legacy of multiple software and IT solutions following constantly evolving business requirements. Over time, many non-uniform and siloed information systems have accumulated, increasing IT complexity. The future of SOLEIL (SOLEIL II*) will be based on a new architecture with native support for continuous digital transformation and an enhanced user experience. Redesigning an information system for the challenges of synchrotron-based science requires a homogeneous and flexible approach. A new organizational setup is starting with the implementation of a transversal architectural committee. Its missions will be to set the foundation of architecture design principles and to encourage all project teams to apply them. The committee will support the building of architectural specifications and will drive all architecture gate reviews. Interoperability is a key pillar of SOLEIL II. Therefore, a platform for synchronous and asynchronous inter-process communication is being built to connect existing and future systems; it is based on both an event broker and an API manager. An implementation has been developed to interconnect our existing operational tools (CMMS** and our ITSM*** portal). Our current use case is a brand-new application dedicated to samples' lifecycle, interconnected with various existing business applications. This paper details our holistic approach to the future evolution of our information system, made mandatory by the new requirements of SOLEIL II.
* SOLEIL II: Towards A Major Transformation of the Facility ** CMMS: Computerized Maintenance Management System *** ITSM: Information Technology Service Management |
Poster THPDP007 [1.397 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP007 | ||
About • | Received ※ 05 October 2023 — Revised ※ 25 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 16 December 2023 | ||
THPDP036 | Research on HALF Historical Data Archiver Technology | EPICS, controls, experiment, distributed | 1394 |
The Hefei Advanced Light Facility (HALF) is a 2.2 GeV fourth-generation synchrotron radiation light source, scheduled to start construction in Hefei, China in 2023. HALF comprises an injector, a 480-m diffraction-limited storage ring, and 10 beamlines for phase one. The HALF historical data archiver system is responsible for storing operation data for the entire facility, including the accelerator and beamlines. It is necessary to choose a high-performance database for the massive structured data generated by HALF. A fair test platform has been designed and built to test the performance of six databases commonly used in the accelerator field. The test metrics include read and write performance, availability, scalability, and software ecosystem. This paper introduces the design of the database test scheme, the construction of the test platform, and the future test plan in detail. | |||
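A write-throughput test of the kind described can be sketched with SQLite standing in for a candidate backend (the schema and metric choice are assumptions; a real evaluation would also cover reads, availability, and scaling):

```python
import sqlite3
import time

def write_benchmark(n_rows=10_000):
    """Time batched inserts of (timestamp, pv, value) rows into an in-memory
    SQLite database and return (rows written, rows per second)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE samples (ts REAL, pv TEXT, value REAL)")
    rows = [(i * 0.1, f"PV:{i % 100}", float(i)) for i in range(n_rows)]
    t0 = time.perf_counter()
    with conn:  # one transaction for the whole batch
        conn.executemany("INSERT INTO samples VALUES (?, ?, ?)", rows)
    elapsed = time.perf_counter() - t0
    count = conn.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
    return count, n_rows / elapsed

count, rate = write_benchmark()
print(count, f"{rate:.0f} rows/s")
```

Running the identical harness against each candidate database, with the same row shape and batch size, is what makes such a comparison fair.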
Poster THPDP036 [0.933 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP036 | ||
About • | Received ※ 28 September 2023 — Revised ※ 26 October 2023 — Accepted ※ 11 December 2023 — Issued ※ 12 December 2023 | ||
Cite • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
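The HALF abstract above centers on benchmarking read and write performance across candidate databases. As an illustrative sketch only (not the HALF test platform itself), the following Python snippet shows the general shape of such a write/read timing harness, using the standard-library SQLite as a stand-in for the databases under test; the table layout and channel naming are hypothetical.

```python
import sqlite3
import time

def benchmark_write(conn, n_rows):
    """Time the bulk insertion of n_rows structured (ts, pv, value) samples
    and return the achieved write rate in rows per second."""
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS samples (ts REAL, pv TEXT, value REAL)")
    rows = [(i * 0.1, f"PV:{i % 100}", float(i)) for i in range(n_rows)]
    t0 = time.perf_counter()
    cur.executemany("INSERT INTO samples VALUES (?, ?, ?)", rows)
    conn.commit()
    return n_rows / (time.perf_counter() - t0)

def benchmark_read(conn):
    """Time a full range scan over the archived samples; returns
    (row count, elapsed seconds)."""
    cur = conn.cursor()
    t0 = time.perf_counter()
    count = cur.execute("SELECT COUNT(*) FROM samples WHERE ts >= 0").fetchone()[0]
    return count, time.perf_counter() - t0

# In-memory database keeps the sketch self-contained and reproducible.
conn = sqlite3.connect(":memory:")
write_rate = benchmark_write(conn, 10_000)
count, read_time = benchmark_read(conn)
```

A real comparison, as the paper describes, would additionally exercise availability and scalability, which a single-process timing loop like this cannot capture.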
THPDP048 | SARAO Science Repository: Sustainable Use of MeerKAT Data | software, interface, framework, data-management | 1415 |
|
|||
Funding: National Research Foundation (South Africa) The South African Radio Astronomy Observatory (SARAO) is excited to announce the forthcoming release of its digital repository for managing and preserving astronomical data. The repository, built on the DSpace platform, will allow researchers to catalogue and discover research data in a standardised way, while Digital Object Identifiers (DOIs) minted through the DataCite service will ensure the unique identification and persistent citation of data. The data will be hosted on a Ceph archive, which provides reliable storage and efficient retrieval via the S3 protocol. We look forward to hosting science data from any scientist who has used SARAO instruments. Researchers will be able to apply to host their data on the SARAO digital repository service, which will be released in the coming month. This repository will serve as a critical resource for the astronomy community, providing easy access to valuable data for research and collaboration. With the increasing demand for digital preservation and data accessibility, we believe that the SARAO digital repository will set a standard for other astronomical institutions to follow. We are committed to ensuring that our data remains available and accessible for the long term, and we invite all interested researchers to participate in this exciting initiative. |
|||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP048 | ||
About • | Received ※ 05 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 17 December 2023 — Issued ※ 22 December 2023 | ||
Cite • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THPDP068 | Implementing High Performance & Highly Reliable Time Series Acquisition Software for the CERN-Wide Accelerator Data Logging Service | network, controls, operation, software | 1494 |
|
|||
The CERN Accelerator Data Logging Service (NXCALS) stores data generated by the accelerator infrastructure and beam-related devices. This amounts to 3.5 TB of data per day, coming from more than 2.5 million signals from heterogeneous systems at various frequencies. Around 85% of this data is transmitted through the Controls Middleware (CMW) infrastructure. To reliably gather such volumes of data, the acquisition system must be highly available, resilient and robust. It also has to be highly efficient and easily scalable, given the regularly growing data rates and volumes, particularly the increases expected from the future High Luminosity LHC. This paper describes the NXCALS time series acquisition software, known as Data Sources. System architecture, design choices, and recovery solutions for various failure scenarios (e.g. network disruptions or cluster split-brain problems) will be covered. Technical implementation details will be discussed, covering the clustering of Akka Actors that collect data from tens of thousands of CMW devices, and sharing the lessons learned. The NXCALS system has been operational since 2018 and has demonstrated the capability to fulfil all aforementioned characteristics, while also ensuring self-healing capabilities and no data losses during redeployments. The engineering challenge, architecture, lessons learned, and the implementation of this acquisition system are not CERN-specific and are therefore relevant to other institutes facing comparable challenges. | |||
Poster THPDP068 [2.960 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP068 | ||
About • | Received ※ 06 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 20 November 2023 | ||
Cite • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THPDP073 | Scilog: A Flexible Logbook System for Experiment Data Management | experiment, target, controls, GUI | 1512 |
|
|||
Capturing both raw data and metadata during an experiment is of the utmost importance, as it provides valuable context for the decisions made during the experiment and the acquisition strategy. However, logbooks often lack seamless integration with facility-specific services such as authentication and data acquisition systems and can prove to be a burden, particularly in high-pressure situations during experiments. To address these challenges, SciLog has been developed as a logbook system built on MongoDB, LoopBack, and Angular. Its primary objective is to provide a flexible and extensible environment, as well as a user-friendly interface. SciLog relies on atomic entries in a NoSQL database that can be easily queried, sorted, and displayed according to the user's requirements. The integration with facility-specific authorization systems and the automatic import of new experiment proposals enable a user experience that is specifically tailored for the challenging environment of experiments conducted at large research facilities. The system is currently in use during beam time at the Paul Scherrer Institut, where it is collecting valuable feedback from scientists to enhance its capabilities. | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP073 | ||
About • | Received ※ 05 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 11 December 2023 | ||
Cite • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
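The SciLog abstract describes atomic logbook entries in a NoSQL store that can be queried, sorted, and displayed on demand. The snippet below is a minimal pure-Python illustration of that atomic-entry query model; the entry fields and logbook names are hypothetical, and a real deployment would issue the equivalent find/sort queries against MongoDB rather than filtering in memory.

```python
# Each entry is a small self-contained document, mimicking SciLog's
# atomic-entry model in a MongoDB-style NoSQL store.
entries = [
    {"id": 1, "logbook": "beamtime-42", "tags": ["alignment"], "ts": 100},
    {"id": 2, "logbook": "beamtime-42", "tags": ["data"], "ts": 250},
    {"id": 3, "logbook": "beamtime-7", "tags": ["alignment"], "ts": 180},
]

def query(entries, logbook=None, tag=None):
    """Return matching entries, newest first (mimics a find + sort)."""
    hits = [e for e in entries
            if (logbook is None or e["logbook"] == logbook)
            and (tag is None or tag in e["tags"])]
    return sorted(hits, key=lambda e: e["ts"], reverse=True)

# All entries for one beamtime, most recent on top.
result = query(entries, logbook="beamtime-42")
```

Because every entry stands alone, new fields (attachments, annotations, proposal links) can be added per entry without a schema migration, which is the flexibility the abstract attributes to the NoSQL design.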
THPDP082 | Teaching an Old Accelerator New Tricks | controls, experiment, operation, linac | 1545 |
|
|||
Funding: This work was supported by the U.S. Department of Energy, under Contract No. DE-AC02-06CH11357. This research used resources of ANL's ATLAS facility, which is a DOE Office of Science User Facility. The Argonne Tandem Linac Accelerator System (ATLAS) has been a National User Facility since 1985. In that time, many of the systems that help operators retrieve, modify, and store beamline parameters have not kept pace with the advancement of technology. Development of a new method of storing and retrieving beamline parameters led to the testing and installation of a time-series database as a potential replacement for the traditional relational database. InfluxDB was selected for the availability of its self-hosted open-source version as well as its simplicity of installation and setup. A program was written to periodically gather all accelerator parameters in the control system and store them in the time-series database. This resulted in over 13,000 distinct data points captured at 5-minute intervals. A second test captured 35 channels on a 1-minute cadence. The captured data is graphed in Grafana, whose open-source version integrates well with InfluxDB as the back end. Grafana made visualizing the data simple and flexible. The testing has allowed modern graphing tools to generate new insights into operating the accelerator, and has opened the door to building large data sets suitable for Artificial Intelligence and Machine Learning applications. |
|||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP082 | ||
About • | Received ※ 10 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 06 December 2023 — Issued ※ 13 December 2023 | ||
Cite • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
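The ATLAS abstract describes a program that periodically gathers control-system parameters and stores them in InfluxDB. As a hedged sketch of what one polling pass might look like, the snippet below serializes a snapshot of channels into InfluxDB's line protocol (simplified: no escaping of spaces or commas in identifiers, and the channel names and measurement name are hypothetical, not taken from the ATLAS control system).

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Serialize one sample into InfluxDB line protocol:
    measurement,tag=value field=value timestamp_ns
    (Simplified: assumes identifiers contain no spaces or commas.)"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# One polling pass over a hypothetical snapshot of control-system channels.
snapshot = {"LINAC:Q1:CURRENT": 12.5, "LINAC:Q2:CURRENT": 11.8}
ts = 1700000000 * 10**9  # fixed nanosecond timestamp for reproducibility
lines = [
    to_line_protocol("beamline", {"channel": name.replace(":", "_")},
                     {"value": value}, ts)
    for name, value in sorted(snapshot.items())
]
```

In practice a client library (e.g. influxdb-client for Python) would batch such records and POST them to the server on the 5-minute or 1-minute cadence the abstract mentions.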
THSDSC05 | The SKAO Engineering Data Archive: From Basic Design to Prototype Deployments in Kubernetes | software, controls, TANGO, extraction | 1590 |
|
|||
During its construction and production life cycles, the Square Kilometre Array Observatory (SKAO) will generate non-scientific, i.e. engineering, data. This engineering data originates from either hardware devices or software programs. Thanks to the Tango Controls software framework, the engineering data can be automatically stored in a relational database, which SKAO refers to as the Engineering Data Archive (EDA). Making the data in the EDA accessible and available to engineers and users in the observatory is as important as storing the data itself. Possible use cases for the data are verification of systems under test, performance evaluation of systems under test, predictive maintenance, and general performance monitoring over time. We therefore built on the knowledge that other research facilities in the Tango Controls collaboration had already gained when they designed, implemented, deployed and ran their engineering data archives. SKAO implemented a prototype for its EDA that leverages several open-source software packages, with Tango Controls' HDB++, the TimescaleDB time-series database and Kubernetes at its core. In this overview we answer the immediate question "But why do we not just do what others are doing?" and explain the reasoning behind our choices in the design and in the implementation. | |||
Poster THSDSC05 [3.062 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THSDSC05 | ||
About • | Received ※ 05 October 2023 — Revised ※ 27 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 11 December 2023 | ||
Cite • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
FR2BCO03 | Taranta Project - Update and Current Status | TANGO, controls, factory, experiment | 1657 |
|
|||
Taranta, developed jointly by MAX IV Laboratory and the SKA Observatory, is a web-based no-code interface for remote control of instruments at accelerators and other scientific facilities. It has seen great success in system development and scientific experiment usage. In the past two years, its user base has greatly expanded. The first generation of Taranta was not able to handle the challenges introduced by these use cases, notably degraded performance when a high number of data points is requested, as well as new functionality requests. Therefore, a series of refactorings and performance improvements of Taranta is ongoing, to prepare it for handling large data transmission between Taranta and multiple sources of information, and to give users more possibilities to develop their own dashboards. This article presents the status of the Taranta project in terms of widget updates, package management, optimization of the communication with the TangoGQL backend, and the investigation of a new Python library for TangoGQL compatible with the newest Python version. In addition to the technical improvements, facilities other than MAX IV and SKAO are considering joining the Taranta project. One workshop has been successfully held, and more will follow. This article also presents the lessons learned from this project, the road map, and the GUI strategy for the near future. | |||
Slides FR2BCO03 [4.759 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-FR2BCO03 | ||
About • | Received ※ 06 October 2023 — Accepted ※ 21 November 2023 — Issued ※ 23 November 2023 | ||
Cite • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
FR2BCO05 | Magnet Information Management System Based on Web Application for the KEK e⁻/e⁺ Injector Linac | linac, controls, software, operation | 1669 |
|
|||
The KEK injector linac provides e⁻/e⁺ beams to four independent storage rings and a positron damping ring. Accurate information management for the accelerator components is very important, since this information feeds the beam tuning model. In particular, an incorrect magnet database can significantly degrade the beam emittance. In the KEK linac, a text-based database system was long used to manage magnet information. It comprises several independent text files that serve as the master information for generating the EPICS database files and the other configuration files required by much of the linac control software. Under this management scheme, it is not easy for ordinary users, other than control software experts, to access and update the information. For this reason, a new web-based magnet information management system was developed with the Angular and PHP frameworks. In the new system, any user can easily extract and modify the magnet information through a web browser. In this paper, we report on the new magnet information management system in detail. | |||
Slides FR2BCO05 [2.146 MB] | |||
DOI • | reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-FR2BCO05 | ||
About • | Received ※ 09 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 20 November 2023 — Issued ※ 18 December 2023 | ||
Cite • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||