Software
Software Architecture & Technology Evolution
Paper Title Page
TH2AO01 Log Anomaly Detection on EuXFEL Nodes 1126
 
  • A. Sulc, A. Eichler, T. Wilksen
    DESY, Hamburg, Germany
 
  Funding: This work was supported by HamburgX grant LFF-HHX-03 to the Center for Data and Computing in Natural Sciences (CDCS) from the Hamburg Ministry of Science, Research, Equalities and Districts.
This article introduces a method to detect anomalies in the log data generated by control system nodes at the European XFEL accelerator. The primary aim of the proposed method is to offer operators a comprehensive understanding of the availability, status, and problems specific to each node; this information is vital for ensuring smooth operation. The sequential nature of logs and the absence of a rich text corpus specific to our nodes pose a significant limitation for traditional and learning-based approaches to anomaly detection. To overcome this limitation, we propose a method that uses word embeddings and models each node as a sequence of commonly co-occurring vectors, using a Hidden Markov Model (HMM). We score individual log entries by computing a probability ratio between the probability of the full log sequence including the new entry and the probability of just the previous log entries, without the new entry. This ratio indicates how probable the sequence becomes when the new entry is added. The proposed approach detects anomalies by scoring and ranking log entries from EuXFEL nodes, where entries that receive high scores are potential anomalies that do not fit the routine of the node. This method provides a warning system that alerts operators to irregular log events that may indicate issues.
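The probability-ratio scoring described above can be illustrated with a toy discrete HMM (the paper models sequences of word-embedding vectors; the states, symbols, and probabilities below are invented for illustration only):

```python
import math

def forward_logprob(obs, start, trans, emit):
    """Log-probability of a discrete observation sequence under an HMM
    (forward algorithm; exponentiating inside the sum is acceptable for
    the short sequences used here)."""
    n = len(start)
    alpha = [math.log(start[i]) + math.log(emit[i][obs[0]]) for i in range(n)]
    for o in obs[1:]:
        alpha = [
            math.log(sum(math.exp(alpha[j]) * trans[j][i] for j in range(n)))
            + math.log(emit[i][o])
            for i in range(n)
        ]
    return math.log(sum(math.exp(a) for a in alpha))

def anomaly_score(history, new_entry, start, trans, emit):
    """log P(history) - log P(history + [new_entry]): how much less
    probable the sequence becomes once the new entry is appended."""
    with_new = forward_logprob(history + [new_entry], start, trans, emit)
    return forward_logprob(history, start, trans, emit) - with_new

# Toy 2-state model over 3 message clusters; symbol 2 is rare in both states.
start = [0.5, 0.5]
trans = [[0.9, 0.1], [0.1, 0.9]]
emit = [[0.60, 0.35, 0.05], [0.35, 0.60, 0.05]]

history = [0, 0, 1, 0, 1]
print(anomaly_score(history, 0, start, trans, emit))  # routine entry: low score
print(anomaly_score(history, 2, start, trans, emit))  # rare entry: higher score
```

A high score means the new entry made the observed sequence much less probable, which is what flags it for operator attention.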
 
slides icon Slides TH2AO01 [1.420 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH2AO01  
About • Received ※ 30 September 2023 — Accepted ※ 08 December 2023 — Issued ※ 13 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TH2AO02 High Availability Alarm System Deployed with Kubernetes 1134
 
  • J.J. Bellister, T. Schwander, T. Summers
    SLAC, Menlo Park, California, USA
 
To support multiple scientific facilities at SLAC, a modern alarm system designed for availability, integrability, and extensibility is required. The new alarm system deployed at SLAC fulfills these requirements by blending the Phoebus alarm server with existing open-source technologies for deployment, management, and visualization. To deliver a high-availability deployment, Kubernetes was chosen for orchestration of the system. By deploying all parts of the system as containers with Kubernetes, each component becomes robust to failures, self-healing, and readily recoverable. Well-supported Kubernetes Operators were selected to manage Kafka and Elasticsearch in accordance with current best practices, using high-level declarative deployment files to shift deployment details into the software itself and facilitate nearly seamless future upgrades. An automated process based on git-sync restarts the alarm server when configuration files change, eliminating the need for sysadmin intervention. To encourage increased accelerator operator engagement, multiple interfaces are provided for interacting with alarms. Grafana dashboards offer a user-friendly way to build displays with minimal code, while a custom Python client allows direct consumption from the Kafka message queue and access to any information logged by the system.  
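As a flavour of the kind of direct Kafka consumption such a Python client enables, the sketch below filters decoded alarm-state payloads by severity; the JSON field names and the severity ladder are assumptions for illustration, not the actual Phoebus message schema:

```python
import json

# Assumed severity ladder, lowest to highest; the real Phoebus schema may differ.
SEVERITY_ORDER = ["OK", "MINOR", "MAJOR", "INVALID", "UNDEFINED"]

def active_alarms(raw_messages, min_severity="MAJOR"):
    """Decode JSON alarm payloads and keep those at or above a severity threshold."""
    threshold = SEVERITY_ORDER.index(min_severity)
    alarms = []
    for raw in raw_messages:
        msg = json.loads(raw)
        sev = msg.get("severity", "OK")
        if sev in SEVERITY_ORDER and SEVERITY_ORDER.index(sev) >= threshold and sev != "OK":
            alarms.append((msg["pv"], sev, msg.get("message", "")))
    return alarms

# Hypothetical payloads as they might arrive off the alarm-state topic.
feed = [
    '{"pv": "SR:vac:p1", "severity": "OK"}',
    '{"pv": "SR:rf:amp", "severity": "MAJOR", "message": "RF trip"}',
    '{"pv": "SR:mag:q3", "severity": "MINOR", "message": "drift"}',
]
print(active_alarms(feed))  # [('SR:rf:amp', 'MAJOR', 'RF trip')]
```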
slides icon Slides TH2AO02 [0.798 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH2AO02  
About • Received ※ 06 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 18 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TH2AO03 An Update on the CERN Journey from Bare Metal to Orchestrated Containerization for Controls 1138
 
  • T. Oulevey, B. Copy, F. Locci, S.T. Page, C. Roderick, M. Vanden Eynden, J.-B. de Martel
    CERN, Meyrin, Switzerland
 
At CERN, work has been undertaken since 2019 to transition from running Accelerator controls software on bare metal to running in an orchestrated, containerized environment. This will allow engineers to optimise infrastructure cost, to improve disaster recovery and business continuity, and to streamline DevOps practices, while improving security. Container adoption requires developers to apply portable practices including aspects related to persistence integration, network exposure, and secrets management. It also promotes process isolation and supports enhanced observability. Building on containerization, orchestration platforms (such as Kubernetes) can be used to drive the life cycle of independent services within a larger-scale infrastructure. This paper describes the strategies employed at CERN to make a smooth transition towards an orchestrated containerised environment and discusses the challenges based on the experience gained during an extended proof-of-concept phase.  
slides icon Slides TH2AO03 [0.480 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH2AO03  
About • Received ※ 06 October 2023 — Revised ※ 24 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 19 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TH2AO04 Developing Modern High-Level Controls APIs 1145
 
  • B. Urbaniec, L. Burdzanowski, S.G. Gennaro
    CERN, Meyrin, Switzerland
 
The CERN Accelerator Controls comprise various high-level services that work together to provide a highly available, robust, and versatile means of controlling the Accelerator Complex. Each service includes an API (Application Programming Interface) which is used both for service-to-service interactions and by end-user applications. These APIs need to support interactions from heterogeneous clients using a variety of programming languages, including Java, Python, and C++, as well as direct HTTP/REST calls. This presents several technical challenges, including aspects such as reliability, availability, and scalability. API usability is another important factor, with emphasis on ease of access and minimizing exposure to the complexity of the Controls domain. At the same time, there is the requirement to efficiently and safely cater for the inevitable need to evolve the APIs over time. This paper describes concrete technical and design solutions addressing these challenges, based on experience gathered over numerous years. To further support this, the paper presents examples of real-life telemetry data focused on latency and throughput, along with the corresponding analysis. The paper also describes ongoing and future API development.  
slides icon Slides TH2AO04 [2.676 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH2AO04  
About • Received ※ 03 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 17 December 2023 — Issued ※ 18 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TH2AO05 Secure Role-Based Access Control for RHIC Complex 1150
 
  • A. Sukhanov, J. Morris
    BNL, Upton, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
This paper describes the requirements, design, and implementation of Role-Based Access Control (RBAC) for the RHIC Complex. The system is designed to protect against accidental, unauthorized access to equipment of the RHIC Complex, but it can also provide significant protection against malicious attacks. Role assignment is dynamic: roles are primarily based on user ID, but elevated roles may be assigned for limited periods of time. Protection at the device manager level may be provided for an entire server or for individual device parameters. A prototype version of the system has been deployed at the RHIC Complex since 2022. Authentication is performed on a dedicated device manager, which generates an encrypted token based on user ID, expiration time, and role level. Device managers are equipped with an authorization mechanism supporting three methods of authorization: static, local, and centralized. Transactions with the token manager take place 'atomically', during secured set() or get() requests. The system has small overhead: ~0.5 ms for token processing and ~1.5 ms for the network round trip. Only Python-based device managers participate in the prototype system; testing has begun with C++ device managers, including those that run on VxWorks platforms. For an easy transition, dedicated intermediate shield managers can be deployed to protect access to device managers which do not directly support authorization.
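The token flow described above can be sketched as follows. The real system's token format and encryption scheme are not published, so this uses a hypothetical HMAC-signed payload carrying the user ID, expiration time, and role level:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-device-manager-key"  # hypothetical shared key, for illustration only

def issue_token(user, role, lifetime_s=3600, now=None):
    """Token manager side: sign a payload of user ID, role level and expiry."""
    now = time.time() if now is None else now
    payload = json.dumps({"user": user, "role": role, "exp": now + lifetime_s}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return base64.b64encode(payload).decode() + "." + base64.b64encode(sig).decode()

def check_token(token, now=None):
    """Device manager side: verify the signature and expiry; return the
    claims on success, None on failure."""
    now = time.time() if now is None else now
    p64, s64 = token.split(".")
    payload = base64.b64decode(p64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(base64.b64decode(s64), expected):
        return None
    claims = json.loads(payload)
    return claims if claims["exp"] > now else None

token = issue_token("operator1", role=2, lifetime_s=10, now=1000.0)
print(check_token(token, now=1005.0))   # valid: claims dict
print(check_token(token, now=2000.0))   # expired: None
```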
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH2AO05  
About • Received ※ 04 October 2023 — Revised ※ 14 November 2023 — Accepted ※ 19 December 2023 — Issued ※ 22 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TH2AO06 SKA Tango Operator 1155
 
  • M. Di Carlo, M. Dolci
    INAF - OAAB, Teramo, Italy
  • P. Harding, U.Y. Yilmaz
    SKAO, Macclesfield, United Kingdom
  • J.B. Morgado
    Universidade do Porto, Faculdade de Ciências, Porto, Portugal
  • P. Osorio
    Atlar Innovation, Pampilhosa da Serra, Portugal
 
  Funding: INAF
The Square Kilometre Array (SKA) is an international effort to build two radio interferometers in South Africa and Australia, forming one Observatory monitored and controlled from the global headquarters (GHQ), based at Jodrell Bank in the United Kingdom. The software for the monitoring and control system is developed on the TANGO-controls framework, which provides a distributed architecture for driving software and hardware using CORBA distributed objects representing devices, which communicate internally via ZeroMQ events. This system runs in a containerised environment managed by Kubernetes (k8s). k8s provides primitive resource types for the abstract management of compute, network and storage, as well as a comprehensive set of APIs for customising all aspects of cluster behaviour. These capabilities are encapsulated in a framework (Operator SDK) which enables the creation of higher-order resource types assembled out of the k8s primitives (Pods, Services, PersistentVolumes), so that abstract resources can be managed as first-class citizens within k8s. These methods of resource assembly and management have proven useful for reconciling some of the differences between the TANGO world and that of Cloud Native computing: Custom Resource Definitions (CRDs) (i.e., Device Server and DatabaseDS) and a supporting Operator developed with the k8s framework have given rise to better usage of TANGO-controls in k8s.
 
slides icon Slides TH2AO06 [2.622 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH2AO06  
About • Received ※ 27 September 2023 — Revised ※ 24 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 21 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THMBCMO07 Reflective Servers: Seamless Offloading of Resource Intensive Data Delivery 1201
 
  • S.L. Clark, T. D’Ottavio, M. Harvey, J.P. Jamilkowski, J. Morris, S. Nemesure
    BNL, Upton, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
Brookhaven National Laboratory's Collider-Accelerator Department houses over 550 Front-End Computers (FECs) of varying specifications and resource requirements. These FECs provide operations-critical functions to the complex, and uptime is a concern for the most resource-constrained units. Asynchronous data delivery is widely used by applications to provide live feedback of current conditions, but contributes significantly to resource exhaustion on FECs. To balance performance and efficiency, the Reflective system has been developed to support unrestricted use of asynchronous data delivery with even the most resource-constrained FECs in the complex. The Reflective system provides components which work in unison to offload responsibilities typically handled by core controls infrastructure to hosts with the resources necessary to handle heavier workloads. The Reflective system aims to be a drop-in component of the controls system, requiring few modifications and remaining completely transparent to users and applications alike.
 
slides icon Slides THMBCMO07 [0.963 MB]  
poster icon Poster THMBCMO07 [6.670 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THMBCMO07  
About • Received ※ 04 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 15 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THMBCMO08 whatrecord: A Python-Based EPICS File Format Tool 1206
 
  • K.R. Lauer
    SLAC, Menlo Park, California, USA
 
  Funding: This work is supported by Department of Energy contract DE-AC02-76SF00515.
whatrecord is a Python-based parsing tool for interacting with a variety of EPICS file formats, including R3 and R7 database files. The project aims for compliance with epics-base by using Lark grammars that closely reflect the original Lex/Yacc grammars. It offers a suite of tools for working with its supported file formats, with convenient Python-facing dataclass object representations and easy JSON serialization. A prototype backend web server for hosting IOC and record information is also included as well as a Vue.js-based frontend, an EPICS build system Makefile dependency inspector, a static analyzer-of-sorts for startup scripts, and a host of other things that the author added at whim to this side project.
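A flavour of the parsing problem can be given with a deliberately minimal, regex-based reader for one fragment of the EPICS database syntax. whatrecord itself uses full Lark grammars and richer dataclass output; nothing below reflects its actual API:

```python
import re

# Handles only the common record/field shape, e.g.:
#   record(ai, "NAME") { field(DESC, "...") }
RECORD = re.compile(r'record\s*\(\s*(\w+)\s*,\s*"([^"]+)"\s*\)\s*\{([^}]*)\}', re.S)
FIELD = re.compile(r'field\s*\(\s*(\w+)\s*,\s*"([^"]*)"\s*\)')

def parse_db(text):
    """Return {record_name: {"type": record_type, "fields": {field: value}}}."""
    out = {}
    for rtype, name, body in RECORD.findall(text):
        out[name] = {"type": rtype, "fields": dict(FIELD.findall(body))}
    return out

db = '''
record(ai, "TEMP:1") {
    field(DESC, "Cryostat temperature")
    field(EGU, "K")
}
'''
print(parse_db(db))
```

A real parser must also cope with macros, comments, JSON field values, and info nodes, which is precisely why a grammar-based approach like whatrecord's pays off.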
 
slides icon Slides THMBCMO08 [1.442 MB]  
poster icon Poster THMBCMO08 [1.440 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THMBCMO08  
About • Received ※ 03 October 2023 — Revised ※ 24 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 21 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THMBCMO09 DAQ System Based on Sardana and PandABox for Combined SAXS, Fluorescence and UV-Vis Spectroscopy Techniques at MAX IV CoSAXS Beamline  
 
  • V. Da Silva, R. Appio, M. Eguiraun, F. Herranz-Trillo, A.F. Joubert, M. Leorato, Y.L. Li, M. Lindberg, C. Takahashi, A.E. Terry
    MAX IV Laboratory, Lund University, Lund, Sweden
  • C. Dicko
    Lund Institute of Technology (LTH), Lund University, Lund, Sweden
  • W.T. Kitka
    S2Innovation, Kraków, Poland
 
CoSAXS is the Coherent and Small Angle X-ray Scattering (SAXS) beamline at the diffraction-limited 3 GeV storage ring at MAX IV Laboratory. This paper presents the data acquisition (DAQ) strategy for combined SAXS, ultraviolet-visible (UV-Vis) and fluorescence spectroscopy techniques. In general terms, the beamline control system is based on TANGO and, on top of it, Sardana provides an advanced scan framework. Sardana performs the experiment orchestration, configuring and preparing the X-ray detector and the UV-Vis and fluorescence spectrometers. Hardware triggers are used to synchronize the DAQ for the different techniques running simultaneously. The implementation uses PandABox, which generates pulse trains for the X-ray detector and spectrometers. PandABox is integrated into the system with a Sardana Trigger Gate Controller, used to configure the pulse-train parameters as well as to orchestrate the hardware triggers during a scan. This paper describes the individual techniques' integration into the control system, the experiment orchestration and synchronization, and the new experimental possibilities this multi-technique DAQ system brings to MAX IV beamlines.  
slides icon Slides THMBCMO09 [0.570 MB]  
poster icon Poster THMBCMO09 [1.600 MB]  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THMBCMO10 SECoP Integration for the Ophyd Hardware Abstraction Layer 1212
 
  • P. Wegmann, K. Kiefer, O. Mannix, L. Rossa, W. Smith
    HZB, Berlin, Germany
  • E. Faulhaber
    MLZ, Garching, Germany
  • M. Zolliker
    PSI, Villigen PSI, Switzerland
 
At the core of the Bluesky experimental control ecosystem, the ophyd hardware abstraction, a consistent high-level interface layer, is extremely powerful for complex device integration. It introduces the device data model to EPICS and eases integration of alien control protocols. This paper focuses on the integration of the Sample Environment Communication Protocol (SECoP)* into the ophyd layer, enabling seamless incorporation of sample environment hardware into beamline experiments at photon and neutron sources. The SECoP integration was designed to have a simple interface and provide plug-and-play functionality while preserving all metadata and structural information about the controlled hardware. Leveraging the self-describing characteristics of SECoP, automatic generation and configuration of ophyd devices is facilitated upon connecting to a Sample Environment Control (SEC) node. This work builds upon a modified SECoP client provided by the Frappy framework**, intended for programming SEC nodes with a SECoP interface. This paper presents an overview of the architecture and implementation of the ophyd-SECoP integration and includes examples for better understanding.
* K. Kiefer et al., "An introduction to SECoP - the sample environment communication protocol".
** M. Zolliker and E. Faulhaber, Frappy: https://github.com/sampleenvironment/Frappy
 
slides icon Slides THMBCMO10 [0.596 MB]  
poster icon Poster THMBCMO10 [0.809 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THMBCMO10  
About • Received ※ 05 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 14 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THMBCMO11 Full Stack PLC to EPICS Integration at ESS 1216
 
  • A. Rizzo, E.E. Foy, D. Hasselgren, A.Z. Horváth, A. Petrushenko, J.A. Quintanilla, S.C.F. Rose, A. Simelio
    ESS, Lund, Sweden
 
The European Spallation Source (ESS) is one of the largest science and technology infrastructure projects being built today. The control system at ESS is therefore essential for the synchronisation and day-to-day running of all the equipment responsible for the production of neutrons for the experimental programs. The standardised ESS PLC platform for handling slower signals comes from Siemens*, while faster data interchange with deterministic timing and higher processing power relies on Beckhoff/EtherCAT**. All control systems based on the above technologies are integrated using the EPICS framework***. We will present how the full-stack integration from PLC to EPICS is done at ESS using our standard Configuration Management Ecosystem.
* https://www.siemens.com/global/en/products/automation/systems/industrial/plc.html
** https://www.beckhoff.com/en-en/products/i-o/ethercat/
*** https://epics-controls.org/
 
slides icon Slides THMBCMO11 [0.178 MB]  
poster icon Poster THMBCMO11 [0.613 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THMBCMO11  
About • Received ※ 05 October 2023 — Revised ※ 25 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 18 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP002 The Micro-Services of CERN's Critical Current Test Benches 1295
 
  • C. Charrondière, A. Ballarino, C. Barth, J.F. Fleiter, P. Koziol, H. Reymond
    CERN, Meyrin, Switzerland
  • O.Ø. Andreassen, T. Boutboul, S.C. Hopkins
    European Organization for Nuclear Research (CERN), Geneva, Switzerland
 
In order to characterize the critical-current density of low-temperature superconductors such as niobium-titanium (NbTi) and niobium-tin (Nb₃Sn), or high-temperature superconductors such as magnesium diboride (MgB₂) or rare-earth barium copper oxide (REBCO) tapes, a wide range of custom instruments and interfaces are used. The critical current of a superconductor depends on temperature, magnetic field, current, and strain, requiring high-precision measurements in the nanovolt range, well-synchronized instrumentation, and the possibility to quickly adapt and replace instrumentation if needed. The micro-service based application presented in this paper allows operators to measure a variety of analog signals, such as the temperature of the cryostats and sample under test, magnetic field, current passing through the sample, voltage across the sample, pressure, helium level, etc. During the run, the software protects the sample from quenching, controlling the current passed through it using high-speed field-programmable gate array (FPGA) systems on Linux Real-Time (RT) based PCI eXtensions for Instrumentation (PXIe) controllers. The application records, analyzes, and reports to an external Oracle database all parameters related to the test. In this paper, we describe the development of the micro-service based control system, how the interlocks and protection functionalities work, and how we developed a multi-windowed, scalable acquisition application that could be adapted to the many changes occurring in the test facility.  
poster icon Poster THPDP002 [6.988 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP002  
About • Received ※ 06 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 26 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP007 Rolling Out a New Platform for Information System Architecture at SOLEIL 1301
 
  • G. Abeillé, Y.-M. Abiven, B. Gagey
    SOLEIL, Gif-sur-Yvette, France
  • P. Grojean, F. Quillien, C. Rognon, V. Szyndler
    Emoxa, Boulogne-Billancourt, France
 
The SOLEIL information system is a 20-year legacy of multiple software and IT solutions following constantly evolving business requirements. Many non-uniform, siloed information systems have accumulated, increasing IT complexity. The future of SOLEIL (SOLEIL II*) will be based on a new architecture with native support for continuous digital transformation and will enhance the user experience. Redesigning an information system to meet the challenges of synchrotron-based science requires a homogeneous and flexible approach. A new organizational setup is starting with the implementation of a transversal architectural committee. Its missions will be to set the foundation of architecture design principles and to encourage all project teams to apply them. The committee will support the building of architectural specifications and will drive all architecture gate reviews. Interoperability is a key pillar for SOLEIL II; therefore, a platform for synchronous and asynchronous inter-process communication is being built to connect existing and future systems, based on an event broker and an API manager. An implementation has been developed to interconnect our existing operational tools (CMMS** and our ITSM*** portal). Our current use case is a brand-new application dedicated to the samples' lifecycle, interconnected with various existing business applications. This paper will detail our holistic approach to the future evolution of our information system, made mandatory by the new requirements of SOLEIL II.
* SOLEIL II: Towards A Major Transformation of the Facility
** CMMS: Computerized Maintenance Management System
*** ITSM: Information Technology Service Management
 
poster icon Poster THPDP007 [1.397 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP007  
About • Received ※ 05 October 2023 — Revised ※ 25 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 16 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP013 EPICS Integration for Rapid Control Prototyping Hardware from Speedgoat 1317
 
  • L. Rossa, M. Brendike
    HZB, Berlin, Germany
 
To exploit the full potential of fourth-generation synchrotron sources, new beamline instrumentation is increasingly developed with a mechatronics approach [*,**,***]. This approach raises the need for Rapid Control Prototyping (RCP) and Hardware-in-the-Loop (HIL) simulations. To integrate such RCP and HIL systems into everyday beamline operation, we developed an interface from a Speedgoat real-time performance machine, programmable via MATLAB Simulink, to EPICS. The interface was designed to be simple to use yet flexible. The Simulink software developer uses dedicated Simulink blocks to export model information and real-time data into structured UDP Ethernet frames. The corresponding EPICS IOC listens to the UDP frames and auto-generates a database file matching the data stream from the Simulink model. The EPICS IOC can run either on a beamline measurement PC or, to keep things spatially close, on a mini PC (such as a Raspberry Pi) attached to the Speedgoat machine. An overview of the interface idea, architecture, and implementation, together with some simple examples, will be presented.
* https://doi.org/10.18429/JACoW-MEDSI2016-MOPE19
** https://doi.org/10.18429/JACoW-ICALEPCS2019-TUCPL05
*** https://orbi.uliege.be/bitstream/2268/262789/1/TUIO02.pdf
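The frame-decoding step on the IOC side can be sketched as follows. The actual Simulink-block frame layout is not described in the abstract, so this assumes a hypothetical layout of a little-endian signal count followed by that many float64 values:

```python
import struct

def encode_frame(values):
    """Pack a list of floats as <count><float64...> (assumed toy layout)."""
    return struct.pack("<I", len(values)) + struct.pack(f"<{len(values)}d", *values)

def decode_frame(frame: bytes):
    """Unpack a frame produced by encode_frame back into a list of floats,
    as the auto-generated EPICS database records might consume them."""
    (count,) = struct.unpack_from("<I", frame, 0)
    return list(struct.unpack_from(f"<{count}d", frame, 4))

frame = encode_frame([1.5, -2.0, 300.25])
print(decode_frame(frame))  # [1.5, -2.0, 300.25]
```

In the real system these frames would arrive on a UDP socket and be mapped onto EPICS records; here only the (assumed) serialization is shown.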
 
poster icon Poster THPDP013 [1.143 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP013  
About • Received ※ 29 September 2023 — Revised ※ 25 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 18 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP023 Evolution of Control System and PLC Integration at the European XFEL 1354
 
  • A. Samadli, T. Freyermuth, P. Gessler, G. Giovanetti, S. Hauf, D.G. Hickin, N. Mashayekh, A. Silenzi
    EuXFEL, Schenefeld, Germany
 
  The Karabo software framework* is a pluggable, distributed control system that offers rapid control feedback to meet the complex requirements of the European X-ray Free Electron Laser facility. Programmable Logic Controllers (PLC) using Beckhoff technology are the main hardware control interface system within the Karabo Control System. The communication between Karabo and PLC currently uses an in-house developed TCP/IP protocol using the same port for operational-related communications and self-description (the description of all available devices sent by PLC). While this simplifies the interface, it creates a notable load on the client and lacks certain features, such as a textual description of each command, property names coherent with the rest of the control system as well as state-awareness of available commands and properties**. To address these issues and to improve user experience, the new implementation will provide a comprehensive self-description, all delivered via a dedicated TCP port and serialized in a JSON format. A Python Asyncio implementation of the Karabo device responsible for message decoding, dispatching to and from the PLC, and establishing communication with relevant software devices in Karabo incorporates lessons learned from prior design decisions to support new updates and increase developer productivity.
* S. Hauf et al., "The Karabo distributed control system", J. Synchrotron Rad. 26.5 (2019), pp. 1448ff.
** T. Freyermuth et al., "Progression Towards Adaptability in the PLC Library at the EuXFEL", PCaPAC'22, pp. 102-106.
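A consumer of the new JSON self-description might look like the following sketch; the schema and field names ("devices", "commands", "properties", "description") are invented for illustration, as the concrete format is not given in the abstract:

```python
import json

def parse_self_description(raw: bytes):
    """Index a PLC self-description by device, keeping the textual
    description now attached to each command."""
    desc = json.loads(raw)
    devices = {}
    for dev in desc.get("devices", []):
        devices[dev["name"]] = {
            "commands": {c["name"]: c.get("description", "")
                         for c in dev.get("commands", [])},
            "properties": [p["name"] for p in dev.get("properties", [])],
        }
    return devices

# Hypothetical payload as read from the dedicated self-description TCP port.
raw = json.dumps({
    "devices": [{
        "name": "valve1",
        "commands": [{"name": "open", "description": "Open the valve"}],
        "properties": [{"name": "position"}],
    }]
}).encode()
print(parse_self_description(raw))
```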
 
poster icon Poster THPDP023 [0.338 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP023  
About • Received ※ 05 October 2023 — Revised ※ 25 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 18 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP026 Voltumna Linux: A Custom Distribution for (Embedded) Systems 1366
 
  • L. Pivetta, A.I. Bogani, G. Scalamera
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
 
In recent years, a thorough approach has been adopted to address the aging and the variability of control system platforms at Elettra Sincrotrone Trieste. The second generation of an in-house built operating system, named Voltumna Linux, based on an immutable-image approach, is now ready for production, supporting a number of commercial off-the-shelf embedded systems. The same approach is also perfectly suitable for rack-mount servers with large memory, which often require the inclusion of third-party or closed-source packages. Being entirely based on Git for revision control, Voltumna Linux brings a number of advantages: reproducibility of the product, ease of upgrading or downgrading complete systems, and centralized management and deployment of the user software, to name a few.  
poster icon Poster THPDP026 [1.482 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP026  
About • Received ※ 04 October 2023 — Revised ※ 25 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 15 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP028 Particle Swarm Optimization Techniques for Automatic Beam Transport at the LNL Superconducting Linac Accelerators 1370
 
  • M. Montis, L. Bellan
    INFN/LNL, Legnaro (PD), Italy
 
The superconducting quarter-wave-cavity hadron linac ALPI is the final acceleration stage at the Legnaro National Laboratories, and it is going to be used as the re-acceleration line of radioactive ion beams for the SPES (Selective Production of Exotic Species) project. The linac was designed in the '90s with the techniques available then, and it was one of the peak technologies of its kind in Europe at the time, controls included. In the last decade, controls for all the functional systems composing the accelerator have been upgraded to an EPICS-based solution. This upgrade has given us the opportunity to design and test new possible solutions for automatic beam transport. The work described in this paper is based on the experience and results (in terms of time, costs, and manpower) obtained using Particle Swarm Optimization (PSO) techniques for beam transport optimization applied to the ALPI accelerator. Due to the flexibility and robustness of this method, the tool will be extended to other parts of the facility.  
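A minimal PSO loop of the kind used for such optimization can be sketched as follows, with a simple quadratic bowl standing in for the measured beam-transport figure of merit (in the ALPI use case the objective would be evaluated on the machine):

```python
import random

def pso(objective, dim, n_particles=20, iters=60, lo=-5.0, hi=5.0, seed=42):
    """Minimal particle swarm optimiser: each particle remembers its personal
    best, the swarm shares a global best, and velocities blend inertia with
    attraction toward both."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in objective: a quadratic bowl with minimum 0 at the origin.
best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=2)
print(best_val)
```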
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP028  
About • Received ※ 06 September 2023 — Revised ※ 10 October 2023 — Accepted ※ 10 December 2023 — Issued ※ 16 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP029 ALPI-PIAVE Beam Transport Control System Upgrade at Legnaro National Laboratories 1374
 
  • M. Montis, F. Gelain, M.G. Giacchini
    INFN/LNL, Legnaro (PD), Italy
 
During the last decade, the control system employed for the ALPI and PIAVE accelerators was upgraded to the new EPICS-based framework as part of the new standards adopted for the SPES project under construction in Legnaro. The present beam transport control was fully completed in 2015 and has been in production since then. Following the power supply upgrade, and to optimize costs and maintenance time, the original controllers based on industrial PCs were substituted with dedicated serial-over-Ethernet devices and Virtual Machines (VMs). In this work we describe the solution designed and implemented for the ALPI-PIAVE accelerators.  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP029  
About • Received ※ 18 September 2023 — Revised ※ 10 October 2023 — Accepted ※ 18 December 2023 — Issued ※ 21 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP030 ESS Drift Tube Linac Control System Commissioning: Results and Lessons Learned 1377
 
  • M. Montis, L. Antoniazzi, A. Baldo, M.G. Giacchini
    INFN/LNL, Legnaro (PD), Italy
  • A. Rizzo
    ESS, Lund, Sweden
 
  The European Spallation Source (ESS) will be a neutron source driven by a proton linac with an expected beam power of 5 MW. Designed and implemented by INFN-LNL, the Drift Tube Linac (DTL) control system is based on the EPICS framework, as indicated by the project requirements. This paper describes the results of the first part of the control system commissioning stage in 2022, when the INFN and ESS teams were involved in the final tests on site. This phase was the first step toward a complete deployment of the control system, whose installation is composed of three sequential stages, according to the apparatus commissioning schedule. In this scenario, the first Site Acceptance Tests (SAT) and Site Integrated Tests (SIT) were crucial, and their results were the milestones for the other stages: the lessons learned can be important to speed up the future integration, calibration, and tuning of such a complex control system.  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP030  
About • Received ※ 18 September 2023 — Revised ※ 10 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 26 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP037 The Alarm System at HLS-II 1399
 
  • S. Xu, X.K. Sun
    USTC/NSRL, Hefei, Anhui, People’s Republic of China
 
  The control system of the Hefei Light Source II (HLS-II) is a distributed system based on the Experimental Physics and Industrial Control System (EPICS). The alarm system of HLS-II is responsible for monitoring the alarm states of the facility and distributing alarm messages in a timely manner. Its monitoring range covers the devices of the HLS-II technical groups as well as the server platform. Zabbix, an open-source monitoring tool, watches the server platform: custom metrics are collected by external scripts written in Python, and automated agent deployment discovers the monitored servers running Zabbix agents. The alarm distribution strategy for the front-end devices is designed to overcome alarm floods. The alarm system provides multiple messaging channels to notify the responsible staff, including WeChat, SMS, and a web-based GUI. It has been deployed since December 2022, and operational experience shows that it helps operators troubleshoot problems efficiently, improving the availability of HLS-II.  
poster icon Poster THPDP037 [0.653 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP037  
About • Received ※ 30 September 2023 — Accepted ※ 08 December 2023 — Issued ※ 13 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
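The alarm distribution strategy itself is not detailed in the abstract; one common way to mitigate alarm floods, shown purely as an illustration (the class name and holdoff policy are assumptions, not the HLS-II design), is per-source rate limiting:

```python
import time

class AlarmThrottle:
    """Suppress repeated alarms from the same source within a holdoff window.

    A generic flood-mitigation sketch: the first occurrence of a
    (source, message) pair is always delivered; identical repeats inside
    the holdoff window are dropped.
    """

    def __init__(self, holdoff_s=60.0, clock=time.monotonic):
        self.holdoff_s = holdoff_s
        self.clock = clock          # injectable for testing
        self._last_sent = {}        # (source, message) -> last delivery time

    def should_send(self, source, message):
        key = (source, message)
        now = self.clock()
        last = self._last_sent.get(key)
        if last is not None and now - last < self.holdoff_s:
            return False            # duplicate inside the window: suppress
        self._last_sent[key] = now
        return True
```

Real strategies typically add severity escalation and summary ("N alarms suppressed") messages on top of this basic windowing.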
 
THPDP051 LLRF and Timing System Integration at ESS 1426
 
  • G.S. Fedel, A.A. Gorzawski, J.J. Jamróz, J.P.S. Martins, N. Milas, A.P. Persson, A.M. Svensson, R.H. Zeng
    ESS, Lund, Sweden
 
  The Low Level Radio Frequency (LLRF) system is an important part of a spallation source facility such as ESS. LLRF is commonly operated with many different setups depending on the aim: preparation, calibration, conditioning, commissioning, and others. These setups are strongly connected to another important accelerator system: the timing system. This paper presents how we implemented the integration between the LLRF and timing systems at ESS within the scope of the control system. The integration of these two systems provides several important features: different ways to trigger the RF system (synchronized or not with other systems), definition of the RF output based on the characteristics of the expected beam, re-configuration of the LLRF depending on the timing setup, and more. This integration was developed on both ends, LLRF and timing, and is mostly concentrated in the control system layer based on EPICS. Dealing with the different scenarios and synchronicity requirements, while considering all the software, hardware and firmware involved, are some of the challenges of this integration. The result of this work was used during the ESS accelerator commissioning in 2022 and will be used in the next commissioning in 2023.  
poster icon Poster THPDP051 [0.993 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP051  
About • Received ※ 05 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 12 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP061 Python Expert Applications for Large Beam Instrumentation Systems at CERN 1460
 
  • J. Martínez Samblas, E. Calvo Giraldo, M. Gonzalez-Berges, M. Krupa
    CERN, Meyrin, Switzerland
 
  In recent years, beam diagnostics systems with increasingly large numbers of monitors, and systems handling vast amounts of data, have been deployed at CERN. Their regular operation and maintenance pose a significant challenge. These systems have to run 24/7 when the accelerators are operating, and the quality of the data they produce has to be guaranteed. This paper presents our experience developing applications in Python which are used to ensure the readiness and availability of these large systems. The paper will first give a brief introduction to the different functionalities required, before presenting the chosen architectural design. Although the applications work mostly with online data, logged data is also used in some cases. For the implementation, standard Python libraries (e.g. PyQt, pandas, NumPy) have been used, and given the demanding performance requirements of these applications, several optimisations have had to be introduced. Feedback from users, collected during the first year’s run after CERN’s Long Shutdown period and the 2023 LHC commissioning, will also be presented. Finally, several ideas for future work will be described.  
poster icon Poster THPDP061 [2.010 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP061  
About • Received ※ 05 October 2023 — Revised ※ 26 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 21 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP067 Towards a Flexible and Secure Python Package Repository Service 1489
 
  • I. Sinkarenko, B. Copy, P.J. Elson, F. Iannaccone, W.F. Koorn
    CERN, Meyrin, Switzerland
 
  The use of 3rd-party and internal software packages has become a crucial part of modern software development. Not only does it enable faster development, but it also facilitates sharing of common components, which is often necessary for ensuring the correctness and robustness of developed software. To enable this workflow, a package repository is needed to store internal packages and provide a proxy to 3rd-party repository services. This is particularly important for systems that operate in constrained networks, as is common for accelerator control systems. Despite its benefits, installing arbitrary software from a 3rd-party package repository can pose security and operational risks. It is therefore crucial to implement effective security measures, such as usage logging, package moderation and security scanning. However, experience at CERN has shown that off-the-shelf tools for running a flexible Python package repository service are not satisfactory: for instance, the dependency confusion attack first published in 2021 has still not been fully addressed by the main open-source repository services. An in-house development was conducted to address this, using a modular approach that enables the creation of a powerful and security-friendly repository service from small components. This paper describes the components that exist, demonstrates their capabilities within CERN and discusses future plans. The solution is not CERN-specific and is likely to be relevant to other institutes facing comparable challenges.  
poster icon Poster THPDP067 [0.510 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP067  
About • Received ※ 05 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 16 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
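The dependency-confusion defence mentioned above reduces to a routing rule: a name registered internally must never be resolved from the public index, so a same-named package uploaded to a public repository can never shadow an internal one. A hypothetical sketch (function and parameter names are invented for illustration, not taken from the CERN service):

```python
def resolve_index(package, internal_packages, internal_index, public_index):
    """Decide which index may serve a package request.

    Names registered internally are *only* ever served from the internal
    index; everything else falls through to the public proxy. Comparison is
    case-insensitive, mirroring how package names are normalised.
    """
    if package.lower() in {p.lower() for p in internal_packages}:
        return internal_index
    return public_index
```

A production service would additionally normalise `-`/`_`/`.` in names and log every resolution decision for auditing.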
 
THPDP068 Implementing High Performance & Highly Reliable Time Series Acquisition Software for the CERN-Wide Accelerator Data Logging Service 1494
 
  • M. Sobieszek, V. Baggiolini, R. Mucha, C. Roderick, P. Sowinski, J.P. Wozniak
    CERN, Meyrin, Switzerland
 
  The CERN Accelerator Data Logging Service (NXCALS) stores data generated by the accelerator infrastructure and beam related devices. This amounts to 3.5 TB of data per day, coming from more than 2.5 million signals from heterogeneous systems at various frequencies. Around 85% of this data is transmitted through the Controls Middleware (CMW) infrastructure. To reliably gather such volumes of data, the acquisition system must be highly available, resilient and robust. It also has to be highly efficient and easily scalable, given the regularly growing data rates and volumes, particularly for the increases expected to be produced by the future High Luminosity LHC. This paper describes the NXCALS time series acquisition software, known as Data Sources. System architecture, design choices, and recovery solutions for various failure scenarios (e.g. network disruptions or cluster split-brain problems) will be covered. Technical implementation details will be discussed, covering the clustering of Akka Actors collecting data from tens of thousands of CMW devices and sharing the lessons learned. The NXCALS system has been operational since 2018 and has demonstrated the capability to fulfil all aforementioned characteristics, while also ensuring self-healing capabilities and no data losses during redeployments. The engineering challenge, architecture, lessons learned, and the implementation of this acquisition system are not CERN-specific and are therefore relevant to other institutes facing comparable challenges.  
poster icon Poster THPDP068 [2.960 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP068  
About • Received ※ 06 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 20 November 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP069 A Generic Real-Time Software in C++ for Digital Camera-Based Acquisition Systems at CERN 1499
 
  • A. Topaloudis, E. Bravin, S. Burger, S. Jackson, S. Mazzoni, E. Poimenidou, E. Senes
    CERN, Meyrin, Switzerland
 
  Until recently, most of CERN’s beam visualisation systems have been based on increasingly obsolescent analogue cameras. Hence, there is an ongoing campaign to replace old cameras with, or install new, digital equivalents. There are many challenges associated with providing a homogenised data acquisition solution for the various visualisation systems in an accelerator complex as diverse as CERN’s. However, generic real-time software written in C++ has been developed and already installed in several locations to control such systems. This paper describes the software and the additional tools that have been developed to exploit the acquisition systems, including a Graphical User Interface (GUI) in Java/Swing and web fixed displays. Furthermore, it analyses the specific challenges of each use case and the solutions chosen to resolve them, including any resulting performance limitations.  
poster icon Poster THPDP069 [1.787 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP069  
About • Received ※ 05 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 18 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP070 Building, Deploying and Provisioning Embedded Operating Systems at PSI 1505
 
  • D. Anicic
    PSI, Villigen PSI, Switzerland
 
  In the scope of the Swiss Light Source (SLS) upgrade project, SLS 2.0, at the Paul Scherrer Institute (PSI), two New Processing Platforms (NPP), both running RT Linux, have been added to the portfolio of existing VxWorks and Linux VME systems. At the lower end we have picked a variety of boards, all based on the Xilinx Zynq UltraScale+ MPSoC. Even though these devices have less processing power, the built-in FPGA and Real-time CPU (RPU) let them deliver strict, hard-RT performance. For high-throughput, soft-RT applications we went for Intel Xeon based single-board PCs in the CPCI-S form factor. All platforms are operated as diskless systems. For the Zynq systems we have decided to build a Yocto Kirkstone Linux distribution in-house, whereas for the Xeon PCs we employ off-the-shelf Debian 10 Buster. In addition to these new NPP systems, in the scope of our new EtherCAT-based motion project, we have decided to use small x86_64 servers, which will run the same Debian distribution as NPP. In this contribution we present the selected Operating Systems (OS) and discuss how we build, deploy and provision them for the diskless clients.  
poster icon Poster THPDP070 [0.758 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP070  
About • Received ※ 02 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 19 October 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP073 Scilog: A Flexible Logbook System for Experiment Data Management 1512
 
  • K. Wakonig, A. Ashton, C. Minotti
    PSI, Villigen PSI, Switzerland
 
  Capturing both raw data and metadata during an experiment is of the utmost importance, as it provides valuable context for the decisions made during the experiment and the acquisition strategy. However, logbooks often lack seamless integration with facility-specific services such as authentication and data acquisition systems, and can prove to be a burden, particularly in high-pressure situations during experiments. To address these challenges, SciLog has been developed as a logbook system built on MongoDB, LoopBack, and Angular. Its primary objective is to provide a flexible and extensible environment, as well as a user-friendly interface. SciLog relies on atomic entries in a NoSQL database that can be easily queried, sorted, and displayed according to the user’s requirements. The integration with facility-specific authorization systems and the automatic import of new experiment proposals enable a user experience that is specifically tailored to the challenging environment of experiments conducted at large research facilities. The system is currently in use during beam time at the Paul Scherrer Institut, where it is collecting valuable feedback from scientists to enhance its capabilities.  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP073  
About • Received ※ 05 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 11 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP076 Stream-based Virtual Device Simulation for Enhanced EPICS Integration and Automated Testing 1522
 
  • M. Lukaszewski, K. Klys
    E9, London, United Kingdom
 
  Integrating devices into the Experimental Physics and Industrial Control System (EPICS) can often take a suboptimal path due to discrepancies between available documentation and real device behaviour. To address this issue, we introduce "vd" (virtual device), a software tool for simulating stream-based virtual devices that enables testing communication without connecting to the real device. It is focused on the communication layer rather than the device’s underlying physics. The vd listens on a TCP port for client commands and employs ASCII-based byte stream communication. It offers easy configuration through a user-friendly config file containing all the information necessary to simulate a device, including parameters for the simulated device and the information exchanged via TCP, such as commands and queries related to each parameter. Defining the data exchange protocol through a configuration file allows users to simulate various devices without modifying the simulator’s code. The vd’s architecture enables its use as a library for creating advanced simulations, making it a tool for testing and validating device communication and integration into EPICS. Furthermore, the vd can be integrated into CI pipelines, facilitating automated testing and validation of device communication, ultimately improving the quality of the produced control system.  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP076  
About • Received ※ 06 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 12 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
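The paper's vd code is not reproduced here, but the idea of an ASCII command/response device behind a TCP socket, with the protocol held in a data table rather than in code, can be sketched in a heavily simplified form. All commands and parameter names below are invented for illustration:

```python
import socket
import socketserver
import threading

# Hypothetical minimal "protocol table": the real vd reads this from a config
# file; here a dict maps parameter names to their current string values.
PARAMS = {"VOLT": "0.0", "IDN": "SIM-PSU,v1"}

class DeviceHandler(socketserver.StreamRequestHandler):
    """Answer newline-terminated ASCII commands: 'NAME?' queries a parameter,
    'NAME value' sets it; anything else yields 'ERR'."""
    def handle(self):
        for raw in self.rfile:
            cmd = raw.decode("ascii").strip()
            if cmd.endswith("?"):                       # query, e.g. "VOLT?"
                reply = PARAMS.get(cmd[:-1], "ERR")
            elif " " in cmd:                            # set, e.g. "VOLT 3.14"
                name, value = cmd.split(" ", 1)
                if name in PARAMS:
                    PARAMS[name] = value
                    reply = "OK"
                else:
                    reply = "ERR"
            else:
                reply = "ERR"
            self.wfile.write((reply + "\n").encode("ascii"))

def start_simulator(host="127.0.0.1", port=0):
    """Start the simulator on a background thread; port=0 picks a free port."""
    server = socketserver.ThreadingTCPServer((host, port), DeviceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

An EPICS IOC (e.g. one using StreamDevice) can then be pointed at this port and exercises the same protocol path as it would against real hardware.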
 
THPDP077 Tango Integration of the SKA-Low Power and Signal Distribution System 1526
 
  • E.L. Arandjelovic, U.K. Pedersen
    OSL, St Ives, Cambridgeshire, United Kingdom
  • E.L. Arandjelovic, D. Devereux, U.K. Pedersen
    SKAO, Macclesfield, United Kingdom
  • D. Devereux
    CSIRO, Clayton, Australia
  • J. Engelbrecht
    VIVO, Somerset West, South Africa
 
  Funding: Square Kilometre Array Observatory
The Power and Signal Distribution System (PaSD) is a key component of the SKA-Low telescope, responsible for control and monitoring of local power to the electronic components of the RF signal chain for the antennas, and collecting the RF signals for transmission to the Central Processing Facility. The system comprises "SMART boxes" (SMART: Small Modular Aggregation and RFoF Trunk) which each connect directly to around 10 antennas to provide local monitoring and control, and one Field Node Distribution Hub (FNDH) per station which distributes power to all the SMART boxes and provides a communications gateway as well as additional local monitoring. All communication to the SMART boxes is funnelled through the FNDH on a multi-drop serial bus using the Modbus ASCII protocol. This paper will describe how the PaSD will be integrated into the Tango-based SKA-Low Monitoring Control and Calibration Subsystem (MCCS) software, including the facility for a drop-in Python simulator which can be used to test the software.
 
poster icon Poster THPDP077 [20.237 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP077  
About • Received ※ 04 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 14 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
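Modbus ASCII framing, used on the PaSD multi-drop serial bus, is well specified: each frame is a ':' start character, the hex-encoded payload bytes, a longitudinal redundancy check (LRC), and CRLF. A minimal frame builder, unrelated to the actual MCCS code, shows the mechanics:

```python
def lrc(data: bytes) -> int:
    """Longitudinal redundancy check used by Modbus ASCII: the two's
    complement of the 8-bit sum of the raw (unencoded) message bytes."""
    return (-sum(data)) & 0xFF

def build_frame(unit: int, pdu: bytes) -> bytes:
    """Assemble a Modbus ASCII frame: ':' + hex-encoded unit and PDU bytes,
    followed by the two-character LRC and a CRLF terminator."""
    payload = bytes([unit]) + pdu
    return (b":" + payload.hex().upper().encode("ascii")
            + format(lrc(payload), "02X").encode("ascii") + b"\r\n")
```

For example, a "read holding registers" request (function 0x03, address 0, count 1) to unit 1 encodes as `:010300000001FB\r\n`; the receiver recomputes the LRC over the decoded bytes to detect corruption on the serial bus.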
 
THPDP079 Integration of Bespoke Daq Software with Tango Controls in the SKAO Software Framework: From Problems to Progress 1533
 
  • A.J. Clemens
    OSL, St Ives, Cambridgeshire, United Kingdom
  • D. Devereux
    CSIRO, Clayton, Australia
  • D. Devereux
    SKAO, Macclesfield, United Kingdom
  • A. Magro
    ISSA, Msida, Malta
 
  The Square Kilometre Array Observatory (SKAO) project is an international effort to build two radio interferometers, in South Africa and Australia, to form one Observatory monitored and controlled from the global headquarters at Jodrell Bank in the United Kingdom. The Monitoring, Control and Calibration System (MCCS) is the "front-end" management software for the Low telescope, which provides monitoring and control capabilities as well as implementing calibration processes and providing complex diagnostics support. Once completed, the Low telescope will boast over 130,000 individual log-periodic antennas, so the scale of the data generated will be huge. It is estimated that an average of 8 terabits per second of data will be transferred from the SKAO telescopes in both countries to the Central Processing Facilities (CPFs) located at the telescope sites. In order to keep pace with this magnitude of data production, an equally capable data acquisition (DAQ) system is required. This paper outlines the challenges encountered and solutions adopted while incorporating a bespoke DAQ library within the SKAO’s Kubernetes-Tango ecosystem in the MCCS subsystem, in order to allow high-speed data capture whilst maintaining a consistent deployment experience.  
poster icon Poster THPDP079 [0.981 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP079  
About • Received ※ 02 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 19 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP087 LCLS-II Controls Software Architecture for the Wire Scan Diagnostics 1556
 
  • N. Balakrishnan, J.D. Bong, A.S. Fisher, B.T. Jacobson, L. Sapozhnikov
    SLAC, Menlo Park, California, USA
 
  Funding: This work was supported by Department of Energy, Office of Basic Energy Sciences, contract DE-AC02-76SF00515
The Superconducting (SC) Linac Coherent Light Source II (LCLS-II) facility at SLAC is capable of delivering an electron beam at a rate of up to 1 MHz. This high rate requires that the processing algorithms and the data exchanges with other high-rate systems be implemented in FPGA technology. For LCLS-II, SLAC has deployed a common platform solution (hardware, firmware, software) which is used by the timing, machine protection and diagnostics systems. The wire scanner diagnostic system uses this solution to acquire beam-synchronous, time-stamped readings of wire scanner position and beam loss during the scan for each individual bunch. This paper explores the software architecture and control system integration for the LCLS-II wire scanners using the common platform solution.
 
poster icon Poster THPDP087 [1.079 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP087  
About • Received ※ 06 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 06 December 2023 — Issued ※ 09 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THSDSC03 Integrate EPICS 7 with MATLAB Using PVAccess for Python (P4P) Module 1580
 
  • K.T. Kim, J.J. Bellister, K.H. Kim, E. Williams, S. Zelazny
    SLAC, Menlo Park, California, USA
 
  MATLAB is essential for accelerator scientists engaged in data analysis and processing across diverse fields, including particle physics experiments, synchrotron light sources, XFELs, and telescopes, due to its extensive range of built-in functions and tools. Scientists also depend on EPICS 7* to control and monitor complex systems. Since Python has gained popularity in the scientific community and many facilities have been migrating towards it, SLAC has developed matpva, a Python interface to integrate EPICS 7 with MATLAB. Matpva uses the Python P4P module** and EPICS 7 to offer a robust and reliable interface for MATLAB users who employ EPICS 7. The EPICS 7 PVAccess API allows higher-level scientific applications to get/set/monitor simple and complex structures from an EPICS 7-based control system. Moreover, matpva simplifies the process by handling the data type conversion from Python to MATLAB, making it easier for researchers to focus on their analyses and innovative ideas instead of technical data conversion. By leveraging matpva, researchers can work more efficiently and make discoveries in diverse fields, including particle physics and astronomy.
* See https://epics-controls.org/resources-and-support/base/epics-7/ to learn more about EPICS 7
** Visit https://mdavidsaver.github.io/p4p/ to learn more about the P4P
 
poster icon Poster THSDSC03 [0.865 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THSDSC03  
About • Received ※ 06 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 06 December 2023 — Issued ※ 15 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
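matpva's own conversion code is not shown in the abstract; the following hypothetical helper illustrates the general kind of transformation involved, flattening a nested structure (standing in here for the nested fields of an EPICS 7 normative type as delivered by P4P) into flat, dotted field names that map naturally onto MATLAB struct fields:

```python
def flatten_structure(value, prefix=""):
    """Recursively flatten a nested dict into dotted field names.

    Illustrative only: a plain dict stands in for a PVAccess structure, and
    the real matpva bridges P4P value objects into MATLAB via its Python
    interop rather than through this helper.
    """
    flat = {}
    for name, field in value.items():
        key = f"{prefix}.{name}" if prefix else name
        if isinstance(field, dict):
            flat.update(flatten_structure(field, key))   # descend into substructure
        else:
            flat[key] = field                             # scalar or array leaf
    return flat
```

For instance, an NTScalar-like value with `value`, `alarm` and `timeStamp` substructures flattens to keys such as `alarm.severity` and `timeStamp.nanoseconds`.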
 
THSDSC04 CamServer: Stream Processing at SwissFEL and SLS 2.0 1585
 
  • A. Gobbo, A. Babic
    PSI, Villigen PSI, Switzerland
 
  CamServer is a Python package for data stream processing developed at Paul Scherrer Institute (PSI). It is a key component of SwissFEL’s data acquisition, where it is deployed on a cluster of servers and used for displaying and processing images from all cameras. It scales linearly with the number of servers and is capable of handling multiple high-resolution cameras at 100 Hz, as well as a variety of data types and sources. The processing unit, called a pipeline, runs in a private process that can be either permanent or spawned on demand. Pipelines consume and produce ZMQ streams, but input data can be arbitrary using an adapter layer (e.g. EPICS). A proxy server handles requests and creates pipelines on the cluster’s worker nodes according to rules. Some processing scripts are available out of the box (e.g. calculation of standard beam metrics) but users can upload custom ones. The system is managed via its REST API, using a client library or a GUI application. CamServer’s output data streams are consumed by a variety of client types such as data storage, image visualization, monitoring and DAQ applications. This work describes the use of CamServer, the status of the SwissFEL’s cluster and the development roadmap with plans for SLS 2.0.  
poster icon Poster THSDSC04 [1.276 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THSDSC04  
About • Received ※ 03 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 06 December 2023 — Issued ※ 19 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
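The "standard beam metrics" mentioned above typically include total intensity, centroid and RMS size of the camera image. A minimal, dependency-free sketch of such a per-frame computation follows; it is illustrative only, as the production pipelines operate on ZMQ image streams and would use NumPy for speed:

```python
def beam_metrics(image):
    """Compute simple beam metrics from a 2-D image given as a list of rows:
    total intensity, intensity-weighted centroid (x, y) and RMS sizes (sx, sy).
    Returns None for position/size when the frame is empty."""
    total = x_sum = y_sum = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            total += v
            x_sum += v * x
            y_sum += v * y
    if total == 0:
        return {"intensity": 0.0, "x": None, "y": None, "sx": None, "sy": None}
    cx, cy = x_sum / total, y_sum / total
    sx2 = sy2 = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            sx2 += v * (x - cx) ** 2
            sy2 += v * (y - cy) ** 2
    return {"intensity": total, "x": cx, "y": cy,
            "sx": (sx2 / total) ** 0.5, "sy": (sy2 / total) ** 0.5}
```

In a pipeline, a function like this would be called once per received frame and its result published on the outgoing stream alongside the image.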
 
THSDSC05 The SKAO Engineering Data Archive: From Basic Design to Prototype Deployments in Kubernetes 1590
 
  • T. Juerges
    SKAO, Macclesfield, United Kingdom
  • A. Dange
    Tata Consultancy Services, Pune, India
 
  During its construction and production life cycles, the Square Kilometre Array Observatory (SKAO) will generate non-scientific, i.e. engineering, data. The sources of the engineering data are either hardware devices or software programs. Thanks to the Tango Controls software framework, the engineering data can be automatically stored in a relational database, which SKAO refers to as the Engineering Data Archive (EDA). Making the data in the EDA accessible and available to engineers and users in the observatory is as important as storing the data itself. Possible use cases for the data are verification of systems under test, performance evaluation of systems under test, predictive maintenance and general performance monitoring over time. We therefore tried to build on the knowledge that other research facilities in the Tango Controls collaboration already gained when they designed, implemented, deployed and ran their engineering data archives. SKAO implemented a prototype for its EDA that leverages several open-source software packages, with Tango Controls’ HDB++, the TimescaleDB time series database and Kubernetes at its core. In this overview we will answer the immediate question "But why do we not just do what others are doing?" and explain the reasoning behind our choices in the design and in the implementation.  
poster icon Poster THSDSC05 [3.062 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THSDSC05  
About • Received ※ 05 October 2023 — Revised ※ 27 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 11 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)