Paper | Title | Other Keywords | Page |
---|---|---|---|
TU1BCO02 | Integrating System Knowledge in Unsupervised Anomaly Detection Algorithms for Simulation-Based Failure Prediction of Electronic Circuits | simulation, ISOL, electron, radiation | 249 |
Funding: This work has been sponsored by the Wolfgang Gentner Programme of the German Federal Ministry of Education and Research (grant no. 13E18CHA).
Machine learning algorithms enable failure prediction of large-scale, distributed systems using historical time-series datasets. Although unsupervised learning algorithms represent a possibility to detect an evolving variety of anomalies, they do not provide links between detected data events and system failures. Additional system knowledge is required for machine learning algorithms to determine the nature of detected anomalies, which may represent either healthy system behavior or failure precursors. However, knowledge of failure behavior is expensive to obtain and might only be available upon pre-selection of anomalous system states using unsupervised algorithms. Moreover, system knowledge obtained from the evaluation of system states needs to be appropriately provided to the algorithms to enable performance improvements. In this paper, we present an approach to efficiently configure the integration of system knowledge into unsupervised anomaly detection algorithms for failure prediction. The methodology is based on simulations of failure modes of electronic circuits. Triggering system failures based on synthetically generated failure behaviors enables analysis of the detectability of failures and generation of different types of datasets containing system knowledge. In this way, the requirements for the type and extent of system knowledge from different sources can be determined, and suitable algorithms allowing the integration of additional data can be identified.
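The interplay between unsupervised detection and labelled system knowledge can be illustrated with a small sketch; this is not the paper's method, and the data, feature count, and scikit-learn detector are illustrative assumptions. An IsolationForest is trained on unlabelled telemetry, while a handful of expert-reviewed events is used only to choose the alert threshold.

```python
# Minimal sketch: unsupervised detector + labelled "system knowledge" used
# only for threshold selection. All data and parameters are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
telemetry = rng.normal(size=(5000, 8))               # unlabelled monitoring data
detector = IsolationForest(random_state=0).fit(telemetry)

# Expert-reviewed samples: 1 = failure precursor, 0 = benign anomaly / healthy
reviewed = rng.normal(size=(50, 8))
labels = rng.integers(0, 2, size=50)

scores = detector.score_samples(reviewed)            # lower = more anomalous
candidates = np.quantile(scores, np.linspace(0.05, 0.95, 19))
best = max(candidates, key=lambda t: np.mean((scores < t) == labels))
print(f"raise an alert when score < {best:.3f}")
```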
Slides TU1BCO02 [2.541 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU1BCO02
About • Received ※ 02 October 2023 — Accepted ※ 12 October 2023 — Issued ※ 25 October 2023
TU1BCO03 | Systems Modelling, AI/ML Algorithms Applied to Control Systems | hardware, software, controls, software-component | 257 |
Funding: National Research Foundation (South Africa)
The 64-receptor radio telescope in the Karoo, South Africa (with 20 more receptors being built) comprises a large number of devices and components connected to the Control-and-Monitoring (CAM) system via the Karoo Array Telescope Communication Protocol (KATCP). KATCP is used extensively for internal communications between CAM components and other subsystems. A KATCP interface exposes requests and sensors; sampling strategies are set on sensors, ranging from several updates per second to infrequent on-change updates. The sensor samples are of different types, from small integers to text fields. The samples and associated timestamps are permanently stored and made available for scientists, engineers and operators to query and analyze. This is a presentation on how to apply machine learning tools that utilize data-driven algorithms and statistical models to analyze sensor data sets and then draw inferences from identified patterns or make predictions based on them. The algorithms learn from the sensor data as they run against it, unlike traditional rules-based analytics systems that follow explicit instructions. Since this involves data preprocessing, we will go through how the MeerKAT telescope data storage infrastructure (called Katstore) manages the variety, velocity, and volume of this data.
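As a toy illustration of data-driven inference on archived sensor samples (not the MeerKAT/Katstore tooling itself), the sketch below flags outliers in a stored time series with a rolling z-score; the column names, window, and threshold are assumptions.

```python
# Minimal sketch: rolling z-score over an archived sensor series.
import pandas as pd

samples = pd.DataFrame({
    "timestamp": pd.date_range("2023-01-01", periods=1000, freq="min"),
    "value": 20.0 + pd.Series(range(1000)).mod(7) * 0.1,
}).set_index("timestamp")
samples.iloc[500, 0] = 35.0                  # inject an artificial spike

window = samples["value"].rolling("60min")
zscore = (samples["value"] - window.mean()) / window.std()
anomalies = samples[zscore.abs() > 4.0]
print(anomalies)
```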
Slides TU1BCO03 [1.647 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU1BCO03
About • Received ※ 06 October 2023 — Revised ※ 09 November 2023 — Accepted ※ 14 December 2023 — Issued ※ 21 December 2023
TUMBCMO39 | Enhanced Maintenance and Availability of Handling Equipment using IIoT Technologies | controls, operation, network, framework | 462 |
CERN currently houses 6000 handling equipment units categorized into 40 different families, such as electric overhead travelling cranes (EOT), hoists, trucks, and forklifts. These assets are spread throughout the CERN campus, on the surface (indoor and outdoor) as well as in underground tunnels and experimental caverns. Partial access to some areas, a large area to cover, thousands of units, radiation, and diverse needs among handling equipment make maintenance a cumbersome task. Without automatic monitoring solutions, the handling engineering team must conduct periodic on-site inspections to identify equipment in need of regulatory maintenance, leading to unnecessary inspections of underused equipment in hard-to-reach environments, as well as reliability risks for overused equipment between two technical visits. To overcome these challenges, a remote monitoring solution was introduced to extend equipment lifetime and perform optimal maintenance. This paper describes the implementation of a remote monitoring solution integrating IIoT (Industrial Internet of Things) technologies with the existing CERN control infrastructure and frameworks for control systems (UNICOS and WinCC OA). At present, over 600 handling equipment units are being monitored successfully, and this number will grow thanks to the scalability this solution offers.
Slides TUMBCMO39 [0.560 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO39
About • Received ※ 03 October 2023 — Accepted ※ 28 November 2023 — Issued ※ 19 December 2023
TUPDP045 | Monitoring the SKA Infrastructure for CICD | target, database, TANGO, distributed | 622 |
Funding: INAF
The Square Kilometre Array (SKA) is an international effort to build two radio interferometers, in South Africa and Australia, forming one Observatory monitored and controlled from the global headquarters (GHQ) based in the United Kingdom at Jodrell Bank. The selected solution for monitoring the SKA CICD (continuous integration and continuous deployment) infrastructure is Prometheus, complemented by Thanos. Thanos provides high availability, resilience, and long-term storage retention for monitoring data. For data visualisation, Grafana emerged as an important tool for displaying data, enabling reasoning about and debugging of particular aspects of the infrastructure in place. In this paper, the monitoring platform is presented while considering quality aspects such as performance, scalability, and data preservation.
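Monitoring data collected this way can also be read back programmatically through the Prometheus HTTP API (the same API is served by a Thanos Query front end); the host, port, and metric selector below are illustrative assumptions.

```python
# Minimal sketch: instant query against the Prometheus/Thanos HTTP API.
import requests

resp = requests.get(
    "http://prometheus.example.org:9090/api/v1/query",   # hypothetical endpoint
    params={"query": 'up{job="gitlab-runner"}'},          # hypothetical selector
    timeout=10,
)
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    timestamp, value = result["value"]
    print(result["metric"], timestamp, value)
```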
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP045
About • Received ※ 27 September 2023 — Revised ※ 18 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 19 December 2023
TUPDP069 | AVN Radio Telescope Conversion Software Systems | controls, software, interface, network | 661 |
The African VLBI Network (AVN) is a proposed network of radio telescopes involving 8 partner countries across the African continent. The AVN project aims to convert redundant satellite data communications ground stations, where viable, to radio telescopes. One of the main objectives of AVN is human capital development in Science, Technology, Engineering and Mathematics (STEM) with regard to radio astronomy in SKA (Square Kilometre Array) African partner countries. This paper outlines the software systems used for control and monitoring of a single radio telescope. The control and monitoring software consists of the User Interface, Antenna Control System, Receiver Control System, and monitoring of all proprietary and off-the-shelf (OTS) components. All proprietary and OTS interfaces are converted to the open KATCP protocol.
Poster TUPDP069 [10.698 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP069
About • Received ※ 20 September 2023 — Revised ※ 09 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 28 October 2023
TUPDP073 | CAN Monitoring Software for an Antenna Positioner Emulator | software, controls, network, hardware | 673 |
Funding: South African Radio Astronomy Observatory
The original Controller Area Network (CAN) protocol was developed for control and monitoring within vehicular systems. It has since been expanded, and today the Open CAN bus protocol is a leading protocol used within servo-control systems for telescope positioning systems. Development of a CAN bus monitoring component is currently underway. This component forms part of a greater software package designed for an Antenna Positioner Emulator (APE), which is under construction. The APE will mimic movement of a MeerKAT antenna in both the azimuth and elevation axes, as well as the positioning of the receiver indexer. It will be fitted with the same servo drives and controller hardware as MeerKAT; however, there will be no main dish, sub-reflector, or receiver. The APE monitoring software will receive data from a variety of communication protocols used by different devices within the MeerKAT control system, including CAN, Profibus, EnDat, Resolver, and Hiperface data. The monitoring software will run on a BeagleBone Black (BBB) fitted with an ARM processor. Local and remote logging capabilities are provided, along with a user interface to initiate the reception of data. The CAN component makes use of the standard SocketCAN driver, which is shipped as part of the Linux kernel. Initial laboratory tests have been conducted using a CAN system bus adapter that transmits previously captured telescope data. The bespoke CAN receiver hardware connects in-line on the CAN bus and passes the data to a BBB, where the monitoring software logs the data.
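As a minimal sketch of the SocketCAN usage mentioned above, the snippet below reads classic CAN frames from a raw CAN socket on Linux; the interface name can0 is an assumption, and the unpacking follows the standard 16-byte struct can_frame layout.

```python
# Minimal sketch: read and print raw CAN frames via SocketCAN.
import socket
import struct

CAN_FRAME_FMT = "<IB3x8s"                    # can_id, dlc, padding, data[8]
CAN_FRAME_SIZE = struct.calcsize(CAN_FRAME_FMT)

sock = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
sock.bind(("can0",))                         # assumed interface name

while True:
    frame = sock.recv(CAN_FRAME_SIZE)
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, frame)
    print(f"id=0x{can_id:X} dlc={dlc} data={data[:dlc].hex()}")
```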
Poster TUPDP073 [1.521 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP073
About • Received ※ 06 October 2023 — Revised ※ 20 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 18 December 2023
TUPDP093 | CERN Proton Irradiation Facility (IRRAD) Data Management, Control and Monitoring System Infrastructure for post-LS2 Experiments | radiation, experiment, controls, proton | 762 |
Funding: European Union’s Horizon 2020 Research and Innovation programme under GA no 101004761 and Horizon Europe Research and Innovation programme under Grant Agreement No 101057511.
Since upgrades of the CERN Large Hadron Collider are planned and design studies for a post-LHC particle accelerator are ongoing, it is key to ensure that the detectors and electronic components used in the CERN experiments and accelerators can withstand the high amount of radiation produced during particle collisions. To comply with this requirement, scientists perform radiation testing experiments, which consist of exposing these components to high levels of particle radiation to simulate the real operational conditions. The CERN Proton Irradiation Facility (IRRAD) is a well-established reference facility for conducting such experiments. Over the years, the IRRAD facility has developed a dedicated software infrastructure to support the control and monitoring systems used to manage these experiments, as well as to handle other important aspects such as dosimetry, spectrometry, and material traceability. In this paper, new developments and upgrades to the IRRAD software infrastructure are presented. These advances are crucial to ensure that the facility remains up to date and able to cope with increasing and ever more complex user needs. These software upgrades (some of them carried out within the EU-funded projects AIDAinnova and EURO-LABS) will help to improve the efficiency and accuracy of the experiments performed at IRRAD and enhance the capabilities of this facility.
Poster TUPDP093 [2.888 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP093
About • Received ※ 05 October 2023 — Revised ※ 21 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 10 December 2023
TUSDSC07 | Web Dashboards for CERN Radiation and Environmental Protection Monitoring | SCADA, radiation, real-time, interface | 938 |
CERN has developed and operates a SCADA system for radiation and environmental monitoring, which is used by many users with different needs and profiles. To provide tailored access to this control system's data, CERN's Occupational Health & Safety and Environmental Protection (HSE) Unit has developed a web-based dashboard editor that allows users to create custom dashboards for data analysis. In this paper, we present a technology stack comprising Spring Boot, React, Apache Kafka, WebSockets, and WebGL that provides a powerful tool for a web-based presentation layer for the SCADA system. This stack leverages WebSockets for near-real-time communication between the web browser and the server. Additionally, it provides high-performance, reliable, and scalable data delivery using low-latency data streaming with Apache Kafka. Furthermore, it takes advantage of the GPU's power with WebGL for data visualization. This web-based dashboard editor and the technology stack provide a faster, more integrated, and accessible solution for building custom dashboards and analyzing data.
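A dashboard front end built on this stack subscribes to a WebSocket feed for its near-real-time updates; the Python client below is only an illustrative stand-in for the React/WebGL presentation layer, and the endpoint URL and JSON payload fields are assumptions.

```python
# Minimal sketch: consume a (hypothetical) dashboard WebSocket feed.
import asyncio
import json
import websockets

async def consume(url="ws://dashboards.example.org/ws/radiation"):
    async with websockets.connect(url) as ws:        # hypothetical endpoint
        async for raw in ws:
            sample = json.loads(raw)                  # assumed JSON payload
            print(sample.get("monitor"), sample.get("dose_rate"))

if __name__ == "__main__":
    asyncio.run(consume())
```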
Poster TUSDSC07 [1.992 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUSDSC07
About • Received ※ 04 October 2023 — Accepted ※ 28 November 2023 — Issued ※ 08 December 2023
WE3BCO04 | Improving Observability of the SCADA Systems Using Elastic APM, Reactive Streams and Asynchronous Communication | SCADA, controls, real-time, distributed | 1016 |
As modern control systems grow in complexity, ensuring observability and traceability becomes more challenging. To meet this challenge, we present a novel solution that seamlessly integrates with multiple SCADA frameworks to provide end-to-end visibility into complex system interactions. Our solution utilizes Elastic APM to monitor and trace the performance of system components, allowing for real-time analysis and diagnosis of issues. In addition, our solution is built using reactive design principles and asynchronous communication, enabling it to scale to meet the demands of large, distributed systems. This presentation will describe our approach and discuss how it can be applied to various use cases, including particle accelerators and other scientific facilities. We will also discuss the benefits of our solution, such as improved system observability and traceability, reduced downtime, and better resource allocation. We believe that our approach represents a significant step forward in the development of modern control systems, and we look forward to sharing our work with the community at ICALEPCS 2023.
* Igor Khokhriakov et al., "A novel solution for controlling hardware components of accelerators and beamlines", Journal of Synchrotron Radiation, Apr. 2022.
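To give a flavour of the tracing described above, the sketch below instruments a background task with the Elastic APM Python agent; the service name, APM server URL, and span names are assumptions, and the actual integration targets SCADA frameworks rather than a standalone script like this.

```python
# Minimal sketch: manual transaction and spans with the Elastic APM agent.
import elasticapm

client = elasticapm.Client(
    service_name="scada-archiver",                 # hypothetical service name
    server_url="http://apm.example.org:8200",      # hypothetical APM server
)

client.begin_transaction("background-task")
with elasticapm.capture_span("read-device-values"):
    values = [42, 17, 3]                           # stand-in for hardware I/O
with elasticapm.capture_span("archive-values"):
    print("archived", sum(values), "counts")
client.end_transaction("archive-cycle", "success")
```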
Slides WE3BCO04 [3.377 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3BCO04
About • Received ※ 29 September 2023 — Revised ※ 14 November 2023 — Accepted ※ 19 December 2023 — Issued ※ 22 December 2023
WE3AO01 | Radiation-Tolerant Multi-Application Wireless IoT Platform for Harsh Environments | radiation, network, controls, operation | 1051 |
We introduce a radiation-tolerant multi-application wireless IoT platform, specifically designed for deployment in harsh environments such as particle accelerators. The platform integrates radiation-tolerant hardware with the possibility of covering different applications and use cases, including temperature and humidity monitoring, as well as simple equipment control functions. The hardware is capable of withstanding high levels of radiation and communicates wirelessly using LoRa technology, which reduces infrastructure costs and enables quick and easy deployment of operational devices. To validate the platform’s suitability for different applications, we have deployed a radiation monitoring version in the CERN particle accelerator complex and begun testing multi-purpose application devices in radiation test facilities. Our radiation-tolerant IoT platform, in conjunction with the entire network and data management system, opens up possibilities for different applications in harsh environments.
Slides WE3AO01 [19.789 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3AO01
About • Received ※ 04 October 2023 — Revised ※ 23 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 12 December 2023
WE3AO07 | Measurement of Magnetic Field Using System-On-Chip Sensors | controls, radiation, interface, electron | 1083 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
Magnetic sensors have been developed utilizing various physical phenomena such as electromagnetic induction, the Hall effect, tunnel magnetoresistance (TMR), giant magnetoresistance (GMR), anisotropic magnetoresistance (AMR), and giant magnetoimpedance (GMI). The compatibility of solid-state magnetic sensors with complementary metal-oxide-semiconductor (CMOS) fabrication processes makes it feasible to integrate the sensor with sensing and computing circuitry at the same time, resulting in systems on chip. In this paper we describe the application of AMR, TMR, and Hall-effect integrated sensors for precise measurement of 3D static magnetic fields over a wide range of magnitudes from 10⁻⁶ T to 0.3 T, as well as pulsed magnetic fields up to 0.3 T.
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3AO07
About • Received ※ 03 October 2023 — Revised ※ 09 November 2023 — Accepted ※ 17 December 2023 — Issued ※ 18 December 2023
TH2AO01 | Log Anomaly Detection on EuXFEL Nodes | FEL, network, embedded, GUI | 1126 |
Funding: This work was supported by HamburgX grant LFF-HHX-03 to the Center for Data and Computing in Natural Sciences (CDCS) from the Hamburg Ministry of Science, Research, Equalities and Districts.
This article introduces a method to detect anomalies in the log data generated by control system nodes at the European XFEL accelerator. The primary aim of the proposed method is to offer operators a comprehensive understanding of the availability, status, and problems specific to each node. This information is vital for ensuring smooth operation. The sequential nature of logs and the absence of a rich text corpus specific to our nodes pose a significant limitation for traditional and learning-based approaches to anomaly detection. To overcome this limitation, we propose a method that uses word embeddings and models each node as a sequence of these vectors that commonly co-occur, using a Hidden Markov Model (HMM). We score individual log entries by computing a probability ratio between the probability of the full log sequence including the new entry and the probability of just the previous log entries, without the new entry. This ratio indicates how probable the sequence becomes when the new entry is added. The proposed approach detects anomalies by scoring and ranking log entries from EuXFEL nodes, where entries that receive high scores are potential anomalies that do not fit the routine of the node. This method provides a warning system to alert operators about irregular log events that may indicate issues.
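The likelihood-ratio idea can be sketched with hmmlearn on toy embedding vectors; the model size, embedding dimension, and data below are assumptions rather than the EuXFEL implementation.

```python
# Minimal sketch: score a new log entry by the change in HMM log-likelihood.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
history = rng.normal(size=(200, 16))          # embeddings of past log entries
new_entry = rng.normal(size=(1, 16)) + 5.0    # embedding of the incoming entry

model = GaussianHMM(n_components=4, random_state=1).fit(history)

with_new = np.vstack([history, new_entry])
ratio = model.score(with_new) - model.score(history)   # log P(full) - log P(prefix)
print(f"log-likelihood change from new entry: {ratio:.2f}")
# A strongly negative change marks the entry as a potential anomaly.
```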
Slides TH2AO01 [1.420 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH2AO01
About • Received ※ 30 September 2023 — Accepted ※ 08 December 2023 — Issued ※ 13 December 2023
TH2AO02 | High Availability Alarm System Deployed with Kubernetes | status, interface, feedback, site | 1134 |
To support multiple scientific facilities at SLAC, a modern alarm system designed for availability, integrability, and extensibility is required. The new alarm system deployed at SLAC fulfills these requirements by blending the Phoebus alarm server with existing open-source technologies for deployment, management, and visualization. To deliver a high-availability deployment, Kubernetes was chosen for orchestration of the system. By deploying all parts of the system as containers with Kubernetes, each component becomes robust to failures, self-healing, and readily recoverable. Well-supported Kubernetes Operators were selected to manage Kafka and Elasticsearch in accordance with current best practices, using high-level declarative deployment files to shift deployment details into the software itself and facilitate nearly seamless future upgrades. A git-sync-based process automatically restarts the alarm server when configuration files change, eliminating the need for sysadmin intervention. To encourage increased accelerator operator engagement, multiple interfaces are provided for interacting with alarms. Grafana dashboards offer a user-friendly way to build displays with minimal code, while a custom Python client allows for direct consumption from the Kafka message queue and access to any information logged by the system.
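As an illustration of direct consumption from the Kafka message queue (not the actual SLAC Python client), the sketch below filters alarm messages by severity; the topic name, broker address, and message fields are assumptions.

```python
# Minimal sketch: read alarm messages from Kafka and filter by severity.
import json
from kafka import KafkaConsumer          # kafka-python

consumer = KafkaConsumer(
    "Accelerator",                                   # hypothetical alarm topic
    bootstrap_servers="kafka.example.org:9092",      # hypothetical broker
    value_deserializer=lambda b: json.loads(b.decode()) if b else None,
)

for message in consumer:
    alarm = message.value
    if alarm and alarm.get("severity") in ("MAJOR", "INVALID"):
        print(message.key, alarm.get("severity"), alarm.get("message"))
```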
Slides TH2AO02 [0.798 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH2AO02
About • Received ※ 06 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 18 December 2023
THMBCMO26 | FRIB Beam Power Ramp Process Checker at Chopper Monitor | diagnostics, target, controls, FPGA | 1256 |
Funding: Work supported by the U.S. Dept. of Energy Office of Science under Cooperative Agreement DE-SC0023633
The chopper in the low-energy beam line is a key element for controlling beam power in FRIB. As appropriate functioning of the chopper is critical for machine protection at FRIB, an FPGA-based chopper monitoring system was developed to monitor the beam-gated pulse at the logic level, the deflection high-voltage level, and the induced charge/discharge current levels, and to shut off the beam promptly upon detection of a deviation outside tolerance. Once FRIB beam power reaches a certain level, a cold-start beam ramp mode, in which the pulse repetition frequency and pulse width are linearly ramped up, is required to mitigate heat shock to the target at beam restart. The chopper also needs to generate a notch in every 10 ms machine cycle that is used for beam diagnostics. To overcome the challenges of monitoring such a ramping process and meeting the response time requirement for shutting off the beam, two types of process checkers, namely monitoring at the pulse level and monitoring at the machine cycle level, have been implemented. A pulse look-ahead algorithm to calculate the expected range of frequency dips and rises was developed, and a simplified mathematical model suitable for multiple ramp stages was built to calculate expected time parameters of accumulated pulse-on time within a given machine cycle. Both will be discussed in detail in this paper, followed by simulation results from an FPGA test bench and actual instrument test results from the beam ramp process.
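A simplified model of this kind can be sketched numerically: the snippet below estimates the accumulated pulse-on time within one 10 ms machine cycle while the repetition frequency and pulse width ramp linearly. All numbers, including the notch length, are illustrative assumptions, not FRIB parameters.

```python
# Minimal worked example: expected pulse-on time per machine cycle during a ramp.
MACHINE_CYCLE = 10e-3        # s
NOTCH = 0.5e-3               # s reserved for the diagnostics notch (assumed)

def expected_on_time(t, f0, f1, w0, w1, ramp_time):
    """Approximate accumulated pulse-on time in the cycle starting at time t."""
    frac = min(max(t / ramp_time, 0.0), 1.0)     # ramp progress, clipped to 0..1
    freq = f0 + (f1 - f0) * frac                 # repetition frequency [Hz]
    width = w0 + (w1 - w0) * frac                # pulse width [s]
    pulses = freq * (MACHINE_CYCLE - NOTCH)      # pulses expected in the cycle
    return pulses * width

# e.g. ramp from 20 Hz / 50 us pulses to 100 Hz / 200 us pulses over 2 s
for t in (0.0, 1.0, 2.0):
    on_time = expected_on_time(t, 20, 100, 50e-6, 200e-6, 2.0)
    print(f"t = {t:.1f} s: expected on-time per cycle = {on_time * 1e6:.1f} us")
```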
Slides THMBCMO26 [0.389 MB]
Poster THMBCMO26 [3.028 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THMBCMO26
About • Received ※ 04 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 24 October 2023
THPDP020 | Management of EPICS IOCs in a Distributed Network Environment Using Salt | EPICS, controls, network, hardware | 1340 |
An EPICS-based control system typically consists of many individual IOCs, which can be distributed across many computers in a network. Managing hundreds of deployed IOCs, keeping track of where they are running, and providing operators with basic interaction capabilities can easily become a maintenance nightmare. At the Institute for Beam Physics and Technology (IBPT) of the Karlsruhe Institute of Technology (KIT), we operate separate networks for our accelerators KARA and FLUTE and use the Salt Project to manage the IT infrastructure. Custom Salt states take care of deploying our IOCs across multiple servers directly from the code repositories, integrating them into the host operating system and monitoring infrastructure. In addition, this allows integration into our GUI in order to enable operators to monitor and control the process for each IOC without requiring any specific knowledge of where and how that IOC is deployed. Therefore, we can maintain and scale to any number of IOCs on any number of hosts nearly effortlessly. This paper presents the design of this system, discusses the tools and overall setup required to make it work, and shows the integration into our GUI and monitoring systems.
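As a sketch of how such Salt states might be driven from the Salt master (not the IBPT setup itself), the snippet below applies a hypothetical epics_iocs state to IOC hosts and summarises the result; the target pattern and state name are assumptions.

```python
# Minimal sketch: apply an IOC deployment state via the Salt local client.
import salt.client

local = salt.client.LocalClient()
result = local.cmd("ioc-*", "state.apply", ["epics_iocs"])   # assumed target/state
for minion, states in result.items():
    if not isinstance(states, dict):          # rendering errors come back as lists
        print(minion, "ERROR:", states)
        continue
    failed = [s for s in states.values() if not s.get("result")]
    print(minion, "OK" if not failed else f"{len(failed)} failed state(s)")
```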
Poster THPDP020 [0.431 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP020
About • Received ※ 04 October 2023 — Accepted ※ 10 December 2023 — Issued ※ 14 December 2023
THPDP037 | The Alarm System at HLS-II | controls, EPICS, status, distributed | 1399 |
The control system of the Hefei Light Source II (HLS-II) is a distributed system based on the Experimental Physics and Industrial Control System (EPICS). The alarm system of HLS-II is responsible for monitoring the alarm state of the facility and distributing alarm messages in a timely manner. The monitoring range of the alarm system covers the devices of the HLS-II technical groups and the server platform. Zabbix, an open-source software tool, is used to monitor the server platform. Custom metric collection is achieved by implementing external scripts written in Python, and automated agent deployment discovers the monitored servers running Zabbix agents. The alarm distribution strategy for the front-end devices is designed to overcome alarm floods. The alarm system of HLS-II provides multiple messaging channels to notify the responsible staff, including WeChat, SMS, and a web-based GUI. The alarm system of HLS-II has been deployed since December 2022. Operational experience shows that the alarm system helps operators troubleshoot problems efficiently, improving the availability of HLS-II.
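A Zabbix external script of the kind mentioned above simply prints the collected value to stdout; the sketch below reads a hypothetical EPICS PV with pyepics purely for illustration.

```python
#!/usr/bin/env python3
# Minimal sketch of a Zabbix external script: Zabbix runs it and stores
# whatever single value is printed to stdout. The PV name is an assumption.
import sys
from epics import caget   # pyepics

def main(pv_name: str) -> int:
    value = caget(pv_name, timeout=2.0)
    if value is None:
        return 1          # non-zero exit marks the item as unsupported
    print(value)
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "HLS:RING:CURRENT"))
```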
Poster THPDP037 [0.653 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP037
About • Received ※ 30 September 2023 — Accepted ※ 08 December 2023 — Issued ※ 13 December 2023
THPDP063 | The Embedded Monitoring Processor for High Luminosity LHC | software, interface, controls, hardware | 1470 |
The Embedded Monitoring Processor (EMP) is a versatile platform designed for High Luminosity LHC experiments, addressing the communication, processing, and monitoring needs of diverse applications in the ATLAS experiment, with a focus on supporting front-ends based on the lpGBT (low-power Giga-Bit Transceiver). Built around a commercial system-on-module (SoM), the EMP architecture emphasizes modularity, flexibility, and the use of standard interfaces, aiming to cover a wide range of applications and to facilitate detector integrators in designing and implementing their specific solutions. The EMP software and firmware architecture comprises epos, the EMP operating system, quasar OPC UA servers, dedicated firmware IP cores, and an ecosystem of different software libraries. This paper outlines the software and firmware aspects of the EMP, detailing its integration with lpGBT optical interfaces, programmable logic development, and the role of the LpGbtSw library as a hardware abstraction library for the LpGbt OPC UA server.
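Monitoring values exposed by such an OPC UA server could be read with any generic client; the sketch below uses the python-opcua library, and the endpoint URL and node identifier are purely hypothetical.

```python
# Minimal sketch: read one monitored value over OPC UA (hypothetical names).
from opcua import Client   # python-opcua

client = Client("opc.tcp://emp-host.example.org:4841")       # assumed endpoint
client.connect()
try:
    node = client.get_node("ns=2;s=lpGBT1.VoltageMonitor")   # assumed node id
    print("lpGBT monitored value:", node.get_value())
finally:
    client.disconnect()
```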
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP063
About • Received ※ 06 October 2023 — Revised ※ 27 October 2023 — Accepted ※ 12 December 2023 — Issued ※ 12 December 2023
THPDP077 | Tango Integration of the SKA-Low Power and Signal Distribution System | controls, TANGO, hardware, software | 1526 |
Funding: Square Kilometre Array Observatory
The Power and Signal Distribution System (PaSD) is a key component of the SKA-Low telescope, responsible for control and monitoring of local power to the electronic components of the RF signal chain for the antennas, and for collecting the RF signals for transmission to the Central Processing Facility. The system comprises "SMART boxes" (SMART: Small Modular Aggregation and RFoF Trunk), each of which connects directly to around 10 antennas to provide local monitoring and control, and one Field Node Distribution Hub (FNDH) per station, which distributes power to all the SMART boxes and provides a communications gateway as well as additional local monitoring. All communication to the SMART boxes is funnelled through the FNDH on a multi-drop serial bus using the Modbus ASCII protocol. This paper describes how the PaSD will be integrated into the Tango-based SKA-Low Monitoring Control and Calibration Subsystem (MCCS) software, including the facility for a drop-in Python simulator that can be used to test the software.
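In the Tango-based MCCS integration, a SMART box is exposed as a Tango device; the PyTango sketch below is only illustrative, and the attribute, command, and property names are assumptions rather than the actual SKA-Low device interface.

```python
# Minimal sketch of a Tango device for a SMART box (names are assumptions).
from tango import DevState
from tango.server import Device, attribute, command, device_property, run

class SmartBox(Device):
    fndh_port = device_property(dtype=int, default_value=1)

    def init_device(self):
        super().init_device()
        self._port_powered = [False] * 10
        self.set_status(f"SMART box on FNDH port {self.fndh_port}")
        self.set_state(DevState.ON)

    @attribute(dtype=(bool,), max_dim_x=10)
    def port_power_states(self):
        return self._port_powered          # would come from Modbus ASCII polling

    @command(dtype_in=int)
    def PowerOnPort(self, port):
        self._port_powered[port] = True    # would issue a Modbus write

if __name__ == "__main__":
    run((SmartBox,))
```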
Poster THPDP077 [20.237 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP077
About • Received ※ 04 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 14 December 2023
THSDSC04 | CamServer: Stream Processing at SwissFEL and SLS 2.0 | FEL, controls, data-acquisition, EPICS | 1585 |
CamServer is a Python package for data stream processing developed at the Paul Scherrer Institute (PSI). It is a key component of SwissFEL's data acquisition, where it is deployed on a cluster of servers and used for displaying and processing images from all cameras. It scales linearly with the number of servers and is capable of handling multiple high-resolution cameras at 100 Hz, as well as a variety of data types and sources. The processing unit, called a pipeline, runs in a private process that can be either permanent or spawned on demand. Pipelines consume and produce ZMQ streams, but input data can come from arbitrary sources through an adapter layer (e.g. EPICS). A proxy server handles requests and creates pipelines on the cluster's worker nodes according to rules. Some processing scripts are available out of the box (e.g. calculation of standard beam metrics), but users can upload custom ones. The system is managed via its REST API, using a client library or a GUI application. CamServer's output data streams are consumed by a variety of client types such as data storage, image visualization, monitoring, and DAQ applications. This work describes the use of CamServer, the status of the SwissFEL cluster, and the development roadmap with plans for SLS 2.0.
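A client of a pipeline's output stream can be sketched with pyzmq; the endpoint and the assumption that messages arrive as JSON are illustrative and do not reflect CamServer's actual wire format.

```python
# Minimal sketch: subscribe to a (hypothetical) pipeline output stream.
import zmq

context = zmq.Context()
sub = context.socket(zmq.SUB)
sub.connect("tcp://camserver.example.org:8889")   # assumed pipeline endpoint
sub.setsockopt_string(zmq.SUBSCRIBE, "")          # receive everything

for _ in range(10):
    message = sub.recv_json()                     # assumes JSON-encoded payloads
    print(message.get("beam_center_x"), message.get("beam_center_y"))
```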
Poster THSDSC04 [1.276 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THSDSC04
About • Received ※ 03 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 06 December 2023 — Issued ※ 19 December 2023
THSDSC06 | Developing a Digital Twin for BESSY II Synchrotron Light Source Based on EPICS and Microservice Design | synchrotron, EPICS, controls, lattice | 1594 |
Digital twins, i.e. theory and design tools connected to the real devices and machine by mapping physics components to their technical counterparts, are powerful tools that provide accelerators with commissioning predictions and feedback capabilities. This paper describes a new tool allowing for greater flexibility in configuring the modelling part, combined with ease of adding new features. To enable the various components developed in EPICS, Python, C, and C++ to work together seamlessly, we adopt a microservice architecture, with REST API services providing the interfaces between the components. End-user scripts are implemented as REST API services, allowing for better data analysis and visualization. Finally, the paper describes the integration of Dash and Plotly for enhanced data comparison and visualization. Overall, this workflow provides a powerful and flexible solution for managing and optimizing BESSY II digital twins, with the potential for further customization and extension to upcoming machines.
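One such microservice can be sketched as a small REST endpoint; the Flask service below, its route, and the constant orbit data are purely illustrative assumptions.

```python
# Minimal sketch: a REST microservice exposing (fake) orbit data.
from flask import Flask, jsonify, request

app = Flask("twin-orbit-service")

@app.route("/orbit", methods=["GET"])
def orbit():
    plane = request.args.get("plane", "x")
    # In a real service this would come from the online model or EPICS layer
    readings = {"x": [0.1, -0.2, 0.05], "y": [0.0, 0.3, -0.1]}
    return jsonify(plane=plane, positions_mm=readings.get(plane, []))

if __name__ == "__main__":
    app.run(port=8050)
```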
Poster THSDSC06 [0.797 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THSDSC06
About • Received ※ 05 October 2023 — Revised ※ 27 October 2023 — Accepted ※ 05 November 2023 — Issued ※ 05 December 2023
FR2BCO02 | A Lean UX Approach for Developing Effective Monitoring and Control User Interfaces: A Case Study for the SKA CSP.LMC Subsystem | controls, software, interface, TANGO | 1650 |
The Central Signal Processor Local Monitor and Control (CSP.LMC) is a software component that allows the flow of information and commands between the Telescope Manager (TM) and the subsystems dedicated to signal processing, namely the correlator and beamformer, the pulsar search, and the pulsar timing engines. It acts as an adapter by specialising the commands and associated data from the TM to the subsystems and by exposing the subsystems as a unified entity while monitoring their status. In this paper, we approach the problem of creating a User Interface (UI) for such a component. Through a series of short learning cycles, we want to explore different ways of looking at the system and build an initial set of UIs that can be refined to be used as engineering UIs in the first Array Assembly of the Square Kilometre Array. The process heavily involves some of the CSP.LMC developers in creating the dashboards, and others as participants in informal evaluations. In fact, the opportunities offered by Taranta, a tool to develop web UIs without needing web-development skills, make it possible to quickly realise a working dashboard that can be promptly tested. This also supports the short feedback cycle advocated by a Lean UX approach and maps well onto a bi-weekly sprint cadence. In this paper, we describe the method and present the results, highlighting strengths and the pain points we faced.
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-FR2BCO02
About • Received ※ 06 October 2023 — Revised ※ 20 November 2023 — Accepted ※ 05 December 2023 — Issued ※ 13 December 2023