Keyword: operation
Paper Title Other Keywords Page
MO1BCO01 The Intelligent Observatory controls, software, target, survey 1
 
  • S.B. Potter, S. Chandra, N. Erasmus, M. Hlakola, R.P. Julie, H. Worters, C. van Gend
    SAAO, Observatory, South Africa
 
  The South African Astronomical Observatory (SAAO) has embarked on an ambitious initiative to upgrade its telescopes, instruments, and data analysis capabilities, facilitating their intelligent integration and seamless coordination. This endeavour aims not only to improve efficiency and agility but also to unlock exciting scientific possibilities within the realms of multi-messenger and time-domain astronomy. The program encompasses hardware enhancements enabling autonomous operations, complemented by the development of sophisticated software solutions. Intelligent algorithms have been meticulously crafted to promptly and autonomously respond to real-time alerts from telescopes worldwide and space-based observatories. Overseeing this sophisticated framework is the Observatory Control System, actively managing the observing queue in real-time. This presentation will provide a summary of the program’s notable achievements thus far, with a specific focus on the successful completion and full operational readiness of one of the SAAO telescopes.  
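As a rough illustration of the real-time queue management described above, the following sketch shows how a transient alert might preempt a scheduled observing queue. The class, target names, and priority scheme are hypothetical; this is not the SAAO's actual Observatory Control System.

```python
import heapq
import itertools

class ObservingQueue:
    """Priority queue of targets; a real-time alert can preempt scheduled work.

    Illustrative sketch only; priorities and target names are invented."""
    def __init__(self):
        self._heap = []
        self._tie = itertools.count()   # stable ordering for equal priorities

    def add(self, target, priority):
        # Lower number = observed sooner.
        heapq.heappush(self._heap, (priority, next(self._tie), target))

    def on_alert(self, alert_target):
        # A transient alert (e.g. a multi-messenger counterpart) jumps the queue.
        self.add(alert_target, priority=0)

    def next_target(self):
        return heapq.heappop(self._heap)[2]
```

In use, an incoming alert simply lands ahead of routine survey fields the next time the scheduler asks for a target.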
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-MO1BCO01  
About • Received ※ 31 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 06 December 2023 — Issued ※ 07 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MO1BCO02 ITER Controls Approaching One Million Integrated EPICS Process Variables controls, software, MMI, network 6
 
  • A. Wallander, B. Bauvir
    ITER Organization, St. Paul lez Durance, France
 
  The ITER Tokamak is currently being assembled in southern France. In parallel, the supporting systems have completed installation and are under commissioning or operation. Over the last couple of years, the electrical distribution, building services, liquid & gas, cooling water, reactive power compensation and cryoplant systems have been integrated, adding up to close to one million process variables. Those systems are operated, or under commissioning, from a temporary main control room or local control rooms close to the equipment, using an integrated infrastructure. The ITER control system is therefore in production. As the ITER procurement is 90% in-kind, a major challenge has been the integration of the various systems provided by suppliers from the ITER members. Standardization, CODAC Core System software distribution, training and coaching have all played a positive role. Nevertheless, the integration has been more difficult than foreseen and the central team has been forced to rework much of the delivered software. In this paper we report on the current status of the ITER integrated control system with emphasis on lessons learned from the integration of in-kind contributions.  
slides icon Slides MO1BCO02 [3.521 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-MO1BCO02  
About • Received ※ 27 September 2023 — Revised ※ 07 October 2023 — Accepted ※ 15 November 2023 — Issued ※ 07 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MO1BCO04 EIC Controls System Architecture Status and Plans controls, EPICS, software, interface 19
 
  • J.P. Jamilkowski, S.L. Clark, M.R. Costanzo, T. D’Ottavio, M. Harvey, K. Mernick, S. Nemesure, F. Severino, K. Shroff
    BNL, Upton, New York, USA
  • L.R. Dalesio
    Osprey DCS LLC, Ocean City, USA
  • K. Kulmatycski, C. Montag, V.H. Ranjbar, K.S. Smith
    Brookhaven National Laboratory (BNL), Electron-Ion Collider, Upton, New York, USA
 
  Funding: Contract Number DE-AC02-98CH10886 under the auspices of the US Department of Energy
Preparations are underway to build the Electron Ion Collider (EIC) once Relativistic Heavy Ion Collider (RHIC) beam operations end in 2025, providing an enhanced probe into the building blocks of nuclear physics for decades into the future. With commissioning of the new facility in mind, Accelerator Controls will require modernization in order to keep up with recent improvements in the field and to match the fundamental requirements of the accelerators that will be constructed. We will describe the status of the Controls System architecture that has been developed and prototyped for EIC, as well as plans for future work. Major influences on the requirements will be discussed, including EIC Common Platform applications as well as our expectation that we will need to support a hybrid environment covering both the proprietary RHIC Accelerator Device Object (ADO) environment and EPICS.
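One way to picture the hybrid ADO/EPICS environment mentioned in the abstract is a thin routing facade over both protocols. The sketch below is purely illustrative: the class names and the "ado:"/"ca:" address scheme are invented, and the backends are in-memory stand-ins rather than real protocol clients.

```python
from abc import ABC, abstractmethod

class DeviceBackend(ABC):
    """Common interface over heterogeneous control protocols."""
    @abstractmethod
    def read(self, name): ...
    @abstractmethod
    def write(self, name, value): ...

class AdoBackend(DeviceBackend):
    # Stand-in for the proprietary RHIC ADO protocol (illustrative only).
    def __init__(self):
        self._store = {}
    def read(self, name):
        return self._store.get(name)
    def write(self, name, value):
        self._store[name] = value

class EpicsBackend(DeviceBackend):
    # Stand-in for an EPICS Channel Access / PV Access client (illustrative only).
    def __init__(self):
        self._store = {}
    def read(self, name):
        return self._store.get(name)
    def write(self, name, value):
        self._store[name] = value

class HybridRouter:
    """Route device access by a naming convention, e.g. 'ado:' vs 'ca:'."""
    def __init__(self):
        self.backends = {"ado": AdoBackend(), "ca": EpicsBackend()}
    def _split(self, addr):
        scheme, _, name = addr.partition(":")
        return self.backends[scheme], name
    def read(self, addr):
        backend, name = self._split(addr)
        return backend.read(name)
    def write(self, addr, value):
        backend, name = self._split(addr)
        backend.write(name, value)
```

Applications written against the router are insulated from which environment actually serves a given device.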
 
slides icon Slides MO1BCO04 [1.458 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-MO1BCO04  
About • Received ※ 05 October 2023 — Revised ※ 08 October 2023 — Accepted ※ 14 November 2023 — Issued ※ 11 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MO2BCO01 Driving Behavioural Change of Software Developers in a Global Organisation Assisted by a Paranoid Android software, GUI, feedback, MMI 25
 
  • U.Y. Yilmaz, M.G.P.T. Android
    SKAO, Macclesfield, United Kingdom
  • M.J.A. de Beer
    SARAO, Cape Town, South Africa
 
  Ensuring code quality standards at the Square Kilometre Array Observatory (SKAO) is of utmost importance, as the project spans multiple nations and encompasses a wide range of software products delivered by developers from around the world. To improve code quality and meet certain open-source software prerequisites for wider collaboration, the SKAO employs a chatbot that provides witty, direct and qualified comments, with detailed documentation, to guide developers in improving their coding practices. The bot is modelled after a famous, albeit depressed, character, creating a relatable personality for developers. This has resulted in an increase in code quality and faster turnaround times. The bot has not only helped developers adhere to code standards but also fostered a culture of continuous improvement through an engaging and enjoyable process. Here we present the success story of the bot and how a chatbot can drive behavioural change within a global organisation and help DevOps teams improve developer performance and agility through an innovative and engaging approach to code reviews.  
slides icon Slides MO2BCO01 [8.171 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-MO2BCO01  
About • Received ※ 06 October 2023 — Revised ※ 07 October 2023 — Accepted ※ 14 November 2023 — Issued ※ 19 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MO3AO01 Optimisation of the Touschek Lifetime in Synchrotron Light Sources Using Badger sextupole, injection, storage-ring, quadrupole 108
 
  • S.M. Liuzzo, N. Carmignani, L.R. Carver, L. Hoummi, D. Lacoste, A. Le Meillour, T.P. Perron, S.M. White
    ESRF, Grenoble, France
  • I.V. Agapov, M. Böse, J. Keil, L. Malina, E.S.H. Musa, B. Veglia
    DESY, Hamburg, Germany
  • A.L. Edelen, P. Raimondi, R.J. Roussel, Z. Zhang
    SLAC, Menlo Park, California, USA
  • T. Hellert
    LBNL, Berkeley, California, USA
 
  Funding: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 871072
Badger* is a software tool designed to provide easy access to several optimizers (simplex, RCDS**, Bayesian optimization, etc.) for solving a given multidimensional minimization/maximization task. The Badger software is very flexible and easy to adapt to different facilities. In the framework of the EURIZON European project, Badger was used on the EBS and PETRA III storage rings, interfacing with the Tango and TINE control systems. Among other tests, optimisation of the Touschek lifetime was performed and the results were compared with those obtained with existing tools during dedicated machine time.
* Z. Zhang et al., "Badger: The Missing Optimizer in ACR", doi:10.18429/JACoW-IPAC2022-TUPOST058
** X. Huang, "Robust simplex algorithm for online optimization", doi:10.1103/PhysRevAccelBeams.21.104601
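To illustrate the kind of task Badger delegates to its optimizers, the sketch below maximises a made-up Touschek-lifetime surrogate, with SciPy's Nelder-Mead simplex standing in for Badger's simplex routine. The knob count, reference setting, and lifetime function are all invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def touschek_lifetime(knobs):
    """Hypothetical smooth surrogate: lifetime (hours) peaks at a reference setting."""
    ref = np.array([0.3, -0.1, 0.2])   # invented optimal sextupole-knob values
    return 20.0 * np.exp(-np.sum((np.asarray(knobs) - ref) ** 2))

def objective(knobs):
    # Optimizers minimise, so maximising lifetime means minimising its negative.
    return -touschek_lifetime(knobs)

result = minimize(objective, x0=np.zeros(3), method="Nelder-Mead",
                  options={"xatol": 1e-6, "fatol": 1e-9})
best_knobs, best_lifetime = result.x, -result.fun
```

In a real deployment the objective would be a live machine measurement reached through the Tango or TINE control system, which is exactly the interfacing layer Badger provides.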
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-MO3AO01  
About • Received ※ 28 September 2023 — Revised ※ 08 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 27 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MO3AO02 Implementation of Model Predictive Control for Slow Orbit Feedback Control in MAX IV Accelerators Using PyTango Framework controls, feedback, TANGO, storage-ring 116
 
  • C. Takahashi, J. Breunlin, A. Freitas, M. Sjöström
    MAX IV Laboratory, Lund University, Lund, Sweden
  • P. Giselsson, E. Jensen Gassheld, M. Karlsson
    Lund University, Lund, Sweden
 
  Achieving low emittance and high brightness in modern light sources requires stable beams, which are commonly achieved through feedback solutions. The MAX IV light source has two feedback systems, Fast Orbit Feedback (FOFB) and Slow Orbit Feedback (SOFB), operating in overlapping frequency regions. Currently at MAX IV, a general feedback device implemented in PyTango is used for slow orbit and trajectory correction, but an MPC controller for the beam orbit has been proposed to improve system robustness. The controller uses iterative optimisation of the system model, current measurements, dynamic states and system constraints to calculate changes in the controlled variables. The new device implements the MPC model according to the beam orbit response matrix, subscribes to change events on all beam position attributes and updates the control signal given to the slow magnets at a 10 Hz rate. This project aims to improve system robustness and reduce actuator saturation. The use of PyTango simplifies the implementation of the MPC controller by allowing access to high-level optimisation and control packages. This project will contribute to the development of a high-quality feedback control system for the MAX IV accelerators.  
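For context, a minimal slow-orbit correction step based on the orbit response matrix can be sketched as follows. This shows the classic pseudoinverse approach that an MPC controller refines with dynamics and constraints; the matrix, gain, and rate limit here are invented numbers, not MAX IV parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(8, 4))      # orbit response matrix (BPMs x correctors), invented
golden = np.zeros(8)             # reference ("golden") orbit

def correction_step(orbit, strengths, gain=0.5, max_step=0.1):
    """One slow-orbit iteration: least-squares inversion with a gain and a rate limit."""
    error = orbit - golden
    delta = -gain * (np.linalg.pinv(R) @ error)
    delta = np.clip(delta, -max_step, max_step)   # protect the slow magnets
    return strengths + delta

# Simulated loop: the orbit responds linearly to corrector strengths plus a disturbance.
disturbance = rng.normal(scale=0.2, size=8)
strengths = np.zeros(4)
for _ in range(50):
    orbit = R @ strengths + disturbance
    strengths = correction_step(orbit, strengths)
residual = np.linalg.norm(R @ strengths + disturbance)   # leftover orbit distortion
```

With more BPMs than correctors the residual converges to the least-squares floor rather than zero; MPC goes further by optimising over a prediction horizon subject to actuator constraints instead of applying a static inversion.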
slides icon Slides MO3AO02 [4.234 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-MO3AO02  
About • Received ※ 05 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 14 November 2023 — Issued ※ 19 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MO3AO03 Commissioning and Optimization of the SIRIUS Fast Orbit Feedback controls, feedback, power-supply, network 123
 
  • D.O. Tavares, M.S. Aguiar, F.H. Cardoso, E.P. Coelho, G.R. Cruz, A.F. Giachero, L. Lin, S.R. Marques, A.C.S. Oliveira, G.S. Ramirez, É.N. Rolim, L.M. Russo, F.H. de Sá
    LNLS, Campinas, Brazil
 
  The Sirius Fast Orbit Feedback System (FOFB) entered operation for users in November 2022. The system design aimed at minimizing the overall feedback loop delay, understood as the main performance bottleneck in typical FOFB systems. Driven by this goal, the loop update rate was chosen as high as possible, real-time processing was entirely done in FPGAs, BPMs and corrector power supplies were tightly integrated to the feedback controllers in MicroTCA crates, a small number of BPMs was included in the feedback loop and a dedicated network engine was used. These choices targeted a disturbance rejection crossover frequency of 1 kHz. To deal with the DC currents that build up in the fast orbit corrector power supplies, a method to transfer the DC control effort to the Slow Orbit Feedback System (SOFB) running in parallel was implemented. This contribution gives a brief overview of the system architecture and modelling, and reports on its commissioning, system identification and feedback loop optimization during its first year of operation.  
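The DC-offload idea, transferring the slowly varying part of the fast correctors' output to the slow loop, can be illustrated with a first-order low-pass estimator. This is a generic sketch, not the actual Sirius implementation; the filter coefficient is arbitrary.

```python
import numpy as np

class DcOffload:
    """Hand the DC component of fast-corrector output to the slow correctors.

    Illustrative sketch; the real FOFB/SOFB hand-off at Sirius differs in detail."""
    def __init__(self, n_correctors, alpha=0.01):
        self.alpha = alpha                       # first-order low-pass coefficient
        self.dc_estimate = np.zeros(n_correctors)

    def step(self, fast_setpoints):
        # Track the slowly varying (DC) component of the fast correctors...
        self.dc_estimate += self.alpha * (fast_setpoints - self.dc_estimate)
        # ...hand it to the slow loop and remove it from the fast one,
        # keeping the fast power supplies centred in their operating range.
        slow_increment = self.dc_estimate.copy()
        relieved_fast = fast_setpoints - slow_increment
        return relieved_fast, slow_increment
```

Run repeatedly, the fast correctors' steady-state load drains to zero while the slow correctors absorb the persistent part of the correction.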
slides icon Slides MO3AO03 [78.397 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-MO3AO03  
About • Received ※ 06 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 14 November 2023 — Issued ※ 03 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MO3AO05 Path to Ignition at National Ignition Facility (NIF): The Role of the Automated Alignment System alignment, laser, target, controls 138
 
  • B.P. Patel, A.A.S. Awwal, M. Fedorov, R.R. Leach Jr., R.R. Lowe-Webb, V.J. Miller Kamm, P.K. Singh
    LLNL, Livermore, California, USA
 
  Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344
The historical breakthrough experiment at the National Ignition Facility (NIF) produced fusion ignition in a laboratory for the first time and made headlines around the world. This achievement was the result of decades of research, thousands of people, and hardware and software systems that rivaled the complexity of anything built before. The NIF laser Automatic Alignment (AA) system has played a major role in this accomplishment. Each high-yield shot in the NIF laser system requires all 192 laser beams to arrive at the target within 30 picoseconds and be aligned within 50 microns (half the diameter of a human hair), all with the correct wavelength and energy. AA makes it possible to align and fire the 192 NIF laser beams efficiently and reliably several times a day. AA is built on multiple layers of complex calculations and algorithms that implement data and image analysis to position optical devices in the beam path in a highly accurate and repeatable manner through the controlled movement of about 66,000 control points. The system was designed to require minimal or no human intervention. This paper will describe AA’s evolution, its role in ignition, and future modernization.
LLNL Release Number: LLNL-ABS-847783
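At the heart of automated alignment loops like AA is image analysis that locates a beam spot so optics can be steered onto a reference position. A generic centroiding sketch is shown below; the function names and the microns-per-pixel scale are hypothetical, not NIF's algorithms.

```python
import numpy as np

def spot_centroid(image):
    """Intensity-weighted centroid of a beam spot, in pixel coordinates (x, y)."""
    image = np.asarray(image, dtype=float)
    ys, xs = np.indices(image.shape)
    total = image.sum()
    return (xs * image).sum() / total, (ys * image).sum() / total

def pointing_error(image, target_xy, um_per_pixel=10.0):
    """Offset of the measured spot from the target position, in microns.

    The um_per_pixel scale is a hypothetical camera calibration, for illustration."""
    cx, cy = spot_centroid(image)
    return ((cx - target_xy[0]) * um_per_pixel,
            (cy - target_xy[1]) * um_per_pixel)
```

In a closed loop, the returned error would drive a motorized mirror mount until the spot sits on its reference, which is the basic pattern repeated across thousands of control points.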
 
slides icon Slides MO3AO05 [10.417 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-MO3AO05  
About • Received ※ 22 September 2023 — Revised ※ 07 October 2023 — Accepted ※ 14 November 2023 — Issued ※ 05 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MO3AO06 Energy Consumption Optimisation by Using Advanced Control Algorithms controls, PLC, MMI, simulation 145
 
  • F. Ghawash, E. Blanco Viñuela, B. Schofield
    CERN, Meyrin, Switzerland
 
  Large industries operate energy-intensive equipment, and energy efficiency is an important objective when trying to optimize final energy consumption. CERN utilizes a large amount of electrical energy to run its accelerators, detectors and test facilities, with a total yearly consumption of 1.3 TWh and peaks of about 200 MW. Reductions in final energy consumption can be achieved by dedicated technical solutions, and advanced automation technologies, especially those based on optimization algorithms, play a crucial role not only in keeping processes within the required safety and operational conditions but also in incorporating financial factors. MBPC (Model-Based Predictive Control) is a feedback control algorithm which can naturally achieve reduced energy consumption by including economic factors in the optimization formulation. This paper reports on the experience gathered when applying non-linear MBPC to some of the contributors to the electricity bill at CERN: the cooling and ventilation plants (i.e. cooling towers, chillers, and air handling units). Simulation results with cooling towers showed significant performance improvements and energy savings close to 20% over conventional heuristic solutions. The control problem formulation, the control strategy validation using a digital twin and the initial results in a real industrial plant are reported, together with the experience gained implementing the algorithm in industrial controllers.  
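The economic flavour of model-based predictive control, folding an electricity price into the objective alongside process constraints, can be sketched with a toy first-order thermal model. Everything here (model coefficients, price profile, temperature limit, penalty weight) is invented for illustration; this is not CERN's controller.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical first-order thermal model of a cooled load:
# a = thermal inertia, b = cooling authority (deg per control unit), T_amb = ambient.
a, b, T_amb = 0.9, -0.5, 30.0

def simulate(T0, u):
    """Propagate the temperature over the horizon for a control sequence u."""
    T, out = T0, []
    for uk in u:
        T = a * T + (1 - a) * T_amb + b * uk
        out.append(T)
    return np.array(out)

def economic_cost(u, T0, price, T_max=24.0, weight=100.0):
    """Electricity cost plus a soft penalty for exceeding the temperature limit."""
    T = simulate(T0, u)
    return float(np.dot(price, u) + weight * np.sum(np.maximum(T - T_max, 0.0) ** 2))

horizon = 12
price = np.where(np.arange(horizon) < 6, 1.0, 3.0)   # cheap now, expensive later
res = minimize(economic_cost, x0=np.full(horizon, 1.0),
               args=(26.0, price), bounds=[(0.0, 5.0)] * horizon,
               method="L-BFGS-B")
u_plan = res.x   # cooling plan over the horizon; only the first step would be applied
```

The receding-horizon pattern, re-solving this problem at every control period and applying only the first move, is what lets the controller exploit cheap-energy periods (e.g. pre-cooling) while respecting operational limits.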
slides icon Slides MO3AO06 [3.101 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-MO3AO06  
About • Received ※ 04 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 14 November 2023 — Issued ※ 29 November 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MO3AO07 Control Design Optimisations of Robots for the Maintenance and Inspection of Particle Accelerators controls, cavity, interface, software 153
 
  • A. Díaz Rosales, M. Di Castro, H. Gamper
    CERN, Meyrin, Switzerland
 
  Automated maintenance and inspection systems have become increasingly important over the last decade for the availability of the accelerators at CERN. This is mainly due to improvements in robotic perception, control and cognition, and especially the rapid advancement of artificial intelligence. The robotic service at CERN performed its first interventions in 2014 with robotic solutions from external companies. However, it soon became clear that a customized platform needed to be developed in order to satisfy the needs and to navigate efficiently through the cluttered, semi-structured environment. This led to the formation of a robotic fleet of about 20 different robotic systems that are currently active at CERN. In order to increase the efficiency and robustness of robotic platforms for future accelerators, it is necessary to consider robotic interventions at the early design phase of such machines. Task-specific solutions tailored to these needs can then be designed, which in general show higher efficiency than multipurpose industrial robotic systems. This paper presents current advances in the design and development of task-specific robotic systems for maintenance and inspection in particle accelerators, taking the 100 km-long Future Circular Collider main tunnel as a use case. The requirements on such a robotic system, including the applied control strategies, are shown, as well as the optimization of the topology and geometry of the robotic system itself.  
slides icon Slides MO3AO07 [3.560 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-MO3AO07  
About • Received ※ 29 September 2023 — Revised ※ 10 October 2023 — Accepted ※ 14 November 2023 — Issued ※ 26 November 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MO4BCO03 Protecting Your Controls Infrastructure Supply Chain software, controls, framework, software-component 196
 
  • B. Copy, F. Ehm, P.J. Elson, S.T. Page, J.-B. de Martel
    CERN, Meyrin, Switzerland
  • M. Pratoussy
    CPE Lyon, Villeurbanne, France
  • L. Van Mol
    Birmingham University, Birmingham, United Kingdom
 
  Supply chain attacks have been constantly increasing since being first documented in 2013. Profitable and relatively simple to put in place for a potential attacker, they compromise organizations at the core of their operations. The number of high-profile supply chain attacks has more than quadrupled in the last four years and the trend is expected to continue unless countermeasures are widely adopted. In the context of open science, the overwhelming reliance of scientific software development on open-source code, as well as the multiplicity of software technologies employed in large-scale deployments, make it increasingly difficult for asset owners to assess vulnerabilities threatening their activities. Recently introduced regulations by both the US government (White House executive order EO14028) and the EU commission (EU Cyber Resilience Act) mandate that both service and equipment suppliers of government contracts provide Software Bills of Materials (SBOM) for their commercial products in a standard and open data format. Such SBOM documents can then be used to automate the discovery of known or future vulnerabilities, assess their impact, and determine how best to mitigate them. This paper will explain how CERN investigated the implementation of SBOM management in the context of its accelerator controls infrastructure, which solutions are available on the market today, and how they can be used to gradually enforce secure dependency lifecycle policies for the developer community.  
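An SBOM consumer of the kind described can be sketched in a few lines: parse a CycloneDX-style JSON document into a component inventory and cross-reference it against known advisories. The sample document and advisory dictionary below are hand-written examples, not real CERN data.

```python
import json

# Minimal CycloneDX-style SBOM document (hand-written example for illustration).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1",
     "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
    {"type": "library", "name": "requests", "version": "2.31.0",
     "purl": "pkg:pypi/requests@2.31.0"}
  ]
}
"""

def inventory(sbom_text):
    """Flatten an SBOM into (name, version) pairs for vulnerability matching."""
    bom = json.loads(sbom_text)
    assert bom.get("bomFormat") == "CycloneDX"
    return [(c["name"], c["version"]) for c in bom.get("components", [])]

def affected(sbom_text, advisories):
    """Cross-reference the inventory against a dict of known-vulnerable versions."""
    return [pkg for pkg in inventory(sbom_text) if pkg in advisories]
```

Automating this match against a continuously updated vulnerability feed is what turns SBOM documents into an early-warning system for the dependency lifecycle.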
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-MO4BCO03  
About • Received ※ 02 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 14 November 2023 — Issued ※ 24 November 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TU1BCO06 Disentangling Beam Losses in The Fermilab Main Injector Enclosure Using Real-Time Edge AI FPGA, real-time, controls, network 273
 
  • K.J. Hazelwood, J.M.S. Arnold, M.R. Austin, J.R. Berlioz, P.M. Hanlet, M.A. Ibrahim, A.T. Livaudais-Lewis, J. Mitrevski, V.P. Nagaslaev, A. Narayanan, D.J. Nicklaus, G. Pradhan, A.L. Saewert, B.A. Schupbach, K. Seiya, R.M. Thurman-Keup, N.V. Tran
    Fermilab, Batavia, Illinois, USA
  • J.YC. Hu, J. Jiang, H. Liu, S. Memik, R. Shi, A.M. Shuping, M. Thieme, C. Xu
    Northwestern University, Evanston, Illinois, USA
  • A. Narayanan
    Northern Illinois University, DeKalb, Illinois, USA
 
  The Fermilab Main Injector enclosure houses two accelerators, the Main Injector and the Recycler Ring. During normal operation, high-intensity proton beams exist simultaneously in both. The two accelerators share the same beam loss monitors (BLM) and monitoring system, so deciphering the origin of any of the 260 BLM readings is often difficult. The (Accelerator) Real-time Edge AI for Distributed Systems project, or READS, has developed an AI/ML model, implemented on fast FPGA hardware, that disentangles mixed beam losses and attributes to each BLM, in real time, probabilities as to which machine(s) the loss originated from. The model inferences are then streamed to the Fermilab accelerator controls network (ACNET), where they are available for operators and experts alike to aid in tuning the machines.  
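The attribution step, assigning each BLM reading a probability per machine, can be caricatured with a tiny linear model and a softmax. The weights below are invented toy values; the actual READS model is a trained neural network running in FPGA firmware.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attribute_losses(blm_readings, W, b):
    """Per-BLM probabilities that a loss originated from each machine.

    W and b are illustrative 'trained' weights for a two-class (Main Injector
    vs Recycler) attribution; each BLM reading is a single scalar feature here."""
    logits = blm_readings[:, None] * W + b      # shape (n_blm, 2)
    return softmax(logits, axis=1)

# Toy weights: machine 0 dominates at high readings, machine 1 at low ones.
W = np.array([2.0, -2.0])
b = np.array([0.0, 0.0])
probs = attribute_losses(np.array([1.5, -1.5, 0.0]), W, b)
```

Each row of `probs` sums to one, giving the per-BLM machine attribution that would be streamed out over the controls network.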
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU1BCO06  
About • Received ※ 06 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 15 November 2023 — Issued ※ 06 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TU2BCO02 Protection Layers Design for the High Luminosity LHC Full Remote Alignment System controls, software, alignment, hardware 285
 
  • B. Fernández Adiego, E. Blanco Viñuela, A. Germinario, H. Mainaud Durand, M. Sosin
    CERN, Meyrin, Switzerland
 
  The Full Remote Alignment System (FRAS) is a complex measurement, alignment and control system designed to remotely align components of the Large Hadron Collider (LHC) following its High Luminosity upgrade. The purpose of FRAS is to guarantee optimal alignment of the strong focusing magnets and associated components near the experimental interaction points, while at the same time limiting the radiation dose to which surveyors in the LHC tunnel are subjected. A failure in the FRAS control system, or an operator mistake, could provoke an undesired displacement of a component, which could damage neighbouring equipment. Such an incident would incur considerable repair costs in terms of both money and time. To mitigate this possibility, an exhaustive risk analysis of FRAS has been performed, and protection layers have been designed according to the IEC 61511 standard. This paper presents the different functional safety techniques applied to FRAS, reports on the current project status, and introduces the future activities needed to complete the safety life cycle.  
slides icon Slides TU2BCO02 [2.757 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU2BCO02  
About • Received ※ 03 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 19 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TU2BCO04 Accelerator Systems Cyber Security Activities at SLAC controls, EPICS, network, simulation 292
 
  • G.R. White, A.L. Edelen
    SLAC, Menlo Park, California, USA
 
  Funding: Work supported in part by the U.S. Department of Energy under contract number DE-AC02-76SF00515.
We describe four cyber security related activities of SLAC and its collaborators. First, from a broad review of accelerator computing cyber and mission reliability, our analysis method, findings and outcomes. Second, lab-wide and accelerator penetration testing, in particular methods to control, coordinate, and trap potentially hazardous scans. Third, a summary gap analysis of recent US regulatory orders against common practice at accelerators, and our plans to address these in collaboration with the US Dept. of Energy. Finally, a summary of EPICS attack vectors, and technical plans to add authentication and encryption to EPICS itself.
 
slides icon Slides TU2BCO04 [1.677 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU2BCO04  
About • Received ※ 04 October 2023 — Revised ※ 13 October 2023 — Accepted ※ 15 November 2023 — Issued ※ 17 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TU2BCO06 Verification and Validation of the ESS Machine Protection System-of-Systems (MP-SoS) machine-protect, hardware, interface, software 296
 
  • A. Nordt, M. Carroll, S. Gabourin, J. Gustafsson, S. Kövecses de Carvalho, G.L. Ljungquist, S. Pavinato, A. Petrushenko
    ESS, Lund, Sweden
 
  The European Spallation Source, ESS, is a source of spallation neutrons used for neutron scattering experiments, complementary to synchrotron light sources. ESS has very ambitious goals: neutron experimentation at ESS should outperform other sources by one to two orders of magnitude. Each proton beam pulse generated by the linear accelerator will have a peak power of 125 MW. The machine’s equipment must be protected from damage due to beam losses, as such losses could lead to melting of, e.g., the beam pipe within less than 5 microseconds. System-of-Systems engineering has been applied to deploy systematic and robust protection of the ESS machine. The ESS Machine Protection System-of-Systems (MP-SoS) consists of large-scale distributed systems, whose components are themselves complex systems. Testing, verification and validation of the MP-SoS is rather challenging, as each constituent system has its own management and functionality that is not necessarily designed for protection, and the different system owners follow their own verification strategies. In this paper, we present our experience gained through the first three beam commissioning phases ESS has gone through so far. We describe how we managed to declare the MP-SoS ready for beam operation without overcomplicating the task, and we present the challenges and issues faced, and the lessons learned, during the verification and validation campaigns.  
slides icon Slides TU2BCO06 [1.930 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU2BCO06  
About • Received ※ 31 October 2023 — Revised ※ 03 November 2023 — Accepted ※ 12 December 2023 — Issued ※ 20 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TU2AO01 The Hybrid Identity of a Control System Organization: Balancing Support, Product, and R&D Expectations controls, software, framework, experiment 303
 
  • S. Baymani
    PSI, Villigen PSI, Switzerland
 
  Controls organizations are often expected to fulfill a dual role as both a support organization and an R&D organization, providing advanced and innovative services. This creates a tension between the need to provide services and the desire and necessity to develop cutting-edge technology. In addition, Controls organizations must balance the competing demands of product development, maintenance and operations, and innovation and R&D. These conflicting expectations can lead to neglect of long-term strategic issues and create imbalances within the organization, such as technical debt and lack of innovation. This paper will explore the challenges of navigating these conflicting expectations and the common traps, risks, and consequences of imbalances. Drawing on our experience at PSI, we will discuss specific examples of conflicts and their consequences. We will also propose solutions to overcome or improve these conflicts and identify a long-term, sustainable approach for a hybrid organization such as Controls. Our proposals will cover strategies for balancing support and product development, improving communication, and enabling a culture of innovation. Our goal is to spark a broader discussion around the identity and role of control system organizations within large laboratory organizations, and to provide concrete proposals for organizations looking to balance competing demands and build a sustainable approach to control systems and services.  
slides icon Slides TU2AO01 [2.129 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU2AO01  
About • Received ※ 05 October 2023 — Revised ※ 07 October 2023 — Accepted ※ 18 November 2023 — Issued ※ 12 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TU2AO03 A Successful Emergency Response Plan: Lessons in the Controls Section of the ALBA Synchrotron controls, software, MMI, synchrotron 316
 
  • G. Cuní, O. Matilla, J. Nicolàs, M. Pont
    ALBA-CELLS, Cerdanyola del Vallès, Spain
 
  These are challenging times for research institutes in the field of software engineering. Our designs are becoming increasingly complex, and a software engineer needs years of experience to become productive. On the other hand, the software job market is very dynamic, and a computer engineer receives tens of offers from private companies with attractive salaries every year. Occasionally, the perfect storm can occur and, in a short period of time, several key people in a group, with years of experience, leave. The situation is even more critical when the institute is plunged into a high growth rate with several new instruments under way. Naturally, engaged teams will resist reducing operational service quality but, on the other hand, the milestone dates for the new installations approach quickly. This article outlines the decision-making process and the measures taken to cope with this situation in the ALBA Synchrotron’s Controls Section. The plan included reorganizing teamwork but, more importantly, redefining the relationship with our clients and our prioritization processes. As a result, the team was restructured and new roles were created. In addition, effective coordination was vital, and new communication channels were established to ensure smooth workflows. The emergency peak period is over in our case, but we have learned many lessons and implemented many changes that will stay with us. They have made us more efficient and more resilient in case of future emergencies.  
slides icon Slides TU2AO03 [1.132 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU2AO03  
About • Received ※ 02 October 2023 — Accepted ※ 19 November 2023 — Issued ※ 28 November 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TU2AO04 Ensuring Smooth Controls Upgrades During Operation controls, software, interface, GUI 321
 
  • M. Pace, F. Hoguin, E. Matli, W. Sliwinski, B. Urbaniec
    CERN, Meyrin, Switzerland
 
  The CERN Accelerator Controls systems have to remain as stable as possible for operations. However, there are inevitable needs to introduce changes to provide new functionalities and conduct important consolidation activities. To deal with this, a formal procedure and approval process, the Smooth Upgrades procedure, was introduced and refined over a number of years. This involves declaring foreseen Controls changes as a function of the accelerator schedules, validating them with stakeholders, and organising their deployment in the production environment. All of this with the aim of minimising the impact on accelerator operation. The scope of this activity is CERN-wide, covering changes developed by all CERN units involved in Controls and encompassing the whole CERN accelerator and facility complex. In 2022, the mandate was further extended with a more formal approach to coordinate changes of the software interfaces of the devices running on front-end computers, which form a critical part of the smooth deployment process. Today, Smooth Upgrades are considered a key contributor to the performance and stability of the CERN Control system. This paper describes the Smooth Upgrades procedure and the underlying processes and tools such as schedule management, change management, and the monitoring of device usage. The paper also includes the major evolutions which allowed the current level of maturity and efficiency to be reached. Ideas for future improvements will also be covered.  
Slides TU2AO04 [1.506 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU2AO04  
About • Received ※ 06 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 21 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TU2AO05 Maintenance of the National Ignition Facility Controls Hardware System controls, target, laser, experiment 328
 
  • J.L. Vaher, G.K. Brunton, J. Dixon
    LLNL, Livermore, USA
 
  Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
At the National Ignition Facility (NIF), achieving fusion ignition for the first time ever in a laboratory required one of the most complex hardware control systems in the world. With approximately 1,200 control racks, 66,000 control points, and 100,000 cables, maintaining the NIF control system requires an exquisite choreography around experimental operations while adhering to NIF’s safety, security, quality, and efficiency requirements. To ensure systems operate at peak performance and remain available at all times to avoid costly delays, preventative maintenance activities are performed two days per week as the foundation of our effective maintenance strategy. Reactive maintenance addresses critical path issues that impact experimental operations through a rapid-response 24x7 on-call support team. Prioritized work requests are reviewed and approved daily by the facility operations scheduling team. NIF is now in the second decade of operations, and the aging of many control systems is threatening to affect performance and availability, potentially impacting planned progress of the fusion ignition program. The team is embarking on a large-scale refurbishment of systems to mitigate this threat. Our robust maintenance program will ensure NIF can capitalize on ignition and push the facility to even greater achievements. This paper will describe the processes, procedures, and metrics used to plan, coordinate, and perform controls hardware maintenance at NIF.
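The availability goal behind such a maintenance strategy can be made concrete with the standard reliability-engineering relation between mean time between failures (MTBF) and mean time to repair (MTTR). This is a generic textbook illustration, not a NIF-specific metric.

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the long-run fraction of time a system is
    operable, given its mean time between failures and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)
```

With illustrative numbers, a subsystem that fails on average every 480 hours and takes 4 hours to repair is available about 99.2% of the time; shortening MTTR (rapid-response on-call support) raises availability just as effectively as lengthening MTBF (preventative maintenance).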
LLNL Release Number: LLNL-ABS-848420
 
Slides TU2AO05 [1.938 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU2AO05  
About • Received ※ 03 October 2023 — Revised ※ 08 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 14 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUMBCMO07 Dynamic Control Room Interfaces for Complex Particle Accelerator Systems interface, controls, lattice, embedded 351
 
  • B.E. Bolling, G. Fedel, M. Muñoz, D.N. Nordt
    ESS, Lund, Sweden
 
  The European Spallation Source (ESS) is a research facility under construction aiming to be the world’s most powerful pulsed neutron source. It is powered by a complex particle accelerator designed to provide a 2.86 ms long proton pulse at 2 GeV with a repetition rate of 14 Hz. Commissioning of the first part of the accelerator has begun, and the requirements on the control system interfaces vary greatly as progress is made and new systems are added. In this paper, three such applications are discussed in separate sections. A Navigator interface was developed for the control room, aimed at giving operators and users a clear and structured way to quickly find the interfaces they need. This interface is constructed automatically by a Python-based application and is built from applications in any directory structure, both with and without developer interference (fully and semi-automatic methods). The second interface discussed in this paper is the Operations Accelerator Synoptic interface, which uses a set of input lattices and system interface templates to construct a configurable synoptic view of the systems in various sections and a controller panel for any selected system. Lastly, there is a configurable Radio Frequency Orchestration interface for Operations, which allows in-situ modification of the interface depending on which systems and components are selected.  
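The fully automatic construction method described for the Navigator — walking a directory tree so that folders become submenus and display files become entries — can be sketched as follows. The `.bob` suffix and the name `build_navigator` are assumptions for illustration; the actual ESS application is not shown in the abstract.

```python
import os


def build_navigator(root: str, suffix: str = ".bob") -> dict:
    """Recursively map a directory tree of display files into a nested menu
    structure: subdirectories become submenus, matching files become entries
    pointing at their full path. Empty subdirectories are skipped."""
    menu = {}
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry)
        if os.path.isdir(path):
            sub = build_navigator(path, suffix)
            if sub:
                menu[entry] = sub
        elif entry.endswith(suffix):
            menu[entry[: -len(suffix)]] = path
    return menu
```

Running this over a directory containing `overview.bob` and `RF/llrf.bob` yields a two-level menu with an `RF` submenu, which a GUI can then render without any developer interference.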
Slides TUMBCMO07 [3.248 MB]  
Poster TUMBCMO07 [10.503 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO07  
About • Received ※ 04 October 2023 — Accepted ※ 21 November 2023 — Issued ※ 04 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUMBCMO09 Front-End Monitor and Control Web Application for Large Telescope Infrastructures: A Comparative Analysis TANGO, controls, framework, interface 359
 
  • S. Di Frischia, M. Canzari
    INAF - OAAB, Teramo, Italy
  • V. Alberti
    INAF-OAT, Trieste, Italy
  • A. Georgiou
    CGI, Edinburgh, United Kingdom
  • H.R. Ribeiro
    Universidade do Porto, Faculdade de Ciências, Porto, Portugal
 
  A robust monitor and control front-end application is a crucial feature for large and scalable radio telescope infrastructures such as LOFAR and SKA, where the control system is required to manage numerous attribute values at a high update rate, and thus the operators must rely on an affordable user-interface platform which covers the whole range of operations. In this paper, two state-of-the-art web applications, Grafana and Taranta, are taken into account, developing a comparative analysis between the two software suites. This choice is motivated mostly by their widespread use together with the TANGO Controls framework, and by the necessity to offer a ground of comparison for large projects dealing with the development of a monitor and control GUI which interfaces to TANGO. We explain first the general architecture of both systems, and then we create a typical use case in which an interactive dashboard is built to monitor and control a hardware device. We then set up comparable metrics to evaluate the pros and cons of both platforms, regarding technical and operational requirements, fault tolerance, developer and operator effort, and so on. In conclusion, the comparative analysis and its results are summarized with the aim of offering stakeholders a basis for future choices.  
Slides TUMBCMO09 [0.621 MB]  
Poster TUMBCMO09 [1.552 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO09  
About • Received ※ 05 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 22 November 2023 — Issued ※ 27 November 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUMBCMO11 Upgrading and Adapting to CS-Studio Phoebus at Facility for Rare Isotope Beams controls, interface, EPICS, linac 364
 
  • T. Ashwarya, M. Ikegami, J. LeTourneau, A.C. Morton
    FRIB, East Lansing, Michigan, USA
 
  Funding: Work supported by the U.S. Department of Energy Office of Science under Cooperative Agreement DE-SC0000661
For more than a decade, the Eclipse-based Control System Studio has provided FRIB with a rich user interface to its EPICS-based control system. At FRIB, we use the Alarm Handler, BOY Display Manager, Scan Monitor/Editor, Channel Client, Save-and-Restore, and Data Browser to monitor and control various parts of the beamline. Our engineers have developed over 3000 displays using the BOY display manager, mapping various segments and areas of the FRIB beamline. CS-Studio Phoebus is the next-generation upgrade to the Eclipse-based CS-Studio; it is built on modern JavaFX graphics and aims to provide the existing functionality and more. FRIB has already transitioned away from the old BEAST alarm servers to the new Kafka-based Phoebus alarm servers, which have been monitoring thousands of our EPICS PVs with robust monitoring and notification capabilities. We faced certain challenges with the conversion of FRIB’s thousands of displays; to address those, we deployed scripts to help with the bulk conversion of screens, with automated mapping between BOY and Display Builder, and also continually improved the Phoebus auto-conversion tool. This paper details the ongoing transition of FRIB from Eclipse-based CS-Studio to Phoebus and the various adaptations and solutions that we used to ease this transition for our users. Moving to the new Phoebus-based services and clients has given us an opportunity to rectify and improve on certain issues known to have existed with Eclipse-based CS-Studio and its services.
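The bulk-conversion step can be sketched as follows. Here `conversion_plan` is a hypothetical helper that only pairs each BOY `.opi` file with the target Display Builder `.bob` path while preserving the directory layout; the actual conversion is performed by the Phoebus auto-conversion tool mentioned in the abstract.

```python
from pathlib import Path


def conversion_plan(root: Path) -> list[tuple[Path, Path]]:
    """Pair every Eclipse BOY display (*.opi) found under root with the *.bob
    path a Display Builder converter should write, keeping the same layout."""
    return [(opi, opi.with_suffix(".bob")) for opi in sorted(root.rglob("*.opi"))]
```

Driving a converter from such a plan makes a one-shot bulk run over thousands of displays reproducible: the same tree always yields the same ordered list of source/target pairs.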
 
Slides TUMBCMO11 [0.872 MB]  
Poster TUMBCMO11 [2.190 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO11  
About • Received ※ 03 October 2023 — Revised ※ 08 October 2023 — Accepted ※ 30 November 2023 — Issued ※ 16 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUMBCMO18 Upgrade of the AGOR Cyclotron Control System at UMCG-PARTREC controls, PLC, cyclotron, software 391
 
  • O.J. Kuiken, A. Gerbershagen, P. Schakel, J. Schwab, J.K. van Abbema
    PARTREC, Groningen, The Netherlands
 
  The AGOR cyclotron began development in the late 1980s and was commissioned in 1997. In 2020, when the facility was transferred from the University of Groningen to the University Medical Center Groningen, it marked the start of an upgrade process aimed at ensuring reliable operation. Recent, current and upcoming upgrades and additions encompass the following: Firstly, the current OT network uses custom IO modules based on the outdated Bitbus fieldbus. A pilot study was conducted to evaluate the use of NI CompactRIO-based subracks for analog and digital IO. Also, a similar PLC-based solution is currently under investigation. Secondly, the current control system is based on Vsystem/Vista and alternatives are being investigated. Thirdly, PLCs are upgraded to a newer generation. Fourthly, the current harp electronics and beam current readout electronics both use components that are hard to procure and use a Bitbus interface. New, in-house designs constructed as generic I-V converters eliminate this fieldbus dependency. Fifthly, the present RF slow control employs feedback loops to regulate the RF power and phase. Our new design incorporates functional improvements and condenses several discrete modules into a single cassette, resulting in fewer expected issues with faulty cables and connectors, and enabling us to maintain a larger stock of spares. Finally, the UMCG Radiotherapy department is constructing a new beamline with support from the technical staff at UMCG-PARTREC. The control will be based on NI CompactRIO.  
Slides TUMBCMO18 [0.771 MB]  
Poster TUMBCMO18 [2.389 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO18  
About • Received ※ 06 October 2023 — Revised ※ 08 October 2023 — Accepted ※ 30 November 2023 — Issued ※ 01 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUMBCMO19 MAX IV Laboratory’s Control System Evolution and Future Strategies controls, experiment, detector, TANGO 395
 
  • V. Hardion, P.J. Bell, T. Eriksson, M. Lindberg, P. Sjöblom, D.P. Spruce
    MAX IV Laboratory, Lund University, Lund, Sweden
 
  The MAX IV Laboratory, a 4th generation synchrotron radiation facility located in southern Sweden, has been operational since 2016. With multiple beamlines and experimental stations completed and in steady use, the facility is now approaching the third phase of development, which includes bringing the final two of the 16 planned beamlines into user operation. The focus is on achieving operational excellence by optimizing reliability and performance. Meanwhile, the strategy for the coming years is driven by the need to accommodate a growing user base, exploring the possibility of operating a Soft X-ray Laser (SXL), and achieving the diffraction limit at 10 keV of the 3 GeV ring. The Technical Division is responsible for the control and computing systems of the entire laboratory. This new organization provides a coherent strategy and a clear vision, with the ultimate goal of enabling science. The increasing demand for more precise and efficient control systems has led to significant development and maintenance efforts. Pushing the limits in remote access, data generation, time-resolved and fly-scan experiments, and beam stability requires the proper alignment of technology in IT infrastructure, electronics, software, data analysis, and management. This article discusses the motivation behind the updates, emphasizing the expansion of the control system’s capabilities and reliability. Lastly, the technological strategy will be presented to keep pace with the rapidly evolving technology landscape, ensuring that MAX IV is prepared for its next major upgrade.  
Slides TUMBCMO19 [8.636 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO19  
About • Received ※ 06 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 24 November 2023 — Issued ※ 29 November 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUMBCMO20 Introduction and Status of Fermilab’s ACORN Project controls, hardware, power-supply, software 401
 
  • D. Finstrom, E.G. Gottschalk
    Fermilab, Batavia, Illinois, USA
 
  Modernizing the Fermilab accelerator control system is essential to future operations of the laboratory’s accelerator complex. The existing control system has evolved over four decades and uses hardware that is no longer available and software that uses obsolete frameworks. The Accelerator Controls Operations Research Network (ACORN) Project will modernize the control system and replace end-of-life power supplies to enable future accelerator complex operations with megawatt particle beams. An overview of the ACORN Project will be presented along with a summary of recent R&D activities.  
Slides TUMBCMO20 [0.581 MB]  
Poster TUMBCMO20 [0.455 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO20  
About • Received ※ 04 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 06 December 2023 — Issued ※ 13 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUMBCMO21 SOLEIL II: Towards A Major Transformation of the Facility controls, experiment, synchrotron, MMI 404
 
  • Y.-M. Abiven, S.-E. Berrier, A. Buteau, I. Chado, E. Fonda, E. Frahi, B. Gagey, L.S. Nadolski, P. Pierrot
    SOLEIL, Gif-sur-Yvette, France
 
  Operational since 2008, SOLEIL [1] is providing users with access to a wide range of experimental techniques thanks to its 29 beamlines, covering a broad energy range from THz to hard X-rays. In response to new scientific and societal challenges, SOLEIL is undergoing a major transformation with the ongoing SOLEIL II project. This project includes designing an ambitious Diffraction Limited Storage Ring (DLSR) [2] to increase performance in terms of brilliance, coherence, and flux, upgrading the beamlines to provide advanced methods, and driving a digital transformation in data- and user-oriented approaches. This paper presents the project organization and the technical studies for the ongoing upgrades, with a focus on the digital transformation required to address future scientific challenges. It depicts the computing and data management program, presenting the targeted IT architecture to improve automated and data-driven processes for optimizing instrumentation. The optimization program covers the facility reconstruction period as well as future operation, including the use of Artificial Intelligence (AI) techniques for data production management, decision-making, complex feedback loops, and data processing. Real-time processes are to be applied in the acquisition scanning design, where detectors and robotic systems will be coupled to optimize beam time.  
Slides TUMBCMO21 [0.663 MB]  
Poster TUMBCMO21 [1.908 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO21  
About • Received ※ 04 October 2023 — Revised ※ 08 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 20 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUMBCMO25 Operational Controls for Robots Integrated in Accelerator Complexes controls, framework, interface, network 423
 
  • S.F. Fargier, M. Donzé
    European Organization for Nuclear Research (CERN), Geneva, Switzerland
  • M. Di Castro
    CERN, Meyrin, Switzerland
 
  The fourth industrial revolution, the current trend of automation and data interconnection in industrial technologies, is becoming an essential tool to boost maintenance and availability for space applications, warehouse logistics, particle accelerators and harsh environments in general. The main pillars of Industry 4.0 are the Internet of Things (IoT), wireless sensors, cloud computing, Artificial Intelligence (AI), machine learning and robotics. We are finding more and more ways to interconnect existing processes using technology as a connector between machines, operations, equipment and people. Facility maintenance and operation is becoming more streamlined with earlier notifications, simplifying the control and monitoring of operations. Core to success and future growth in this field is the use of robots to perform various tasks, particularly those that are repetitive, unplanned or dangerous, which humans either prefer to avoid or are unable to carry out due to hazards, size constraints, or the extreme environments in which they take place. To be operated in a reliable way within particle accelerator complexes, robot controls and interfaces need to be included in the accelerator control frameworks, which is not obvious when movable systems are operating within a harsh environment. In this paper, the operational controls for robots at CERN are presented. Current robot controls at CERN will be detailed, and the use case of the Train Inspection Monorail robot control will be presented.  
Slides TUMBCMO25 [47.070 MB]  
Poster TUMBCMO25 [2.228 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO25  
About • Received ※ 05 October 2023 — Revised ※ 29 November 2023 — Accepted ※ 11 December 2023 — Issued ※ 16 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUMBCMO30 EPICS Based Tool for LLRF Operation Support and Testing cavity, controls, EPICS, LLRF 432
 
  • K. Klys, W. Cichalewski
    TUL-DMCS, Łódż, Poland
  • P. Pierini
    ESS, Lund, Sweden
 
  Interruptions in the functioning of the LLRF (Low-Level Radio Frequency) systems of superconducting linear accelerators can result in significant downtime, leading to lost productivity and revenue. Accelerators are foreseen to operate under various conditions and in different operating modes. As such, it is crucial to have flexibility in their operation to adapt to demands. Automation is a potential solution to address these challenges by reducing the need for human intervention and improving the quality of control over the accelerator. The paper describes EPICS-based tools for LLRF control system testing, optimization, and operations support. The proposed software implements procedures and applications that are usually extensions to the core LLRF system functionalities and are performed by operators. This facilitates the maintenance of the accelerator, increases its flexibility in adapting to various work conditions, and can increase its availability. The paper focuses on the architecture of the solution. It also describes the components related to superconducting cavity parameter identification and the elements responsible for cavity tuning. Since the proposed solution is intended for the European Spallation Source control system, the application takes the form of multiple IOCs (Input/Output Controllers) wrapped into E3 (ESS EPICS Environment) modules. Nevertheless, its logic is universal and can be adapted to other LLRF control systems with superconducting cavities.  
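One of the cavity parameter identification tasks mentioned above can be illustrated with the textbook relation between the loaded quality factor and the field decay time constant measured after the RF drive is switched off. `loaded_q` is an illustrative name, not part of the E3 modules described in the paper.

```python
import math


def loaded_q(f0_hz: float, tau_s: float) -> float:
    """Loaded quality factor from a cavity field decay measurement: after the
    RF drive is switched off, the field amplitude decays as exp(-t/tau) with
    tau = Q_L / (pi * f0), hence Q_L = pi * f0 * tau."""
    return math.pi * f0_hz * tau_s
```

For a 704.42 MHz elliptical cavity (the ESS high-beta frequency), an amplitude decay constant of a few hundred microseconds corresponds to a loaded Q of order 10^5 to 10^6 — the kind of quantity such an identification IOC would log and trend.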
Slides TUMBCMO30 [0.466 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO30  
About • Received ※ 06 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 28 November 2023 — Issued ※ 30 November 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUMBCMO37 Personnel Safety Systems for ESS Beam on Dump and Beam on Target Operations MMI, neutron, radiation, target 452
 
  • M. Mansouri, A. Abujame, A. Andersson, M. Carroll, D. Daryadel, M. Eriksson, A. Farshidfar, R. Foroozan, V.A. Harahap, P. Holgersson, J. Lastow, G.L. Ljungquist, N. Naicker, A. Nordt, D. Paulic, A. Petrushenko, D.A. Plotnikov, Y. Takzare
    ESS, Lund, Sweden
 
  The European Spallation Source (ESS) is a pan-European project with 13 European nations as members, including the host nations Sweden and Denmark. ESS has been through staged installation and commissioning of the facility over the past few years. Along with the facility’s evolution, several Personnel Safety Systems, as key contributors to overall personnel safety, have been developed and commissioned to support safe operation of, for example, the cryomodule Site Acceptance Test stand, the Ion Source and Low Energy Beam Transport test stand, and trial operation of the Normal Conducting Linac. As ESS prepares for Beam on Dump (BoD) and Beam on Target (BoT) operations in the coming years, PSS development is ongoing to enable safe commissioning and operation of the Linear Accelerator, Target Station, Bunker, and day-one Neutron Instruments. Personnel Safety Systems at ESS (ESS PSS) is an integrated system composed of several PSS systems across the facility. Following the experience gained from the earlier PSS built at ESS, modularized solutions have been adopted for the ESS PSS that can adapt to the evolving needs of the facility, from BoD and BoT operations to installing new Neutron Instruments during steady-state operation. This paper provides an overview of the ESS PSS and its commissioning plan to support BoD and BoT operations.  
Slides TUMBCMO37 [1.135 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO37  
About • Received ※ 07 October 2023 — Revised ※ 08 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 23 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUMBCMO39 Enhanced Maintenance and Availability of Handling Equipment using IIoT Technologies controls, network, monitoring, framework 462
 
  • E. Blanco Viñuela, A.G. Garcia Fernandez, D. Lafarge, G. Thomas, J-C. Tournier
    CERN, Meyrin, Switzerland
 
  CERN currently houses 6000 handling equipment units categorized into 40 different families, such as electric overhead travelling cranes (EOT), hoists, trucks, and forklifts. These assets are spread throughout the CERN campus, on the surface (indoor and outdoor) as well as in underground tunnels and experimental caverns. Partial access to some areas, a large area to cover, thousands of units, radiation, and diverse needs among handling equipment make maintenance a cumbersome task. Without automatic monitoring solutions, the handling engineering team must conduct periodic on-site inspections to identify equipment in need of regulatory maintenance, leading to unnecessary inspections of underused equipment in hard-to-reach environments, but also to reliability risks for overused equipment between two technical visits. To overcome these challenges, a remote monitoring solution was introduced to extend equipment lifetime and perform optimal maintenance. This paper describes the implementation of a remote monitoring solution integrating IIoT (Industrial Internet of Things) technologies with the existing CERN control infrastructure and frameworks for control systems (UNICOS and WinCC OA). At present, over 600 handling equipment units are being monitored successfully, and this number will grow thanks to the scalability this solution offers.  
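The usage-based maintenance decision described above can be sketched as follows. `due_for_service` and its arguments are hypothetical, standing in for the operating-hours data the IIoT monitoring actually provides; the point is that technical visits target only the units that have genuinely accumulated use.

```python
def due_for_service(run_hours: dict[str, float],
                    hours_at_last_service: dict[str, float],
                    interval_hours: float = 100.0) -> list[str]:
    """Return the IDs of monitored units whose accumulated operating hours
    since their last service meet or exceed the maintenance interval.
    Units never serviced are assumed to start from zero hours."""
    return sorted(unit for unit, hours in run_hours.items()
                  if hours - hours_at_last_service.get(unit, 0.0) >= interval_hours)
```

A heavily used crane is flagged for a visit while a rarely used hoist in a hard-to-reach cavern is skipped, which is exactly the inversion of the blanket periodic-inspection regime.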
Slides TUMBCMO39 [0.560 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO39  
About • Received ※ 03 October 2023 — Accepted ※ 28 November 2023 — Issued ※ 19 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP001 Working Together for Safer Systems: A Collaboration Model for Verification of PLC Code PLC, software, controls, GUI 467
 
  • I.D. Lopez-Miguel
    IAP TUW, Wien, Austria
  • C. Betz, M. Salinas
    GSI, Darmstadt, Germany
  • E. Blanco Viñuela, B. Fernández Adiego
    CERN, Meyrin, Switzerland
 
  Formal verification techniques are widely used in critical industries to minimize software flaws. However, despite the benefits and recommendations of the functional safety standards, such as IEC 61508 and IEC 61511, formal verification is not yet a common practice in the process industry and large scientific installations. This is mainly due to its complexity and the need for formal methods experts. At CERN, the PLCverif tool was developed to verify PLC programs formally. Although PLCverif hides most of the complexity of using formal methods and removes barriers to formally verifying PLC programs, engineers trying to verify their developments still encounter different obstacles. These challenges include the formalization of program specifications or the creation of formal models. This paper discusses how to overcome these obstacles by proposing a collaboration model that effectively allows the verification of critical PLC programs and promotes knowledge transfer between organizations. By providing a simpler and more accessible way to carry out formal verification, tools like PLCverif can play a crucial role in achieving this goal. The collaboration model splits the specification, development, and verification tasks between organizations. This approach is illustrated through a case study between GSI and CERN.  
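What formal verification buys can be shown in miniature: exhaustively checking a safety property over every input of a toy interlock. This Python sketch is only a stand-in for what PLCverif does on real PLC code with model checkers; `beam_permit` and `holds_everywhere` are invented names, and a real program's state space is far too large to enumerate by hand.

```python
from itertools import product


def beam_permit(door_closed: bool, key_inserted: bool) -> bool:
    """Toy stand-in for a PLC interlock: grant the permit only when the
    door is closed and the operator key is inserted."""
    return door_closed and key_inserted

def holds_everywhere() -> bool:
    """Exhaustively check the safety property 'the permit is never granted
    while the door is open' over the whole (tiny) input space."""
    return all(not (beam_permit(door, key) and not door)
               for door, key in product((False, True), repeat=2))
```

In the collaboration model described above, one organization would write the `beam_permit` logic while another formalizes and checks properties like `holds_everywhere`, splitting specification, development, and verification.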
Poster TUPDP001 [0.744 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP001  
About • Received ※ 03 October 2023 — Accepted ※ 20 November 2023 — Issued ※ 19 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP002 Replacing Core Components of the Processing and Presentation Tiers of the MedAustron Control System controls, MMI, framework, interface 473
 
  • A. Höller, L. Adler, M. Eichinger, D. Gostinski, A. Kerschbaum-Gruber, C. Maderböck, M. Plöchl, S. Vörös
    EBG MedAustron, Wr. Neustadt, Austria
 
  MedAustron is a synchrotron-based ion therapy and research facility in Austria, that has been successfully treating cancer patients since 2016. MedAustron acts as a manufacturer of its own accelerator with a strong commitment to continuous development and improvement for our customers, our users and our patients. The control system plays an integral role in this endeavour. The presented project focuses on replacing the well-established WinCC OA SCADA system, enforcing separation of concerns mainly using .NET and web technologies, along with many upgrades of features and concepts where stakeholders had identified opportunities for improvement during our years of experience with the former control system setup for commissioning, operation and maintenance, as well as improving the user experience. Leveraging our newly developed control system API, we are currently working on an add-on called "Commissioning Worker". The concept foresees the functionality for users to create Python scripts, upload them to the Commissioning Worker, and execute them on demand or on a scheduled basis, making it easy and highly time-efficient to execute tasks and integrate with already established Python frameworks for analysis and optimization. This contribution outlines the key changes and provides examples of how the user experience has been improved.  
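The Commissioning Worker concept — upload a script once, then execute it on demand or on a schedule — can be sketched with Python callables standing in for uploaded user scripts. The class and method names are illustrative only, not the actual MedAustron control system API.

```python
import sched
import time


class CommissioningWorker:
    """Minimal sketch: registered scripts can run on demand or after a delay."""

    def __init__(self):
        self._scripts = {}
        self._scheduler = sched.scheduler(time.monotonic, time.sleep)

    def upload(self, name, script):
        """Register a callable under a name (stand-in for uploading a script)."""
        self._scripts[name] = script

    def run(self, name):
        """Execute a registered script immediately and return its result."""
        return self._scripts[name]()

    def schedule(self, name, delay_s):
        """Queue a registered script to run after delay_s seconds."""
        return self._scheduler.enter(delay_s, 1, self._scripts[name])

    def run_pending(self):
        """Block until all queued runs have executed."""
        self._scheduler.run()
```

A user could register an optimization routine once and then trigger it interactively during commissioning or queue it for a nightly run, without touching the core control system.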
Poster TUPDP002 [4.733 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP002  
About • Received ※ 03 October 2023 — Revised ※ 08 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 25 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP018 About the New Linear Accelerator Control System at GSI controls, timing, linac, software 529
 
  • P. Gerhard
    GSI, Darmstadt, Germany
 
  The first accelerator at GSI, the UNILAC, went into operation in the early 1970s. Today, the UNILAC is a small accelerator complex, consisting of several ion sources, injector and main linacs comprising 23 RF cavities, several strippers and other instrumentation, serving a number of experimental areas and the synchrotron SIS18. Three ion species can be provided at different energies simultaneously in a fast time-multiplex scheme, two at a time. The UNILAC is going to be the heavy-ion injector linac for FAIR, supported by a dedicated proton linac. The current linac control system dates back to the 1990s. It was initiated for SIS18 and the ESR, which enlarged GSI at the time, and was retrofitted to the UNILAC. The linear decelerator HITRAP was added in the last decade, while a superconducting cw linac is under development. Today, SIS18, the ESR and lately CRYRING are already operated by a new system based on the LHC Software Architecture (LSA), as FAIR will be. In order to replace the outdated linac control system and to simplify and unify future operation, a new control system on the same basis is being developed for all GSI linacs. This contribution reports on this venture from a machine physicist’s point of view.  
Poster TUPDP018 [2.886 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP018  
About • Received ※ 05 October 2023 — Accepted ※ 12 October 2023 — Issued ※ 14 October 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP019 Operation of the ESR Storage Ring with the LSA Control System accumulation, experiment, storage-ring, injection 534
 
  • S.A. Litvinov, R. Hess, B. Lorentz, M. Steck
    GSI, Darmstadt, Germany
 
  The LHC Software Architecture (LSA) has been applied as a new control system at the GSI accelerator complex in Germany. The Experimental Storage Ring (ESR) was recommissioned with LSA, and various accelerator and physics experiments have been performed over the last several years. An overview of the ESR performance is presented here, and the features and challenges of operation with the LSA system are outlined as well.  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP019  
About • Received ※ 06 October 2023 — Revised ※ 29 November 2023 — Accepted ※ 20 December 2023 — Issued ※ 20 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP021 Machine Protection System Upgrade for a New Timing System at ELBE timing, PLC, gun, controls 542
 
  • M. Justus, M. Kuntzsch, A. Schwarz, K. Zenker
    HZDR, Dresden, Germany
  • L. Krmpotić, U. Legat, Z. Oven, U. Rojec
    Cosylab, Ljubljana, Slovenia
 
  Running a CW electron accelerator as a user facility for more than two decades necessitates upgrades, or even complete redesigns, of subsystems at some point. At ELBE, the outdated timing system needed a replacement due to obsolete components and functional limitations. Starting in 2019, with Cosylab as contractor and using hardware by Micro-Research Finland, the new timing system has been developed and tested and is about to become operational. Besides the ability to generate a broader variety of beam patterns, from single-pulse mode to 26 MHz CW beams for the two electron sources, one of the benefits of the new system is improved machine safety. The ELBE control system is mainly based on PLCs and industrial SCADA tools. This contribution describes how integrating the new timing system into the existing machine entailed extensions and modifications of the ELBE machine protection system, i.e. a new core MPS PLC, and how they are being realized.  
Poster TUPDP021 [0.731 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP021  
About • Received ※ 04 October 2023 — Revised ※ 08 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 16 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP024 Technical Design Concept and First Steps in the Development of the New Accelerator Control System for PETRA IV controls, interface, database, software 552
 
  • R. Bacher, J.D. Behrens, T. Delfs, T. Tempel, J. Wilgen, T. Wilksen
    DESY, Hamburg, Germany
 
  At DESY, extensive technical planning and prototyping work is currently underway for the upgrade of the PETRA III synchrotron light source to PETRA IV, a fourth-generation low-emittance machine. As part of this planned project, the accelerator control system will also be modernized. This paper reports on the main decisions taken in this context and gives an overview of the scope of the development and implementation work.  
Poster TUPDP024 [0.766 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP024  
About • Received ※ 14 September 2023 — Revised ※ 08 October 2023 — Accepted ※ 12 October 2023 — Issued ※ 22 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP029 Architecture of the Control System for the Jülich High Brilliance Neutron Source controls, target, neutron, software 565
 
  • H. Kleines, Y. Beßler, O. Felden, R. Gebel, M. Glum, R. Hanslik, S. Janaschke, P. Kämmerling, A. Lehrach, D. Marschall, F. Palm, F. Suxdorf, J. Voigt
    FZJ, Jülich, Germany
  • J. Baggemann, Th. Brückel, T. Gutberlet, A. Möller, U. Rücker, A. Steffens, P. Zakalek
    JCNS, Jülich, Germany
  • O. Meusel, H. Podlech
    IAP, Frankfurt am Main, Germany
 
  In the Jülich High Brilliance Neutron Source (HBS) project, Forschungszentrum Jülich is developing a novel High Current Accelerator-driven Neutron Source (HiCANS) that is competitive with medium-flux fission-based research reactors or spallation neutron sources. The HBS will include a 70 MeV linear accelerator which delivers a pulsed proton beam with an average current of 100 mA to three target stations. At each target station the average power will be 100 kW, generating neutrons for at least six neutron instruments. The concept for the control system has been developed and published in the HBS technical design report. The main building blocks of the control system will be Control System Studio, EPICS and Siemens PLC technology (for vacuum, motion, personnel protection…). The timing system will be based on commercially available components from Micro-Research Finland. The accelerator LLRF will rely on MTCA.4 developments of DESY that are commercially available, too. A small fraction of the control system has already been implemented for the new JULIC neutron platform, an HBS target station demonstrator that has been developed at the existing JULIC cyclotron at Forschungszentrum Jülich.  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP029  
About • Received ※ 09 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 17 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP046 Beam Operation for Particle Physics and Photon Science with Pulse-to-Pulse Modulation at KEK Injector Linac injection, linac, experiment, controls 627
 
  • K. Furukawa, M. Satoh
    KEK, Ibaraki, Japan
 
  The electron and positron accelerator complex at KEK offers unique experimental opportunities in the fields of elementary particle physics with the SuperKEKB collider and photon science with two light sources. In order to maximize the experimental performance of those facilities, the injector LINAC employs pulse-to-pulse modulation at 50 Hz, injecting beams with diverse properties. The event-based control system effectively manages the different beam configurations. This injection scheme was initially designed 15 years ago and has been in full operation since 2019. Over the years, quite a few enhancements have been implemented. As the event-based controls are tightly coupled with microwave systems, machine protection systems and so on, their modification requires meticulous planning. However, the diverse requirements from particle physics and photon science, stemming from the distinct nature of those experiments, often necessitate patient negotiation to meet the demands of both fields. This presentation discusses those operational aspects of the multidisciplinary facility.  
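The pulse-to-pulse selection described in the abstract can be illustrated with a minimal sketch: each 50 Hz trigger carries an event code that selects one of several beam configurations for the next 20 ms machine pulse. All event codes and field names below are hypothetical placeholders, not KEK's actual values.

```python
# Hypothetical event-code dispatch for pulse-to-pulse modulation.
# Codes and parameters are illustrative, not the real KEK event codes.
BEAM_CONFIGS = {
    0x10: {"ring": "SuperKEKB HER", "species": "e-", "energy_gev": 7.0},
    0x11: {"ring": "SuperKEKB LER", "species": "e+", "energy_gev": 4.0},
    0x12: {"ring": "PF",            "species": "e-", "energy_gev": 2.5},
    0x13: {"ring": "PF-AR",         "species": "e-", "energy_gev": 6.5},
}

def configure_pulse(event_code):
    """Return the beam configuration selected by an event code."""
    cfg = BEAM_CONFIGS.get(event_code)
    if cfg is None:
        raise ValueError(f"unknown event code {event_code:#x}")
    return cfg

# A 50 Hz injection sequence is then just a list of event codes,
# one per 20 ms machine pulse:
sequence = [0x10, 0x11, 0x10, 0x12, 0x13]
rings = [configure_pulse(c)["ring"] for c in sequence]
```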
Poster TUPDP046 [2.498 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP046  
About • Received ※ 19 November 2023 — Accepted ※ 10 December 2023 — Issued ※ 11 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP047 Development of Operator Interface Using Angular at the KEK e⁻/e⁺ Injector Linac linac, database, electron, interface 631
 
  • M. Satoh, I. Satake
    KEK, Ibaraki, Japan
  • T. Kudou, S. Kusano
    Mitsubishi Electric System & Service Co., Ltd, Tsukuba, Japan
 
  At the KEK e⁻/e⁺ injector linac, the first electronic operation logbook system was developed using a relational database in 1995. This logbook system can automatically record detailed operational status changes. In addition, operators can manually input detailed information about operational problems, which is helpful for future troubleshooting. In 2010, the logbook system was improved with the implementation of a redundant database, an Adobe Flash-based frontend, and an image file handling feature. In 2011, the CSS archiver system with PostgreSQL and a new web-based archiver viewer utilizing Adobe Flash were introduced. However, with the discontinuation of Adobe Flash support at the end of 2020, it became necessary to develop a new frontend without Flash for both the operation logbook and archiver viewer systems. For this purpose, the authors adopted the Angular framework, which is widely used for building web applications using JavaScript. In this paper, we report on the development of operator interfaces using Angular for the injector linac.  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP047  
About • Received ※ 05 October 2023 — Revised ※ 08 October 2023 — Accepted ※ 10 December 2023 — Issued ※ 19 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP048 The Upgrade of Pulsed Magnet Control System Using PXIe Devices at KEK LINAC controls, EPICS, linac, real-time 635
 
  • D. Wang, M. Satoh
    KEK, Ibaraki, Japan
 
  In the KEK electron-positron injector LINAC, the pulsed magnet control system modulates the magnetic field at intervals of 20 ms, enabling simultaneous injection into four distinct target rings: the 2.5 GeV PF, 6.5 GeV PF-AR, 4 GeV SuperKEKB LER, and 7 GeV SuperKEKB HER. This system operates on a trigger signal delivered from the event timing system. Upon receiving the specified event code, the PXI DAC board outputs a waveform to the pulse driver, which in turn determines the current of the pulsed magnet. A combination of Windows 8.1 and LabVIEW had been used to implement the control system since 2017. Nonetheless, due to the cessation of support for Windows 8.1, a system upgrade has become imperative. To address this, Linux has been selected as a suitable replacement for Windows, and an EPICS driver for the PXIe devices is thus required. This manuscript introduces the development of the new Linux-based pulsed magnet control system.  
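The event-triggered waveform output described above can be sketched conceptually: on a timing event, the control software builds a current waveform for the DAC that ramps the pulsed magnet up to its setpoint and back down within the 20 ms slot. This is an illustrative toy model, not the actual KEK EPICS driver; the function name and shape parameters are assumptions.

```python
# Illustrative sketch of building a trapezoidal DAC waveform for a
# pulsed magnet: ramp up, hold at the setpoint, ramp down. Not the
# real KEK driver code; sample counts and fractions are made up.
def dac_waveform(setpoint, n_samples=200, ramp_fraction=0.2):
    """Return a trapezoidal waveform as a list of DAC sample values."""
    ramp = int(n_samples * ramp_fraction)      # samples in each ramp
    flat = n_samples - 2 * ramp                # samples at the setpoint
    up = [setpoint * (i + 1) / ramp for i in range(ramp)]
    hold = [setpoint] * flat
    down = list(reversed(up))
    return up + hold + down

# Waveform for a hypothetical 10 A setpoint:
wf = dac_waveform(10.0)
```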
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP048  
About • Received ※ 06 October 2023 — Accepted ※ 11 December 2023 — Issued ※ 14 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP049 15 Years of the J-PARC Main Ring Control System Operation and Its Future Plan controls, network, EPICS, software 639
 
  • S. Yamada
    J-PARC, KEK & JAEA, Ibaraki-ken, Japan
 
  The accelerator control system of the J-PARC MR started operation in 2008. Most components of the control computers, such as servers, disks, operation terminals, front-end computers and software, which were introduced during the construction phase, have gone through one or two generational changes in the last 15 years. Alongside, the policies for the operation of the control computers have changed. This paper reviews the renewal of those components and discusses the philosophy behind the configuration and operational policy. It also discusses the approach to matters that did not exist at the beginning of the project, such as virtualization and cyber security.  
Poster TUPDP049 [0.489 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP049  
About • Received ※ 05 October 2023 — Revised ※ 25 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 17 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP050 Development and Test Operation of the Prototype of the New Beam Interlock System for Machine Protection of the RIKEN RI Beam Factory controls, EPICS, FPGA, experiment 645
 
  • M. Komiyama, M. Fujimaki, N. Fukunishi, A. Uchiyama
    RIKEN Nishina Center, Wako, Japan
  • M. Hamanaka, K. Kaneko, R. Koyama, M. Nishimura, H. Yamauchi
    SHI Accelerator Service Ltd., Tokyo, Japan
  • A. Kamoshida
    National Instruments Japan Corporation, Minato-ku, Tokyo, Japan
 
  We have been operating the beam interlock system (BIS) for machine protection of the RIKEN RI Beam Factory (RIBF) since 2006. It stops beams approximately 15 ms after receiving an alert signal from the accelerator and beam line components. We continue to operate the BIS successfully; however, we are currently developing a successor system to stop a beam within 1 ms, considering that the beam intensity of the RIBF will continue to increase in the future. After comparing multiple systems, CompactRIO, a product by National Instruments, was selected for the successor system. The interlock logic for signal input/output is implemented on a field-programmable gate array (FPGA) because fast processing speed is required. On the other hand, signal condition setting and monitoring do not require the same speed as the interlock logic. They are implemented on the RT-OS and controlled using the Experimental Physics and Industrial Control System (EPICS) by setting up an EPICS server on the RT-OS. As a first step in development, a prototype consisting of two stations that handle only digital alert signals was developed and installed in part of the RIBF in the summer of 2022 (224 input contacts). The signal response time of the prototype, measured with an oscilloscope, averaged 0.52 ms with both stations (the distance between the two stations is approximately 75 m). Furthermore, by additionally installing a pull-up circuit at each signal input contact of the system, the system response time was successfully reduced to approximately 0.13 ms.  
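The decision part of such a fast interlock is logically very simple, which is why it fits on an FPGA: the alert contacts of all stations are combined, and the beam-stop output is asserted if any contact reports an alert. A toy software model of that combinational logic (illustrative only; the real RIBF logic runs in FPGA fabric and handles 224 contacts) might look like:

```python
# Toy model of the interlock decision described in the abstract:
# OR together the alert contacts of both stations and assert a
# beam-stop output if any of them is active. Purely illustrative;
# the actual system implements this in FPGA logic, not Python.
def beam_stop(station_a_alerts, station_b_alerts):
    """Return True if any contact on either station reports an alert.

    Contacts are given as iterables of truthy/falsy values
    (e.g. 1 = alert raised, 0 = healthy).
    """
    return any(station_a_alerts) or any(station_b_alerts)

# One healthy scan and one scan with a single raised contact:
healthy = beam_stop([0, 0, 0], [0, 0])
tripped = beam_stop([0, 0, 1], [0, 0])
```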
Poster TUPDP050 [0.816 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP050  
About • Received ※ 03 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 18 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP070 Open Time Proposal Submission System for the MeerKAT Radio Telescope instrumentation, software, site, data-management 666
 
  • R.L. Schwartz, T.B. Baloyi, S.S. Sithole
    SARAO, Cape Town, South Africa
 
  Through periodic Call for Proposals, the South African Radio Astronomy Observatory (SARAO), allocates time on the MeerKAT Radio Telescope to the international community for the purpose of maximizing the scientific impact of the telescope, while contributing to South African scientific leadership and human capital development. Proposals are submitted through the proposal submission system, followed by a stringent review process where they are graded based on certain criteria. Time on the telescope is then allocated based on the grade and rank achieved. This paper outlines the details of the Open Time proposal submission and review process, and the design and implementation of the software used to grade the proposals and allocate the time on the MeerKAT Radio Telescope.  
Poster TUPDP070 [0.490 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP070  
About • Received ※ 27 September 2023 — Accepted ※ 13 October 2023 — Issued ※ 19 October 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP072 Overview of Observation Preparation and Scheduling on the MeerKAT Radio Telescope controls, factory, MMI, real-time 669
 
  • L.P. Williams, R.L. Schwartz
    SARAO, Cape Town, South Africa
 
  Funding: National Research Foundation (South Africa)
The MeerKAT radio telescope performs a wide variety of scientific observations. Observation durations range from a few minutes to many hours, and may form part of observing campaigns that span many weeks. Static observation requirements, such as resources or array configuration, may be determined and verified months in advance. Other requirements, however, such as atmospheric conditions, can only be verified hours before the planned observation event. This wide variety of configuration, scheduling and control parameters is managed with features provided by the MeerKAT software. The short-term scheduling functionality has expanded from simple queues to support for automatic scheduling (queuing). To support long-term schedule planning, the MeerKAT telescope includes an Observation Planning Tool which provides configuration checking as well as dry-run environments that can interact with the production system. Observations are atomized to support simpler specification, facilitating machine learning projects and more flexibility in scheduling around engineering and maintenance events. This paper provides an overview of observation specification, configuration, and scheduling on the MeerKAT telescope. The support for integration with engineering subsystems is also described. Engineering subsystems include User Supplied Equipment: hardware and computing resources integrated to expand the MeerKAT telescope’s capabilities.
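The split between statically verified requirements and last-minute checks lends itself to a simple automatic-queue sketch: pick the highest-priority observation whose volatile constraints (e.g. atmospheric conditions) are satisfied right now, and defer the rest. All names and fields below are illustrative assumptions, not the MeerKAT software's actual API.

```python
# Minimal sketch of automatic queue scheduling under last-minute
# constraint checks. Hypothetical structure, not MeerKAT's real code.
import heapq

def next_observation(queue, conditions):
    """Pop the highest-priority observation whose constraint holds now.

    `queue` is a heap of (priority, name, constraint) tuples, where a
    lower priority number means more urgent and `constraint` is a
    predicate over the current conditions. Skipped entries are pushed
    back so they can run once conditions allow.
    """
    deferred, chosen = [], None
    while queue:
        prio, name, ok = heapq.heappop(queue)
        if ok(conditions):
            chosen = name
            break
        deferred.append((prio, name, ok))
    for item in deferred:          # restore anything we skipped
        heapq.heappush(queue, item)
    return chosen

q = []
heapq.heappush(q, (1, "deep_survey", lambda c: c["wind_ok"]))
heapq.heappush(q, (2, "calibration", lambda c: True))
first = next_observation(q, {"wind_ok": False})   # wind blocks the survey
```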
 
Poster TUPDP072 [1.546 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP072  
About • Received ※ 05 October 2023 — Revised ※ 09 November 2023 — Accepted ※ 20 December 2023 — Issued ※ 21 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP077 Towards the ALBA II : the Computing Division Preliminary Study controls, hardware, synchrotron, software 691
 
  • O. Matilla, J.A. Avila-Abellan, F. Becheri, S. Blanch-Torné, A.M. Burillo, A. Camps Gimenez, I. Costa, G. Cuní, T. Fernández Maltas, R.H. Homs, J. Moldes, E. Morales, C. Pascual-Izarra, S. Pusó Gallart, A. Pérez Font, Z. Reszela, B. Revuelta, A. Rubio, S. Rubio-Manrique, J. Salabert, N. Serra, X. Serra-Gallifa, N. Soler, S. Vicente Molina, J. Villanueva
    ALBA-CELLS, Cerdanyola del Vallès, Spain
 
  The ALBA Synchrotron has started the work for upgrading the accelerator and beamlines towards a 4th generation source, the future ALBA II, in 2030. A complete redesign of the magnet lattice and an upgrade of the beamlines will be required. But in addition, the success of the ALBA II project will depend on multiple factors. First, after thirteen years in operation, all the subsystems of the current accelerator must be revised. To guarantee their lifetime until 2060, all the possible ageing and obsolescence factors must be considered. Besides, many technical enhancements have improved performance and reliability in recent years. Using the latest technologies will also avoid obsolescence in the medium term, both in the hardware and the software. Considering this, the project ALBA II Computing Preliminary Study (ALBA II CPS) was launched in mid-2021, identifying 11 work packages. In each one, a group of experts was selected to analyze the different challenges and needs in the computing and electronics fields for the future accelerator design: from power supply technologies, IOC architectures, or PLC-based automation systems to synchronization needs, controls software stack, IT systems infrastructure or machine learning opportunities. Now, we have a clearer picture of what is required. Hence, we can build a realistic project plan to ensure the success of the ALBA II. It is time to get ALBA II off the ground.  
Poster TUPDP077 [0.687 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP077  
About • Received ※ 05 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 11 December 2023 — Issued ※ 15 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP078 Management of Configuration for Protection Systems at ESS controls, interface, machine-protect, PLC 695
 
  • M. Carroll, G.L. Ljungquist, M. Mansouri, A. Nordt, D. Paulic
    ESS, Lund, Sweden
 
  The European Spallation Source (ESS) in Sweden is one of the largest science and technology infrastructure projects being built today. The facility design and construction include the most powerful linear proton accelerator ever built, a five-tonne, helium-cooled tungsten target wheel and 22 state-of-the-art neutron instruments. The Protection Systems Group (PSG) at ESS is responsible for the delivery and management of all the Personnel Safety Systems (PSS) and Machine Protection Systems (MPS), consisting of up to 30 PSS control systems and 6 machine protection systems. Due to the bespoke and evolving nature of the facility, managing the configuration of all these systems poses a significant challenge for the team. This paper will describe the methodology followed to ensure that the correct configuration is implemented and maintained throughout the full engineering lifecycle of these systems.  
Poster TUPDP078 [1.216 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP078  
About • Received ※ 06 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 26 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP081 The ESS Fast Beam Interlock System - Design, Deployment and Commissioning of the Normal Conducting Linac MMI, controls, software, FPGA 704
 
  • S. Pavinato, M. Carroll, S. Gabourin, A.A. Gorzawski, A. Nordt
    ESS, Lund, Sweden
 
  The European Spallation Source (ESS) is a research facility based in Lund, Sweden. Its linac will have a high peak current of 62.5 mA and a long pulse length of 2.86 ms with a repetition rate of 14 Hz. The Fast Beam Interlock System (FBIS), the core system of the Beam Interlock System at ESS, is critical for ensuring the safe and reliable operation of the ESS machine. It is a modular and distributed system. FBIS will collect data from all relevant accelerator and target systems through ~300 direct inputs and decides whether beam operation can start or must stop. The FBIS operates at high data speed and requires low-latency decision-making capability to avoid introducing delays and to ensure the protection of the accelerator. This is achieved through two main hardware blocks equipped with FPGA-based boards: an mTCA ’Decision Logic Node’ (DLN), executing the protection logic and realizing interfaces to the higher-level safety, timing and EPICS control systems, and a cPCI form-factor ’Signal Condition Unit’ (SCU), implementing the interface between FBIS inputs/outputs and DLNs. In this paper we present the implementation of the FBIS control system, the integration of different hardware and software components, and a summary of its performance during the latest beam commissioning phase to the DTL4 Faraday Cup in 2023.  
Poster TUPDP081 [2.284 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP081  
About • Received ※ 26 September 2023 — Accepted ※ 11 December 2023 — Issued ※ 16 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP082 Target Safety System Maintenance target, proton, PLC, site 709
 
  • A. Sadeghzadeh, L. Coney, O. Ingemansson, O.J. Janson, M. Olsson
    ESS, Lund, Sweden
 
  The Target Safety System (TSS) is part of the overall radiation safety plan for the Target Station at the European Spallation Source (ESS). The Target Controls and Safety group of the ESS Target Division is responsible for the design and construction of the TSS. The TSS stops proton production if vital process conditions measured at the Target Station are outside the set boundaries, with the potential of causing radiation injury to third parties (the public outside the ESS fences). The TSS is a three-channel fail-safe safety system consisting of independent sensors, two redundant trains based on relay and safety-PLC techniques, and independent means of stopping the proton beam in the accelerator. The TSS continuously monitors safety parameters in the target helium cooling, wheel, and monolith atmosphere systems, evaluates their conditions, and turns off the proton beam if necessary. After passing several stages of off-site tests, the TSS cabinets are now installed on site and have successfully passed internal integration. In this paper we explain the features built into the system to ease emergency repairs, system modification, system safety verification and, in general, the maintainability of the system.  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP082  
About • Received ※ 05 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 18 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP086 Operational Tool for Automatic Setup of Controlled Longitudinal Emittance Blow-Up in the CERN SPS controls, emittance, target, software 723
 
  • N. Bruchon, I. Karpov, N. Madysa, G. Papotti, D. Quartullo
    CERN, Meyrin, Switzerland
 
  Controlled longitudinal emittance blow-up is necessary to ensure the stability of high-intensity LHC-type beams in the CERN SPS. It consists of diffusing the particles in the bunch core by injecting bandwidth-limited noise into the beam phase loop of the main 200 MHz RF system. Obtaining the correct amplitude and bandwidth of this noise signal is non-trivial, and it may be tedious and time-consuming if done manually. An automatic approach was developed to speed up the determination of optimal settings. The problem complexity is reduced by splitting the blow-up into multiple sub-intervals, for which the noise parameters are optimized by observing the longitudinal profiles at the end of each sub-interval. The derived bunch lengths are used to compute the objective function, which measures the error with respect to the requirements. The sub-intervals are tackled sequentially: the optimization moves to the next one only when the previous sub-interval is completed. The proposed tool is integrated into the CERN generic optimization framework, which features pre-implemented optimization algorithms. Both single- and multi-bunch high-intensity beams are quickly and efficiently stabilized by the optimizer, which has so far been used in high-intensity studies. A possible extension to Bayesian optimization is being investigated.  
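The sequential sub-interval strategy described above can be sketched in a few lines: for each sub-interval in turn, choose the noise setting whose resulting bunch length best matches that sub-interval's target, then start the next sub-interval from the achieved state. The "machine response" below is a deliberately trivial stand-in, not SPS beam physics, and all names are illustrative.

```python
# Conceptual sketch of sequential per-sub-interval optimization of a
# noise amplitude. Toy model only; the real tool optimizes amplitude
# and bandwidth against measured longitudinal profiles.
def optimize_blowup(targets, amplitudes, response):
    """Pick one noise amplitude per sub-interval, sequentially.

    `response(prev_length, amplitude)` returns the bunch length at the
    end of a sub-interval; each sub-interval starts from the result of
    the previous one, mirroring the sequential scheme in the abstract.
    """
    settings, length = [], 1.0            # initial bunch length (a.u.)
    for target in targets:
        best = min(amplitudes,
                   key=lambda a: abs(response(length, a) - target))
        length = response(length, best)   # advance to the achieved state
        settings.append(best)
    return settings, length

# Toy response: noise amplitude inflates the bunch proportionally.
toy = lambda L, a: L * (1.0 + 0.1 * a)
settings, final = optimize_blowup([1.1, 1.3], [0.0, 0.5, 1.0, 2.0], toy)
```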
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP086  
About • Received ※ 05 October 2023 — Accepted ※ 11 December 2023 — Issued ※ 19 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP087 Enhancing Measurement Quality in HL-LHC Magnets Testing Using Software Techniques on Digital Multimeter Cards-Based System software, hardware, controls, LabView 729
 
  • H. Reymond, O.O. Andreassen, M. Charrondiere, C. Charrondière, P.D. Jankowski
    CERN, Meyrin, Switzerland
 
  The HL-LHC magnets play a critical role in the High-Luminosity Large Hadron Collider project, which aims to increase the luminosity of the LHC and enable more precise studies of fundamental physics. Ensuring the performance and reliability of these magnets requires high-precision measurements of their electrical properties during testing. To meet the R&D program needs of the new superconducting magnet technology, an accurate and generic voltage measurement system was developed after the testing and validation campaign of the LHC magnets. The system was based on a set of digital multimeter (DMM) cards installed in a PXI modular chassis and controlled using CERN’s in-house software development. It allowed for the measurement of the electrical properties of the magnet prototypes during their study phase. However, during the renovation of the magnet test benches and in preparation for the HL-LHC magnet series measurement, some limitations and instabilities were discovered during long recording measurements. As a result, it was decided to redesign the measurement system. The emergence and promises of the new PXIe platform, along with the requirement to build eight new systems to be operated similarly to the existing four, led to a complete redesign of the software. This article describes the various software techniques employed to address platform compatibility issues and significantly improve measurement accuracy, thus ensuring the reliability and quality of the data obtained from the HL-LHC magnet tests.  
Poster TUPDP087 [6.660 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP087  
About • Received ※ 02 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 12 October 2023 — Issued ※ 13 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP089 Improving CERN’s Web-based Rapid Application Platform controls, software, GUI, timing 740
 
  • E. Galatas, S. Deghaye, J. Raban, C. Roderick, D. Saxena, A. Solomou
    CERN, Meyrin, Switzerland
 
  The Web-based Rapid Application Platform (WRAP) aims to provide a centralized, zero-code, drag-and-drop means of GUI creation*. It was developed at CERN to address the high maintenance cost of supporting multiple evolving GUI technologies and to minimise duplication of effort by those developing different GUI applications. WRAP leverages web technologies and existing controls infrastructure to provide a drop-in solution for a range of use cases. However, providing a centralized platform to cater for diverse needs and to interact with a multitude of data sources presented performance, design, and deployment challenges. This paper describes how the WRAP architecture has evolved to address these challenges, overcoming technological limitations and increasing usability, and the resulting end-user adoption.
* "WRAP - A WEB-BASED RAPID APPLICATION DEVELOPMENT FRAMEWORK FOR CERN’S CONTROLS INFRASTRUCTURE", E. Galatas et al, ICALEPCS 2021, Shanghai, THPV013
 
Poster TUPDP089 [3.174 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP089  
About • Received ※ 05 October 2023 — Revised ※ 20 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 22 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP091 Upgrade of the Process Control System for the Cryogenic Installation of the CERN LHC Atlas Liquid Argon Calorimeter controls, PLC, cryogenics, software 752
 
  • C.F. Fluder, C. Fabre, L.G. Goralczyk, M. Pezzetti, A. Zmuda
    CERN, Meyrin, Switzerland
  • K.M. Mastyna
    AGH, Cracow, Poland
 
  The ATLAS (LHC detector) Liquid Argon Calorimeter is classified as a critical cryogenic system due to its requirement for uninterrupted operation. The system has been in continuous nominal operation since the start-up of the LHC, operating with very high reliability and availability. Over this period, control system maintenance was focused on the most critical hardware and software interventions, without direct impact on the process control system. Consequently, after several years of steady state operation, the process control system became obsolete (reached End of Life), requiring complex support and without the possibility of further improvements. This led to a detailed review towards a complete upgrade of the PLC hardware and process control software. To ensure uninterrupted operation, longer equipment lifecycle, and further system maintainability, the latest technology was chosen. This paper presents the methodology used for the process control system upgrade during development and testing phases, as well as the experience gained during deployment. It details the architecture of the new system based on a redundant (Hot Standby) PLC solution, the quality assurance protocol used during the hardware validation and software testing phases, and the deployment procedure.  
Poster TUPDP091 [1.886 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP091  
About • Received ※ 03 October 2023 — Accepted ※ 06 December 2023 — Issued ※ 11 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP092 Life Cycle Management and Reliability Analysis of Controls Hardware Using Operational Data From EAM hardware, controls, electron, status 758
 
  • E. Fortescue, I. Kozsar, V. Schramm
    CERN, Meyrin, Switzerland
 
  The use of operational data from Enterprise Asset Management (EAM) systems has become an increasingly popular approach for conducting reliability analysis of industrial equipment. This paper presents a case study of how EAM data was used to analyse the reliability of CERN’s standard controls hardware, deployed and maintained by the Controls Electronics and Mechatronics group. The first part of the study involved the extraction, treatment and analysis of state-transition data to detect failures. The analysis was conducted using statistical methods, including failure-rate analysis and time-to-failure analysis, to identify trends in equipment performance and plan for future obsolescence, upgrades and replacement strategies. The results of the analysis are available via a dynamic online dashboard. The second part of the study considers front-end computers as repairable systems, composed of the previously studied non-repairable modules. The faults were recorded and analysed using the Accelerator Fault Tracking system. The study brought to light the need for high-quality data, which led to improvements in the data recording process and refinement of the infrastructure team’s workflow. In the future, reliability analysis will become even more critical for ensuring the cost-effective and efficient operation of controls systems for accelerators. This study demonstrates the potential of EAM operational data to provide valuable insights into equipment reliability and inform decision-making for repairable and non-repairable systems.  
Poster TUPDP092 [40.179 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP092  
About • Received ※ 04 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 12 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP095 Design of the Control System for the CERN PSB RF System controls, software, PLC, MMI 772
 
  • D. Landré, Y. Brischetto, M. Haase, M. Niccolini
    CERN, Meyrin, Switzerland
 
  The RF system of the CERN PS Booster (PSB) has been renovated to allow the extraction energy increase and the higher beam intensities required by the LHC Injectors Upgrade (LIU) project. It relies on accelerating cells installed in three straight sections of each of the four accelerating rings of PSB. Each cell is powered by one solid-state RF amplifier. This modularity is also embedded in its controls architecture, which is based on PLCs, several FESA (Front-End Software Architecture) classes, and specialized graphical user interfaces for both operation and expert use. The control system was commissioned during the Long Shutdown 2 (LS2) and allows for the nominal operation of the machine. This paper describes the design and implementation of the control system, as well as the system performance and achieved results.  
poster icon Poster TUPDP095 [0.857 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP095  
About • Received ※ 19 September 2023 — Revised ※ 03 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 28 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP096 Early Fire Detection in High Power Equipment kicker, detector, interface, controls 775
 
  • S. Pavis, E. Carlier, C.A. Lolliot, N. Magnin
    CERN, Meyrin, Switzerland
 
  Very early fire detection is needed in equipment cabinets containing high-power supply sources and power-electronic switching devices, where building and tunnel fire detection systems may not be well placed to detect a fire until it is well established. Highly sensitive aspirating smoke detection systems, which continuously sample the air quality inside equipment racks and assert a local power interlock in the event of smoke detection, can cut the power to these circuits at a very early stage, thereby preventing fires before they take hold. Sampling pipework can also be routed to specific locations within the cabinet for more zone-focused monitoring, while the electronic part of the detection system is located outside the cabinet for easy operation and maintenance. Several of these early fire detection systems have recently been installed in LHC and SPS accelerator kicker installations, with many more planned. This paper compares the detection technology from typical manufacturers, presents the approach adopted and its mechanical installation, and discusses the integration within different control architectures.  
poster icon Poster TUPDP096 [1.139 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP096  
About • Received ※ 05 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 18 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP099 Spark Activity Monitoring for LHC Beam Dump System high-voltage, GUI, controls, extraction 784
 
  • C.B. Durmus, E. Carlier, N. Magnin, T.D. Mottram, V. Senaj
    CERN, Meyrin, Switzerland
 
  The LHC Beam Dump System is composed of 25 fast-pulsed magnets per beam to extract and dilute the beam onto an external absorber block. Each magnet is powered by a high-voltage generator that discharges the energy stored in capacitors into the magnet through high-voltage switches. These switches are housed in air, in cabinets that are not dust-protected. In past years of LHC operation, we noticed electrical sparks on the high-voltage switches due to the release of charge accumulated on the surfaces of the insulators and the switches. These sparks can cause self-triggering of the generators, increasing the risk of asynchronous dumps, which must be avoided as much as possible. In order to detect dangerous spark activity in the generators before a self-trigger occurs, a Spark Activity Monitoring (SAM) system was developed. SAM consists of 50 detection and acquisition systems deployed at the level of each high-voltage generator, and one external global surveillance process. The detection and acquisition systems are based on digitisers that detect and capture spark waveforms from current pick-ups placed in various electrical paths inside each generator. The global surveillance process collects data from all the acquisition systems to assess the risk of self-trigger based on the detected spark amplitude and rate. This paper describes the architecture, implementation, optimisation, deployment and operational experience of the SAM system.  
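The amplitude-and-rate risk assessment described above might be sketched as follows. The thresholds, time window, and three-level risk scale are invented for the example and are not taken from the actual SAM implementation.

```python
# Illustrative spark-activity risk classifier: self-trigger risk rises with
# both the rate of recent sparks and their amplitude. All thresholds and the
# risk scale are hypothetical.
def spark_risk(spark_times, spark_amps, window_s=3600.0,
               rate_limit=10, amp_limit=0.8):
    """Classify self-trigger risk from recent spark activity."""
    if not spark_times:
        return "LOW"
    t_now = max(spark_times)
    recent = [a for t, a in zip(spark_times, spark_amps)
              if t_now - t <= window_s]
    high_rate = len(recent) >= rate_limit   # too many sparks in the window
    high_amp = max(recent) >= amp_limit     # at least one large spark
    if high_rate and high_amp:
        return "HIGH"
    if high_rate or high_amp:
        return "MEDIUM"
    return "LOW"


print(spark_risk([0, 10, 20], [0.2, 0.3, 0.9]))  # MEDIUM: amplitude only
```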
poster icon Poster TUPDP099 [1.334 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP099  
About • Received ※ 06 October 2023 — Revised ※ 21 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 09 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP101 A Modular Approach for Accelerator Controls Components Deployment for High Power Pulsed Systems controls, kicker, timing, power-supply 788
 
  • S. Pavis, R.A. Barlow, C. Boucly, E. Carlier, C. Chanavat, C.A. Lolliot, N. Magnin, P. Van Trappen
    CERN, Meyrin, Switzerland
  • N. Voumard
    European Organization for Nuclear Research (CERN), Geneva, Switzerland
 
  As part of the LHC Injector Upgrade (LIU) project, the controls of the PSB and PS injection kickers at CERN were upgraded during Long Shutdown 2 (LS2) from heterogeneous home-made electronic solutions to a modular and open architecture. Although the two kickers have significantly different functionalities, topologies and operational requirements, standardized hardware and software control blocks have been used for both systems. The new control architecture is built around a set of sub-systems, each with a specific generic function required for the control of fast pulsed systems, such as equipment and personnel safety, slow control and protection, high-precision fast timing, fast interlocking and protection, and pulsed-signal acquisition and analysis. Each sub-system comprises an integrated combination of hardware components and associated low-level software. This paper presents the functionality of the different sub-systems, illustrates how they have been integrated for the two different use-cases, discusses the lessons learned from these first implementations and identifies possible evolutions in view of deployment in other installations during Long Shutdown 3 (LS3).  
poster icon Poster TUPDP101 [0.842 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP101  
About • Received ※ 06 October 2023 — Revised ※ 21 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 06 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP102 Leveraging Local Intelligence to Industrial Control Systems through Edge Technologies controls, PLC, software, interface 793
 
  • A. Patil, F. Ghawash, B. Schofield, F. Varela
    CERN, Meyrin, Switzerland
  • D. Daniel, K. Kaufmann, A.S. Sündermann
    SAGÖ, Vienna, Austria
  • C. Kern
    Siemens AG, Corporate Technology, München, Germany
 
  Industrial processes often use advanced control algorithms such as Model Predictive Control (MPC) and Machine Learning (ML) to improve performance and efficiency. However, deploying these algorithms can be challenging, particularly when they require significant computational resources and involve complex communication protocols between different control system components. To address these challenges, we showcase an approach leveraging industrial edge technologies to deploy such algorithms. An edge device is a compact and powerful computing device placed at the network’s edge, close to the process control. It executes the algorithms without extensive communication with other control system components, thus reducing latency and load on the central control system. We also employ an analytics function platform to manage the life cycle of the algorithms, including modifications and replacements, without disrupting the industrial process. Furthermore, we demonstrate a use case where an MPC algorithm is run on an edge device to control a Heating, Ventilation, and Air Conditioning (HVAC) system. An edge device running the algorithm can analyze data from temperature sensors, perform complex calculations, and adjust the operation of the HVAC system accordingly. In summary, our approach of utilizing edge technologies enables us to overcome the limitations of traditional approaches to deploying advanced control algorithms in industrial settings, providing more intelligent and efficient control of industrial processes.  
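A receding-horizon controller of the kind deployed in the HVAC use case above can be sketched in very few lines. The first-order room model, the gains, and the discrete power levels below are assumptions for illustration; they are not the deployed edge controller.

```python
# Minimal model-predictive-control sketch: exhaustively search short power
# sequences against a toy thermal model, apply only the first move.
# Model constants (k_loss, k_heat, t_out) are invented for the example.
import itertools


def predict(temp, powers, t_out=15.0, k_loss=0.1, k_heat=0.5):
    """Roll a first-order thermal model forward over a heater-power sequence."""
    traj = []
    for p in powers:
        temp = temp + k_heat * p - k_loss * (temp - t_out)
        traj.append(temp)
    return traj


def mpc_step(temp, setpoint, horizon=3, levels=(0.0, 0.5, 1.0)):
    """Pick the sequence minimizing squared setpoint error; return its first move."""
    best = min(itertools.product(levels, repeat=horizon),
               key=lambda seq: sum((t - setpoint) ** 2
                                   for t in predict(temp, seq)))
    return best[0]  # receding horizon: re-optimize at the next sample


print(mpc_step(temp=18.0, setpoint=21.0))  # 1.0: full heating when cold
```

Real MPC replaces the grid search with a convex solver and the toy model with an identified plant model, but the receding-horizon structure is the same.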
poster icon Poster TUPDP102 [3.321 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP102  
About • Received ※ 06 October 2023 — Revised ※ 21 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 12 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP103 Interlock Super Agent : Enhancing Machine Efficiency and Performance at CERN’s Super Proton Synchrotron software, diagnostics, proton, controls 799
 
  • E. Veyrunes, A. Asko, G. Trad, J. Wenninger
    CERN, Meyrin, Switzerland
 
  In the CERN Super Proton Synchrotron (SPS), finding the source of an interlock signal has become increasingly unmanageable due to the complex interdependencies between the agents in both the beam interlock system (BIS) and the software interlock system (SIS). This often leads to delays, with the inefficiency in diagnosing beam stops impacting the overall performance of the accelerator. The Interlock Super Agent (ISA) was introduced to address this challenge. It traces the interlocks responsible for beam stops, regardless of whether they originated in BIS or SIS. By providing a better understanding of interdependencies, ISA significantly improves machine efficiency by reducing time for diagnosis and by documenting such events through platforms such as the Accelerator Fault Tracking system. The paper will discuss the practical implementation of ISA and its potential application throughout the CERN accelerator complex.  
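The root-cause tracing that ISA performs can be pictured as a walk backwards through the interlock dependency graph from the beam-stop signal, stopping at faulted agents with no faulted inputs. The sketch below is a hedged illustration: the graph shape and agent names are invented, not the actual BIS/SIS topology.

```python
# Hypothetical interlock dependency trace: from a tripped top-level permit,
# follow faulted upstream agents until reaching root causes.
def trace_roots(deps, faulted, start):
    """Return faulted agents reachable from `start` with no faulted inputs."""
    roots, stack, seen = [], [start], set()
    while stack:
        agent = stack.pop()
        if agent in seen or agent not in faulted:
            continue
        seen.add(agent)
        upstream = [d for d in deps.get(agent, []) if d in faulted]
        if upstream:
            stack.extend(upstream)   # fault propagated from further up
        else:
            roots.append(agent)      # no faulted input: a root cause
    return sorted(roots)


deps = {"BEAM_PERMIT": ["SIS_ORBIT", "BIS_RF"],
        "SIS_ORBIT": ["BPM_CRATE_3"],
        "BIS_RF": []}
faulted = {"BEAM_PERMIT", "SIS_ORBIT", "BPM_CRATE_3"}
print(trace_roots(deps, faulted, "BEAM_PERMIT"))  # ['BPM_CRATE_3']
```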
poster icon Poster TUPDP103 [4.719 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP103  
About • Received ※ 25 September 2023 — Revised ※ 11 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 13 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP104 Progress Towards the Commissioning and Installation of the 2PACL CO₂ Cooling Control Systems for Phase II Upgrade of the ATLAS and CMS Experiments controls, detector, PLC, MMI 802
 
  • L. Zwalinski, V. Bhanot, M.A. Ciupinski, J. Daguin, L. Davoine, M. Doubek, S.J. Galuszka, Y. Herpin, W.K. Hulek, T. Pakulski, P. Petagna, K. Sliwa, D.I. Teixeira, B. Verlaat
    CERN, Meyrin, Switzerland
 
  In the scope of the High Luminosity program of the Large Hadron Collider at CERN, the ATLAS and CMS experiments are advancing the preparation for the production, commissioning and installation of their new environment-friendly low-temperature detector cooling systems for their new trackers, calorimeters and timing layers. The selected secondary ’on-detector’ CO₂ pumped-loop concept is an evolution of the successful 2PACL technique, allowing for oil-free, stable, low-temperature control. The new systems are of unprecedented scale and considerably more complex, for both mechanics and controls, than today’s installations. This paper will present a general system overview and the technical progress achieved by the EP-DT group at CERN over the last few years in the development and construction of the future CO₂ cooling systems for silicon detectors at ATLAS and CMS. We will describe in detail a homogenised infrastructure and control system architecture which spans surface and underground areas and has been applied to both experiments. Systems will be equipped with multi-level redundancy (electrical, mechanical and control), described in detail herein. We will discuss numerous controls-related challenges faced during the prototyping program and the solutions deployed, ranging from electrical design organization to instrumentation selection and PLC programming. We will finally present how we plan to organise commissioning and system performance checkout.  
poster icon Poster TUPDP104 [4.328 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP104  
About • Received ※ 01 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 08 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP108 Progress of the EPICS Transition at the ISIS Accelerators EPICS, controls, network, PLC 817
 
  • I.D. Finch, B.R. Aljamal, K.R.L. Baker, R. Brodie, J.-L. Fernández-Hernando, G.D. Howells, M.F. Leputa, S.A. Medley, M. Romanovschi
    STFC/RAL/ISIS, Chilton, Didcot, Oxon, United Kingdom
  • A. Kurup
    Imperial College of Science and Technology, Department of Physics, London, United Kingdom
 
  The ISIS Neutron and Muon Source accelerators have been controlled using Vsystem running on OpenVMS/Itanium, while beamlines and instruments are controlled using EPICS. We outline the work in migrating accelerator controls to EPICS using the PVAccess protocol, with a mixture of conventional EPICS IOCs and custom Python-based IOCs primarily deployed in containers on Linux servers. The challenges of maintaining operations with two control systems running in parallel are discussed, including work in migrating data archives and maintaining their continuity. Semi-automated conversion of the existing Vsystem HMIs to EPICS and the creation of new EPICS control screens required by the Target Station 1 upgrade are reported. We describe the existing organisation of our controls network, the constraints it imposes on remote access via EPICS, and the solution implemented. The successful deployment of an end-to-end EPICS system to control the post-upgrade Target Station 1 PLCs at ISIS is discussed as a highlight of the migration.  
poster icon Poster TUPDP108 [0.510 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP108  
About • Received ※ 02 October 2023 — Accepted ※ 04 December 2023 — Issued ※ 17 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP111 Software and Firmware-Logic Design for the PIP-II Machine Protection System Mode and Configuration Control at Fermilab controls, interface, linac, FPGA 832
 
  • L.R. Carmichael, M.R. Austin, E.R. Harms, R. Neswold, A. Prosser, A. Warner, J.Y. Wu
    Fermilab, Batavia, Illinois, USA
 
  Funding: This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics
The PIP-II Machine Protection System (MPS) requires a dedicated set of tools for configuration control and management of the machine modes and beam modes of the accelerator. The protection system reacts to signals from various elements of the machine according to rules established in a setup database, filtered by the Program Mode Controller. This is achieved in accordance with commands from the operator and governed by the firmware logic of the MPS. This paper describes the firmware logic, architecture, and implementation of the Program Mode Controller in an EPICS-based environment.
 
poster icon Poster TUPDP111 [2.313 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP111  
About • Received ※ 03 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 04 December 2023 — Issued ※ 12 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP113 A Flexible EPICS Framework for Sample Alignment at Neutron Beamlines controls, EPICS, framework, neutron 836
 
  • J.P. Edelen, M.J. Henderson, M.C. Kilpatrick
    RadiaSoft LLC, Boulder, Colorado, USA
  • S. Calder, B. Vacaliuc
    ORNL RAD, Oak Ridge, Tennessee, USA
  • R.D. Gregory, G.S. Guyotte, C.M. Hoffmann, B.K. Krishna
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Science under Award Number DE-SC0021555.
RadiaSoft has been developing a flexible front-end framework, written in Python, for rapidly developing and testing automated sample alignment IOCs at Oak Ridge National Laboratory. We utilize YAML-formatted configuration files to construct a thin abstraction layer of custom classes which provide an internal representation of the external hardware within a controls system. The abstraction layer takes advantage of the PCASPy and PyEpics libraries in order to serve EPICS process variables and respond to read/write requests. Our framework allows users to build a new IOC that has access to information about the sample environment in addition to user-defined machine learning models. The IOC then monitors for user inputs, performs user-defined operations on the beamline, and reports on its status back to the control system. Our IOCs can be booted from the command line, and we have developed command-line tools for rapidly running and testing alignment processes. These tools can also be accessed through an EPICS GUI or in separate Python scripts. This presentation provides an overview of our software structure and showcases its use at two beamlines at ORNL.
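A configuration-driven abstraction layer of this kind might look like the sketch below. The PV names, config fields, and `Axis` class are all invented for illustration (and the YAML is shown already loaded into a dict to keep the example self-contained); the actual framework layers this onto PCASPy and PyEpics.

```python
# Hypothetical thin abstraction layer: a config mapping (as if loaded from
# YAML) is expanded into an EPICS-style PV database, in the dict-of-dicts
# shape a PCASPy SimpleServer would consume. All names are illustrative.
class Axis:
    """Internal representation of one piece of external hardware."""

    def __init__(self, name, spec):
        self.name = name
        self.units = spec.get("units", "")
        self.limits = tuple(spec.get("limits", (0.0, 0.0)))

    def pvdb_entries(self, prefix):
        """Readback and setpoint PV definitions for this axis."""
        return {
            f"{prefix}:{self.name}:RBV": {"prec": 3, "unit": self.units},
            f"{prefix}:{self.name}:VAL": {"prec": 3, "unit": self.units,
                                          "lolim": self.limits[0],
                                          "hilim": self.limits[1]},
        }


config = {"sample_x": {"units": "mm", "limits": [-10.0, 10.0]}}
pvdb = {}
for name, spec in config.items():
    pvdb.update(Axis(name, spec).pvdb_entries("BL1"))
print(sorted(pvdb))  # ['BL1:sample_x:RBV', 'BL1:sample_x:VAL']
```

The design point is that new hardware then requires only a new config stanza, not new IOC code.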
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP113  
About • Received ※ 06 October 2023 — Revised ※ 22 October 2023 — Accepted ※ 04 December 2023 — Issued ※ 16 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP114 Machine Learning Based Noise Reduction of Neutron Camera Images at ORNL neutron, network, timing, target 841
 
  • I.V. Pogorelov, J.P. Edelen, M.J. Henderson, M.C. Kilpatrick
    RadiaSoft LLC, Boulder, Colorado, USA
  • S. Calder, B. Vacaliuc
    ORNL RAD, Oak Ridge, Tennessee, USA
  • R.D. Gregory, G.S. Guyotte, C.M. Hoffmann, B.K. Krishna
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Science under Award Number DE-SC0021555.
Neutron cameras are utilized at the HB2A powder diffractometer to image the sample for alignment in the beam. Neutron cameras are typically quite noisy, as they are constantly being irradiated. Removal of this noise is challenging due to the irregular nature of the pixel-intensity fluctuations and their tendency to change over time. RadiaSoft has developed a novel noise reduction method for neutron cameras that inscribes a lower envelope of the image signal. This process is then sped up using machine learning. Here we report on the results of our noise reduction method and describe our machine learning approach to speeding up the algorithm for use during operations.
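A crude picture of a "lower envelope" step is a sliding-window minimum, which suppresses upward noise spikes while tracking the underlying signal. The window size and the 1-D simplification below are assumptions for illustration; the paper's method (and its machine-learning speed-up) is more involved.

```python
# Toy lower-envelope denoising: per-sample sliding-window minimum, so
# isolated bright (irradiation) spikes are discarded. Shown in 1-D for
# brevity; an image version would slide a 2-D window per pixel.
def lower_envelope(signal, window=3):
    """Sliding-window minimum of a 1-D signal."""
    half = window // 2
    return [min(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]


noisy = [1.0, 1.2, 9.0, 1.1, 1.3, 8.5, 1.2]
print(lower_envelope(noisy))  # [1.0, 1.0, 1.1, 1.1, 1.1, 1.2, 1.2]
```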
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP114  
About • Received ※ 07 October 2023 — Revised ※ 22 October 2023 — Accepted ※ 11 December 2023 — Issued ※ 16 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP116 Machine Learning Based Sample Alignment at TOPAZ controls, alignment, network, neutron 851
 
  • M.J. Henderson, J.P. Edelen, M.C. Kilpatrick, I.V. Pogorelov
    RadiaSoft LLC, Boulder, Colorado, USA
  • S. Calder, B. Vacaliuc
    ORNL RAD, Oak Ridge, Tennessee, USA
  • R.D. Gregory, G.S. Guyotte, C.M. Hoffmann, B.K. Krishna
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Science under Award Number DE-SC0021555.
Neutron scattering experiments are a critical tool for the exploration of molecular structure in compounds. The TOPAZ single-crystal diffractometer at the Spallation Neutron Source studies samples by illuminating them with neutron beams of different energies and recording the scattered neutrons. During the experiments the user changes temperature and sample position in order to illuminate different crystal faces and to study the sample in different environments. Maintaining alignment of the sample during this process is key to ensuring high-quality data are collected. At present this process is performed manually by beamline scientists. RadiaSoft, in collaboration with the beamline scientists and engineers at ORNL, has developed new machine-learning-based alignment software automating this process. We utilize a fully convolutional neural network configured in a U-net architecture to identify the sample center of mass. We then move the sample using a custom Python-based EPICS IOC interfaced with the motors. In this talk we provide an overview of our machine learning tools and show our initial results aligning samples at ORNL.
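Once a segmentation network such as a U-net has produced a binary sample mask, the remaining step is the center-of-mass computation that drives the motor correction. The sketch below shows only that post-processing step, on an invented mask; the network itself and the pixel-to-motor calibration are omitted.

```python
# Center of mass of a 2-D binary mask (as a U-net might output), giving the
# pixel coordinates to steer toward. The mask here is invented.
def center_of_mass(mask):
    """Mean (row, col) of all nonzero pixels in a 2-D binary mask."""
    pts = [(r, c) for r, row in enumerate(mask)
           for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)


mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(center_of_mass(mask))  # (1.5, 1.5)
```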
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP116  
About • Received ※ 06 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 11 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP117 Classification and Prediction of Superconducting Magnet Quenches power-supply, superconducting-magnet, GUI, experiment 856
 
  • J.A. Einstein-Curtis, J.P. Edelen, M.C. Kilpatrick, R. O’Rourke
    RadiaSoft LLC, Boulder, Colorado, USA
  • K.A. Drees, J.S. Laster, M. Valette
    BNL, Upton, New York, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Award Number DE-SC0021699.
Robust and reliable quench detection for superconducting magnets is increasingly important as facilities push the boundaries of intensity and operational runtime. RadiaSoft has been working with Brookhaven National Laboratory on quench detection and prediction for superconducting magnets installed in the RHIC storage rings. This project has analyzed several years of power supply and beam position monitor data to train automated classification tools and automated quench-precursor determination based on input sequences. Classification was performed using supervised multilayer perceptron and boosted decision tree architectures, while models of the expected operation of the ring were developed using a variety of autoencoder architectures. We have continued efforts to maximize the area under the receiver operating characteristic curve for the multi-class classification problem of real-quench, fake-quench, and no-quench events. We have also begun work on long short-term memory (LSTM) and other recurrent architectures for quench prediction. We discuss future work utilizing more robust architectures, such as variational autoencoders and Siamese models, as well as methods necessary for uncertainty quantification.
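The area-under-the-ROC-curve objective mentioned above has a compact pairwise definition: the probability that a randomly chosen positive outscores a randomly chosen negative (ties counting half). A from-scratch computation on made-up scores, for the binary real-quench-vs-not case:

```python
# ROC AUC via the pairwise (Mann-Whitney) definition; labels/scores are toy
# values, not project data.
def roc_auc(labels, scores):
    """P(random positive outscores random negative), ties counted as 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


labels = [1, 1, 0, 0, 1]
scores = [0.9, 0.7, 0.6, 0.2, 0.3]
print(roc_auc(labels, scores))  # 5 of 6 positive/negative pairs ranked correctly
```

For the three-class problem in the paper this would be applied per class (one-vs-rest) or averaged, as is standard.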
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP117  
About • Received ※ 08 October 2023 — Revised ※ 22 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 07 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP136 Control Systems Design for STS Accelerator controls, timing, target, LLRF 903
 
  • J. Yan, S.M. Hartman, K.-U. Kasemir
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE).
The Second Target Station (STS) Project will expand the capabilities of the existing Spallation Neutron Source (SNS) with a suite of neutron instruments optimized for long wavelengths. A new accelerator transport line will be built to deliver one out of four SNS pulses to the new target station. The Integrated Control Systems (ICS) will provide remote control, monitoring, OPI, alarms, and archivers for the accelerator systems, such as magnet power supplies, vacuum devices, and beam instrumentation. The ICS will upgrade the existing linac LLRF controls to allow independent operation of the FTS and STS and to support different power levels of the FTS and STS proton beams. The ICS accelerator controls are in the preliminary design phase for the control systems of magnet power supplies, vacuum, LLRF, timing, the Machine Protection System (MPS), and computing and machine networks. The accelerator control systems build upon the existing SNS machine control systems, use the SNS standard hardware and EPICS software, and take full advantage of the performance gains delivered by the PPU Project at SNS.
 
poster icon Poster TUPDP136 [2.403 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP136  
About • Received ※ 27 September 2023 — Revised ※ 09 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 22 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP138 Exploratory Data Analysis on the RHIC Cryogenics System Compressor Dataset cryogenics, network, data-analysis, controls 907
 
  • Y. Gao, K.A. Brown, R.J. Michnoff, L.K. Nguyen, A.Z. Zarcone, B. van Kuik
    BNL, Upton, New York, USA
  • A.D. Tran
    FRIB, East Lansing, Michigan, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
The Relativistic Heavy Ion Collider (RHIC) Cryogenic Refrigerator System is the cryogenic heart that allows RHIC superconducting magnets to operate. Part of the refrigerator is two stages of compression, composed of ten first-stage and five second-stage compressors. Compressors are critical for operations. When a compressor faults, it can impact RHIC beam operations if a spare compressor is not brought online as soon as possible. Applying machine learning to detect compressor problems before a fault occurs would greatly enhance Cryo operations, allowing an operator to switch to a spare compressor before a running compressor fails, minimizing impacts on RHIC operations. In this work, various data analysis results on historical compressor data are presented. We demonstrate an autoencoder-based method that can catch early signs of compressor trips so that advance notice can be sent for the operators to take action.
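The early-warning logic behind an autoencoder-based monitor is: score each reading by reconstruction error and alarm when the error drifts above a baseline threshold. In the sketch below the trained autoencoder is replaced by a stand-in that "reconstructs" each reading as the baseline mean, purely to keep the example self-contained; the data are invented.

```python
# Reconstruction-error anomaly flagging in the spirit of the autoencoder
# method above. The "model" here is a baseline-mean stand-in; a real system
# would use the trained network's reconstruction instead.
import statistics


def anomaly_flags(baseline, readings, k=3.0):
    """Flag readings whose model error exceeds mean + k*sigma of baseline errors."""
    mu = statistics.mean(baseline)                       # stand-in "reconstruction"
    errors_base = [abs(x - mu) for x in baseline]
    threshold = statistics.mean(errors_base) + k * statistics.pstdev(errors_base)
    return [abs(x - mu) > threshold for x in readings]


baseline = [10.0, 10.2, 9.8, 10.1, 9.9]
print(anomaly_flags(baseline, [10.0, 10.3, 14.0]))  # [False, False, True]
```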
 
poster icon Poster TUPDP138 [2.897 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP138  
About • Received ※ 05 October 2023 — Revised ※ 22 October 2023 — Accepted ※ 30 November 2023 — Issued ※ 11 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP139 The Pointing Stabilization Algorithm for the Coherent Electron Cooling Laser Transport at RHIC laser, gun, electron, controls 913
 
  • L.K. Nguyen
    BNL, Upton, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
Coherent electron cooling (CeC) is a novel cooling technique being studied in the Relativistic Heavy Ion Collider (RHIC) as a candidate for strong hadron cooling in the Electron-Ion Collider (EIC). The electron beam used for cooling is generated by laser light illuminating a photocathode after that light has traveled approximately 40 m from the laser output. This propagation is facilitated by three independent optical tables that move relative to one another in response to changes in time of day, weather, and season. The alignment drifts induced by these environmental changes, if left uncorrected, eventually render the electron beam useless for cooling. They are therefore mitigated by an active "slow" pointing stabilization system found along the length of the transport, copied from the system that transversely stabilized the Low Energy RHIC electron Cooling (LEReC) laser beam during the 2020 and 2021 RHIC runs. However, the system-specific optical configuration and laser operating conditions of the CeC experiment required an adapted algorithm to address inadequate beam position data and achieve greater dynamic range. The resulting algorithm was successfully demonstrated during the 2022 run of the CeC experiment and will continue to stabilize the laser transport for the upcoming run. A summary of the algorithm is provided.
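The core of a slow pointing-stabilization loop like the one summarised above is a proportional correction: measure the beam centroid on a camera, steer a mirror to null the error. The gain, pixel scale, and single-axis reduction below are illustrative assumptions, not the CeC system's parameters.

```python
# One-axis sketch of a slow pointing feedback: each iteration nudges the
# mirror setpoint against the measured centroid error. Gains are invented.
def stabilize_step(centroid_px, target_px, mirror_urad, gain_urad_per_px=0.2):
    """One iteration of the slow loop; returns the new mirror setpoint."""
    error_px = centroid_px - target_px
    return mirror_urad - gain_urad_per_px * error_px


mirror = 100.0
for centroid in [260.0, 258.0, 256.0]:  # beam drifting back toward pixel 250
    mirror = stabilize_step(centroid, 250.0, mirror)
print(round(mirror, 2))  # 95.2
```

A deliberately low gain is what makes such a loop "slow": it averages out shot-to-shot jitter and corrects only the environmental drift.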
 
poster icon Poster TUPDP139 [2.129 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP139  
About • Received ※ 05 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 29 November 2023 — Issued ※ 08 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUSDSC04 State Machine Operation of Complex Systems vacuum, linac, cryomodule, controls 929
 
  • P.M. Hanlet
    Fermilab, Batavia, Illinois, USA
 
  Complex systems that depend on one or more other systems with many process variables often operate in more than one state. For each state there may be a variety of parameters of interest, and for each of these one may require different alarm limits, have different archiving needs, and track different critical parameters. Relying on operators to reliably change tens to thousands of parameters for each system in each state is unreasonable, and not changing these parameters results in alarms being ignored or disabled, critical changes being missed, and/or possible data archiving problems. To reliably manage the operation of complex systems such as cryomodules (CMs), Fermilab is implementing state machines for each CM and an overarching state machine for the PIP-II superconducting linac (SCL). The state machine transitions and operating parameters are stored in and restored from a configuration database. Proper implementation of the state machines will not only ensure safe and reliable operation of the CMs, but will also help ensure reliable data quality. A description of the PIP-II SCL, details of the state machines, and lessons learned from limited use of the state machines in recent CM testing will be discussed.  
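The per-state configuration idea described above can be sketched as a small state machine whose every transition returns the alarm limits to (re)apply. State names, events, and limits below are invented; the real system stores these in a configuration database rather than in code.

```python
# Toy cryomodule state machine: each state carries its own alarm limits,
# restored on every transition. All names and numbers are hypothetical.
class CryomoduleFSM:
    TRANSITIONS = {("WARM", "cool"): "COOLDOWN",
                   ("COOLDOWN", "ready"): "COLD",
                   ("COLD", "warm_up"): "WARM"}
    ALARM_LIMITS = {"WARM": {"temp_K": (250, 310)},
                    "COOLDOWN": {"temp_K": (2, 310)},
                    "COLD": {"temp_K": (2, 5)}}

    def __init__(self):
        self.state = "WARM"

    def fire(self, event):
        """Transition if (state, event) is defined, else stay put; return limits."""
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.ALARM_LIMITS[self.state]


fsm = CryomoduleFSM()
fsm.fire("cool")
limits = fsm.fire("ready")
print(fsm.state, limits)  # COLD {'temp_K': (2, 5)}
```

Undefined events leaving the state unchanged is one simple policy; a production machine would more likely log or reject them.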
slides icon Slides TUSDSC04 [6.117 MB]  
poster icon Poster TUSDSC04 [1.031 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUSDSC04  
About • Received ※ 06 October 2023 — Revised ※ 23 October 2023 — Accepted ※ 11 December 2023 — Issued ※ 17 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE1BCO03 Design of the HALF Control System controls, network, EPICS, timing 958
 
  • G. Liu, L.G. Chen, C. Li, X.K. Sun, K. Xuan, D.D. Zhang
    USTC/NSRL, Hefei, Anhui, People’s Republic of China
 
  The Hefei Advanced Light Facility (HALF) is a 2.2-GeV fourth-generation synchrotron radiation light source scheduled to start construction in Hefei, China in 2023. HALF comprises an injector, a 480-m diffraction-limited storage ring, and 10 beamlines for phase one. The HALF control system is EPICS-based, with integrated application and data platforms for the entire facility, including accelerator and beamlines. A unified infrastructure and network architecture has been designed for the control system. The infrastructure provides resources for EPICS development and operation through virtualization technology, and resources for the storage and processing of experimental data through distributed storage and computing clusters. The network is physically separated into a control network and a dedicated high-speed data network; the control network is further subdivided into multiple subnets using VLAN technology. Estimates of the scale of the control system show that the 10-Gbps control backbone network and the data network, which can be expanded to 100 Gbps, fully meet the communication requirements of the control system. This paper reports the control system architecture design and the development of some key technologies in detail.  
slides icon Slides WE1BCO03 [2.739 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE1BCO03  
About • Received ※ 02 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 26 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE2BCO03 Ongoing Improvements to the Instrumentation and Control System at LANSCE controls, software, hardware, network 979
 
  • M. Pieck, C.D. Hatch, H.A. Watkins, E.E. Westbrook
    LANL, Los Alamos, New Mexico, USA
 
  Funding: This work was supported by the U.S. DOE through the Los Alamos National Laboratory (LANL). LANL is operated by Triad National Security, LLC, for the NNSA of U.S. DOE - Contract No. 89233218CNA000001
Recent upgrades to the Instrumentation and Control System at the Los Alamos Neutron Science Center (LANSCE) have significantly improved its maintainability and performance. These changes were the first strategic steps towards a larger vision to standardize hardware form factors and software methodologies. Upgrade efforts are prioritized through a risk-based approach and funded at various levels. With a major recapitalization project finished in 2022 and a modernization project scheduled to start possibly in 2025, current efforts focus on continuing the upgrades that started in the former and will be finished in the latter time frame. Planning and executing these upgrades is challenging because some of the changes are architectural in nature, yet functionality must be preserved while taking advantage of technology progressions. This is compounded by the fact that the upgrades can only be implemented during the annual 4-month outage. This paper provides an overview of our vision, strategy, challenges, and recent accomplishments, as well as future planned activities to transform our 50-year-old control system into a modern, state-of-the-art design.
LA-UR-23-24389
 
slides icon Slides WE2BCO03 [9.626 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE2BCO03  
About • Received ※ 30 September 2023 — Revised ※ 09 October 2023 — Accepted ※ 19 November 2023 — Issued ※ 03 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE2BCO05 Continuous Modernization of Control Systems for Research Facilities controls, network, EPICS, software 993
 
  • K. Vodopivec, K.S. White
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: This work was supported by the U.S. Department of Energy under contract DE-AC0500OR22725.
The Spallation Neutron Source (SNS) at Oak Ridge National Laboratory has been in operation since 2006. To achieve the high operating reliability and availability mandated by the sponsor, all systems participating in the production of neutrons must be maintained to the highest achievable standard. This includes the SNS integrated control system, comprising specialized hardware and software as well as computing and networking infrastructure. While machine upgrades extend the control system with new and modern components, the established part of the control system requires continuous modernization due to hardware obsolescence, the limited lifetime of electronic components, and software updates that can break backwards compatibility. This article discusses the challenges of sustaining control system operations through decades of a facility's lifecycle, and presents a methodology for continuous control system improvement used at SNS, developed by analyzing operational data and experience.
 
slides icon Slides WE2BCO05 [1.484 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE2BCO05  
About • Received ※ 05 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 12 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE2BCO07 15 Years of ALICE DCS detector, controls, experiment, interface 1002
 
  • P.Ch. Chochula, A. Augustinus, P.M. Bond, A.N. Kurepin, M. Lechman, D. Voscek
    CERN, Meyrin, Switzerland
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  The ALICE experiment studies ultra-relativistic heavy ion collisions at the Large Hadron Collider at CERN. Its Detector Control System (DCS) has been ensuring the experiment safety and stability of data collection since 2008. A small central team at CERN coordinated the developments with collaborating institutes and defined the operational principles and tools. Although the basic architecture of the system remains valid, it has had to adapt to the changes and evolution of its components. The introduction of new detectors into ALICE has required the redesign of several parts of the system, especially the front-end electronics control, which triggered new developments. Now, the DCS enters the domain of data acquisition, and the controls data is interleaved with the physics data stream, sharing the same optical links. The processing of conditions data has moved from batch collection at the end of data-taking to constant streaming. The growing complexity of the system has led to a strong focus on the operator environment, with efforts to minimize the risk of human errors. This presentation describes the evolution of the ALICE control system over the past 15 years and highlights the significant improvements made to its architecture. We discuss how the challenges of integrating components developed at tens of institutes worldwide have been mastered in ALICE.
This contribution is complemented by a poster submitted by Ombretta Pinazza, who explains the user interfaces deployed in ALICE.
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE2BCO07  
About • Received ※ 06 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 21 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE3BCO01 Modular and Scalable Archiving for EPICS and Other Time Series Using ScyllaDB and Rust database, EPICS, FEL, MMI 1008
 
  • D. Werder, T. Humar
    PSI, Villigen PSI, Switzerland
 
  At PSI we currently run too many different products with the common goal of archiving timestamped data. These include the EPICS Channel Archiver and the Archiver Appliance for EPICS IOCs, a buffer storage for beam-synchronous data at SwissFEL, and more. This number of monolithic solutions, overlapping in functionality, is too large to maintain. Each solution brings its own storage engine, file format, and centralized design that is hard to scale. In this talk I report on how we factored the system into modular components with clean interfaces. At the core, the different storage engines and file formats have been replaced by ScyllaDB, an open-source product with enterprise support and remarkable adoption in industry. We gain from its distributed, fault-tolerant, and scalable design. The ingest of data into ScyllaDB is factored into components according to the different protocols of the sources, e.g. Channel Access. Here we build upon the Rust language and achieve robust, maintainable, and performant services. One interface to access and process the recorded data is the HTTP retrieval service. This service offers, for example, search among the channels by various criteria, full event data, and aggregated and binned data in either JSON or binary formats. The service can also run user-defined data transformations and act as a source for Grafana for a first view into recorded channel data. Our setup for SwissFEL ingests ~370k EPICS updates/s from ~220k PVs (scalar and waveform) with rates between 0.1 and 100 Hz.
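The binned retrieval mentioned above can be illustrated with a minimal, self-contained sketch of fixed-width time binning; the `Bin` record and `bin_events` helper are illustrative stand-ins, not the actual retrieval-service API:

```python
from dataclasses import dataclass
from math import floor

@dataclass
class Bin:
    start: float  # bin start time (s)
    count: int
    min: float
    max: float
    mean: float

def bin_events(events, t0, bin_width):
    """Aggregate (timestamp, value) events into fixed-width time bins,
    keeping count/min/max/mean per bin -- the kind of summary a
    retrieval service can return instead of full event data."""
    bins = {}
    for ts, val in events:
        idx = floor((ts - t0) / bin_width)
        b = bins.get(idx)
        if b is None:
            bins[idx] = [1, val, val, val]  # count, min, max, sum
        else:
            b[0] += 1
            b[1] = min(b[1], val)
            b[2] = max(b[2], val)
            b[3] += val
    return [Bin(t0 + i * bin_width, c, lo, hi, s / c)
            for i, (c, lo, hi, s) in sorted(bins.items())]
```

A retrieval service would typically serve such per-bin summaries when the requested time range spans far more events than a plot can display.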
slides icon Slides WE3BCO01 [1.179 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3BCO01  
About • Received ※ 04 October 2023 — Revised ※ 09 November 2023 — Accepted ※ 14 December 2023 — Issued ※ 14 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE3BCO05 The CMS Detector Control Systems Archiving Upgrade database, controls, detector, software 1022
 
  • W. Karimeh
    CERN, Meyrin, Switzerland
 
  The CMS experiment relies on its Detector Control System (DCS) to monitor and control over 10 million channels, ensuring a safe and operable detector that is ready to take physics data. The data is archived in the CMS Oracle conditions database, which is accessed by operators, trigger and data acquisition systems. In the upcoming extended year-end technical stop of 2023/2024, the CMS DCS software will be upgraded to the latest WinCC-OA release, which will utilise the SQLite database and the Next Generation Archiver (NGA), replacing the current Raima database and RDB manager. Taking advantage of this opportunity, CMS has developed its own version of the NGA backend to improve its DCS database interface. This paper presents the CMS DCS NGA backend design and mechanism to improve the efficiency of the read-and-write data flow. This is achieved by simplifying the current Oracle conditions schema and introducing a new caching mechanism. The proposed backend will enable faster data access and retrieval, ultimately improving the overall performance of the CMS DCS.  
slides icon Slides WE3BCO05 [1.920 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3BCO05  
About • Received ※ 06 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 14 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE3AO01 Radiation-Tolerant Multi-Application Wireless IoT Platform for Harsh Environments radiation, network, controls, monitoring 1051
 
  • S. Danzeca, A. Masi, R. Sierra
    CERN, Meyrin, Switzerland
  • J.L.D. Luna Duran, A. Zimmaro
    European Organization for Nuclear Research (CERN), Geneva, Switzerland
 
  We introduce a radiation-tolerant multi-application wireless IoT platform, specifically designed for deployment in harsh environments such as particle accelerators. The platform integrates radiation-tolerant hardware with the possibility of covering different applications and use cases, including temperature and humidity monitoring, as well as simple equipment control functions. The hardware is capable of withstanding high levels of radiation and communicates wirelessly using LoRa technology, which reduces infrastructure costs and enables quick and easy deployment of operational devices. To validate the platform’s suitability for different applications, we have deployed a radiation monitoring version in the CERN particle accelerator complex and begun testing multi-purpose application devices in radiation test facilities. Our radiation-tolerant IoT platform, in conjunction with the entire network and data management system, opens up possibilities for different applications in harsh environments.  
slides icon Slides WE3AO01 [19.789 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3AO01  
About • Received ※ 04 October 2023 — Revised ※ 23 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 12 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE3AO05 Helium Mass Flow System Integrated into EPICS for Online SRF Cavity Q Measurements cryomodule, cavity, controls, interface 1071
 
  • K. Jordan, G.R. Croke, J.P. Jayne, M.G. Tiefenback, C.M. Wilson
    JLab, Newport News, Virginia, USA
  • G.H. Biallas
    Hyperboloid LLC, Yorktown, Virginia, USA
  • D.P. Christian
    JLAB, Newport News, USA
 
  The SBIR-funded Helium Mass Flow Monitor System, developed by Jefferson Lab and Hyperboloid LLC, is designed to measure the health of cavities in a cryomodule in real time. It addresses the problem of cavities with low Q₀, which generate excess heat and evaporation from the 2 K superfluid helium bath used to cool the cavities. The system utilizes a unique meter based on a superconducting component. This device enables high-resolution measurements of the power dissipated in the cryomodule while the accelerator is operating. It can also measure individual cavity Q₀s when the beam is turned off. The Linux-based control system is an integral part of this device, providing the necessary control and data processing capabilities. The initial implementation at Jefferson Lab used LabVIEW, a couple of current sources, and a nano-voltmeter. Once the device was proven to work at 2 K, the controls transitioned to a hand-wired PCB and Raspberry Pi interfaced to the open-source Experimental Physics and Industrial Control System (EPICS). The EE support group preferred to support a LabJack T7 over the Raspberry Pi. Twelve chassis were built, and the system is being deployed as the cryogenic U-tubes become available.
slides icon Slides WE3AO05 [6.073 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3AO05  
About • Received ※ 09 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 18 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TH1BCO02 Development of Laser Accelerator Control System Based on EPICS controls, laser, EPICS, proton 1093
 
  • Y. Xia, K.C. Chen, L.W. Feng, Z. Guo, Q.Y. He, F.N. Li, C. Lin, Q. Wang, X.Q. Yan, M.X. Zang, J. Zhao
    PKU, Beijing, People’s Republic of China
  • J. Zhao
    Peking University, Beijing, Haidian District, People’s Republic of China
 
  Funding: State Key Laboratory of Nuclear Physics and Technology, and Key Laboratory of HEDP of the Ministry of Education, CAPT, Peking University, Beijing 100871, China;
China’s Ministry of Science and Technology supports Peking University in constructing a proton radiotherapy device based on a petawatt (PW) laser accelerator. The control system’s functionality and performance are vital for the accelerator’s reliability, stability, and efficiency. The PW laser accelerator control system has a three-layer distributed architecture comprising the device control, front-end (input/output) control, and central control (data management and human-machine interface) layers. The software platform primarily uses EPICS, supplemented by PLC, Python, and Java, while the hardware platform comprises industrial control computers, servers, and private cloud configurations. The control system incorporates various subsystems that manage the laser, target field, beamline, safety interlocks, conditions, and synchronization, along with functionalities related to data storage, display, and more. This paper presents a control system implementation suitable for laser accelerators, providing valuable insights for future laser accelerator control system development.
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH1BCO02  
About • Received ※ 04 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 15 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TH1BCO06 The Karabo Control System controls, FEL, GUI, interface 1120
 
  • S. Hauf, N. Anakkappalla, J.T. Bin Taufik, V. Bondar, R. Costa, W. Ehsan, S.G. Esenov, G. Flucke, A. García-Tabarés Valdivieso, G. Giovanetti, D. Goeries, D.G. Hickin, I. Karpics, A. Klimovskaia, A. Parenti, A. Samadli, H. Santos, A. Silenzi, M.A. Smith, F. Sohn, M. Staffehl, C. Youngman
    EuXFEL, Schenefeld, Germany
 
  The Karabo distributed control system has been developed to address the challenging requirements of the European X-ray Free Electron Laser facility*, which include custom-made hardware and high data rates and volumes. Karabo implements a broker-based SCADA environment**. Extensions to the core framework, called devices, provide control of hardware, monitoring, data acquisition, and online processing on distributed hardware. Services for data logging and configuration management exist. The framework exposes Python and C++ APIs, which enable developers to respond quickly to requirements within an efficient development environment. An AI-driven device code generator facilitates prototyping. Karabo’s GUI features an intuitive, coding-free control panel builder, which allows non-software engineers to create synoptic control views. This contribution introduces the Karabo control system from the point of view of application users and software developers. Emphasis is given to Karabo’s asynchronous Python environment. We share our experience of running the European XFEL using a control system developed from a clean sheet, and discuss the availability of the system as free and open-source software.
* Tschentscher et al., “Photon beam transport and scientific instruments at the European XFEL”, Appl. Sci. 7.6 (2017): 592
** Hauf et al., “The Karabo distributed control system”, J. Synchrotron Rad. 26.5 (2019): 1448ff
 
slides icon Slides TH1BCO06 [5.878 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH1BCO06  
About • Received ※ 06 October 2023 — Accepted ※ 03 December 2023 — Issued ※ 12 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TH2AO03 An Update on the CERN Journey from Bare Metal to Orchestrated Containerization for Controls controls, network, software, ECR 1138
 
  • T. Oulevey, B. Copy, F. Locci, S.T. Page, C. Roderick, M. Vanden Eynden, J.-B. de Martel
    CERN, Meyrin, Switzerland
 
  At CERN, work has been undertaken since 2019 to transition from running Accelerator controls software on bare metal to running in an orchestrated, containerized environment. This will allow engineers to optimize infrastructure cost, to improve disaster recovery and business continuity, and to streamline DevOps practices along with better security. Container adoption requires developers to apply portable practices including aspects related to persistence integration, network exposure, and secrets management. It also promotes process isolation and supports enhanced observability. Building on containerization, orchestration platforms (such as Kubernetes) can be used to drive the life cycle of independent services into a larger scale infrastructure. This paper describes the strategies employed at CERN to make a smooth transition towards an orchestrated containerized environment and discusses the challenges based on the experience gained during an extended proof-of-concept phase.
slides icon Slides TH2AO03 [0.480 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH2AO03  
About • Received ※ 06 October 2023 — Revised ※ 24 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 19 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TH2AO04 Developing Modern High-Level Controls APIs controls, software, hardware, MMI 1145
 
  • B. Urbaniec, L. Burdzanowski, S.G. Gennaro
    CERN, Meyrin, Switzerland
 
  The CERN Accelerator Controls are comprised of various high-level services that work together to provide a highly available, robust, and versatile means of controlling the Accelerator Complex. Each service includes an API (Application Programming Interface) which is used both for service-to-service interactions, as well as by end-user applications. These APIs need to support interactions from heterogeneous clients using a variety of programming languages including Java, Python, C++, or direct HTTP/REST calls. This presents several technical challenges, including aspects such as reliability, availability and scalability. API usability is another important factor with accents on ease of access and minimizing the exposure to Controls domain complexity. At the same time, there is the requirement to efficiently and safely cater for the inevitable need to evolve the APIs over time. This paper describes concrete technical and design solutions addressing these challenges, based on experience gathered over numerous years. To further support this, the paper presents examples of real-life telemetry data focused on latency and throughput, along with the corresponding analysis. The paper also describes on-going and future API development.  
slides icon Slides TH2AO04 [2.676 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH2AO04  
About • Received ※ 03 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 17 December 2023 — Issued ※ 18 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TH2AO05 Secure Role-Based Access Control for RHIC Complex controls, software, network, EPICS 1150
 
  • A. Sukhanov, J. Morris
    BNL, Upton, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
This paper describes the requirements, design, and implementation of Role-Based Access Control (RBAC) for the RHIC Complex. The system is being designed to protect against accidental or unauthorized access to equipment of the RHIC Complex, but it can also provide significant protection against malicious attacks. The role assignment is dynamic. Roles are primarily based on user ID, but elevated roles may be assigned for limited periods of time. Protection at the device manager level may be provided for an entire server or for individual device parameters. A prototype version of the system has been deployed at the RHIC Complex since 2022. Authentication is performed on a dedicated device manager, which generates an encrypted token based on user ID, expiration time, and role level. Device managers are equipped with an authorization mechanism, which supports three methods of authorization: static, local, and centralized. Transactions with the token manager take place ’atomically’, during secured set() or get() requests. The system has small overhead: ~0.5 ms for token processing and ~1.5 ms for the network round trip. Only Python-based device managers participate in the prototype system. Testing has begun with C++ device managers, including those that run on VxWorks platforms. For easy transition, dedicated intermediate shield managers can be deployed to protect access to device managers which do not directly support authorization.
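The token flow described above can be sketched with a hypothetical HMAC-signed token carrying user ID, role, and expiry; the payload fields, role ranking, and `SECRET` constant are assumptions for illustration, and the actual RHIC token format and key management are not specified here:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-secret"  # illustrative only; a real system uses proper key management

def issue_token(user, role, lifetime_s, now=None):
    """Token-manager side: sign (user, role, expiry) so device managers
    can verify the token without contacting the token manager again."""
    now = time.time() if now is None else now
    payload = base64.urlsafe_b64encode(json.dumps(
        {"user": user, "role": role, "exp": now + lifetime_s}).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def check_token(token, required_role, now=None,
                ranks=("read", "operator", "expert")):
    """Device-manager side: verify signature and expiry, then compare
    the claimed role level against the level required by the request."""
    now = time.time() if now is None else now
    payload, sig = token.rsplit(".", 1)
    good = hmac.compare_digest(
        sig, hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest())
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if not good or claims["exp"] < now:
        return False
    return ranks.index(claims["role"]) >= ranks.index(required_role)
```

A shield manager in front of a legacy device manager could run the same `check_token` on every incoming set() before forwarding it.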
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH2AO05  
About • Received ※ 04 October 2023 — Revised ※ 14 November 2023 — Accepted ※ 19 December 2023 — Issued ※ 22 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TH2BCO04 SAMbuCa: Sensors Acquisition and Motion Control Framework at CERN controls, hardware, framework, interface 1179
 
  • A. Masi, O.O. Andreassen, M. Arruat, M. Di Castro, R. Ferraro, I. Kozsar, E.W. Matheson, J.P. Palluel, P. Peronnard, J. Serrano, J. Tagg, F. Vaga, E. Van der Bij
    CERN, Meyrin, Switzerland
  • S. Danzeca, M. Donzé, S.F. Fargier, M. Gulin, E. Soria
    European Organization for Nuclear Research (CERN), Geneva, Switzerland
 
  Motion control systems at CERN often have challenging requirements, such as high precision in extremely radioactive environments with millisecond synchronization. These demanding specifications are particularly relevant for Beam Intercepting Devices (BIDs) such as the collimators of the Large Hadron Collider (LHC). Control electronics must be installed in safe areas, hundreds of meters away from the sensors and actuators while conventional industrial systems only work with cable lengths up to a few tens of meters. To address this, several years of R&D have been committed to developing a high precision motion control system. This has resulted in specialized radiation-hard actuators, new sensors, novel algorithms and actuator control solutions capable of operating in this challenging environment. The current LHC Collimator installation is based on off-the-shelf components from National Instruments. During the Long Shutdown 3 (LS3 2026-2028), the existing systems will be replaced by a new high-performance Sensors Acquisition and Motion Control system (SAMbuCa). SAMbuCa represents a complete, in-house developed, flexible and modular solution, able to cope with the demanding requirements of motion control at CERN, and incorporating the R&D achievements and operational experience of the last 15 years controlling more than 1200 axes at CERN. In this paper, the hardware and software architectures, their building blocks and design are described in detail.  
slides icon Slides TH2BCO04 [5.775 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH2BCO04  
About • Received ※ 05 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 19 December 2023 — Issued ※ 20 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THMBCMO07 Reflective Servers: Seamless Offloading of Resource Intensive Data Delivery interface, controls, hardware, software 1201
 
  • S.L. Clark, T. D’Ottavio, M. Harvey, J.P. Jamilkowski, J. Morris, S. Nemesure
    BNL, Upton, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
Brookhaven National Laboratory’s Collider-Accelerator Department houses over 550 Front-End Computers (FECs) of varying specifications and resource requirements. These FECs provide operations-critical functions to the complex, and uptime is a concern for the most resource-constrained units. Asynchronous data delivery is widely used by applications to provide live feedback on current conditions, but it contributes significantly to resource exhaustion of FECs. To balance performance and efficiency, the Reflective system has been developed to support unrestricted use of asynchronous data delivery with even the most resource-constrained FECs in the complex. The Reflective system provides components which work in unison to offload responsibilities typically handled by core controls infrastructure to hosts with the resources necessary for heavier workloads. The Reflective system aims to be a drop-in component of the controls system, requiring few modifications and remaining completely transparent to users and applications alike.
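The offloading idea can be sketched as a single upstream subscription fanned out to any number of downstream clients; the class names below are illustrative stand-ins, not the actual Reflective system interfaces:

```python
class Device:
    """Stand-in for a resource-constrained FEC parameter: it pays a
    callback cost per direct subscriber on every update."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, value):
        for callback in self.subscribers:
            callback(value)

class Reflector:
    """Runs on a well-resourced host: holds the single subscription to
    the device and fans each update out to its own clients, so the
    device's per-update work stays constant however many clients attach."""
    def __init__(self, device):
        self.clients = []
        device.subscribe(self._on_update)

    def attach(self, callback):
        self.clients.append(callback)

    def _on_update(self, value):
        for callback in self.clients:
            callback(value)
```

With this shape, adding a hundred monitoring displays costs the FEC nothing beyond its one existing subscription; the fan-out work lands on the reflector host.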
 
slides icon Slides THMBCMO07 [0.963 MB]  
poster icon Poster THMBCMO07 [6.670 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THMBCMO07  
About • Received ※ 04 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 15 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THMBCMO14 Development of the SKA Control System, Progress, and Challenges controls, software, TANGO, interface 1221
 
  • S. Vrcic, T. Juerges
    SKAO, Macclesfield, United Kingdom
 
  The SKA Project is a science mega-project whose mission is to build an astronomical observatory that comprises two large radio-telescopes: the SKA-Low Telescope, located at Inyarrimanha Ilgari Bundara, the CSIRO Murchison Radio-astronomy Observatory in Western Australia, with an observing range of 50 to 350 MHz, and the SKA-Mid Telescope, located in the Karoo Region, South Africa, with an observing range of 350 MHz to 15 GHz. The SKA Global Headquarters is at the Jodrell Bank Observatory, near Manchester, UK. When completed, the SKA Telescopes will surpass existing radio-astronomical facilities not only in scientific criteria such as sensitivity, angular resolution, and survey speed, but also in the number of receptors and the range of observing and processing modes. The Observatory, and each of the Telescopes, will be delivered in stages, thus supporting incremental development of the collecting area, signal and data processing capacity, and the observing and processing modes. Unlike scientific capability, which, in some cases, may be delivered in later releases, the control system is required from the very beginning to support integration and verification. Development of the control system to support the first delivery of the Telescopes (Array Assembly 0.5) is well under way. This paper describes the SKA approach to the development of the Telescope Control System, and discusses opportunities and challenges resulting from the distributed development and staged approach to the Telescope construction.
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THMBCMO14  
About • Received ※ 06 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 12 December 2023 — Issued ※ 22 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THMBCMO18 Advancements in Beamline Digital Twin at BESSYII simulation, software, experiment, MMI 1236
 
  • S. Vadilonga, G. Günther, S. Kazarski, R. Ovsyannikov, S.S. Sachse, W. Smith
    HZB, Berlin, Germany
 
  This presentation reports on the status of beamline digital twins at BESSY II. To provide a comprehensive beamline simulation experience, we have leveraged BESSY II’s X-ray tracing program RAY-UI[*], which is widely used for beamline design and commissioning and best adapted to the requirements of our soft X-ray source BESSY II. We created a Python API, RayPyNG, capable of converting our library of beamline configuration files produced by RAY-UI into Python objects[**]. This allows us to embed beamline simulation into Bluesky[***], our experimental controls software ecosystem. All optical elements are mapped directly onto the Bluesky device abstraction (Ophyd). Beamline operators can thus run simulations and operate real systems through a common interface, allowing direct comparison of theoretical predictions with real-time results[****]. We will discuss the relevance of this digital twin for process tuning in terms of enhanced beamline performance and streamlined operations, and briefly discuss alternatives to RAY-UI, such as other software packages and ML/AI surrogate models.
[*]https://doi.org/10.1063/1.5084665
[**]https://raypyng.readthedocs.io/
[***]https://doi.org/10.1080/08940886.2019.1608121
[****]https://raypyng-bluesky.readthedocs.io/
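The simulate-or-operate common interface described above can be sketched roughly as follows; all names are hypothetical stand-ins for the Ophyd/RayPyNG abstractions, and a toy figure of merit replaces the actual ray trace:

```python
from abc import ABC, abstractmethod

class PitchMotor(ABC):
    """Common interface: beamline code talks to this, whether the
    backend is a real motor axis or the ray-tracing model."""
    @abstractmethod
    def set(self, mrad: float) -> None: ...

    @abstractmethod
    def read(self) -> float: ...

class SimulatedPitch(PitchMotor):
    """Digital-twin backend: in the real system, set() would update the
    beamline model and trigger a ray trace instead of moving hardware."""
    def __init__(self):
        self._mrad = 0.0

    def set(self, mrad: float) -> None:
        self._mrad = mrad

    def read(self) -> float:
        return self._mrad

def flux_at(pitch: PitchMotor) -> float:
    """Toy figure of merit peaked at 2.0 mrad, standing in for a
    simulated or measured detector reading."""
    return max(0.0, 1.0 - abs(pitch.read() - 2.0))

def tune(pitch: PitchMotor, candidates) -> float:
    """Same tuning routine for twin and real beamline: try each
    candidate setting and leave the motor at the best one."""
    best = max(candidates,
               key=lambda m: (pitch.set(m), flux_at(pitch))[1])
    pitch.set(best)
    return best
```

Because `tune` only sees the `PitchMotor` interface, the same optimization run against the twin can later be replayed against hardware, which is the direct theory-versus-measurement comparison the abstract describes.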
 
slides icon Slides THMBCMO18 [0.333 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THMBCMO18  
About • Received ※ 06 October 2023 — Accepted ※ 11 December 2023 — Issued ※ 16 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP002 The Micro-Services of CERN’s Critical Current Test Benches controls, software, FPGA, power-supply 1295
 
  • C. Charrondière, A. Ballarino, C. Barth, J.F. Fleiter, P. Koziol, H. Reymond
    CERN, Meyrin, Switzerland
  • O.Ø. Andreassen, T. Boutboul, S.C. Hopkins
    European Organization for Nuclear Research (CERN), Geneva, Switzerland
 
  In order to characterize the critical-current density of low-temperature superconductors such as niobium-titanium (NbTi) and niobium-tin (Nb₃Sn), or high-temperature superconductors such as magnesium diboride (MgB₂) or rare-earth barium copper oxide (REBCO) tapes, a wide range of custom instruments and interfaces are used. The critical current of a superconductor depends on temperature, magnetic field, current, and strain, requiring high-precision measurements in the nanovolt range, well-synchronized instrumentation, and the possibility to quickly adapt and replace instrumentation if needed. The micro-service-based application presented in this paper allows operators to measure a variety of analog signals, such as the temperature of the cryostats and sample under test, magnetic field, current passing through the sample, voltage across the sample, pressure, helium level, etc. During a run, the software protects the sample from quenching, controlling the current passed through it using high-speed field-programmable gate array (FPGA) systems on Linux Real-Time (RT) based PCI eXtensions controllers (PXIe). The application records, analyzes, and reports to the external Oracle database all parameters related to the test. In this paper, we describe the development of the micro-service-based control system, how the interlocks and protection functionalities work, and how we developed a multi-window, scalable acquisition application that could be adapted to the many changes occurring in the test facility.
poster icon Poster THPDP002 [6.988 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP002  
About • Received ※ 06 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 26 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP007 Rolling Out a New Platform for Information System Architecture at SOLEIL MMI, database, TANGO, software 1301
 
  • G. Abeillé, Y.-M. Abiven, B. Gagey
    SOLEIL, Gif-sur-Yvette, France
  • P. Grojean, F. Quillien, C. Rognon, V. Szyndler
    Emoxa, Boulogne-Billancourt, France
 
  SOLEIL Information System is a 20-year legacy with multiple software and IT solutions following constantly evolving business requirements. Numerous non-uniform, siloed information systems have accumulated, increasing IT complexity. The future of SOLEIL (SOLEIL II*) will be based on a new architecture embracing native support for continuous digital transformation and will enhance user experience. Redesigning an information system given synchrotron-based science challenges requires a homogeneous and flexible approach. A new organizational setup is starting with the implementation of a transversal architectural committee. Its missions will be to set the foundation of architecture design principles and to encourage all project teams to apply them. The committee will support the building of architectural specifications and will drive all architecture gate reviews. Interoperability is a key pillar for SOLEIL II. Therefore, a platform for synchronous and asynchronous inter-process communication is being built to connect existing and future systems; it is based on both an event broker and an API manager. An implementation has been developed to interconnect our existing operational tools (CMMS** and our ITSM*** portal). Our current use case is a brand new application dedicated to samples’ lifecycle, interconnected with various existing business applications. This paper will detail our holistic approach to addressing the future evolution of our information system, made mandatory by the new requirements of SOLEIL II.
* SOLEIL II: Towards A Major Transformation of the Facility
** CMMS: Computerized Maintenance Management System
*** ITSM: Information Technology Service Management
 
poster icon Poster THPDP007 [1.397 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP007  
About • Received ※ 05 October 2023 — Revised ※ 25 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 16 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP016 Full Stack Performance Optimizations for FAIR Operation controls, timing, hardware, storage-ring 1325
 
  • A. Schaller, H.C. Hüther, R. Mueller, A. Walter
    GSI, Darmstadt, Germany
 
  During recent beam times, operators reported poor performance and long waiting times when making simple changes to machine settings. To ensure performant operation of the future Facility for Antiproton and Ion Research (FAIR), the "Task Force Performance" (TFP) was formed in mid-2020, aimed at optimizing all involved Control System components. Baseline measurements were recorded for different scenarios to compare and evaluate the steps taken by the TFP. These measurements contained data from all underlying systems, from hardware device data supply through network traffic up to user interface applications. Individual groups searched for, detected and fixed performance bottlenecks in their components of the Control System stack, and the interfaces between these individual components were inspected as well. The findings are presented here.  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP016  
About • Received ※ 04 October 2023 — Revised ※ 29 November 2023 — Accepted ※ 13 December 2023 — Issued ※ 20 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP017 A Data Acquisition Middle Layer Server with Python Support for Linac Operation and Experiments Monitoring and Control cavity, FEL, controls, experiment 1330
 
  • V. Rybnikov, A. Sulc
    DESY, Hamburg, Germany
 
  This paper presents online anomaly detection for low-level radio frequency (LLRF) cavities, running on the FLASH/XFEL DAQ system*. The code is run by a DAQ Middle Layer (ML) server, which has online access to all collected data. The ML server executes a Python script that runs a pre-trained machine learning model on every shot in the FLASH/XFEL machine. We discuss the challenges associated with real-time anomaly detection due to the high data rates generated by RF cavities, and introduce the DAQ system pipeline and algorithms used for online detection on arbitrary channels in our control system. The system’s performance is evaluated using real data from operational RF cavities. We also focus on the DAQ monitor server’s features and its implementation.
*A. Aghababyan et al., ’Multi-Processor Based Fast Data Acquisition for a Free Electron Laser and Experiments’, in IEEE Transactions on Nuclear Science, vol. 55, No. 1, pp. 256-260, February 2008
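As a rough illustration of per-shot online checking of a scalar channel — not the authors' pre-trained model, whose details the abstract does not give — a rolling z-score detector driven by a per-shot callback might look like the following (all names and thresholds are invented):

```python
# Hypothetical per-shot anomaly check: the DAQ middle-layer server hands each
# shot's scalar feature (e.g. an LLRF cavity amplitude) to update(), which
# compares it against a rolling baseline of recent normal shots.
from collections import deque
import math

class RollingAnomalyDetector:
    def __init__(self, window=100, z_threshold=4.0):
        self.buf = deque(maxlen=window)   # rolling baseline of normal shots
        self.z_threshold = z_threshold

    def update(self, value):
        """Return True if `value` is anomalous w.r.t. the recent window."""
        anomalous = False
        if len(self.buf) >= 10:  # need some history before judging
            mean = sum(self.buf) / len(self.buf)
            var = sum((x - mean) ** 2 for x in self.buf) / len(self.buf)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.z_threshold:
                anomalous = True
        if not anomalous:
            self.buf.append(value)  # keep the baseline free of outliers
        return anomalous
```

Excluding flagged shots from the baseline keeps a burst of anomalies from silently becoming the new normal, at the cost of never adapting to a genuine set-point change.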
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP017  
About • Received ※ 02 October 2023 — Revised ※ 25 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 20 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP023 Evolution of Control System and PLC Integration at the European XFEL PLC, controls, interface, FEL 1354
 
  • A. Samadli, T. Freyermuth, P. Gessler, G. Giovanetti, S. Hauf, D.G. Hickin, N. Mashayekh, A. Silenzi
    EuXFEL, Schenefeld, Germany
 
  The Karabo software framework* is a pluggable, distributed control system that offers rapid control feedback to meet the complex requirements of the European X-ray Free Electron Laser facility. Programmable Logic Controllers (PLC) using Beckhoff technology are the main hardware control interface within the Karabo Control System. The communication between Karabo and the PLC currently uses an in-house developed TCP/IP protocol that shares a single port between operational communication and self-description (the description of all available devices sent by the PLC). While this simplifies the interface, it creates a notable load on the client and lacks certain features, such as a textual description of each command, property names coherent with the rest of the control system, and state-awareness of available commands and properties**. To address these issues and to improve user experience, the new implementation will provide a comprehensive self-description, delivered via a dedicated TCP port and serialized in JSON format. The Karabo device responsible for message decoding, dispatching to and from the PLC, and establishing communication with the relevant software devices in Karabo is implemented in Python asyncio, incorporating lessons learned from prior design decisions to support new updates and increase developer productivity.
* Hauf, et al. The Karabo distributed control system J.Sync. Rad.26.5(2019): 1448ff
** T. Freyermuth et al. Progression Towards Adaptability in the PLC Library at the EuXFEL, PCaPAC’22, pp. 102-106. 
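The self-description idea can be sketched as follows. The JSON field names and the example device are invented for illustration; the abstract only states that the self-description is JSON-serialized and carries textual command descriptions and state-awareness:

```python
# Hypothetical JSON self-description and a consumer that uses the
# state-awareness information to list the commands currently allowed.
import json

EXAMPLE_SELF_DESCRIPTION = json.dumps({
    "devices": [{
        "name": "valve1",
        "properties": [{"name": "position", "type": "float"}],
        "commands": [
            {"name": "open", "description": "Open the valve",
             "allowedStates": ["CLOSED"]},
            {"name": "close", "description": "Close the valve",
             "allowedStates": ["OPEN"]},
        ],
    }]
})

def allowed_commands(self_description_json, device, state):
    """Return the commands a device accepts in its current state."""
    desc = json.loads(self_description_json)
    for dev in desc["devices"]:
        if dev["name"] == device:
            return [c["name"] for c in dev["commands"]
                    if state in c["allowedStates"]]
    raise KeyError(f"unknown device: {device}")
```

Carrying the allowed-states list in the self-description lets client GUIs grey out unavailable commands without hard-coding PLC state machines.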
 
poster icon Poster THPDP023 [0.338 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP023  
About • Received ※ 05 October 2023 — Revised ※ 25 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 18 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP025 The Superconducting Undulator Control System for the European XFEL controls, undulator, power-supply, FEL 1362
 
  • M. Yakopov, S. Abeghyan, S. Casalbuoni, S. Karabekyan
    EuXFEL, Schenefeld, Germany
  • M.G. Gretenkord, D.P. Pieper
    Beckhoff Automation GmbH, Verl, Germany
  • A. Hobl, A.S. Sendner
    Bilfinger Noell GmbH, Wuerzburg, Germany
 
  The European XFEL development program includes the implementation of an afterburner based on superconducting undulator (SCU) technology for the SASE2 hard X-ray beamline. The design and production of the first SCU prototype, called PRE-SerieS prOtotype (S-PRESSO), together with the required control system, are currently underway. The architecture, key parameters, and detailed description of the functionality of the S-PRESSO control system are discussed in this paper.  
poster icon Poster THPDP025 [2.959 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP025  
About • Received ※ 06 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 15 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP031 Development of Beam Gate System Using the White Rabbit at SuperKEKB laser, kicker, controls, septum 1381
 
  • F. Ito, H. Kaji
    KEK, Ibaraki, Japan
  • Y. Iitsuka
    EJIT, Hitachi, Ibaraki, Japan
 
  Currently, SuperKEKB has network-based systems such as trigger delivery, bucket selection, the abort system, beam permission, and distributed DAQ, all of which are operated as separate systems. The White Rabbit (WR) offers extraordinary multi-functionality when combined with the modules already developed, so in the future all of these systems could be operated over a single WR network, reducing human, time, and financial costs. As a trial, we constructed a beam gate, which is part of the beam permission system, using WR. These trigger deliveries need to be interlocked. The trigger delivery to the electron gun is specified such that the next trigger is turned ON/OFF after an ON/OFF signal received at an arbitrary time. The delay from receipt of the electron gun’s ON/OFF signal is therefore not a fixed value, making it difficult to interlock with the trigger delivery of other devices. By turning trigger delivery on and off using the precisely time-synchronized WR, the trigger deliveries of all devices can be correctly interlocked.  
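The interlocking idea — applying each time-stamped ON/OFF request only from the first trigger after it, so that all precisely synchronized devices gate exactly the same triggers — can be sketched as follows. This is illustrative logic only, not the actual WR implementation:

```python
# Illustrative beam-gate logic: with a shared, precisely synchronized clock,
# an ON/OFF request time-stamped t_req takes effect deterministically at the
# first trigger after t_req, so every device delivers the same trigger set.

def gated_triggers(trigger_times, requests):
    """trigger_times: sorted trigger timestamps.
    requests: sorted (timestamp, enable) beam-gate requests.
    Returns the triggers that are actually delivered."""
    delivered = []
    enabled = False
    i = 0
    for t in trigger_times:
        # apply every request whose timestamp precedes this trigger
        while i < len(requests) and requests[i][0] <= t:
            enabled = requests[i][1]
            i += 1
        if enabled:
            delivered.append(t)
    return delivered
```

Because the decision depends only on timestamps, any device replaying the same request stream against the same trigger schedule reaches the same result, which is what makes the interlock consistent across devices.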
poster icon Poster THPDP031 [0.529 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP031  
About • Received ※ 09 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 18 December 2023 — Issued ※ 18 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP047 ELK Stack Deployment with Ansible software, controls, GUI, distributed 1411
 
  • T. Gatsi, X.P. Baloyi, J.L. Lekganyane, R.L. Schwartz
    SARAO, Cape Town, South Africa
 
  The 64-dish MeerKAT radio telescope, constructed in South Africa, is the largest and most sensitive radio telescope in the Southern Hemisphere until it is integrated into the Square Kilometer Array (SKA). The control and monitoring system for a radio astronomy project such as MeerKAT produces a large volume of data and logs that require proper handling. Tracing and tracking system issues, as well as investigating technical software issues, requires going back in time to look for event occurrences. We therefore deployed an ELK software stack (Elasticsearch, Logstash, Kibana) using Ansible in order to have the capability to aggregate system process logs. We deploy the stack as a cluster, for load balancing purposes, comprising LXC containers running inside a Proxmox Virtual Environment, with Ansible as the software deployment tool. Each container is a data node that makes up the heart of the cluster and performs cluster duties such as deciding where to place index shards and when to move them. Logstash ingests, transforms, and sends the data to the Kibana Graphical User Interface (GUI) for visualization. Elasticsearch indexes, analyzes, and searches the ingested data, and our Operations Team and other system users can visualize and analyze these logs on the Kibana GUI front end.  
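The transform step such a pipeline performs can be illustrated with a Logstash-style parse, sketched here in Python; the log format and field names are hypothetical, not taken from the paper:

```python
# Illustrative Logstash-style transform: an unstructured syslog-like line is
# parsed into a structured document of the kind Elasticsearch could index.
import re

LOG_PATTERN = re.compile(
    r"(?P<timestamp>\S+ \S+) (?P<level>\w+) (?P<process>[\w.-]+): (?P<message>.*)"
)

def parse_log_line(line):
    """Parse one log line into a dict of named fields, or None if no match."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None
```

Structured fields are what make time-windowed queries like "all ERROR lines from this process last Tuesday" cheap, which is exactly the go-back-in-time use case the abstract describes.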
poster icon Poster THPDP047 [0.503 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP047  
About • Received ※ 03 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 19 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP057 SPS Beam Dump Enhancements on Tracking and Synchronization injection, kicker, controls, timing 1444
 
  • N. Voumard, N. Magnin, P. Van Trappen
    European Organization for Nuclear Research (CERN), Geneva, Switzerland
 
  During Long Shutdown 2 (LS2) at CERN, the SPS Beam Dumping System (SBDS) was completely renovated and relocated to SPS Point 5. This allowed the Beam Energy Tracking System (BETS) and the Trigger Synchronization Unit (TSU), initially designed for and operational at the LHC Beam Dumping System (LBDS), to be deployed at the SPS. The challenge encountered in this migration was the dynamic multi-cycle operation scheme with fast ramping cycles of the SPS, in comparison to the long physics periods at stable energy of the LHC. This paper describes the modification of both the BETS and TSU systems as well as the automatic arming sequence put in place, including the interactions with the SPS injection, the beam revolution frequency, and the Beam Interlock System (BIS).  
poster icon Poster THPDP057 [0.490 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP057  
About • Received ※ 05 October 2023 — Revised ※ 26 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 12 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP061 Python Expert Applications for Large Beam Instrumentation Systems at CERN controls, software, detector, MMI 1460
 
  • J. Martínez Samblas, E. Calvo Giraldo, M. Gonzalez-Berges, M. Krupa
    CERN, Meyrin, Switzerland
 
  In recent years, beam diagnostics systems with increasingly large numbers of monitors, and systems handling vast amounts of data, have been deployed at CERN. Their regular operation and maintenance pose a significant challenge. These systems have to run 24/7 when the accelerators are operating, and the quality of the data they produce has to be guaranteed. This paper presents our experience developing applications in Python which are used to assure the readiness and availability of these large systems. The paper will first give a brief introduction to the different functionalities required, before presenting the chosen architectural design. Although the applications work mostly with online data, logged data is also used in some cases. For the implementation, standard Python libraries (e.g. PyQt, pandas, NumPy) have been used, and given the demanding performance requirements of these applications, several optimisations have had to be introduced. Feedback from users, collected during the first year’s run after CERN’s Long Shutdown period and the 2023 LHC commissioning, will also be presented. Finally, several ideas for future work will be described.  
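To give a flavour of the readiness checks such applications perform — a hypothetical example, since the abstract does not enumerate specific checks — flagging monitors that have silently stopped publishing might look like:

```python
# Illustrative readiness check (names invented): flag monitors whose last
# published acquisition is older than a tolerance, so operators can spot
# silent front-ends at a glance.

def stale_monitors(last_update_s, now_s, max_age_s=30.0):
    """Return names of monitors not seen within `max_age_s` seconds.

    last_update_s: mapping of monitor name -> last update timestamp (s).
    """
    return sorted(name for name, t in last_update_s.items()
                  if now_s - t > max_age_s)
```

In a system with thousands of monitors, a single sorted list of the silent ones is often more useful to an operator than thousands of per-device status lights.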
poster icon Poster THPDP061 [2.010 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP061  
About • Received ※ 05 October 2023 — Revised ※ 26 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 21 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP062 Controls Optimization for Energy Efficient Cooling and Ventilation at CERN controls, simulation, ECR, software 1465
 
  • D. Monteiro, R. Barillère, N. Bunijevac, I. Rühl
    CERN, Meyrin, Switzerland
 
  Cooling and air conditioning systems play a vital role in the operation of the accelerators and experimental complex of the European Organization for Nuclear Research (CERN). Without them, critical accelerator machinery would not operate reliably, as many machines require a finely controlled thermodynamic environment. These operating conditions come with significant energy consumption: about 12% (75 GWh) of the electricity consumed by the Large Hadron Collider (LHC) during a regular run period is devoted to cooling and air conditioning. To align with CERN’s global objectives of minimizing its impact on the environment, the Cooling and Ventilation (CV) group, within the Engineering Department (EN), has been developing several initiatives focused on energy savings. A particular effort is led by the automation and controls section, which has been looking at how control and automation strategies can be optimized without requiring costly hardware changes. This paper addresses several projects of this nature, presenting their methodology and the results achieved to date. Some of them are particularly promising, as real measurements revealed that electricity consumption was more than halved after implementation. Given the pertinence of this effort in the current energy crisis, the paper also reflects carefully on how this work is planned to be pursued further to provide more energy-efficient cooling and ventilation services at CERN.  
poster icon Poster THPDP062 [7.056 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP062  
About • Received ※ 05 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 10 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP064 Selecting a Linux Operating System for CERN Accelerator Controls controls, Linux, software, hardware 1475
 
  • A. Radeva, J.M.E. Elyn, F. Locci, T. Oulevey, M. Vanden Eynden
    CERN, Meyrin, Switzerland
 
  Changing the operating system (OS) for large heterogeneous infrastructures in the research domain is complex. It requires great effort to prepare, migrate and validate the common generic components, followed by the specific corner cases. The trigger to change OS mainly comes from industry and is based on multiple factors, such as OS end-of-life and the associated lack of security updates, as well as hardware end-of-life and incompatibilities between new hardware and old OS. At the time of writing, the CERN Accelerator Controls computing infrastructure consists of ~4000 heterogeneous systems (servers, consoles and front-ends) running CentOS 7. The effort to move to CentOS 7 was launched in 2014 and deployed operationally 2 years later. In 2022, a project was launched to select and prepare the next Linux OS for Controls servers and consoles. This paper describes the strategy behind the OS choice, and the challenges to be overcome in order to switch to it within the next 2 years, whilst respecting the operational accelerator schedule and factoring in the global hardware procurement delays. Details will be provided on the technical solutions implemented by the System Administration team to facilitate this process. In parallel, whilst embarking on moving away from running Controls services on dedicated bare-metal platforms towards containerization and orchestration, an open question is whether the OS of choice, RHEL9, is the most suitable for the near future and, if not, what the alternatives are.  
poster icon Poster THPDP064 [9.129 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP064  
About • Received ※ 07 October 2023 — Revised ※ 27 October 2023 — Accepted ※ 02 December 2023 — Issued ※ 11 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP066 Visualization Tools to Monitor Structure and Growth of an Existing Control System detector, controls, software, experiment 1485
 
  • O. Pinazza, A. Augustinus, P.M. Bond, P.Ch. Chochula, A.N. Kurepin, M. Lechman, D. Voscek
    CERN, Meyrin, Switzerland
  • A.N. Kurepin
    RAS/INR, Moscow, Russia
 
  The ALICE experiment at the LHC has already been in operation for 15 years, and during its life several detectors have been replaced, new instruments installed, and some technologies changed. The control system has therefore also had to adapt, evolve and expand, sometimes departing from the symmetry and compactness of the original design. In a large collaboration, different groups contribute to the development of the control system of their detector. For the central coordination it is important to maintain an overview of the integrated control system to assure its coherence. Tools to visualize the structure and other critical aspects of the system can be of great help and can highlight problems or features of the control system, such as deviations from the agreed architecture. This paper shows that existing tools, such as graphical widgets available in the public domain, or techniques typical of scientific analysis, can be adapted to help assess the coherence of the development, revealing weaknesses and highlighting the interdependence of parts of the system. We show how we have used some of these techniques to analyse the coherence of the ALICE control system, and how this contributed to pointing out criticalities and key points.  
poster icon Poster THPDP066 [13.717 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP066  
About • Received ※ 04 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 13 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP067 Towards a Flexible and Secure Python Package Repository Service software, controls, network, interface 1489
 
  • I. Sinkarenko, B. Copy, P.J. Elson, F. Iannaccone, W.F. Koorn
    CERN, Meyrin, Switzerland
 
  The use of 3rd-party and internal software packages has become a crucial part of modern software development. Not only does it enable faster development, but it also facilitates sharing of common components, which is often necessary for ensuring correctness and robustness of developed software. To enable this workflow, a package repository is needed to store internal packages and provide a proxy to 3rd-party repository services. This is particularly important for systems that operate in constrained networks, as is common for accelerator control systems. Despite its benefits, installing arbitrary software from a 3rd-party package repository can pose security and operational risks. Therefore, it is crucial to implement effective security measures, such as usage logging, package moderation and security scanning. However, experience at CERN has shown that off-the-shelf tools for running a flexible repository service for Python packages are not satisfactory. For instance, the dependency confusion attack first published in 2021 has still not been fully addressed by the main open-source repository services. An in-house development was conducted to address this, using a modular approach to building a Python package repository that enables the creation of a powerful and security-friendly repository service from small components. This paper describes the components that exist, demonstrates their capabilities within CERN and discusses future plans. The solution is not CERN-specific and is likely to be relevant to other institutes facing comparable challenges.  
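To make the dependency-confusion defence concrete — a hypothetical sketch, since the paper's actual components are not described in the abstract — one possible resolution rule pins internal package names to the internal index so a same-named public package can never shadow them:

```python
# Illustrative index-resolution rule against dependency confusion:
# internal names must only ever resolve to the internal index.

INTERNAL_PREFIXES = ("acc-", "cern-")  # assumed naming convention, invented here

def resolve_index(package_name, internal_packages):
    """Decide which index may serve `package_name`.

    internal_packages: set of names published on the internal index.
    """
    if package_name in internal_packages or \
            package_name.startswith(INTERNAL_PREFIXES):
        return "internal"   # never fall through to the public index
    return "public-proxy"   # 3rd-party names go via the moderated proxy
```

The essential property is that resolution is decided by policy before any index is consulted, rather than by whichever index happens to advertise the higher version number — the latter being exactly what the dependency confusion attack exploits.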
poster icon Poster THPDP067 [0.510 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP067  
About • Received ※ 05 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 16 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP068 Implementing High Performance & Highly Reliable Time Series Acquisition Software for the CERN-Wide Accelerator Data Logging Service network, controls, software, database 1494
 
  • M. Sobieszek, V. Baggiolini, R. Mucha, C. Roderick, P. Sowinski, J.P. Wozniak
    CERN, Meyrin, Switzerland
 
  The CERN Accelerator Data Logging Service (NXCALS) stores data generated by the accelerator infrastructure and beam related devices. This amounts to 3.5 TB of data per day, coming from more than 2.5 million signals from heterogeneous systems at various frequencies. Around 85% of this data is transmitted through the Controls Middleware (CMW) infrastructure. To reliably gather such volumes of data, the acquisition system must be highly available, resilient and robust. It also has to be highly efficient and easily scalable, given the regularly growing data rates and volumes, particularly the increases expected from the future High Luminosity LHC. This paper describes the NXCALS time series acquisition software, known as Data Sources. System architecture, design choices, and recovery solutions for various failure scenarios (e.g. network disruptions or cluster split-brain problems) will be covered. Technical implementation details will be discussed, covering the clustering of Akka Actors collecting data from tens of thousands of CMW devices, and sharing the lessons learned. The NXCALS system has been operational since 2018 and has demonstrated the capability to fulfil all the aforementioned requirements, while also ensuring self-healing capabilities and no data loss during redeployments. The engineering challenge, architecture, lessons learned, and the implementation of this acquisition system are not CERN-specific and are therefore relevant to other institutes facing comparable challenges.  
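The real acquisition layer clusters Akka actors on the JVM; purely to illustrate the buffering-and-batching idea behind such a pipeline in a compact form, a Python asyncio sketch (all names invented) might look like:

```python
# Much-simplified sketch of buffered acquisition: subscriptions push records
# into a queue, and a writer task flushes them in size-bounded batches so
# slow storage never blocks the acquisition side.
import asyncio

async def batch_writer(queue, flush, batch_size=3):
    """Drain `queue`, calling `flush(batch)` for each full batch.
    A `None` item signals shutdown; any partial batch is flushed."""
    batch = []
    while True:
        item = await queue.get()
        if item is None:
            break
        batch.append(item)
        if len(batch) >= batch_size:
            flush(batch)
            batch = []
    if batch:
        flush(batch)

async def demo():
    q = asyncio.Queue()
    batches = []
    for i in range(7):          # simulated device records
        q.put_nowait(i)
    q.put_nowait(None)          # shutdown marker
    await batch_writer(q, batches.append)
    return batches
```

Decoupling producers from the writer through a queue is also what makes graceful redeployment possible: the queue can be drained and flushed completely before shutdown, which is one way to achieve the no-data-loss property the abstract claims.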
poster icon Poster THPDP068 [2.960 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP068  
About • Received ※ 06 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 20 November 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP069 A Generic Real-Time Software in C++ for Digital Camera-Based Acquisition Systems at CERN software, network, controls, hardware 1499
 
  • A. Topaloudis, E. Bravin, S. Burger, S. Jackson, S. Mazzoni, E. Poimenidou, E. Senes
    CERN, Meyrin, Switzerland
 
  Until recently, most of CERN’s beam visualisation systems have been based on increasingly obsolescent analogue cameras. Hence, there is an ongoing campaign to replace old cameras or install new digital equivalents. There are many challenges associated with providing a homogenised solution for the data acquisition of the various visualization systems in an accelerator complex as diverse as CERN’s. However, a generic real-time software in C++ has been developed and already installed in several locations to control such systems. This paper describes the software and the additional tools that have also been developed to exploit the acquisition systems, including a Graphical User Interface (GUI) in Java/Swing and web fixed displays. Furthermore, it analyses the specific challenges of each use case and the solutions chosen to resolve them, including any resulting performance limitations.  
poster icon Poster THPDP069 [1.787 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP069  
About • Received ※ 05 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 18 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP074 Phase-II Upgrade of the CMS Electromagnetic Calorimeter Detector Control and Safety Systems for the High Luminosity Large Hadron Collider detector, power-supply, controls, software 1516
 
  • R. Jiménez Estupiñán, G. Dissertori, L. Djambazov, N. Härringer, W. Lustermann, K. Stachon
    ETH, Zurich, Switzerland
  • P. Adzic, D. Jovanovic, M. Mijic, P. Milenovic
    University of Belgrade, Belgrade, Republic of Serbia
  • L. Cokic
    CERN, Meyrin, Switzerland
 
  Funding: Swiss National Science Foundation, Switzerland; Ministry of Education, Science and Technological Development, Serbia.
The Electromagnetic Calorimeter (ECAL) is a subdetector of the CMS experiment. Composed of a barrel and two endcaps, ECAL uses lead tungstate scintillating crystals to measure the energy of electrons and photons produced in high-energy collisions at the Large Hadron Collider (LHC). The LHC will undergo a major upgrade during the 2026-2029 period to build the High-Luminosity LHC (HL-LHC). The HL-LHC will allow for physics measurements with an order of magnitude higher luminosity during its Phase-2 operation. The higher luminosity implies a dramatic change in the environmental conditions for the detectors, which will also undergo a significant upgrade. The endcaps will be decommissioned and replaced with a new detector. The barrel will be upgraded with new front-end electronics. A Sniffer system will be installed to analyse the airflow from within the detector. New high voltage and water-cooled, radiation-tolerant low voltage power supplies are under development. The ECAL barrel safety system will replace the existing one, and the precision temperature monitoring system will be redesigned. From the controls point of view, the final barrel calorimeter will practically be a new detector. The large modification of the underlying hardware and software components will have a considerable impact on the architecture of the detector control system (DCS). In this document, the upgrade plans and the preliminary design of the ECAL DCS to ensure reliable and efficient operation during the Phase-2 period are summarized.
 
poster icon Poster THPDP074 [1.906 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP074  
About • Received ※ 05 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 16 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP081 Exploring Ethernet-Based CAMAC Replacements at ATLAS controls, Ethernet, network, data-acquisition 1542
 
  • K.J. Bunnell, C. Dickerson, D.J. Novak, D. Stanton
    ANL, Lemont, Illinois, USA
 
  Funding: This work was supported by the US Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357. This research used resources of ANL’s ATLAS facility.
The Argonne Tandem Linear Accelerating System (ATLAS) facility at Argonne National Laboratory is researching ways to avoid a crisis caused by end-of-life issues with its 30-year-old CAMAC system. Replacement parts for CAMAC have long since been unavailable, creating the potential for long periods of accelerator downtime once the limited CAMAC spares are exhausted. ATLAS has recently upgraded the Ethernet in the facility from a 100 Mbps (max) network to a 1 Gbps network. An Ethernet-based data acquisition system is therefore desirable. The data acquisition replacement requires reliability, speed, and longevity to be a viable upgrade for the facility. In addition, the transition from CAMAC to a modern data acquisition system will be done with minimal interruption of operations.
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP081  
About • Received ※ 10 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 20 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP082 Teaching an Old Accelerator New Tricks database, controls, experiment, linac 1545
 
  • D.J. Novak, K.J. Bunnell, C. Dickerson, D. Stanton
    ANL, Lemont, Illinois, USA
 
  Funding: This work was supported by the U.S. Department of Energy, under Contract No. DE-AC02-06CH11357. This research used resources of ANL’s ATLAS facility, which is a DOE Office of Science User Facility.
The Argonne Tandem Linac Accelerator System (ATLAS) has been a National User Facility since 1985. In that time, many of the systems that help operators retrieve, modify, and store beamline parameters have not kept pace with the advancement of technology. Development of a new method of storing and retrieving beamline parameters resulted in the testing and installation of a time-series database as a potential replacement for the traditional relational database. InfluxDB was selected for the availability of its self-hosted open-source version as well as its simplicity of installation and setup. A program was written to periodically gather all accelerator parameters in the control system and store them in the time-series database. This resulted in over 13,000 distinct data points, captured at 5-minute intervals. A second test captured 35 channels on a 1-minute cadence. Graphing of the captured data is done with Grafana, whose open-source version coexists well with InfluxDB as the back end. Grafana made visualizing the data simple and flexible. The testing has allowed modern graphing tools to generate new insights into operating the accelerator, and has opened the door to building large data sets suitable for Artificial Intelligence and Machine Learning applications.
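To illustrate the capture step, here is a minimal sketch (measurement, tag, and field names are invented) of serializing one polled channel value to InfluxDB line protocol, the text format InfluxDB accepts on its write endpoints. A production version would also need to escape spaces and commas in identifiers:

```python
# Minimal InfluxDB line-protocol serializer for one polled channel value.
# Line protocol shape: measurement,tag_set field_set timestamp

def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Serialize one point; no escaping, so identifiers must be clean."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"
```

Batching many such lines into a single HTTP write is the usual pattern when capturing thousands of channels on a fixed cadence, as in the test described above.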
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP082  
About • Received ※ 10 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 06 December 2023 — Issued ※ 13 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP085 LANSCE’s Timing System Status and Future Plans timing, controls, hardware, distributed 1547
 
  • L.E. Walker, B.C. Atencio, S.A. Baily, D. Fratantonio, C.D. Hatch, M. Pieck, T. Ramakrishnan
    LANL, Los Alamos, New Mexico, USA
 
  Funding: This work was supported by the U.S. DOE through the Los Alamos National Laboratory (LANL). LANL is operated by Triad National Security, LLC, for the NNSA of U.S. DOE - Contract No. 89233218CNA000001
The Los Alamos Neutron Science Center (LANSCE) operates at a maximum repetition rate of 120 Hz. Timing gates are required for synchronization of the accelerator to provide beam acceleration along the LINAC and beam distribution to the five experimental areas. They are also provided to other devices with sensitive operating points relative to the machine cycle. Over the last 50 years of operations many new time-sensitive pieces of equipment have been added. This has changed the demand on, and complexity of, the timing system. Further driven by equipment obsolescence issues, the timing system underwent many upgrades and revitalization efforts, with the most significant deployment starting in 2016. Due to these upgrade efforts, the timing system architecture design changed from a purely centralized system to a distributed event-based one. The purpose of this paper is to detail the current state of the timing system as a hybrid system, with the gate events being generated from a new timing master system while still utilizing legacy distribution and fanout systems. Upgrades to the distribution system are planned, but due to the required beam delivery schedule, they can only be deployed in sections during four-month annual maintenance cycles. The paper will also cover the off-the-shelf solutions that have been found for standardization, and the efforts towards a life cycle management process.
LA-UR-23-31123
 
poster icon Poster THPDP085 [3.311 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP085  
About • Received ※ 29 September 2023 — Revised ※ 11 October 2023 — Accepted ※ 06 December 2023 — Issued ※ 13 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP086 LCLS-II Cryomodule Isolation Vacuum Pump System controls, PLC, cryomodule, vacuum 1551
 
  • S.C. Alverson, D.K. Gill, S. Saraf
    SLAC, Menlo Park, California, USA
 
  Funding: Work supported by the U.S. Department of Energy under contract number DE-AC02-76SF00515
The LCLS-II Project at SLAC National Accelerator Laboratory is a major upgrade to the lab’s Free Electron Laser (FEL) facility, adding a new injector and superconducting linac. In order to support this new linac, a vacuum pumping scheme was needed to isolate the liquid helium lines cooling the RF cavities inside the cryomodules from outside ambient heat, as well as to exhaust any leaking helium gas. Carts were built supporting both roughing and high-vacuum pumps, with read-back diagnostics. A Programmable Logic Controller (PLC) was then configured to automate the pump-down sequence and provide interlocks in the case of a vacuum burst. The design was made modular such that it can be easily relocated manually to other sections of the linac if needed, depending on vacuum conditions.
* https://lcls.slac.stanford.edu/lcls-ii
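The kind of pump-down sequencing and interlock logic a PLC like this implements can be sketched as a small state machine. This is purely illustrative: the state names, pressure thresholds, and units are assumptions, not details taken from the paper, and real PLC logic would also drive valves and pump contactors.

```python
class PumpDownController:
    """Hypothetical sketch of a pump-down/interlock sequence.

    ROUGHING: roughing pump lowers pressure from atmosphere.
    HIGH_VACUUM: crossover reached; high-vacuum pump takes over.
    ISOLATED: interlock tripped on a vacuum burst; cart isolated.
    Thresholds below are illustrative, not LCLS-II setpoints.
    """
    ROUGH_CROSSOVER = 1e-1  # Torr: switch roughing -> high-vacuum pump
    BURST_LIMIT = 1.0       # Torr: pressure rise that trips the interlock

    def __init__(self):
        self.state = "ROUGHING"

    def update(self, pressure_torr):
        """Advance the state machine on each gauge reading."""
        if self.state == "ROUGHING" and pressure_torr < self.ROUGH_CROSSOVER:
            self.state = "HIGH_VACUUM"
        elif self.state == "HIGH_VACUUM" and pressure_torr > self.BURST_LIMIT:
            # Vacuum burst: latch the isolated state (requires manual reset)
            self.state = "ISOLATED"
        return self.state
```

Note that the burst check is only armed after crossover, so the initial atmospheric pressure does not trip it; latching the ISOLATED state until a manual reset mirrors typical PLC interlock practice.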
 
poster icon Poster THPDP086 [18.556 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP086  
About • Received ※ 03 October 2023 — Revised ※ 27 October 2023 — Accepted ※ 06 December 2023 — Issued ※ 15 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
FR1BCO01 Status of the European Spallation Source Controls EPICS, controls, PLC, timing 1600
 
  • T. Korhonen
    ESS, Lund, Sweden
 
  The European Spallation Source has made substantial progress in recent years. The control system has likewise taken shape, has gone through first commissioning, and is now in production use. While some features and services are still in preparation, the central features are already in place. The talk will give an overview of the areas where the control system is used, our use of and experience with the central technologies such as MTCA.4 and EPICS 7, plus an overview of the next steps. The talk will also look at what was planned and reported at ICALEPCS 2015, how today's system compares with those plans, and the evolution from a greenfield project to an operating organization.  
slides icon Slides FR1BCO01 [2.354 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-FR1BCO01  
About • Received ※ 06 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 12 December 2023 — Issued ※ 15 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
FR1BCO02 Controls at the Fermilab PIP-II Superconducting Linac controls, EPICS, software, cryomodule 1607
 
  • D.J. Nicklaus, P.M. Hanlet
    Fermilab, Batavia, Illinois, USA
 
  PIP-II is an 800 MeV superconducting RF linac under development at Fermilab. As the new first stage in our accelerator chain, it will deliver high-power beam to multiple experiments simultaneously and thus drive Fermilab’s particle physics program for years to come. In a pivot for Fermilab, controls for PIP-II are based on EPICS instead of ACNET, the legacy control system for accelerators at the lab. This paper discusses the status of the EPICS controls work for PIP-II. We describe the EPICS tools selected for our system and the experience of operators new to EPICS. We introduce our continuous integration / continuous deployment environment. We also describe some efforts at cooperation between EPICS and ACNET, and efforts to move towards a unified interface that can apply to both control systems.  
slides icon Slides FR1BCO02 [4.528 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-FR1BCO02  
About • Received ※ 04 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 10 December 2023 — Issued ※ 11 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
FR2BCO05 Magnet Information Management System Based on Web Application for the KEK e⁻/e⁺ Injector Linac database, linac, controls, software 1669
 
  • M. Satoh, Y. Enomoto
    KEK, Ibaraki, Japan
  • T. Kudou
    Mitsubishi Electric System & Service Co., Ltd, Tsukuba, Japan
 
  The KEK injector linac provides e⁻/e⁺ beams to four independent storage rings and a positron damping ring. Accurate information management for the accelerator components is very important since that information is used in the beam tuning model. In particular, an incorrect magnet database can significantly degrade the beam emittance. At the KEK linac, a text-based database system had long been used to manage the magnet system information. It comprises several independent text files that serve as the master information for generating the EPICS database files and the other configuration files required by much of the linac control software. In this management scheme, it is not easy for users other than control software experts to access or update the information. For this reason, a new web application-based magnet information management system was developed with the Angular and PHP frameworks. In the new system, the magnet information can be easily extracted and modified through any web browser by any user. In this paper, we report on the new magnet information management system in detail.  
slides icon Slides FR2BCO05 [2.146 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-FR2BCO05  
About • Received ※ 09 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 20 November 2023 — Issued ※ 18 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)