Keyword: network
Paper Title Other Keywords Page
MO1BCO02 ITER Controls Approaching One Million Integrated EPICS Process Variables controls, software, MMI, operation 6
 
  • A. Wallander, B. Bauvir
    ITER Organization, St. Paul lez Durance, France
 
  The ITER Tokamak is currently being assembled in southern France. In parallel, the supporting systems have completed installation and are under commissioning or in operation. Over the last couple of years the electrical distribution, building services, liquid & gas, cooling water, reactive power compensation and cryoplant systems have been integrated, adding up to close to one million process variables. Those systems are operated, or under commissioning, from a temporary main control room or from local control rooms close to the equipment, using an integrated infrastructure. The ITER control system is therefore in production. As ITER procurement is 90% in-kind, a major challenge has been the integration of the various systems provided by suppliers from the ITER members. Standardization, the CODAC Core System software distribution, training and coaching have all played a positive role. Nevertheless, the integration has been more difficult than foreseen and the central team has been forced to rework much of the delivered software. In this paper we report on the current status of the ITER integrated control system, with emphasis on lessons learned from the integration of in-kind contributions.
slides icon Slides MO1BCO02 [3.521 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-MO1BCO02  
About • Received ※ 27 September 2023 — Revised ※ 07 October 2023 — Accepted ※ 15 November 2023 — Issued ※ 07 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MO2AO06 Neutron From a Distance: Remote Access to Experiments experiment, GUI, controls, software 95
 
  • P. Mutti, F. Cecillon, C. Cocho, A. Elaazzouzi, Y. Le Goc, J. Locatelli, H. Ortiz
    ILL, Grenoble, France
 
  Large-scale experimental facilities such as the ILL are designed to accommodate thousands of international visitors each year. Despite the annual influx of visitors, there has always been interest in options that don’t require users to travel to the ILL. Remote access to instruments and datasets would unlock scientific opportunities for those less able to travel and help respond to global challenges such as pandemics and global warming. Remote access systems can also increase the efficiency of experiments: for measurements that last a long time, scientists can check regularly on the progress of the data taking from a distance, adjusting the instrument remotely if needed. Based on the VISA platform, remote access becomes a cloud-based application requiring only a web browser and an internet connection. NOMAD Remote provides the same experience for users at home as though they were carrying out their experiment at the facility. VISA makes it easy for the experimental team to collaborate by allowing users and instrument scientists to share the same environment in real time. NOMAD Remote, an extension of the ILL instrument control software, enables researchers to take control of all instruments with continued hands-on support from local experts. Developed in-house, NOMAD Remote is a ground-breaking advance in remote access to neutron techniques. It allows full control of the extensive range of experimental environments with the highest security standards for data, and access to the instrument is carefully prioritised and authenticated.
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-MO2AO06  
About • Received ※ 31 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 14 November 2023 — Issued ※ 09 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MO3AO03 Commissioning and Optimization of the SIRIUS Fast Orbit Feedback controls, feedback, power-supply, operation 123
 
  • D.O. Tavares, M.S. Aguiar, F.H. Cardoso, E.P. Coelho, G.R. Cruz, A.F. Giachero, L. Lin, S.R. Marques, A.C.S. Oliveira, G.S. Ramirez, É.N. Rolim, L.M. Russo, F.H. de Sá
    LNLS, Campinas, Brazil
 
  The Sirius Fast Orbit Feedback System (FOFB) entered operation for users in November 2022. The system design aimed at minimizing the overall feedback loop delay, understood as the main performance bottleneck in typical FOFB systems. Driven by this goal, the loop update rate was chosen as high as possible, real-time processing was entirely done in FPGAs, BPMs and corrector power supplies were tightly integrated to the feedback controllers in MicroTCA crates, a small number of BPMs was included in the feedback loop and a dedicated network engine was used. These choices targeted a disturbance rejection crossover frequency of 1 kHz. To deal with the DC currents that build up in the fast orbit corrector power supplies, a method to transfer the DC control effort to the Slow Orbit Feedback System (SOFB) running in parallel was implemented. This contribution gives a brief overview of the system architecture and modelling, and reports on its commissioning, system identification and feedback loop optimization during its first year of operation.  
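The DC-offload scheme mentioned above can be pictured with a purely schematic sketch (not the authors' implementation): the fast correctors' slowly accumulating DC component is estimated with a low-pass filter and handed to the slow orbit feedback as a setpoint, so the fast power supplies keep only the AC part. The filter constant, array shapes and SOFB interface below are assumptions.

```python
import numpy as np

def offload_dc(fast_kicks, dc_estimate, alpha=1e-3):
    """Schematic DC offload from fast (FOFB) to slow (SOFB) correctors.

    fast_kicks  -- current fast-corrector kick vector
    dc_estimate -- running estimate of the accumulated DC component
    alpha       -- low-pass weight; small alpha gives a slow, SOFB-compatible bandwidth
    """
    fast_kicks = np.asarray(fast_kicks, dtype=float)
    dc_estimate = (1.0 - alpha) * dc_estimate + alpha * fast_kicks
    sofb_setpoint = dc_estimate              # handed over to the slow orbit feedback
    fofb_kicks = fast_kicks - dc_estimate    # fast correctors keep only the AC part
    return fofb_kicks, sofb_setpoint, dc_estimate
```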
slides icon Slides MO3AO03 [78.397 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-MO3AO03  
About • Received ※ 06 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 14 November 2023 — Issued ※ 03 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MO3BCO03 Control System Development at the South African Isotope Facility controls, target, EPICS, PLC 160
 
  • J.K. Abraham, H. Anderson
    iThemba LABS, Somerset West, South Africa
  • W. Duckitt
    Stellenbosch University, Matieland, South Africa
 
  The South African Isotope Facility (SAIF) at iThemba LABS is well into its commissioning phase. The intention of SAIF is to free up our existing Separated Sector Cyclotron for more physics research and to increase our radioisotope production and research capacity. An EPICS-based control system, primarily utilising EtherCAT hardware, has been developed that spans the control of beamline equipment, target handling and bombardment stations, vault clearance and ARMS systems. Various building and peripheral services such as cooling water and gases, HVAC and UPS have also been integrated into the control system via Modbus and OPC UA to allow seamless control and monitoring. An overview of the SAIF facility and its EPICS-based control system is presented, together with the control strategies, hardware, and the various EPICS- and web-based software and tools utilised.
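As an illustration of the kind of Modbus integration mentioned above (not taken from the paper), a peripheral service such as a cooling-water skid exposing holding registers could be polled with pymodbus; the host address, register map and scaling below are invented.

```python
from pymodbus.client import ModbusTcpClient

# Hypothetical cooling-water PLC: four holding registers starting at address 0,
# each holding a temperature in tenths of a degree Celsius.
client = ModbusTcpClient("192.168.1.50")
if client.connect():
    result = client.read_holding_registers(address=0, count=4)
    if not result.isError():
        temperatures = [r / 10.0 for r in result.registers]
        print("Cooling water temperatures [degC]:", temperatures)
    client.close()
```

An OPC UA integration would follow the same pattern, with an OPC UA client library reading node values instead of registers.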
slides icon Slides MO3BCO03 [3.511 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-MO3BCO03  
About • Received ※ 06 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 14 November 2023 — Issued ※ 12 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MO4AO05 Development of a Timing and Data Link for EIC Common Hardware Platform timing, FPGA, alignment, site 228
 
  • P. Bachek, T. Hayes, J. Mead, K. Mernick, G. Narayan, F. Severino
    BNL, Upton, New York, USA
 
  Funding: Contract Number DE-AC02-98CH10886 under the auspices of the US Department of Energy
Modern timing distribution systems benefit from high configurability and the bidirectional transfer of timing data. The Electron Ion Collider (EIC) Common Hardware Platform (CHP) will integrate the functions of the existing RHIC Real Time Data Link (RTDL), Event Link, and Beam Sync Link, along with the Low-Level RF (LLRF) system Update Link (UL), into a common high-speed serial link. One EIC CHP carrier board supports up to eight external 8 Gbps high-speed links via SFP+ modules, as well as up to six 8 Gbps high-speed links to each of two daughterboards. A daughterboard will be designed for timing data link distribution for use with the CHP. This daughterboard will have two high-speed digital crosspoint switches and a Xilinx Artix UltraScale+ FPGA onboard with GTY transceivers. One of these will be dedicated to a high-speed control and data link directly between the onboard FPGA and the carrier FPGA. The remaining GTY transceivers will be routed through the crosspoint switches. The daughterboard will support sixteen external SFP+ ports for timing distribution infrastructure, with some ports dedicated to transmit-only link fanout. The timing data link will support bidirectional data transfer, including sending data or events from a downstream device back upstream. This flexibility will be achieved by routing the SFP+ ports through the crosspoint switches, which allows the timing link datapaths to be forwarded directly through the daughterboard to the carrier and into the FPGA on the daughterboard in many different configurations.
 
slides icon Slides MO4AO05 [1.236 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-MO4AO05  
About • Received ※ 05 October 2023 — Revised ※ 07 October 2023 — Accepted ※ 23 November 2023 — Issued ※ 07 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TU1BCO04 Laser Focal Position Correction Using FPGA-Based ML Models controls, laser, FPGA, simulation 262
 
  • J.A. Einstein-Curtis, S.J. Coleman, N.M. Cook, J.P. Edelen
    RadiaSoft LLC, Boulder, Colorado, USA
  • S.K. Barber, C.E. Berger, J. van Tilborg
    LBNL, Berkeley, California, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Award Number DE-SC 00259037.
High repetition-rate, ultrafast laser systems play a critical role in a host of modern scientific and industrial applications. We present a diagnostic and correction scheme for controlling and determining the laser focal position, using fast wavefront sensor measurements from multiple positions to train a focal position predictor. This predictor and additional control algorithms have been integrated into a unified control interface and FPGA-based controller on beamlines at the BELLA facility at LBNL. An optics section is adjusted online to provide the desired correction to the focal position on millisecond timescales, by determining corrections for an actuator in a telescope section along the beamline. Our initial proof-of-principle demonstrations leveraged pre-compiled data and pre-trained networks operating ex situ from the laser system. A framework for generating a low-level hardware description of ML-based correction algorithms on FPGA hardware was coupled directly to the beamline using the AMD Xilinx Vitis AI toolchain in conjunction with deployment scripts. Lastly, we consider the use of remote computing resources, such as the Sirepo scientific framework*, to actively update these correction schemes and deploy models to a production environment.
* M.S. Rakitin et al., "Sirepo: an open-source cloud-based software interface for X-ray source and optics simulations", Journal of Synchrotron Radiation 25, 1877-1892 (Nov. 2018).
 
slides icon Slides TU1BCO04 [1.876 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU1BCO04  
About • Received ※ 06 October 2023 — Accepted ※ 14 November 2023 — Issued ※ 18 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TU1BCO06 Disentangling Beam Losses in The Fermilab Main Injector Enclosure Using Real-Time Edge AI FPGA, real-time, controls, operation 273
 
  • K.J. Hazelwood, J.M.S. Arnold, M.R. Austin, J.R. Berlioz, P.M. Hanlet, M.A. Ibrahim, A.T. Livaudais-Lewis, J. Mitrevski, V.P. Nagaslaev, A. Narayanan, D.J. Nicklaus, G. Pradhan, A.L. Saewert, B.A. Schupbach, K. Seiya, R.M. Thurman-Keup, N.V. Tran
    Fermilab, Batavia, Illinois, USA
  • J.YC. Hu, J. Jiang, H. Liu, S. Memik, R. Shi, A.M. Shuping, M. Thieme, C. Xu
    Northwestern University, Evanston, Illinois, USA
  • A. Narayanan
    Northern Illinois University, DeKalb, Illinois, USA
 
  The Fermilab Main Injector enclosure houses two accelerators, the Main Injector and the Recycler Ring. During normal operation, high-intensity proton beams exist simultaneously in both. The two accelerators share the same beam loss monitors (BLM) and monitoring system, so deciphering the origin of any of the 260 BLM readings is often difficult. The (Accelerator) Real-time Edge AI for Distributed Systems project, or READS, has developed an AI/ML model, implemented on fast FPGA hardware, that disentangles mixed beam losses in real time and attributes to each BLM reading the probability that the loss originated in each machine. The model inferences are then streamed to the Fermilab accelerator controls network (ACNET), where they are available to operators and experts alike to aid in tuning the machines.
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU1BCO06  
About • Received ※ 06 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 15 November 2023 — Issued ※ 06 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TU2BCO01 Database’s Disaster Recovery Meets a Ransomware Attack database, target, software, GUI 280
 
  • M.A. Zambrano
    SKAO, Macclesfield, United Kingdom
  • V. Gonzalez
    ALMA Observatory, Santiago, Chile
 
  Cyberattacks are a growing threat to organizations around the world, including observatories. These attacks can cause significant disruption to operations and can be costly to recover from. This paper provides an overview of the history of cyberattacks, the motivations of attackers, and the organization of cybercrime groups. It also discusses the steps that can be taken to quickly restore a key component of any organization, the database, and the lessons learned during the recovery process. The paper concludes by identifying some areas for improvement in cybersecurity, such as the need for better training for employees, more secure networks, and more robust data backup and recovery procedures.  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU2BCO01  
About • Received ※ 05 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 16 November 2023 — Issued ※ 16 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TU2BCO04 Accelerator Systems Cyber Security Activities at SLAC controls, EPICS, simulation, operation 292
 
  • G.R. White, A.L. Edelen
    SLAC, Menlo Park, California, USA
 
  Funding: Work supported in part by the U.S. Department of Energy under contract number DE-AC02-76SF00515.
We describe four cyber-security-related activities of SLAC and its collaborations. First, from a broad review of accelerator computing cyber and mission reliability, our analysis method, findings and outcomes. Second, lab-wide and accelerator penetration testing, in particular methods to control, coordinate, and trap potentially hazardous scans. Third, a summary gap analysis of recent US regulatory orders against common practice at accelerators, and our plans to address these in collaboration with the US Dept. of Energy. Finally, a summary of EPICS attack vectors, and technical plans to add authentication and encryption to EPICS itself.
 
slides icon Slides TU2BCO04 [1.677 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU2BCO04  
About • Received ※ 04 October 2023 — Revised ※ 13 October 2023 — Accepted ※ 15 November 2023 — Issued ※ 17 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUMBCMO05 PyDM Development Update EPICS, framework, interface, feedback 349
 
  • J.J. Bellister, Y.G. Yazar
    SLAC, Menlo Park, California, USA
 
  PyDM is a PyQt-based framework for building user interfaces for control systems. It provides a no-code, drag-and-drop system to make simple screens, as well as a straightforward Python framework to build complex applications. Recent updates include expanded EPICS PVAccess support using the P4P module. A new widget has been added for displaying data received from NTTables. Performance improvements have been implemented to enhance the loading time of displays, particularly those that heavily utilize template repeaters. Additionally, improved documentation and tutorial materials, accompanied by a sample template application, make it easier for users to get started.  
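As a small, hedged illustration of the PVAccess path mentioned above, the same P4P module that PyDM builds on can be used standalone to fetch an NTTable; the PV name is hypothetical and the exact column layout depends on the serving IOC.

```python
from p4p.client.thread import Context

ctxt = Context("pva")              # PVAccess client context, as used by PyDM's P4P data plugin
table = ctxt.get("DEMO:RESULTS")   # hypothetical NTTable PV
print(table)                       # p4p renders the structure, including the table columns
# Individual columns live under the 'value' sub-structure, e.g. table.value
# (field names depend on the table definition).
ctxt.close()
```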
slides icon Slides TUMBCMO05 [0.345 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO05  
About • Received ※ 06 October 2023 — Revised ※ 08 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 24 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUMBCMO24 A New Real-Time Processing Platform for the Elettra 2.0 Storage Ring feedback, power-supply, controls, real-time 419
 
  • G. Gaio, A.I. Bogani, M. Cautero, L. Pivetta, G. Scalamera, I. Trovarelli
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
  • L. Anastasio
    University of L’Aquila, L’Aquila, Italy
 
  Processing synchronous data is essential to implement efficient control schemes. A new framework based on Linux and DPDK will be used to acquire and process sensor data and control actuators at very high repetition rates for Elettra 2.0. As part of the ongoing project, the current fast orbit feedback subsystem is going to be re-implemented with this new technology. Moreover, the communication performance with the new power converters for the new storage ring is presented.
slides icon Slides TUMBCMO24 [0.683 MB]  
poster icon Poster TUMBCMO24 [0.218 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO24  
About • Received ※ 02 October 2023 — Revised ※ 08 October 2023 — Accepted ※ 06 December 2023 — Issued ※ 08 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUMBCMO25 Operational Controls for Robots Integrated in Accelerator Complexes controls, operation, framework, interface 423
 
  • S.F. Fargier, M. Donzé
    European Organization for Nuclear Research (CERN), Geneva, Switzerland
  • M. Di Castro
    CERN, Meyrin, Switzerland
 
  The fourth industrial revolution, the current trend of automation and data interconnection in industrial technologies, is becoming an essential tool to boost maintenance and availability for space applications, warehouse logistics, particle accelerators and harsh environments in general. The main pillars of Industry 4.0 are the Internet of Things (IoT), wireless sensors, cloud computing, artificial intelligence (AI), machine learning and robotics. We are finding more and more ways to interconnect existing processes using technology as a connector between machines, operations, equipment and people. Facility maintenance and operation is becoming more streamlined with earlier notifications, simplifying the control and monitoring of operations. Core to success and future growth in this field is the use of robots to perform various tasks, particularly those that are repetitive, unplanned or dangerous, which humans either prefer to avoid or are unable to carry out due to hazards, size constraints, or the extreme environments in which they take place. To be operated in a reliable way within particle accelerator complexes, robot controls and interfaces need to be included in the accelerator control frameworks, which is not straightforward when movable systems are operating within a harsh environment. In this paper, the operational controls for robots at CERN are presented. Current robot controls at CERN will be detailed and the use case of the Train Inspection Monorail robot control will be presented.
slides icon Slides TUMBCMO25 [47.070 MB]  
poster icon Poster TUMBCMO25 [2.228 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO25  
About • Received ※ 05 October 2023 — Revised ※ 29 November 2023 — Accepted ※ 11 December 2023 — Issued ※ 16 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUMBCMO39 Enhanced Maintenance and Availability of Handling Equipment using IIoT Technologies controls, operation, monitoring, framework 462
 
  • E. Blanco Viñuela, A.G. Garcia Fernandez, D. Lafarge, G. Thomas, J-C. Tournier
    CERN, Meyrin, Switzerland
 
  CERN currently houses 6000 handling equipment units categorized into 40 different families, such as electric overhead travelling (EOT) cranes, hoists, trucks, and forklifts. These assets are spread throughout the CERN campus, on the surface (indoor and outdoor) as well as in underground tunnels and experimental caverns. Partial access to some areas, a large area to cover, thousands of units, radiation, and diverse needs among handling equipment make maintenance a cumbersome task. Without automatic monitoring solutions, the handling engineering team must conduct periodic on-site inspections to identify equipment in need of regulatory maintenance, leading to unnecessary inspections in hard-to-reach environments for underused equipment, as well as reliability risks for overused equipment between two technical visits. To overcome these challenges, a remote monitoring solution was introduced to extend the equipment lifetime and perform optimal maintenance. This paper describes the implementation of a remote monitoring solution integrating IIoT (Industrial Internet of Things) technologies with the existing CERN control infrastructure and frameworks for control systems (UNICOS and WinCC OA). At present, over 600 handling equipment units are being monitored successfully, and this number will grow thanks to the scalability this solution offers.
slides icon Slides TUMBCMO39 [0.560 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO39  
About • Received ※ 03 October 2023 — Accepted ※ 28 November 2023 — Issued ※ 19 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP012 Tango at LULI TANGO, laser, controls, GUI 509
 
  • S. Marchand, J.M. Bruneau, L. Ennelin, S.M. Minolli, M. Sow
    LULI, Palaiseau, France
 
  Funding: CNRS, École polytechnique, CEA, Sorbonne Université
Apollon, LULI2000 and HERA are three Research Infrastructures of the Centre national de la recherche scientifique (CNRS), École polytechnique (X), Commissariat à l’Énergie Atomique et aux Énergies Alternatives (CEA) and Sorbonne University (SU). Now in its post-commissioning phase, Apollon is a four-beam, multi-petawatt laser facility fitted with cutting-edge instrumentation and two experimental areas (short focal, up to 1 m, and long focal, up to 20 m, 32 m in the future). To monitor the laser beam characteristics through the interaction chambers, more than 300 devices are distributed throughout the facility and controlled through a Tango bus. This poster primarily presents an overall view of the Apollon facility, from network to hardware and from virtual machines to software under the Tango architecture. It gives an overview of the different types of devices running in the facility and of some GUIs developed with the exploitation team to ensure the best possible way of running the lasers. While developments for this facility are still under way, upgrades of the LULI2000 and HERA systems are being undertaken by the Control-Command & Supervision team and will follow the same specifications, to share protocols and knowledge.
 
poster icon Poster TUPDP012 [2.267 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP012  
About • Received ※ 12 October 2023 — Revised ※ 09 November 2023 — Accepted ※ 17 December 2023 — Issued ※ 19 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP016 Migrating from Alarm Handler to Phoebus Alarm-Server at BESSY II controls, EPICS, GUI, ISOL 526
 
  • M. Gotz, T. Birke
    HZB, Berlin, Germany
 
  The BESSY II light source has been in operation at the Helmholtz-Zentrum Berlin (HZB) for 25 years and is expected to be operated for more than the next decade. The EPICS Alarm Handler (alh) has served as the basis for a reliable alarm system for BESSY II as well as other facilities and laboratories operated by HZB. To preempt software obsolescence and enable a centralized architecture for the other Alarm Handlers running throughout HZB, the alarm system is being migrated to the alarm-service developed within the Control System Studio/Phoebus ecosystem. To facilitate operation of the Alarm Handler while evaluating the new system, tools were developed to automate creation of the Phoebus alarm-service configuration files in the control system’s build process. Additionally, tools and configurations were devised to mirror the old system’s key features in the new one. This contribution presents the tools developed and the infrastructure deployed to use the Phoebus alarm-service at HZB.
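A minimal sketch of the kind of configuration generation described above, assuming alarm entries harvested from alh files are available as (group, PV, description) tuples and that the target is the Phoebus alarm-service XML import format (element names approximate; this is not the HZB tool itself):

```python
import xml.etree.ElementTree as ET

# Hypothetical entries extracted from an alh configuration.
ENTRIES = [
    ("Vacuum", "SR:VAC:GAUGE1:P", "Ring vacuum gauge 1"),
    ("Vacuum", "SR:VAC:GAUGE2:P", "Ring vacuum gauge 2"),
    ("RF", "SR:RF:CAV1:TEMP", "Cavity 1 temperature"),
]

config = ET.Element("config", name="Accelerator")   # root element of a Phoebus alarm configuration
components = {}
for group, pv_name, description in ENTRIES:
    comp = components.setdefault(group, ET.SubElement(config, "component", name=group))
    pv = ET.SubElement(comp, "pv", name=pv_name)
    ET.SubElement(pv, "description").text = description
    ET.SubElement(pv, "enabled").text = "true"

ET.ElementTree(config).write("alarms.xml", encoding="utf-8", xml_declaration=True)
```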
poster icon Poster TUPDP016 [0.343 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP016  
About • Received ※ 29 September 2023 — Accepted ※ 06 December 2023 — Issued ※ 11 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP043 Final Design of Control and Data Acquisition System for the ITER Heating Neutral Beam Injector Test Bed controls, experiment, data-acquisition, power-supply 612
 
  • L. Trevisan, A.F. Luchetta, G. Manduchi, G. Martini, A. Rigoni, C. Taliercio
    Consorzio RFX, Padova, Italy
  • N. Cruz
    IPFN, Lisbon, Portugal
  • C. Labate, F. Paolucci
    F4E, Barcelona, Spain
 
  Funding: This work has been carried out within the framework of the EUROfusion Consortium funded by the European Union via Euratom Research and Training Programme (Grant Agreement No 101052200 - EUROfusion)
Tokamaks use heating neutral beam (HNB) injectors to reach fusion conditions and drive the plasma current. ITER, the large international tokamak, will have three high-energy, high-power (1 MeV, 16.5 MW) HNBs. MITICA, the ITER HNB prototype, is being built at the ITER Neutral Beam Test Facility, Italy, to develop and test the ITER HNB, whose requirements are far beyond current HNB technology. MITICA operates in a pulsed way, with pulse durations up to 3600 s and a 25% duty cycle. It requires a complex control and data acquisition system (CODAS) to provide supervisory and plant control, monitoring, fast real-time control, data acquisition and archiving, data access, and the operator interface. The control infrastructure consists of two parts: central CODAS and plant system CODAS. The former provides high-level resources such as servers and a central archive for experimental data. The latter manages the MITICA plant units, i.e., components that generally execute a specific function, such as power supply, vacuum pumping, or scientific parameter measurements. CODAS integrates various technologies to implement the required functions and meet the associated requirements. Our paper presents the CODAS requirements and architecture, based on the experience gained with SPIDER, the ITER full-size beam source in operation since 2018. It focuses on the most challenging topics, such as synchronization, fast real-time control, software development for long-lasting experiments, system commissioning, and integration.
 
poster icon Poster TUPDP043 [0.621 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP043  
About • Received ※ 05 October 2023 — Accepted ※ 10 December 2023 — Issued ※ 19 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP049 15 Years of the J-PARC Main Ring Control System Operation and Its Future Plan controls, operation, EPICS, software 639
 
  • S. Yamada
    J-PARC, KEK & JAEA, Ibaraki-ken, Japan
 
  The accelerator control system of the J-PARC MR started operation in 2008. Most of the components of the control computers, such as servers, disks, operation terminals, front-end computers and software, which were introduced during the construction phase, have gone through one or two generational changes in the last 15 years. Alongside, the policies for the operation of the control computers have changed. This paper reviews the renewal of those components and discusses the philosophy behind the configuration and operational policy. It also discusses the approach to matters that did not exist at the beginning of the project, such as virtualization and cyber security.
poster icon Poster TUPDP049 [0.489 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP049  
About • Received ※ 05 October 2023 — Revised ※ 25 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 17 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP065 Introduction to the Control System of the PAL-XFEL Beamlines FEL, controls, experiment, EPICS 655
 
  • G.S. Park, S-M. Hwang, M.Z. Jeong, W.U. Kang, C.Y. Lim
    PAL, Pohang, Republic of Korea
 
  The PAL-XFEL beamlines are composed of two different types: a hard X-ray beamline and a soft X-ray beamline. The hard X-ray beamline generates free electron laser pulses with photon energies ranging from 2-15 keV, pulse lengths of 10-35 fs, and arrival time errors of less than 20 fs, from 4-11 GeV electron beams, for X-ray Scattering & Spectroscopy (XSS) and Nano Crystallography & Coherent Imaging (NCI) experiments. The soft X-ray beamline generates free electron laser pulses with photon energies ranging from 0.25-1.25 keV and more than 10¹² photons, from 3 GeV electron beams, for soft X-ray Scattering & Spectroscopy (SSS) experiments. To conduct experiments using the XFEL, precise beam alignment, diagnostics, and control of experimental devices are necessary. The devices of the three beamlines are controlled by systems based on the Experimental Physics and Industrial Control System (EPICS), a widely used open-source software framework for distributed control systems. The beam diagnostic devices include QBPMs (Quad Beam Position Monitors), photodiodes, pop-in monitors, and an inline spectrometer, among others. Additionally, other systems such as CRLs (Compound Refractive Lenses), KB (Kirkpatrick-Baez) mirrors, attenuators, and vacuum are used in the PAL-XFEL beamlines. We introduce the control system, event timing, and network configuration for PAL-XFEL experiments.
poster icon Poster TUPDP065 [1.116 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP065  
About • Received ※ 10 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 29 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP069 AVN Radio Telescope Conversion Software Systems controls, software, interface, monitoring 661
 
  • R.L. Schwartz, R.E. Ebrahim, P.J. Pretorius
    SARAO, Cape Town, South Africa
 
  The African VLBI Network (AVN) is a proposed network of radio telescopes involving 8 partner countries across the African continent. The AVN project aims to convert redundant satellite data communications ground stations, where viable, to radio telescopes. One of the main objectives of AVN is human capital development in Science, Technology, Engineering and Mathematics (STEM) with regard to radio astronomy in SKA (Square Kilometre Array) African partner countries. This paper outlines the software systems used for control and monitoring of a single radio telescope. The control and monitoring software consists of the user interface, the antenna control system, the receiver control system and the monitoring of all proprietary and off-the-shelf (OTS) components. All proprietary and OTS interfaces are converted to the open KATCP protocol.
poster icon Poster TUPDP069 [10.698 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP069  
About • Received ※ 20 September 2023 — Revised ※ 09 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 28 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP073 CAN Monitoring Software for an Antenna Positioner Emulator software, controls, monitoring, hardware 673
 
  • V. van Tonder
    SARAO, Cape Town, South Africa
 
  Funding: South African Radio Astronomy Observatory
The original Controller Area Network (CAN) protocol was developed for control and monitoring within vehicular systems. It has since been expanded, and today the CANopen protocol is a leading protocol used within servo-control systems for telescope positioning. Development of a CAN bus monitoring component is currently underway. This component forms part of a greater software package designed for an Antenna Positioner Emulator (APE), which is under construction. The APE will mimic the movement of a MeerKAT antenna in both the azimuth and elevation axes, as well as the positioning of the receiver indexer. It will be fitted with the same servo drives and controller hardware as MeerKAT; however, there will be no main dish, sub-reflector, or receiver. The APE monitoring software will receive data from a variety of communication protocols used by different devices within the MeerKAT control system, including CAN, Profibus, EnDat, resolver and HIPERFACE data. The monitoring software will run on a BeagleBone Black (BBB) fitted with an ARM processor. Local and remote logging capabilities are provided, along with a user interface to initiate the reception of data. The CAN component makes use of the standard SocketCAN driver, which is shipped as part of the Linux kernel. Initial laboratory tests have been conducted using a CAN system bus adapter that transmits previously captured telescope data. The bespoke CAN receiver hardware connects in-line on the CAN bus and forwards the data to a BBB, where the monitoring software logs it.
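For the CAN part described above, a minimal monitoring loop using python-can on top of the kernel SocketCAN driver might look like the sketch below; the interface name and output format are placeholders, not the APE software itself.

```python
import can

# Assumes a SocketCAN interface "can0" has already been brought up on the BeagleBone,
# e.g. with: ip link set can0 up type can bitrate 500000
bus = can.interface.Bus(channel="can0", interface="socketcan")
try:
    for msg in bus:  # blocks, yielding received CAN frames
        print(f"{msg.timestamp:.6f}  id=0x{msg.arbitration_id:03X}  "
              f"dlc={msg.dlc}  data={msg.data.hex()}")
finally:
    bus.shutdown()
```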
 
poster icon Poster TUPDP073 [1.521 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP073  
About • Received ※ 06 October 2023 — Revised ※ 20 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 18 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP105 The SLS 2.0 Beamline Control System Upgrade Strategy controls, experiment, EPICS, MMI 807
 
  • T. Celcer, X. Yao, E. Zimoch
    PSI, Villigen PSI, Switzerland
 
  After more than 20 years of successful operation, the SLS facility will undergo a major upgrade, replacing the entire storage ring, which will result in significantly improved beam emittance and brightness. In order to make use of the improved beam characteristics, beamline upgrades will also play a crucial part in the SLS 2.0 project. However, offering our users an optimal beamtime experience will strongly depend on our ability to raise the beamline control and data acquisition tools to a new level. It is therefore necessary to upgrade and modernize the majority of our current control system stack. This article provides an overview of the planned beamline control system upgrade from the technical as well as the project management perspective. A portfolio of selected technical solutions for the main control system building blocks is discussed. Currently, the controls hardware at SLS is based on the VME platform, running the VxWorks operating system. Digital/analog I/O, a variety of motion solutions, scalers, high-voltage power supplies, and the timing and event system are all provided on this platform. A sensible migration strategy is being developed for each individual system, along with an overall strategy to deliver a modern high-level experiment orchestration environment. The article also focuses on the challenges of the phased upgrade, coupled with the unavoidable coexistence with existing VME-based legacy systems due to time, budget, and resource constraints.
poster icon Poster TUPDP105 [4.148 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP105  
About • Received ※ 04 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 15 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP108 Progress of the EPICS Transition at the ISIS Accelerators EPICS, controls, operation, PLC 817
 
  • I.D. Finch, B.R. Aljamal, K.R.L. Baker, R. Brodie, J.-L. Fernández-Hernando, G.D. Howells, M.F. Leputa, S.A. Medley, M. Romanovschi
    STFC/RAL/ISIS, Chilton, Didcot, Oxon, United Kingdom
  • A. Kurup
    Imperial College of Science and Technology, Department of Physics, London, United Kingdom
 
  The ISIS Neutron and Muon Source accelerators have been controlled using Vsystem running on OpenVMS/Itanium systems, while beamlines and instruments are controlled using EPICS. We outline the work in migrating the accelerator controls to EPICS using the PVAccess protocol, with a mixture of conventional EPICS IOCs and custom Python-based IOCs primarily deployed in containers on Linux servers. The challenges in maintaining operations with two control systems running in parallel are discussed, including work in migrating data archives and maintaining their continuity. Semi-automated conversion of the existing Vsystem HMIs to EPICS and the creation of new EPICS control screens required by the Target Station 1 upgrade are reported. The existing organisation of our controls network, the constraints this imposes on remote access via EPICS, and the solution implemented are described. The successful deployment of an end-to-end EPICS system to control the post-upgrade Target Station 1 PLCs at ISIS is discussed as a highlight of the migration.
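As a hedged illustration of the custom Python-based IOC approach mentioned above (not ISIS code), a minimal PVAccess server can be written with the p4p module; the PV name is hypothetical.

```python
from p4p.nt import NTScalar
from p4p.server import Server
from p4p.server.thread import SharedPV

# One writable scalar PV served over PVAccess.
pv = SharedPV(nt=NTScalar("d"), initial=0.0)

@pv.put
def on_put(pv, op):
    pv.post(op.value())   # accept the client's value and publish it
    op.done()

# Serve until interrupted; a containerised deployment would wrap this in an entry point.
Server.forever(providers=[{"DEMO:SETPOINT": pv}])
```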
poster icon Poster TUPDP108 [0.510 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP108  
About • Received ※ 02 October 2023 — Accepted ※ 04 December 2023 — Issued ※ 17 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP114 Machine Learning Based Noise Reduction of Neutron Camera Images at ORNL neutron, timing, target, operation 841
 
  • I.V. Pogorelov, J.P. Edelen, M.J. Henderson, M.C. Kilpatrick
    RadiaSoft LLC, Boulder, Colorado, USA
  • S. Calder, B. Vacaliuc
    ORNL RAD, Oak Ridge, Tennessee, USA
  • R.D. Gregory, G.S. Guyotte, C.M. Hoffmann, B.K. Krishna
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Science under Award Number DE-SC0021555.
Neutron cameras are utilized at the HB2A powder diffractometer to image the sample for alignment in the beam. Neutron cameras are typically quite noisy as they are constantly being irradiated. Removal of this noise is challenging due to the irregular nature of the pixel intensity fluctuations and their tendency to change over time. RadiaSoft has developed a novel noise reduction method for neutron cameras that inscribes a lower envelope of the image signal. This process is then sped up using machine learning. Here we report on the results of our noise reduction method and describe our machine learning approach for speeding up the algorithm for use during operations.
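The "lower envelope" idea can be pictured with a generic morphological stand-in (this is not the RadiaSoft algorithm): a rolling minimum removes the isolated hot pixels produced by radiation hits, and a rolling maximum restores the extent of genuine features.

```python
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def lower_envelope(image: np.ndarray, size: int = 5) -> np.ndarray:
    """Approximate lower envelope of the image signal (a morphological opening).

    The window size is an arbitrary illustrative choice.
    """
    eroded = grey_erosion(image, size=(size, size))   # rolling minimum: drops hot pixels
    return grey_dilation(eroded, size=(size, size))   # rolling maximum: restores feature extent
```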
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP114  
About • Received ※ 07 October 2023 — Revised ※ 22 October 2023 — Accepted ※ 11 December 2023 — Issued ※ 16 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP115 Machine Learning for Compact Industrial Accelerators cavity, controls, simulation, industrial-accelerators 846
 
  • J.P. Edelen, J.A. Einstein-Curtis, M.J. Henderson, M.C. Kilpatrick
    RadiaSoft LLC, Boulder, Colorado, USA
  • J.A. Diaz Cruz, A.L. Edelen
    SLAC, Menlo Park, California, USA
 
  Funding: This material is based upon work supported by the DOE Accelerator R&D and Production under Award Number DE-SC0023641.
The industrial and medical accelerator industry is an ever-growing field, with advancements in accelerator technology enabling its adoption for new applications. As the complexity of industrial accelerators grows, so does the need for more sophisticated control systems to regulate their operation. Moreover, the environment for industrial and medical accelerators is often harsh and noisy, as opposed to the more controlled environment of a laboratory-based machine, which makes control more challenging. Additionally, instrumentation for industrial accelerators is limited, making it difficult at times to identify and diagnose problems when they occur. RadiaSoft has partnered with SLAC to develop new machine learning methods for control and anomaly detection for industrial accelerators. Our approach is to develop our methods using simulation models, followed by testing on experimental systems. Here we present initial results using simulations of a room-temperature S-band system.
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP115  
About • Received ※ 06 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 18 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP116 Machine Learning Based Sample Alignment at TOPAZ controls, alignment, neutron, operation 851
 
  • M.J. Henderson, J.P. Edelen, M.C. Kilpatrick, I.V. Pogorelov
    RadiaSoft LLC, Boulder, Colorado, USA
  • S. Calder, B. Vacaliuc
    ORNL RAD, Oak Ridge, Tennessee, USA
  • R.D. Gregory, G.S. Guyotte, C.M. Hoffmann, B.K. Krishna
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Science under Award Number DE-SC0021555.
Neutron scattering experiments are a critical tool for the exploration of molecular structure in compounds. The TOPAZ single-crystal diffractometer at the Spallation Neutron Source studies samples by illuminating them with neutron beams of different energies and recording the scattered neutrons. During the experiments the user will change temperature and sample position in order to illuminate different crystal faces and to study the sample in different environments. Maintaining alignment of the sample during this process is key to ensuring high-quality data are collected. At present this process is performed manually by beamline scientists. RadiaSoft, in collaboration with the beamline scientists and engineers at ORNL, has developed new machine-learning-based alignment software automating this process. We utilize a fully convolutional neural network configured in a U-net architecture to identify the sample center of mass. We then move the sample using a custom Python-based EPICS IOC interfaced with the motors. In this talk we provide an overview of our machine learning tools and show our initial results aligning samples at ORNL.
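The final step described above, turning a predicted sample mask into a corrective motor move over EPICS, can be sketched as follows; the PV names, pixel scale and target position are purely illustrative, and the actual system uses a custom IOC rather than direct client writes.

```python
import numpy as np
from epics import caput  # pyepics

def centre_of_mass(mask: np.ndarray):
    """Pixel-space centre of mass of a thresholded segmentation mask."""
    ys, xs = np.nonzero(mask > 0.5)
    return float(xs.mean()), float(ys.mean())

def correct_alignment(mask: np.ndarray, target=(320.0, 240.0), mm_per_px=0.01):
    cx, cy = centre_of_mass(mask)
    # .RLV is the motor record's relative-move field; the PV prefixes are hypothetical.
    caput("TOPAZ:SAMPLE:X.RLV", (target[0] - cx) * mm_per_px)
    caput("TOPAZ:SAMPLE:Y.RLV", (target[1] - cy) * mm_per_px)
```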
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP116  
About • Received ※ 06 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 11 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUPDP138 Exploratory Data Analysis on the RHIC Cryogenics System Compressor Dataset cryogenics, operation, data-analysis, controls 907
 
  • Y. Gao, K.A. Brown, R.J. Michnoff, L.K. Nguyen, A.Z. Zarcone, B. van Kuik
    BNL, Upton, New York, USA
  • A.D. Tran
    FRIB, East Lansing, Michigan, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
The Relativistic Heavy Ion Collider (RHIC) Cryogenic Refrigerator System is the cryogenic heart that allows the RHIC superconducting magnets to operate. The refrigerator includes two stages of compression, composed of ten first-stage and five second-stage compressors. Compressors are critical for operations: when a compressor faults, it can impact RHIC beam operations if a spare compressor is not brought online as soon as possible. Applying machine learning to detect compressor problems before a fault occurs would greatly enhance cryogenic operations, allowing an operator to switch to a spare compressor before a running compressor fails and minimizing impacts on RHIC operations. In this work, various data analysis results on historical compressor data are presented. The work demonstrates an autoencoder-based method that can catch early signs of compressor trips, so that advance notice can be given for the operators to take action.
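The autoencoder-based early-warning idea can be sketched generically as below (a stand-in under assumed input shapes, not the model used at BNL): the network is trained on normal compressor readings only, so a rising reconstruction error flags behaviour it has not seen before.

```python
import torch
import torch.nn as nn

class CompressorAE(nn.Module):
    """Small dense autoencoder over a vector of compressor sensor readings."""

    def __init__(self, n_features: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 8))
        self.decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Per-sample reconstruction error; alarm when it exceeds a threshold set on normal data."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)
```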
 
poster icon Poster TUPDP138 [2.897 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP138  
About • Received ※ 05 October 2023 — Revised ※ 22 October 2023 — Accepted ※ 30 November 2023 — Issued ※ 11 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE1BCO02 Data Management Infrastructure for European XFEL experiment, FEL, data-management, hardware 952
 
  • J. Malka, S. Aplin, D. Boukhelef, K. Filippakopoulos, L.G. Maia, T. Piszczek, Mr. Previtali, J. Szuba, K. Wrona
    EuXFEL, Schenefeld, Germany
  • S. Dietrich, MA. Gasthuber, J. Hannappel, M. Karimi, Y. Kemp, R. Lueken, T. Mkrtchyan, K. Ohrenberg, F. Schlünzen, P. Suchowski, C. Voss
    DESY, Hamburg, Germany
 
  Effective data management is crucial to ensure research data is easily accessible and usable. We present the design and implementation of the European XFEL data management infrastructure supporting high-level data management services. The system architecture comprises four layers of storage systems, each designed to address specific challenges. The first layer, referred to as online, is designed as a fast cache to accommodate the extremely high data rates (up to 15 GB/s) generated during experiments at a single scientific instrument. The second layer, called high-performance storage, provides the capabilities needed for data processing both during and after experiments. These layers are incorporated into a single InfiniBand fabric and connected through a 4 km long, 1 Tb/s link. This allows fast data transfer from the European XFEL experiment hall to the DESY computing center. The third layer, mass storage, extends the capacity of the data storage system to allow mid-term data access for detailed analysis. Finally, the tape archive provides data safety and long-term archiving (5-10 years). The high-performance and mass storage systems are connected to computing clusters. This allows users to perform near-online and offline data analysis or, alternatively, to export data outside of the European XFEL facility. The data management infrastructure at the European XFEL has the capacity to accept and process up to 2 PB of data per day, which demonstrates the remarkable capabilities of all the sub-services involved in this process.
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE1BCO02  
About • Received ※ 06 October 2023 — Revised ※ 23 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 12 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE1BCO03 Design of the HALF Control System controls, EPICS, timing, operation 958
 
  • G. Liu, L.G. Chen, C. Li, X.K. Sun, K. Xuan, D.D. Zhang
    USTC/NSRL, Hefei, Anhui, People’s Republic of China
 
  The Hefei Advanced Light Facility (HALF) is a 2.2-GeV fourth-generation synchrotron radiation light source, which is scheduled to start construction in Hefei, China in 2023. HALF comprises an injector, a 480-m diffraction-limited storage ring, and 10 beamlines for phase one. The HALF control system is EPICS-based, with integrated application and data platforms for the entire facility, including the accelerator and beamlines. A unified infrastructure and network architecture has been designed for the control system. The infrastructure provides resources for EPICS development and operation through virtualization technology, and provides resources for the storage and processing of experimental data through distributed storage and computing clusters. The network is divided into the control network and a dedicated high-speed data network by physical separation; the control network is further subdivided into multiple subnets using VLAN technology. Based on estimates of the control system scale, the 10 Gbps control backbone network and the data network, which can be expanded to 100 Gbps, fully meet the communication requirements of the control system. This paper reports the control system architecture design and the development of some key technologies in detail.
slides icon Slides WE1BCO03 [2.739 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE1BCO03  
About • Received ※ 02 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 26 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE2BCO03 Ongoing Improvements to the Instrumentation and Control System at LANSCE controls, software, hardware, operation 979
 
  • M. Pieck, C.D. Hatch, H.A. Watkins, E.E. Westbrook
    LANL, Los Alamos, New Mexico, USA
 
  Funding: This work was supported by the U.S. DOE through the Los Alamos National Laboratory (LANL). LANL is operated by Triad National Security, LLC, for the NNSA of U.S. DOE - Contract No. 89233218CNA000001
Recent upgrades to the instrumentation and control system at the Los Alamos Neutron Science Center (LANSCE) have significantly improved its maintainability and performance. These changes were the first strategic steps towards a larger vision to standardize the hardware form factors and software methodologies. Upgrade efforts are being prioritized through a risk-based approach and funded at various levels. With a major recapitalization project finished in 2022 and a modernization project scheduled to start possibly in 2025, current efforts focus on the continuation of upgrade work that started in the former and will be finished in the latter time frame. Planning and executing these upgrades is challenging considering that some of the changes are architectural in nature; however, functionality needs to be preserved while taking advantage of technology progress. This is compounded by the fact that those upgrades can only be implemented during the annual 4-month outage. This paper will provide an overview of our vision, strategy, challenges, recent accomplishments, as well as future planned activities to transform our 50-year-old control system into a modern state-of-the-art design.
LA-UR-23-24389
 
slides icon Slides WE2BCO03 [9.626 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE2BCO03  
About • Received ※ 30 September 2023 — Revised ※ 09 October 2023 — Accepted ※ 19 November 2023 — Issued ※ 03 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE2BCO05 Continuous Modernization of Control Systems for Research Facilities controls, EPICS, software, operation 993
 
  • K. Vodopivec, K.S. White
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: This work was supported by the U.S. Department of Energy under contract DE-AC0500OR22725.
The Spallation Neutron Source at Oak Ridge National Laboratory has been in operation since 2006. In order to achieve the high operating reliability and availability mandated by the sponsor, all systems participating in the production of neutrons need to be maintained to the highest achievable standard. This includes the SNS integrated control system, comprising specialized hardware and software as well as computing and networking infrastructure. While machine upgrades are extending the control system with new and modern components, the established part of the control system requires continuous modernization efforts due to hardware obsolescence, the limited lifetime of electronic components, and software updates that can break backwards compatibility. This article discusses the challenges of sustaining control system operations through decades of the facility lifecycle, and presents a methodology used at SNS for continuous control system improvements that was developed by analyzing operational data and experience.
 
slides icon Slides WE2BCO05 [1.484 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE2BCO05  
About • Received ※ 05 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 12 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE2BCO06 EPICS Deployment at Fermilab EPICS, controls, Linux, software 997
 
  • P.M. Hanlet, J.S. Diamond, M. Gonzalez, K.S. Martin
    Fermilab, Batavia, Illinois, USA
 
  Fermilab has traditionally not been an EPICS house; as such, expertise in EPICS is limited and scattered. However, PIP-II will be using EPICS for its control system. Furthermore, when PIP-II is operating, it must interface with the existing, though modernized (see ACORN), legacy control system. We have developed and deployed a software pipeline that addresses these needs and presents to developers a tested and robust software framework, including template IOCs from which new developers can quickly gain experience. In this presentation, we discuss the motivation for this work, the implementation of a continuous integration/continuous deployment pipeline, testing, template IOCs, and the deployment of user applications. We also discuss how this is used with the current PIP-II test stand and lessons learned.
slides icon Slides WE2BCO06 [2.860 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE2BCO06  
About • Received ※ 06 October 2023 — Revised ※ 23 October 2023 — Accepted ※ 11 December 2023 — Issued ※ 17 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE3AO01 Radiation-Tolerant Multi-Application Wireless IoT Platform for Harsh Environments radiation, controls, monitoring, operation 1051
 
  • S. Danzeca, A. Masi, R. Sierra
    CERN, Meyrin, Switzerland
  • J.L.D. Luna Duran, A. Zimmaro
    European Organization for Nuclear Research (CERN), Geneva, Switzerland
 
  We introduce a radiation-tolerant multi-application wireless IoT platform, specifically designed for deployment in harsh environments such as particle accelerators. The platform integrates radiation-tolerant hardware with the possibility of covering different applications and use cases, including temperature and humidity monitoring, as well as simple equipment control functions. The hardware is capable of withstanding high levels of radiation and communicates wirelessly using LoRa technology, which reduces infrastructure costs and enables quick and easy deployment of operational devices. To validate the platform’s suitability for different applications, we have deployed a radiation monitoring version in the CERN particle accelerator complex and begun testing multi-purpose application devices in radiation test facilities. Our radiation-tolerant IoT platform, in conjunction with the entire network and data management system, opens up possibilities for different applications in harsh environments.  
slides icon Slides WE3AO01 [19.789 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3AO01  
About • Received ※ 04 October 2023 — Revised ※ 23 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 12 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE3AO06 Deployment and Operation of the Remotely Operated Accelerator Monitor (ROAM) Robot controls, software, radiation, hardware 1077
 
  • T.C. Thayer, N. Balakrishnan, M.A. Montironi, A. Ratti
    SLAC, Menlo Park, California, USA
 
  Funding: Work supported in part by the U.S. Department of Energy under contract number DE-AC02-76SF00515.
Monitoring the harsh environment within an operating accelerator is a notoriously challenging problem. High radiation, lack of space, poor network connectivity, or extreme temperatures are just some of the challenges that often make ad-hoc, fixed sensor networks the only viable option. In an attempt to increase the flexibility of deploying different types of sensors on an as-needed basis, we have built upon the existing body of work in the field and developed a robotic platform to be used as a mobile sensor platform. The robot is constructed with the objective of minimizing costs and development time, strongly leveraging the use of Commercial-Off-The-Shelf (COTS) hardware and open-source software (ROS). Although designed to be remotely operated by a user, the robot control system incorporates sensors and algorithms for autonomous obstacle detection and avoidance. We have deployed the robot to a number of missions within the SLAC LCLS accelerator complex with the double objective of collecting data to assist accelerator operations and of gaining experience on how to improve the robustness and reliability of the platform. In this work we describe our deployment scenarios, challenges encountered, solutions implemented and future improvement plans.
 
slides icon Slides WE3AO06 [4.578 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE3AO06  
About • Received ※ 05 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 16 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TH1BCO01 Five years of EPICS 7 - Status Update and Roadmap EPICS, controls, site, status 1087
 
  • R. Lange
    ITER Organization, St. Paul lez Durance, France
  • L.R. Dalesio, M.A. Davidsaver, G.S. McIntyre
    Osprey DCS LLC, Ocean City, USA
  • S.M. Hartman, K.-U. Kasemir
    ORNL, Oak Ridge, Tennessee, USA
  • A.N. Johnson, S. Veseli
    ANL, Lemont, Illinois, USA
  • H. Junkes
    FHI, Berlin, Germany
  • T. Korhonen, S.C.F. Rose
    ESS, Lund, Sweden
  • M.R. Kraimer
    Self Employment, Private address, USA
  • K. Shroff
    BNL, Upton, New York, USA
  • G.R. White
    SLAC, Menlo Park, California, USA
 
  Funding: Work supported in part by the U.S. Department of Energy under contracts DE-AC02-76SF00515 and DE-AC05-00OR22725.
After its first release in 2017, EPICS version 7 has been introduced into production at several sites. The central feature of EPICS 7, the support of structured data through the new pvAccess network protocol, has been proven to work in large production systems. EPICS 7 facilitates the implementation of new functionality, including developing AI/ML applications in controls, managing large data volumes, interfacing to middle-layer services, and more. Other features like support for the IPv6 protocol and enhancements to access control have been implemented. Future work includes integrating a refactored API into the core distribution, adding modern network security features, as well as developing new and enhancing existing services that take advantage of these new capabilities. The talk gives an overview of the status of deployments and new additions to the EPICS Core, and outlines planned future development.
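As a minimal illustration of the structured-data access that pvAccess adds, the sketch below reads a PV with the p4p Python client; the PV name is a placeholder and not one from a production system.

```python
# Minimal sketch: reading a structured PV over pvAccess with the p4p client.
# The PV name "DEMO:IMAGE" is hypothetical; any EPICS 7 IOC serving a
# structured (normative type) PV would behave the same way.
from p4p.client.thread import Context

ctx = Context('pva')            # use the pvAccess protocol

value = ctx.get('DEMO:IMAGE')   # returns a structured Value, not just a scalar
print(value)                    # prints the top-level fields of the structure
# Sub-fields of the structure can be addressed by dotted name,
# e.g. value['timeStamp.secondsPastEpoch'] for the timestamp.

ctx.close()
```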
 
slides icon Slides TH1BCO01 [0.562 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH1BCO01  
About • Received ※ 04 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 19 November 2023 — Issued ※ 24 November 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TH1BCO04 Asynchronous Execution of Tango Commands in the SKA Telescope Control System: An Alternative to the Tango Async Device TANGO, controls, status, GUI 1108
 
  • B.A. Ojur, A.J. Venter
    SARAO, Cape Town, South Africa
  • D. Devereux
    CSIRO, Clayton, Australia
  • D. Devereux, S.N. Twum, S. Vrcic
    SKAO, Macclesfield, United Kingdom
 
  Equipment controlled by the Square Kilometre Array (SKA) Control System will have a TANGO interface for control and monitoring. Commands on TANGO device servers have a 3000 millisecond window to complete their execution and return to the client. This timeout places a limitation on some commands used on SKA TANGO devices which take longer than the 3000 millisecond window to complete; the threshold is even stricter in the SKA Control System (CS) Guidelines. Such a command, identified as a Long Running Command (LRC), needs to be executed asynchronously to circumvent the timeout. TANGO has support for an asynchronous device which allows commands to take longer than 3000 milliseconds by using a coroutine to put the task on an event loop. During the exploration of this, a decision was made to implement a custom approach in our base repository which all devices depend on. In this approach, every command annotated as 'long running' is handed over to a thread to complete the task and its progress is tracked through attributes. These attributes report the queued commands along with their progress, status and results. The client is provided with a unique identifier which can be used to track the execution of the LRC and take further action based on the outcome of that command. LRCs can be aborted safely using a custom TANGO command. We present the reference design and implementation of the Long Running Commands for the SKA Control System.  
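The sketch below illustrates the general pattern described here (a command handed to a worker thread, with progress exposed through an attribute) using PyTango; the device, command and attribute names are invented for illustration and do not reproduce the SKA base classes.

```python
# Illustrative sketch of the "hand the command to a thread and report progress
# through attributes" pattern, written with PyTango. The names LongCmdDevice,
# DoLongTask and longTaskProgress are made up and are not the SKA implementation.
import threading
import time

from tango import DevState
from tango.server import Device, attribute, command


class LongCmdDevice(Device):

    def init_device(self):
        super().init_device()
        self._progress = 0
        self.set_state(DevState.ON)

    # Clients poll this attribute instead of blocking on the command call.
    @attribute(dtype=int)
    def longTaskProgress(self):
        return self._progress

    @command(dtype_out=str)
    def DoLongTask(self):
        # Return immediately with an identifier; the work continues in a
        # background thread, so the 3000 ms client timeout is never hit.
        task_id = str(time.time())
        threading.Thread(target=self._worker, daemon=True).start()
        return task_id

    def _worker(self):
        for step in range(1, 101):
            time.sleep(0.1)          # stand-in for the real long-running work
            self._progress = step


if __name__ == "__main__":
    LongCmdDevice.run_server()
```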
slides icon Slides TH1BCO04 [0.674 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH1BCO04  
About • Received ※ 06 October 2023 — Revised ※ 24 October 2023 — Accepted ※ 20 December 2023 — Issued ※ 22 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TH2AO01 Log Anomaly Detection on EuXFEL Nodes FEL, embedded, GUI, monitoring 1126
 
  • A. Sulc, A. Eichler, T. Wilksen
    DESY, Hamburg, Germany
 
  Funding: This work was supported by HamburgX grant LFF-HHX-03 to the Center for Data and Computing in Natural Sciences (CDCS) from the Hamburg Ministry of Science, Research, Equalities and Districts.
This article introduces a method to detect anomalies in the log data generated by control system nodes at the European XFEL accelerator. The primary aim of this proposed method is to offer operators a comprehensive understanding of the availability, status, and problems specific to each node. This information is vital for ensuring smooth operation. The sequential nature of logs and the absence of a rich text corpus specific to our nodes pose a significant limitation for traditional and learning-based approaches to anomaly detection. To overcome this limitation, we propose a method that uses word embedding and models individual nodes as a sequence of these vectors that commonly co-occur, using a Hidden Markov Model (HMM). We score individual log entries by computing a probability ratio between the probability of the full log sequence including the new entry and the probability of just the previous log entries, without the new entry. This ratio indicates how probable the sequence becomes when the new entry is added. The proposed approach can detect anomalies by scoring and ranking log entries from EuXFEL nodes where entries that receive high scores are potential anomalies that do not fit the routine of the node. This method provides a warning system to alert operators about irregular log events that may indicate issues.
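A rough sketch of the probability-ratio scoring is shown below, assuming log entries have already been embedded as fixed-size vectors and using hmmlearn as a stand-in HMM implementation; the paper's actual model and embedding are not reproduced.

```python
# Sketch of probability-ratio scoring over an HMM, under the assumption that
# each log line has been turned into a fixed-size embedding vector.
import numpy as np
from hmmlearn import hmm

EMBED_DIM = 16

# Hypothetical training data: a history of embedded log entries for one node.
rng = np.random.default_rng(0)
history = rng.normal(size=(500, EMBED_DIM))

model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
model.fit(history)

def anomaly_score(prefix: np.ndarray, new_entry: np.ndarray) -> float:
    """Log-probability ratio: how much less likely the sequence becomes
    when the new entry is appended (higher score = more anomalous)."""
    with_new = np.vstack([prefix, new_entry])
    # model.score() returns the log-likelihood of the observation sequence.
    return model.score(prefix) - model.score(with_new)

new_entry = rng.normal(size=(1, EMBED_DIM))
print(anomaly_score(history[-50:], new_entry))
```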
 
slides icon Slides TH2AO01 [1.420 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH2AO01  
About • Received ※ 30 September 2023 — Accepted ※ 08 December 2023 — Issued ※ 13 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TH2AO03 An Update on the CERN Journey from Bare Metal to Orchestrated Containerization for Controls controls, software, operation, ECR 1138
 
  • T. Oulevey, B. Copy, F. Locci, S.T. Page, C. Roderick, M. Vanden Eynden, J.-B. de Martel
    CERN, Meyrin, Switzerland
 
  At CERN, work has been undertaken since 2019 to transition from running Accelerator controls software on bare metal to running in an orchestrated, containerized environment. This will allow engineers to optimise infrastructure cost, to improve disaster recovery and business continuity, to streamline DevOps practices, and to strengthen security. Container adoption requires developers to apply portable practices including aspects related to persistence integration, network exposure, and secrets management. It also promotes process isolation and supports enhanced observability. Building on containerization, orchestration platforms (such as Kubernetes) can be used to drive the life cycle of independent services into a larger scale infrastructure. This paper describes the strategies employed at CERN to make a smooth transition towards an orchestrated containerized environment and discusses the challenges based on the experience gained during an extended proof-of-concept phase.  
slides icon Slides TH2AO03 [0.480 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH2AO03  
About • Received ※ 06 October 2023 — Revised ※ 24 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 19 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TH2AO05 Secure Role-Based Access Control for RHIC Complex controls, operation, software, EPICS 1150
 
  • A. Sukhanov, J. Morris
    BNL, Upton, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
This paper describes the requirements, design, and implementation of Role-Based Access Control (RBAC) for the RHIC Complex. The system is being designed to protect against accidental, unauthorized access to equipment of the RHIC Complex, but it can also provide significant protection against malicious attacks. The role assignment is dynamic. Roles are primarily based on user ID but elevated roles may be assigned for limited periods of time. Protection at the device manager level may be provided for an entire server or for individual device parameters. A prototype version of the system has been deployed at the RHIC Complex since 2022. The authentication is performed on a dedicated device manager, which generates an encrypted token based on user ID, expiration time, and role level. Device managers are equipped with an authorization mechanism, which supports three methods of authorization: Static, Local and Centralized. Transactions with the token manager take place 'atomically', during secured set() or get() requests. The system has small overhead: ~0.5 ms for token processing and ~1.5 ms for network round trip. Only Python-based device managers participate in the prototype system. Testing has begun with C++ device managers, including those that run on VxWorks platforms. For easy transition, dedicated intermediate shield managers can be deployed to protect access to device managers which do not directly support authorization.
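For illustration only, the sketch below builds an encrypted role token carrying user ID, expiry and role level using Fernet symmetric encryption; the actual RHIC token manager's format and cipher are not specified here, so these choices are assumptions.

```python
# Illustrative sketch of an encrypted role token (user ID, expiry, role level).
# Fernet is used here as a stand-in cipher; the real token format is not known.
import json
import time

from cryptography.fernet import Fernet

SECRET_KEY = Fernet.generate_key()   # in practice held by the token manager
fernet = Fernet(SECRET_KEY)

def issue_token(user_id: str, role_level: int, lifetime_s: int = 3600) -> bytes:
    payload = {"uid": user_id, "role": role_level,
               "exp": time.time() + lifetime_s}
    return fernet.encrypt(json.dumps(payload).encode())

def check_token(token: bytes, required_role: int) -> bool:
    payload = json.loads(fernet.decrypt(token))
    return payload["exp"] > time.time() and payload["role"] >= required_role

tok = issue_token("operator1", role_level=2)
print(check_token(tok, required_role=1))   # True while the token is valid
```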
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH2AO05  
About • Received ※ 04 October 2023 — Revised ※ 14 November 2023 — Accepted ※ 19 December 2023 — Issued ※ 22 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TH2AO06 SKA Tango Operator TANGO, controls, device-server, software 1155
 
  • M. Di Carlo, M. Dolci
    INAF - OAAB, Teramo, Italy
  • P. Harding, U.Y. Yilmaz
    SKAO, Macclesfield, United Kingdom
  • J.B. Morgado
    Universidade do Porto, Faculdade de Ciências, Porto, Portugal
  • P. Osorio
    Atlar Innovation, Pampilhosa da Serra, Portugal
 
  Funding: INAF
The Square Kilometre Array (SKA) is an international effort to build two radio interferometers in South Africa and Australia, forming one Observatory monitored and controlled from global headquarters (GHQ) based in the United Kingdom at Jodrell Bank. The software for the monitoring and control system is developed based on the TANGO-controls framework, which provides a distributed architecture for driving software and hardware using CORBA distributed objects that represent devices and communicate internally with ZeroMQ events. This system runs in a containerised environment managed by Kubernetes (k8s). k8s provides primitive resource types for the abstract management of compute, network and storage, as well as a comprehensive set of APIs for customising all aspects of cluster behaviour. These capabilities are encapsulated in a framework (Operator SDK) which enables the creation of higher-order resource types assembled out of the k8s primitives (Pods, Services, PersistentVolumes), so that abstract resources can be managed as first-class citizens within k8s. These methods of resource assembly and management have proven useful for reconciling some of the differences between the TANGO world and that of Cloud Native computing, where the use of Custom Resource Definitions (CRD) (i.e., Device Server and DatabaseDS) and a supporting Operator developed in the k8s framework has given rise to better usage of TANGO-controls in k8s.
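As a hedged illustration of how such a CRD-plus-Operator setup is driven, the sketch below creates a custom DeviceServer resource through the Kubernetes Python client; the group, version, plural and spec fields are assumptions for illustration and do not reproduce the actual SKA CRD schema.

```python
# Sketch: creating a custom "DeviceServer" resource via the Kubernetes API.
# Group, version, plural and spec fields below are hypothetical examples.
from kubernetes import client, config

config.load_kube_config()                      # or load_incluster_config()
api = client.CustomObjectsApi()

device_server = {
    "apiVersion": "tango.example.org/v1",      # hypothetical group/version
    "kind": "DeviceServer",
    "metadata": {"name": "tangotest", "namespace": "tango"},
    "spec": {"image": "example/tango-test:latest", "instances": 1},
}

api.create_namespaced_custom_object(
    group="tango.example.org", version="v1",
    namespace="tango", plural="deviceservers",
    body=device_server,
)
```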
 
slides icon Slides TH2AO06 [2.622 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TH2AO06  
About • Received ※ 27 September 2023 — Revised ※ 24 October 2023 — Accepted ※ 14 December 2023 — Issued ※ 21 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THMBCMO23 Development of a New Timing System for ISIS timing, hardware, target, controls 1247
 
  • R.A. Washington
    STFC/RAL/ISIS, Chilton, Didcot, Oxon, United Kingdom
 
  The timing system at the ISIS Neutron and Muon source has been operating in its current iteration since 2008. Machine timing is handled by the Central Timing Distributor (CTD) which transmits various timing signals to ISIS accelerator equipment over RS-422 compliant timing buses. The nature of these timing signals has not changed since ISIS first delivered neutrons in 1984, and this paper will look at how an event-based timing system can be employed in the next generation of timing system for ISIS. A new timing system should allow for the distribution of events, triggers and timestamps, provide an increase in timing resolution and be fully backwards compatible with the current timing frame. The new Digitised Waveform System (DWS) at ISIS supports White Rabbit (WR). There is an available WR network which can be used to investigate a new timing system based on WR technology. Conclusions will be drawn from installing this new system in parallel with the current timing system; a comparison between the systems, alternatives, and next steps will be discussed.  
slides icon Slides THMBCMO23 [0.798 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THMBCMO23  
About • Received ※ 05 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 17 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP020 Management of EPICS IOCs in a Distributed Network Environment Using Salt EPICS, controls, monitoring, hardware 1340
 
  • E. Blomley, J. Gethmann, A.-S. Müller, M. Schuh
    KIT, Karlsruhe, Germany
  • S. Marsching
    Aquenos GmbH, Baden-Baden, Germany
 
  An EPICS-based control system typically consists of many individual IOCs, which can be distributed across many computers in a network. Managing hundreds of deployed IOCs, keeping track of where they are running, and providing operators with basic interaction capabilities can easily become a maintenance nightmare. At the Institute for Beam Physics and Technology (IBPT) of the Karlsruhe Institute of Technology (KIT), we operate separate networks for our accelerators KARA and FLUTE and use the Salt Project to manage the IT infrastructure. Custom Salt states take care of deploying our IOCs across multiple servers directly from the code repositories, integrating them into the host operating system and monitoring infrastructure. In addition, this allows integration into our GUI, enabling operators to monitor and control the process for each IOC without requiring any specific knowledge of where and how that IOC is deployed. Therefore, we can maintain and scale to any number of IOCs on any number of hosts nearly effortlessly. This paper presents the design of this system, discusses the tools and overall setup required to make it work, and shows off the integration into our GUI and monitoring systems.  
poster icon Poster THPDP020 [0.431 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP020  
About • Received ※ 04 October 2023 — Accepted ※ 10 December 2023 — Issued ※ 14 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP032 Introduction of the Ethernet-Based Field Networks to Inter-Device Communication for RIBF Control System EPICS, Ethernet, controls, PLC 1384
 
  • A. Uchiyama, N. Fukunishi, M. Komiyama
    RIKEN Nishina Center, Wako, Japan
 
  Internet Protocol (IP) networks are widely used to remotely control measurement instruments and controllers. In addition to proprietary protocols, common commands such as the Standard Commands for Programmable Instruments (SCPI) are used by manufacturers of measuring instruments. Many IP-network-based devices have been used in the RIBF control system constructed using the Experimental Physics and Industrial Control System (EPICS); these are commercial devices designed and developed independently. EPICS input/output controllers (IOCs) usually establish socket communications to send commands to IP-network-based devices. However, in the RIBF control system, reconnection between the EPICS IOC and the device is often not established after the loss of socket communication due to an unexpected power failure of the device or network switch. In this case, it is often difficult to determine whether the socket connection to the EPICS IOC is broken, even after checking the communication by pinging. Using Ethernet as the field network in the physical layer between the device and the EPICS IOC can solve these problems. Therefore, we are considering the introduction of field networks such as EtherCAT and Ethernet/IP, which use Ethernet in the physical layer. In the implementation of the prototype system, EPICS IOCs and devices are connected via EtherCAT, and soft PLCs run on the machine running the EPICS IOCs for sequence control.  
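The sketch below illustrates the failure mode discussed: a SCPI-style query over a plain TCP socket with a timeout and an explicit reconnect on each attempt, since a silently broken connection may otherwise go undetected. The host, port and query string are generic placeholders, not a specific RIBF device.

```python
# Sketch of a SCPI-style TCP query with timeout and per-attempt reconnect.
# Host, port and the "*IDN?" query are generic examples, not a RIBF device.
import socket
import time

HOST, PORT = "192.0.2.10", 5025     # documentation address, placeholder port

def query(cmd: str, retries: int = 3, timeout: float = 2.0) -> str:
    for attempt in range(retries):
        try:
            # A fresh connection per attempt avoids holding on to a socket
            # that silently died when the device or a switch lost power.
            with socket.create_connection((HOST, PORT), timeout=timeout) as s:
                s.sendall((cmd + "\n").encode())
                return s.recv(4096).decode().strip()
        except OSError:
            time.sleep(1.0)         # back off before reconnecting
    raise ConnectionError(f"no reply to {cmd!r} after {retries} attempts")

print(query("*IDN?"))
```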
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP032  
About • Received ※ 06 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 11 December 2023 — Issued ※ 15 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP056 Consolidation of the Power Trigger Controllers of the LHC Beam Dumping System controls, FPGA, software, power-supply 1439
 
  • L. Strobino, N. Magnin, N. Voumard
    European Organization for Nuclear Research (CERN), Geneva, Switzerland
 
  The Power Trigger Controller (PTC) of the LHC Beam Dumping System (LBDS) is in charge of the control and supervision of the Power Trigger Units (PTU), which are used to trigger the conduction of the 50 High-Voltage Pulsed Generators (HVPG) of the LBDS kicker magnets. This card is integrated in an Industrial Control System (ICS) and has the double role of controlling the PTU operating mode and monitoring its status, and of supervising the LBDS triggering and re-triggering systems. As part of the LBDS consolidation during the LHC Long Shutdown 2 (LS2), a new PTC card was designed, based on a System-on-Chip (SoC) implemented in an FPGA. The FPGA contains an ARM Cortex-M3 softcore processor and all the required peripherals to communicate with onboard ADCs and DACs (3rd-party IPs or custom-made ones) as well as with an interchangeable fieldbus communication module, allowing the board to be integrated in various types of industrial control networks in view of future evolution. This new architecture is presented together with the advantages in terms of modularity and reusability for future projects.  
poster icon Poster THPDP056 [3.146 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP056  
About • Received ※ 05 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 15 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP067 Towards a Flexible and Secure Python Package Repository Service software, controls, operation, interface 1489
 
  • I. Sinkarenko, B. Copy, P.J. Elson, F. Iannaccone, W.F. Koorn
    CERN, Meyrin, Switzerland
 
  The use of 3rd-party and internal software packages has become a crucial part of modern software development. Not only does it enable faster development, but it also facilitates sharing of common components, which is often necessary for ensuring correctness and robustness of developed software. To enable this workflow, a package repository is needed to store internal packages and provide a proxy to 3rd-party repository services. This is particularly important for systems that operate in constrained networks, as is common for accelerator control systems. Despite its benefits, installing arbitrary software from a 3rd-party package repository can pose security and operational risks. Therefore, it is crucial to implement effective security measures, such as usage logging, package moderation and security scanning. However, experience at CERN has shown that off-the-shelf tools for running a flexible Python package repository service are not satisfactory. For instance, the dependency confusion attack first published in 2021 has still not been fully addressed by the main open-source repository services. An in-house development was conducted to address this, using a modular approach to building a Python package repository that enables the creation of a powerful and security-friendly repository service using small components. This paper describes the components that exist, demonstrates their capabilities within CERN and discusses future plans. The solution is not CERN-specific and is likely to be relevant to other institutes facing comparable challenges.  
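As one example of the kind of moderation check this implies, the sketch below asks the public PyPI JSON API whether an internal package name is also registered publicly (a potential dependency-confusion target); the internal package names are hypothetical.

```python
# Sketch of a dependency-confusion check: query the public PyPI JSON API to
# see whether an internal package name is also claimed publicly.
import requests

INTERNAL_PACKAGES = ["acc-py-internal-widgets", "acc-py-internal-utils"]  # hypothetical

def claimed_on_pypi(name: str) -> bool:
    # pypi.org returns 200 with metadata if the name exists, 404 otherwise.
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

for pkg in INTERNAL_PACKAGES:
    if claimed_on_pypi(pkg):
        print(f"WARNING: {pkg} also exists on PyPI - possible confusion target")
```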
poster icon Poster THPDP067 [0.510 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP067  
About • Received ※ 05 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 16 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP068 Implementing High Performance & Highly Reliable Time Series Acquisition Software for the CERN-Wide Accelerator Data Logging Service controls, operation, software, database 1494
 
  • M. Sobieszek, V. Baggiolini, R. Mucha, C. Roderick, P. Sowinski, J.P. Wozniak
    CERN, Meyrin, Switzerland
 
  The CERN Accelerator Data Logging Service (NXCALS) stores data generated by the accelerator infrastructure and beam related devices. This amounts to 3.5TB of data per day, coming from more than 2.5 million signals from heterogeneous systems at various frequencies. Around 85% of this data is transmitted through the Controls Middleware (CMW) infrastructure. To reliably gather such volumes of data, the acquisition system must be highly available, resilient and robust. It also has to be highly efficient and easily scalable, given the regularly growing data rates and volumes, particularly for the increases expected to be produced by the future High Luminosity LHC. This paper describes the NXCALS time series acquisition software, known as Data Sources. System architecture, design choices, and recovery solutions for various failure scenarios (e.g. network disruptions or cluster split-brain problems) will be covered. Technical implementation details will be discussed, covering the clustering of Akka Actors collecting data from tens of thousands of CMW devices and sharing the lessons learned. The NXCALS system has been operational since 2018 and has demonstrated the capability to fulfil all aforementioned characteristics, while also ensuring self-healing capabilities and no data losses during redeployments. The engineering challenge, architecture, lessons learned, and the implementation of this acquisition system are not CERN-specific and are therefore relevant to other institutes facing comparable challenges.  
poster icon Poster THPDP068 [2.960 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP068  
About • Received ※ 06 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 20 November 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP069 A Generic Real-Time Software in C++ for Digital Camera-Based Acquisition Systems at CERN software, operation, controls, hardware 1499
 
  • A. Topaloudis, E. Bravin, S. Burger, S. Jackson, S. Mazzoni, E. Poimenidou, E. Senes
    CERN, Meyrin, Switzerland
 
  Until recently, most of CERN’s beam visualisation systems have been based on increasingly obsolescent analogue cameras. Hence, there is an on-going campaign to replace old or install new digital equivalents. There are many challenges associated with providing a homogenised solution for the data acquisition of the various visualisation systems in an accelerator complex as diverse as CERN’s. However, generic real-time software in C++ has been developed and already installed in several locations to control such systems. This paper describes the software and the additional tools that have also been developed to exploit the acquisition systems, including a Graphical User Interface (GUI) in Java/Swing and web-based fixed displays. Furthermore, it analyses the specific challenges of each use case, the solutions chosen to resolve them, and any resulting performance limitations.  
poster icon Poster THPDP069 [1.787 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP069  
About • Received ※ 05 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 18 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP070 Building, Deploying and Provisioning Embedded Operating Systems at PSI Linux, controls, EPICS, hardware 1505
 
  • D. Anicic
    PSI, Villigen PSI, Switzerland
 
  In the scope of the Swiss Light Source (SLS) upgrade project, SLS 2.0, at the Paul Scherrer Institute (PSI), two New Processing Platforms (NPP), both running RT Linux, have been added to the portfolio of existing VxWorks and Linux VME systems. At the lower end we have picked a variety of boards, all based on the Xilinx Zynq UltraScale+ MPSoC. Even though these devices have less processing power, due to the built-in FPGA and Real-time CPU (RPU) they can deliver strict, hard RT performance. For high-throughput, soft-RT applications we went for Intel Xeon-based single-board PCs in the CPCI-S form factor. All platforms are operated as diskless systems. For the Zynq systems we have decided on building in-house a Yocto Kirkstone Linux distribution, whereas for the Xeon PCs we employ off-the-shelf Debian 10 Buster. In addition to these new NPP systems, in the scope of our new EtherCAT-based Motion project, we have decided to use small x86_64 servers, which will run the same Debian distribution as NPP. In this contribution we present the selected Operating Systems (OS) and discuss how we build, deploy and provision them to the diskless clients.  
poster icon Poster THPDP070 [0.758 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP070  
About • Received ※ 02 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 19 October 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP081 Exploring Ethernet-Based CAMAC Replacements at ATLAS controls, Ethernet, data-acquisition, operation 1542
 
  • K.J. Bunnell, C. Dickerson, D.J. Novak, D. Stanton
    ANL, Lemont, Illinois, USA
 
  Funding: This work was supported by the US Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357. This research used resources of ANL’s ATLAS facility.
The Argonne Tandem Linear Accelerating System (ATLAS) facility at Argonne National Laboratory is researching ways to avoid a crisis caused by end-of-life issues with its 30-year-old CAMAC system. Replacement parts for CAMAC have long since been unavailable, creating the potential for long periods of accelerator downtime once the limited CAMAC spares are exhausted. ATLAS has recently upgraded the Ethernet in the facility from a 100-Mbps (max) to a 1-Gbps network. Therefore, an Ethernet-based data acquisition system is desirable. The data acquisition replacement requires reliability, speed, and longevity to be a viable upgrade to the facility. In addition, the transition from CAMAC to a modern data acquisition system will be done with minimal interruption of operations.
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP081  
About • Received ※ 10 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 20 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP088 ATCA-Based Beam Line Data Software for SLAC’s LCLS-II Timing System software, EPICS, Linux, FPGA 1560
 
  • D. Alnajjar, M.P. Donadio, K.H. Kim, M. Weaver
    SLAC, Menlo Park, California, USA
 
  Funding: Work supported by US DOE contract DE-AC02-76SF00515
Among the several acquisition services available with SLAC’s high beam rate accelerator, all of which are contemplated in the acquisition service EPICS support package, resides the new Advanced Telecommunications Computing Architecture (ATCA) Beam Line Data (BLD) service. BLD runs on top of SLAC’s common platform software and firmware, and communicates with several high-performance systems (i.e. MPS, BPM, LLRF, timing, etc.) in LCLS, running on a 7-slot ATCA crate. Once linked with an ATCA EPICS IOC and with the proper commands called in the IOC shell, it initializes the BLD FPGA logic and the upper software stack, and makes PVs available allowing the control of the BLD data acquisition rates, and the starting of the BLD data acquisition. This service permits the forwarding of acquired data to configured IP addresses and ports in the format of multicast network packets. Up to four BLD rates can be configured simultaneously, each accessible at its configured IP destination, and with a maximum rate of 1MHz. Users interested in acquiring any of the four BLD rates will need to register in the corresponding IP destination for receiving a copy of the multicast packet on their respective receiver software. BLD has allowed data to be transmitted over multicast packets for over a decade at SLAC, but always at a maximum rate of 120 Hz. The present work focuses on bringing this service to the high beam rate high-performance systems using ATCAs, allowing the reuse of many legacy in-house-developed client software infrastructures.
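A minimal sketch of a receiver joining a multicast group, as a BLD client would, is shown below; the group address, port and packet handling are placeholders, not the actual BLD destinations or packet format.

```python
# Sketch of a UDP multicast receiver. The group/port are placeholders and the
# packet payload is not decoded; a real BLD client would parse the BLD format.
import socket
import struct

GROUP, PORT = "239.255.24.254", 10148   # placeholder multicast destination

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Tell the kernel to join the multicast group on the default interface.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(65535)
    print(f"{len(data)} bytes from {addr}")
```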
 
poster icon Poster THPDP088 [1.060 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP088  
About • Received ※ 03 October 2023 — Accepted ※ 06 December 2023 — Issued ※ 17 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
FR1BCO04 The Controls and Science IT Project for the SLS 2.0 Upgrade controls, storage-ring, EPICS, experiment 1616
 
  • A. Ashton, H.-H. Braun, S. Fries, X. Yao, E. Zimoch
    PSI, Villigen PSI, Switzerland
 
  Operation of the Swiss Light Source (SLS) at the Paul Scherrer Institute (PSI) in Switzerland began in 2000, and it quickly became one of the most successful synchrotron radiation facilities worldwide, providing academic and industry users with a suite of excellent beamlines covering a wide range of methods and applications. To maintain the SLS at the forefront of synchrotron user facilities and to exploit all of the improvement opportunities, PSI prepared a major upgrade project for SLS, named SLS 2.0. The Controls and Science IT (CaSIT) subproject was established to help instigate a project management structure that facilitates new concepts, increases communication, and clarifies budgetary responsibility. This article focusses on the progress being made to exploit the current technological opportunities offered by a break in operations, whilst taking into consideration future growth opportunities and realistic operational support within an academic research facility.  
slides icon Slides FR1BCO04 [6.389 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-FR1BCO04  
About • Received ※ 05 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 20 November 2023 — Issued ※ 17 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)