Hardware
Control System Infrastructure
Paper • Title • Page
WE1BCO01 VME2E: VME to Ethernet - Common Hardware Platform for Legacy VME Module Upgrade 949
 
  • J.P. Jamilkowski
    Brookhaven National Laboratory (BNL), Electron-Ion Collider, Upton, New York, USA
  • Y. Tian
    BNL, Upton, New York, USA
 
  Funding: DOE Office of Science
The VME architecture was developed in the late 1970s and has proved to be a rugged control system hardware platform for the last four decades. Today the VME platform faces four challenges: 1) the backplane communication speed bottleneck; 2) the computing power limits of a centralized computing infrastructure; 3) obsolescence and cost issues in supporting a real-time operating system; and 4) obsolescence of legacy VME hardware. Next-generation hardware platforms such as ATCA and MicroTCA require fundamental changes in hardware and software as well as a large investment; for many legacy system upgrades this approach is not applicable. We will discuss an open-source hardware platform, VME2E (VME to Ethernet), which allows one-to-one replacement of legacy VME modules without disassembling the existing VME system. The VME2E module has the VME form factor and can be installed in an existing VME chassis, but it does not use the VME backplane to communicate with the front-end computer, thereby addressing the first three challenges listed above. VME2E takes advantage of only two benefits of a VME system: the stable power supply, which the module draws from the backplane, and the cooling environment. The VME2E carries an advanced 14 nm Xilinx FPGA system-on-module (SOM) with GigE for parallel computing and high-speed communication, and it provides a high-pin-count (HPC) FPGA Mezzanine Card (FMC) connector so that I/O daughter boards from the FMC ecosystem can be used. VME2E is designed as a low-cost, open-source common platform for legacy VME upgrades.
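As a rough illustration of the data path the abstract describes, the sketch below shows how a front-end computer might read a module register over Gigabit Ethernet rather than over the VME backplane. The IP address, port, opcode and register offset are hypothetical and are not taken from the VME2E design.

```python
# Hypothetical sketch: reading a module register over Gigabit Ethernet instead of
# the VME backplane. The IP address, port, and simple request framing are
# illustrative assumptions, not the VME2E protocol.
import socket
import struct

MODULE_ADDR = ("192.168.1.50", 5000)   # assumed module IP and port
READ_CMD = 0x01                         # assumed opcode for "read register"

def read_register(offset: int) -> int:
    """Send a read request for one 32-bit register and return its value."""
    with socket.create_connection(MODULE_ADDR, timeout=1.0) as sock:
        sock.sendall(struct.pack(">BI", READ_CMD, offset))   # opcode + offset
        reply = sock.recv(4)                                  # 32-bit value
        (value,) = struct.unpack(">I", reply)
        return value

if __name__ == "__main__":
    print(hex(read_register(0x0004)))
```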
 
slides icon Slides WE1BCO01 [1.141 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE1BCO01  
About • Received ※ 06 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 19 November 2023 — Issued ※ 22 November 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE1BCO02 Data Management Infrastructure for European XFEL 952
 
  • J. Malka, S. Aplin, D. Boukhelef, K. Filippakopoulos, L.G. Maia, T. Piszczek, Mr. Previtali, J. Szuba, K. Wrona
    EuXFEL, Schenefeld, Germany
  • S. Dietrich, MA. Gasthuber, J. Hannappel, M. Karimi, Y. Kemp, R. Lueken, T. Mkrtchyan, K. Ohrenberg, F. Schlünzen, P. Suchowski, C. Voss
    DESY, Hamburg, Germany
 
Effective data management is crucial to ensure research data is easily accessible and usable. We will present the design and implementation of the European XFEL data management infrastructure supporting high-level data management services. The system architecture comprises four layers of storage systems, each designed to address specific challenges. The first layer, referred to as online, is designed as a fast cache to accommodate the extremely high data rates (up to 15 GB/s) generated during an experiment at a single scientific instrument. The second layer, called high-performance storage, provides the capabilities needed for data processing both during and after experiments. The layers are incorporated into a single InfiniBand fabric and connected through a 4 km long, 1 Tb/s link, which allows fast data transfer from the European XFEL experiment hall to the DESY computing center. The third layer, mass storage, extends the capacity of the data storage system to allow mid-term data access for detailed analysis. Finally, the tape archive provides data safety and long-term archiving (5-10 years). The high-performance and mass storage systems are connected to computing clusters, allowing users to perform near-online and offline data analysis or, alternatively, to export data outside the European XFEL facility. The data management infrastructure at the European XFEL can accept and process up to 2 PB of data per day, which demonstrates the remarkable capabilities of all the sub-services involved in this process.  
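A quick back-of-the-envelope check of the rates quoted above, using only the two figures given in the abstract (15 GB/s peak at one instrument, 2 PB accepted per day):

```python
# Back-of-the-envelope check of the data rates quoted in the abstract.
PEAK_RATE_GBPS = 15          # GB/s, peak at a single instrument (online layer)
DAILY_VOLUME_PB = 2          # PB/day, facility-wide capacity

daily_volume_gb = DAILY_VOLUME_PB * 1e6              # 1 PB = 1e6 GB (decimal units)
sustained_rate_gbps = daily_volume_gb / 86400        # average over 24 h

print(f"Sustained facility-average rate: {sustained_rate_gbps:.1f} GB/s")
print(f"Continuous peak-rate acquisition needed to fill 2 PB: "
      f"{daily_volume_gb / PEAK_RATE_GBPS / 3600:.1f} h")
```

The 2 PB/day figure corresponds to a facility-wide average of roughly 23 GB/s, which puts the quoted 15 GB/s per-instrument peak in context.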
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE1BCO02  
About • Received ※ 06 October 2023 — Revised ※ 23 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 12 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE1BCO03 Design of the HALF Control System 958
 
  • G. Liu, L.G. Chen, C. Li, X.K. Sun, K. Xuan, D.D. Zhang
    USTC/NSRL, Hefei, Anhui, People’s Republic of China
 
The Hefei Advanced Light Facility (HALF) is a 2.2 GeV fourth-generation synchrotron radiation light source scheduled to start construction in Hefei, China in 2023. HALF comprises an injector, a 480 m diffraction-limited storage ring, and 10 beamlines in phase one. The HALF control system is EPICS-based, with integrated application and data platforms for the entire facility, including the accelerator and beamlines. A unified infrastructure and network architecture has been designed to build the control system. The infrastructure provides resources for EPICS development and operation through virtualization technology, and resources for the storage and processing of experimental data through distributed storage and computing clusters. The network is physically separated into the control network and a dedicated high-speed data network, and the control network is further subdivided into multiple subnets using VLAN technology. Based on an estimate of the scale of the control system, the 10 Gbps control backbone network and the data network, which can be expanded to 100 Gbps, fully meet the communication requirements of the control system. This paper reports the control system architecture design and the development of some key technologies in detail.  
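To make the subnet-per-VLAN idea concrete, here is a minimal illustration using Python's standard ipaddress module; the address block, subsystem names and VLAN IDs are assumptions for the example, not the HALF address plan.

```python
# Illustrative only: carving an assumed control-network block into per-subsystem
# subnets, one per VLAN. The address plan below is hypothetical, not HALF's.
import ipaddress

control_block = ipaddress.ip_network("10.10.0.0/16")     # assumed control network
subsystems = ["injector", "storage-ring", "beamlines", "timing", "vacuum"]

# Assign one /24 per subsystem/VLAN, taken in order from the block.
for vlan_id, (name, subnet) in enumerate(
        zip(subsystems, control_block.subnets(new_prefix=24)), start=101):
    print(f"VLAN {vlan_id}: {name:<12} {subnet} "
          f"({subnet.num_addresses - 2} usable hosts)")
```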
slides icon Slides WE1BCO03 [2.739 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE1BCO03  
About • Received ※ 02 October 2023 — Revised ※ 09 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 26 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE1BCO04 The LCLS-II Experiment System Vacuum Controls Architecture 962
 
  • M. Ghaly, T.A. Wallace
    SLAC, Menlo Park, California, USA
 
  Funding: This work is supported by Department of Energy contract DE-AC02-76SF00515.
The LCLS-II Experiment System Vacuum Controls Architecture is a collection of vacuum system design templates, interlock logics, supported components (e.g., gauges, pumps, valves), interface I/O, and associated software libraries that implement a baseline functionality and simulation. The architecture also includes a complement of engineering and deployment tools, including cable test boxes and hardware simulators, as well as some automatic configuration tools. Vacuum controls at LCLS span rough vacuum in complex pumping manifolds, protection of highly sensitive X-ray optics using fast shutters, maintenance of ultra-high vacuum in experimental sample-delivery setups, and beyond. The vacuum standards for LCLS systems often exceed what most vendors are experienced with. The system must maintain high availability while also remaining flexible and handling ongoing modifications. This paper will review the comprehensive architecture and the requirements of the LCLS systems, and introduce how to use the architecture for new vacuum system designs. The architecture is meant to influence all phases of a vacuum system lifecycle, and ideally could become a shared project for installations beyond LCLS-II.
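As a flavour of the interlock logic such a library might encode, below is a minimal, hypothetical gauge-based valve-permit check; the device names and the pressure setpoint are illustrative assumptions, not LCLS-II values.

```python
# Hypothetical gauge/valve interlock sketch; setpoints and device names are
# illustrative, not LCLS-II values.
from dataclasses import dataclass

@dataclass
class Gauge:
    name: str
    pressure_torr: float
    valid: bool = True          # readback health flag

def valve_permitted(upstream: Gauge, downstream: Gauge,
                    setpoint_torr: float = 1e-6) -> bool:
    """Permit the isolation valve to open only if both neighbouring gauges
    report valid pressures below the setpoint (fail-safe on invalid readings)."""
    for gauge in (upstream, downstream):
        if not gauge.valid or gauge.pressure_torr >= setpoint_torr:
            return False
    return True

print(valve_permitted(Gauge("GCC-01", 3e-8), Gauge("GCC-02", 8e-7)))   # True
print(valve_permitted(Gauge("GCC-01", 3e-8), Gauge("GCC-02", 5e-6)))   # False
```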
 
slides icon Slides WE1BCO04 [3.154 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE1BCO04  
About • Received ※ 31 October 2023 — Revised ※ 20 November 2023 — Accepted ※ 08 December 2023 — Issued ※ 12 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE1BCO05
High Accuracy and Cost-Efficient Ethernet-Based Timing System for the IFMIF-DONES Facility  
 
  • J. Díaz, I.C. Casero, C. Megías
    UGR, Granada, Spain
 
  Funding: This work has been carried out within EUROfusion, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200).
This article presents the timing system design of the IFMIF-DONES facility, which aims to develop materials that can withstand the harsh conditions of a fusion reactor while maintaining their structural integrity and functional properties. A key goal is high availability, which requires strong resiliency and redundancy measures throughout the plant design. The timing system starts with a master clock composed of a stable master oscillator combined with a GNSS receiver and clock-disciplining equipment, which together generate a local time scale and a reference frequency with high stability. Three Ethernet-based protocols are then combined for time transfer: NTP, IEEE-1588-2008, and the IEEE-1588-2019 High Accuracy profile (White Rabbit). NTP is used for generic computers and industrial devices without significant timing constraints, while IEEE-1588-2008 is used for industrial devices that require 1 µs accuracy or better; both can be implemented with off-the-shelf equipment and operate well over networks with moderate bandwidth utilization. The White Rabbit protocol, which can achieve sub-ns accuracy, is used for devices that require highly accurate timing and is typically deployed on small networks dedicated to timing. This contribution describes the design of the timing system, highlighting how the best trade-off between cost and performance can be achieved with Ethernet technologies and how the resiliency methods are implemented.
Department of Computer Engineering, Automation and Robotics, University of Granada, Spain.
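The protocol choice described above can be summarised as a simple selection rule. The sketch below encodes it with boundaries taken from the abstract's qualitative description (PTP for roughly 1 µs or better, White Rabbit for sub-ns); the actual IFMIF-DONES assignment rules may differ.

```python
# Sketch of the protocol-selection trade-off described in the abstract.
# Boundary values follow the abstract's description; exact assignment rules
# at IFMIF-DONES may differ.
def timing_protocol(required_accuracy_s: float) -> str:
    if required_accuracy_s < 1e-9:
        return "White Rabbit (IEEE-1588 High Accuracy profile)"
    if required_accuracy_s <= 1e-6:
        return "PTP (IEEE-1588-2008)"
    return "NTP"

for requirement in (50e-12, 1e-6, 1e-3):
    print(f"{requirement:.0e} s -> {timing_protocol(requirement)}")
```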
 
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WE1BCO07 The LCLS-II Precision Timing Control System 966
 
  • T.K. Johnson, M.C. Browne, C.B. Pino
    SLAC, Menlo Park, California, USA
 
The LCLS-II precision timing system is responsible for the synchronization of optical lasers with the LCLS-II XFEL. The system uses both RF and optical references for synchronization. In contrast to previous systems used at LCLS, the optical lasers are shared resources and must be managed during operations. The timing system consists of three primary functionalities: RF reference distribution, optical reference distribution, and a phase-locked loop (PLL). The PLL may use either the RF or the optical reference as its feedback source: the RF reference allows phase comparisons over a relatively wide range, albeit with limited resolution, while the optical reference enables very fine phase comparison (down to attoseconds) but over a limited operational range. These systems must be managed with a high level of automation, much of which is done via high-level applications developed in EPICS. Beamline users are presented with relatively simple interfaces that streamline operation and abstract away much of the system complexity. The system provides both PyDM GUIs and Python interfaces to enable time-delay scanning in the LCLS-II DAQ.  
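For illustration of the Python scanning interface mentioned above, here is a hypothetical time-delay scan loop written against pyepics; the PV names, step count and settle time are invented for the example and are not the LCLS-II interface.

```python
# Hypothetical time-delay scan sketch using pyepics; PV names and step sizes
# are invented for illustration and are not the LCLS-II interface.
import time
import numpy as np
from epics import caput, caget    # pyepics, assumed available

DELAY_PV = "LAS:UND:TT:TARGET_TIME"     # assumed target-delay setpoint PV
READBACK_PV = "LAS:UND:TT:PHASE_ERR"    # assumed phase-error readback PV

def delay_scan(start_s: float, stop_s: float, steps: int, settle_s: float = 0.5):
    """Step the laser-to-x-ray delay and record the phase-error readback."""
    results = []
    for target in np.linspace(start_s, stop_s, steps):
        caput(DELAY_PV, target, wait=True)   # move to the new delay
        time.sleep(settle_s)                 # let the loop settle
        results.append((target, caget(READBACK_PV)))
    return results
```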
slides icon Slides WE1BCO07 [3.734 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-WE1BCO07  
About • Received ※ 06 November 2023 — Revised ※ 09 November 2023 — Accepted ※ 14 December 2023 — Issued ※ 20 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THMBCMO21 Development of Standard MicroTCA Deployment at ESS 1238
 
  • F. Chicken, J.J. Jamróz, J.P.S. Martins
    ESS, Lund, Sweden
 
At the European Spallation Source, over 300 MicroTCA systems will be deployed across the accelerator, target area and instruments, covering integrations for RF, beam instrumentation, machine protection and timing distribution systems. ESS has developed a method to standardise the deployment of the basic MicroTCA system configuration using a combination of Python scripts and Ansible playbooks, with a view to ensuring long-term maintainability of the systems and future upgrades. Python scripts set up the MicroTCA Carrier Hub (MCH), register it on the network and update its firmware to the chosen version; an Ansible playbook registers the Concurrent Technologies CPU on the ESS network and installs the chosen Linux OS, and a second playbook installs the ESS EPICS Environment (E3). This ensures that all new systems follow identical setup procedures and have all the necessary packages in place before on-site integration starts.  
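A minimal sketch of how the deployment flow described above could be orchestrated from Python; the helper script names, playbook names and options are purely illustrative, not the ESS tooling.

```python
# Hypothetical orchestration sketch of the deployment flow described in the
# abstract; script names, playbook names and arguments are illustrative only.
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def deploy_system(mch_host: str, cpu_host: str) -> None:
    # 1) MCH setup: network registration and firmware update (assumed helper scripts)
    run(["python3", "mch_register.py", mch_host])
    run(["python3", "mch_update_firmware.py", mch_host, "--version", "approved"])
    # 2) CPU provisioning: network registration and OS install via Ansible
    run(["ansible-playbook", "provision_cpu.yml", "-l", cpu_host])
    # 3) Install the ESS EPICS Environment (E3)
    run(["ansible-playbook", "install_e3.yml", "-l", cpu_host])
```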
slides icon Slides THMBCMO21 [0.686 MB]  
poster icon Poster THMBCMO21 [2.560 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THMBCMO21  
About • Received ※ 05 October 2023 — Revised ※ 25 October 2023 — Accepted ※ 11 December 2023 — Issued ※ 16 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP016 Full Stack Performance Optimizations for FAIR Operation 1325
 
  • A. Schaller, H.C. Hüther, R. Mueller, A. Walter
    GSI, Darmstadt, Germany
 
In recent beam times, operations reported a lack of performance and long waiting times when performing simple changes to the machines' settings. To ensure performant operation of the future Facility for Antiproton and Ion Research (FAIR), the "Task Force Performance" (TFP) was formed in mid-2020 with the aim of optimizing all involved Control System components. Baseline measurements were recorded for different scenarios to compare and evaluate the steps taken by the TFP. These measurements contained data from all underlying systems, from hardware device data supply through network traffic up to user-interface applications. Individual groups searched for, detected and fixed performance bottlenecks in their components of the Control System stack, and the interfaces between these individual components were inspected as well. The findings are presented here.  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP016  
About • Received ※ 04 October 2023 — Revised ※ 29 November 2023 — Accepted ※ 13 December 2023 — Issued ※ 20 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP020 Management of EPICS IOCs in a Distributed Network Environment Using Salt 1340
 
  • E. Blomley, J. Gethmann, A.-S. Müller, M. Schuh
    KIT, Karlsruhe, Germany
  • S. Marsching
    Aquenos GmbH, Baden-Baden, Germany
 
An EPICS-based control system typically consists of many individual IOCs, which can be distributed across many computers in a network. Managing hundreds of deployed IOCs, keeping track of where they are running, and providing operators with basic interaction capabilities can easily become a maintenance nightmare. At the Institute for Beam Physics and Technology (IBPT) of the Karlsruhe Institute of Technology (KIT), we operate separate networks for our accelerators KARA and FLUTE and use the Salt Project to manage the IT infrastructure. Custom Salt states take care of deploying our IOCs across multiple servers directly from the code repositories, integrating them into the host operating system and the monitoring infrastructure. In addition, this allows integration into our GUI so that operators can monitor and control the process for each IOC without any specific knowledge of where and how that IOC is deployed. We can therefore maintain and scale to any number of IOCs on any number of hosts nearly effortlessly. This paper presents the design of this system, discusses the tools and overall setup required to make it work, and shows the integration into our GUI and monitoring systems.  
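As a rough sketch of what a custom IOC-deployment state could look like, here is a hypothetical Salt state written with Salt's pure-Python ("py") renderer; the repository URL, install path and service name are invented for the example and do not reflect the KIT states.

```python
#!py
# Hypothetical Salt state (pure-Python "py" renderer) in the spirit of the
# custom states described in the abstract; repository URL, paths and service
# name are invented for illustration.

def run():
    """Deploy one IOC from its repository and keep its systemd service running."""
    ioc = "example-ioc"
    return {
        f"{ioc}-source": {
            "git.latest": [
                {"name": "https://git.example.org/iocs/example-ioc.git"},
                {"target": f"/opt/iocs/{ioc}"},
            ],
        },
        f"{ioc}-service": {
            "service.running": [
                {"name": f"ioc@{ioc}"},          # assumed systemd template unit
                {"enable": True},
                {"require": [{"git": f"{ioc}-source"}]},
            ],
        },
    }
```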
poster icon Poster THPDP020 [0.431 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP020  
About • Received ※ 04 October 2023 — Accepted ※ 10 December 2023 — Issued ※ 14 December 2023  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP022 Adaptable Control System for the Photon Beamlines at the European XFEL: Integrating New Devices and Technologies for Advanced Research 1349
 
  • B. Rio, M. Dommach, D. Finze, M. Petrich, H. Sinn, V. Strauch, A. Trapp, J.R. Villanueva Guerrero
    EuXFEL, Schenefeld, Germany
 
The European XFEL is an X-ray free-electron laser (FEL) facility located in Schenefeld, near Hamburg, Germany. With a total length of 3.4 kilometers, the facility provides seven scientific instruments with extremely intense X-ray flashes ranging from the soft to the hard X-ray regime. The dimensions of the beam transport and the technologies that make this X-ray FEL unique have led to the design and build-up of a challenging and adaptable control system based on a Programmable Logic Controller (PLC). Six successful years of user operation, which started in September 2017, have required constant development of the beam transport in order to provide new features and improvements for the scientific community's research activities. This contribution focuses on the photon beamline, which starts at the undulator section and guides the X-ray beam to the scientific instruments. In this scope, the control system topology and its adaptability to integrate new devices through the PLC Management System (PLCMS) are described. In 2022, a new distribution mirror was installed in the SASE3 beam transport system to provide photon beams to the seventh and newest scientific instrument, named Soft X-ray Port (SXP). To make the scope of this paper more practical, this new installation is used as an example: the integration of its vacuum devices, optical elements, and interlock definitions into the existing control system is described.  
poster icon Poster THPDP022 [0.776 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP022  
About • Received ※ 05 October 2023 — Revised ※ 25 October 2023 — Accepted ※ 10 December 2023 — Issued ※ 14 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP025 The Superconducting Undulator Control System for the European XFEL 1362
 
  • M. Yakopov, S. Abeghyan, S. Casalbuoni, S. Karabekyan
    EuXFEL, Schenefeld, Germany
  • M.G. Gretenkord, D.P. Pieper
    Beckhoff Automation GmbH, Verl, Germany
  • A. Hobl, A.S. Sendner
    Bilfinger Noell GmbH, Wuerzburg, Germany
 
The European XFEL development program includes the implementation of an afterburner based on superconducting undulator (SCU) technology for the SASE2 hard X-ray beamline. The design and production of the first SCU prototype, called the PRE-SerieS prOtotype (S-PRESSO), together with the required control system, are currently underway. The architecture, key parameters, and a detailed description of the functionality of the S-PRESSO control system are discussed in this paper.  
poster icon Poster THPDP025 [2.959 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP025  
About • Received ※ 06 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 13 December 2023 — Issued ※ 15 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP032 Introduction of the Ethernet-Based Field Networks to Inter-Device Communication for RIBF Control System 1384
 
  • A. Uchiyama, N. Fukunishi, M. Komiyama
    RIKEN Nishina Center, Wako, Japan
 
Internet Protocol (IP) networks are widely used to remotely control measurement instruments and controllers. In addition to proprietary protocols, common command sets such as the Standard Commands for Programmable Instruments (SCPI) are used by manufacturers of measuring instruments. Many IP-network-based devices have been used in the RIBF control system, which is constructed using the Experimental Physics and Industrial Control System (EPICS); these commercial devices have been designed and developed independently. EPICS input/output controllers (IOCs) usually establish socket communications to send commands to IP-network-based devices. However, in the RIBF control system, the connection between an EPICS IOC and a device is often not re-established after socket communication is lost due to an unexpected power failure of the device or a network switch. In such cases it is often difficult to determine whether the socket connection to the EPICS IOC is broken, even after checking the communication by pinging. Using Ethernet as the field network in the physical layer between the device and the EPICS IOC can solve these problems. Therefore, we are considering the introduction of field networks such as EtherCAT and EtherNet/IP, which use Ethernet in the physical layer. In the prototype implementation, EPICS IOCs and devices are connected via EtherCAT, and soft PLCs run on the machines hosting the EPICS IOCs for sequence control.  
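To illustrate the reconnection problem described above, here is a minimal SCPI-over-TCP client with an explicit timeout and a single reconnect attempt, the kind of handling a plain socket setup can lack; the host, port and command are assumptions for the example.

```python
# Illustrative SCPI-over-TCP client with explicit timeout and reconnection;
# host, port and command are assumptions, not an RIBF device.
import socket

class ScpiClient:
    def __init__(self, host: str, port: int = 5025, timeout: float = 2.0):
        self.addr = (host, port)
        self.timeout = timeout
        self.sock = None

    def _connect(self):
        self.sock = socket.create_connection(self.addr, timeout=self.timeout)

    def query(self, command: str) -> str:
        """Send an SCPI query, reconnecting once if the connection is dead."""
        for _ in range(2):
            try:
                if self.sock is None:
                    self._connect()
                self.sock.sendall((command + "\n").encode())
                return self.sock.recv(4096).decode().strip()
            except OSError:
                self.sock = None          # drop the stale socket and retry once
        raise ConnectionError(f"device at {self.addr} is unreachable")

# Example (hypothetical instrument): ScpiClient("192.168.10.21").query("*IDN?")
```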
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP032  
About • Received ※ 06 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 11 December 2023 — Issued ※ 15 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP075
Full Scale System Test of Prototype Digitised Waveform System at ISIS  
 
  • K. Koh, R.A. Washington
    STFC/RAL/ISIS, Chilton, Didcot, Oxon, United Kingdom
 
A digitised waveform system (DWS) is in development at the ISIS Neutron and Muon Source to replace the existing Analogue Waveform Switching (AWS) system used for monitoring signals from distributed equipment. While the existing system multiplexes analogue signals into oscilloscopes, the proposed DWS will digitise the signals near their source for display in a PyQt-based application. A proof of concept with 48 channels was previously commissioned and demonstrated the system's feasibility. Further work has since been undertaken to scale the system up to the full capacity of 480 signals. In the full test system, acquisitions are timestamped using precise timing provided by a White Rabbit network, and a configuration service for the system has been implemented. The scaled-up test system is compared with the existing AWS. Whilst it is primarily designed as a replacement for the AWS, the modular architecture makes it flexible enough to fulfil different potential purposes and work with different technologies. This last point is discussed with respect to other proposed upgrades for the ISIS accelerator controls system.  
poster icon Poster THPDP075 [0.845 MB]  
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP080 Gateware and Software for ALS-U Instrumentation 1536
 
  • L.M. Russo, A. Amodio, M.J. Chin, W.E. Norum, K.S. Penney, G.J. Portmann, J.M. Weber
    LBNL, Berkeley, California, USA
 
  Funding: Work supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
The Advanced Light Source Upgrade (ALS-U) is a diffraction-limited light source upgrade project under development at Lawrence Berkeley National Laboratory. The Instrumentation team is responsible for developing hardware, gateware, embedded software and control system integration for diagnostics projects, including the Beam Position Monitor (BPM), Fast Orbit Feedback (FOFB), High-Speed Digitizer (HSD) and Beam Current Monitor (BCM), as well as the Fast Machine Protection System (FMPS) and Timing. This paper describes the gateware and software approach to these projects, its challenges, and the test and integration plans for the novel accumulator and storage rings and transfer lines.
 
poster icon Poster THPDP080 [4.586 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP080  
About • Received ※ 04 October 2023 — Revised ※ 27 October 2023 — Accepted ※ 08 December 2023 — Issued ※ 15 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP081 Exploring Ethernet-Based CAMAC Replacements at ATLAS 1542
 
  • K.J. Bunnell, C. Dickerson, D.J. Novak, D. Stanton
    ANL, Lemont, Illinois, USA
 
  Funding: This work was supported by the US Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357. This research used resources of ANL’s ATLAS facility.
The Argonne Tandem Linear Accelerating System (ATLAS) facility at Argonne National Laboratory is researching ways to avoid a crisis caused by end-of-life issues with its 30-year-old CAMAC system. Replacement parts for CAMAC have long been unavailable, creating the potential for long periods of accelerator downtime once the limited CAMAC spares are exhausted. ATLAS has recently upgraded the Ethernet in the facility from a 100 Mbps (max) network to a 1 Gbps network, so an Ethernet-based data acquisition system is desirable. The data acquisition replacement requires reliability, speed, and longevity to be a viable upgrade to the facility. In addition, the transition from CAMAC to a modern data acquisition system will be done with minimal interruption of operations.
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP081  
About • Received ※ 10 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 13 October 2023 — Issued ※ 20 October 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP090 LCLS-II Accelerator Vacuum Control System Design, Installation and Checkout 1564
 
  • S. Saraf, S.C. Alverson, S. Karimian, C. Lai, S. Nguyen
    SLAC, Menlo Park, California, USA
 
  Funding: Work supported by the U.S. Department of Energy under contract number DE-AC02-76SF00515
The LCLS-II Project at SLAC National Accelerator Laboratory has constructed a new superconducting accelerator that occupies the first kilometer of SLAC's original 2-mile-long linear accelerator tunnel. The LCLS-II vacuum system consists of a combination of particle-free (PF) and non-particle-free (non-PF) areas and multiple independent and interdependent systems, including the beamline vacuum, RF system vacuum, cryogenic system vacuum and support systems vacuum. The vacuum control system incorporates controls and monitoring for a variety of gauges, pumps, valves and Hiden RGAs. The design uses a Programmable Logic Controller (PLC) to perform valve interlocking functions that isolate bad vacuum areas. In PF areas, a voting scheme has been implemented for the slow and fast shutter interlock logic to prevent spurious trips. Additional auxiliary control functions and high-level monitoring of vacuum components are reported to the global control system via an Experimental Physics and Industrial Control System (EPICS) input/output controller (IOC). This paper will discuss the design as well as the phased approach to installation and the successful checkout of the LCLS-II vacuum control system.
https://lcls.slac.stanford.edu/lcls-ii
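As an illustration of the voting idea mentioned above, the sketch below implements a generic m-out-of-n vote over gauge readings; the 2-of-3 choice and the pressure threshold are assumptions, not the LCLS-II configuration.

```python
# Hypothetical m-out-of-n voting sketch in the spirit of the voting scheme
# mentioned in the abstract; the 2-of-3 choice and the threshold are
# assumptions, not the LCLS-II configuration.
def trip_by_vote(pressures_torr: list[float], threshold_torr: float = 1e-7,
                 votes_required: int = 2) -> bool:
    """Trip the shutter interlock only when at least `votes_required` gauges
    agree that pressure is above threshold, so that a single noisy or failed
    gauge cannot cause a spurious trip."""
    votes = sum(1 for p in pressures_torr if p > threshold_torr)
    return votes >= votes_required

print(trip_by_vote([5e-8, 2e-6, 6e-8]))   # one high reading -> no trip
print(trip_by_vote([3e-7, 2e-6, 6e-8]))   # two high readings -> trip
```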
 
poster icon Poster THPDP090 [1.787 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP090  
About • Received ※ 06 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 19 December 2023 — Issued ※ 21 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPDP102 Machine Protection System at SARAF 1573
 
  • A. Gaget, J. Dumas
    CEA-IRFU, Gif-sur-Yvette, France
  • A. Chancé, F. Gougnaud, T.J. Joannem, A. Lotode, S. Monnereau, V. Nadot
    CEA-DRF-IRFU, France
  • H. Isakov, A. Perry, E. Reinfeld, I. Shmuely, N. Tamim, L. Weissman
    Soreq NRC, Yavne, Israel
 
CEA Saclay Irfu is in charge of the major part of the control system of the SARAF-LINAC accelerator at Soreq in Israel. This scope also includes the Machine Protection System, which prevents damage to the accelerator by shutting down the beam when risky incidents are detected, such as interceptive diagnostics left in the beam or vacuum or cooling faults. So far, the system has been used successfully up to the MEBT; it will soon be tested for the superconducting linac, consisting of four cryomodules and 27 cavities. The Machine Protection System relies on three parts: the MRF timing system, which carries the "shut beam" messages coming from any device; IOxOS MTCA boards with custom FPGA developments, which monitor the section beam current transmission along the accelerator; and a Beam Destination Master, which manages the required beam destination. The Destination Master is based on a master PLC and permanently monitors Siemens PLCs in charge of the "slow" detection for fields such as vacuum, cryogenics and the cooling system. The paper describes the architecture of this protection system and the exchanges between these three main parts.  
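A minimal illustration of the section-transmission check that the MTCA boards perform in FPGA logic, written here in Python for clarity; the minimum-transmission threshold and the current values are assumptions, not SARAF settings.

```python
# Illustrative section-transmission check in the spirit of the beam current
# transmission monitoring described in the abstract; threshold and currents
# are assumptions, not SARAF settings.
def transmission_ok(i_in_ma: float, i_out_ma: float,
                    min_transmission: float = 0.95) -> bool:
    """Return False (request beam shutdown) when the fraction of beam current
    surviving a section falls below the allowed minimum."""
    if i_in_ma <= 0.0:
        return True                    # no beam in the section, nothing to protect
    return (i_out_ma / i_in_ma) >= min_transmission

print(transmission_ok(5.0, 4.9))   # 98% transmission -> OK
print(transmission_ok(5.0, 4.0))   # 80% transmission -> shut beam
```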
poster icon Poster THPDP102 [2.104 MB]  
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-THPDP102  
About • Received ※ 04 October 2023 — Revised ※ 10 October 2023 — Accepted ※ 06 December 2023 — Issued ※ 18 December 2023
Cite • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)