System Modelling
Artificial Intelligence & Machine Learning
Paper Title Page
TU1BCO01 A Workflow for Training and Deploying Machine Learning Models to EPICS 244
 
  • M.F. Leputa, K.R.L. Baker, M. Romanovschi
    STFC/RAL/ISIS, Chilton, Didcot, Oxon, United Kingdom
 
  The transition to EPICS as the control system for the ISIS Neutron and Muon Source accelerators is an opportunity to integrate machine learning (ML) into operations more easily. But developing high-quality ML models is not enough: integration into critical operations requires good development practices to ensure stability and reliability during deployment and to allow robust, easy maintenance. For these reasons we implemented a workflow for training and deploying models that utilizes off-the-shelf, industry-standard tools such as MLflow. We discuss how adopting these tools made developers’ lives easier during the training phase of a project. We describe how these tools may be used in an automated deployment pipeline that lets the ML model interact with our EPICS ecosystem through Python-based IOCs within a containerized environment. This reduces the developer effort required to produce GUIs for interacting with the models in the ISIS Main Control Room, as tools familiar to operators, such as Phoebus, may be used.
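The workflow's shape (train a model, register a versioned copy, then load the latest version inside a Python IOC for serving) can be sketched as follows. This is a minimal pure-Python stand-in under stated assumptions: the `ModelRegistry` class only mimics what MLflow's model registry provides, the "model" is a trivial mean predictor, and the PV-serving layer of the EPICS IOC is reduced to a single value; all names are illustrative, not the authors' interfaces.

```python
# Sketch of the train -> register -> deploy shape described in the paper.
# MLflow and the EPICS/IOC layer are stood in by plain Python here.

class ModelRegistry:
    """Minimal stand-in for an MLflow-style model registry."""
    def __init__(self):
        self._versions = {}                      # name -> list of models

    def log_model(self, name, model):
        self._versions.setdefault(name, []).append(model)
        return len(self._versions[name])         # version number

    def load_latest(self, name):
        return self._versions[name][-1]

def train(data):
    """'Training': fit a trivial mean predictor (placeholder for a real model)."""
    mean = sum(data) / len(data)
    return lambda x: mean

# Training side: fit and register.
registry = ModelRegistry()
version = registry.log_model("beam-model", train([1.0, 2.0, 3.0]))

# Deployment side: an IOC process loads the registered model and would
# publish its predictions as process variables (PVs) for Phoebus clients.
model = registry.load_latest("beam-model")
prediction_pv = model(0.0)                       # value a PV record would carry
```

In the pipeline the paper describes, the registry would be MLflow itself and the prediction would be published as an EPICS PV displayable in Phoebus.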
Slides TU1BCO01 [3.370 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU1BCO01  
About • Received ※ 05 October 2023 — Accepted ※ 12 October 2023 — Issued ※ 19 October 2023  
 
TU1BCO02 Integrating System Knowledge in Unsupervised Anomaly Detection Algorithms for Simulation-Based Failure Prediction of Electronic Circuits 249
 
  • F. Waldhauser, H. Boukabache, D. Perrin, S. Roesler
    CERN, Meyrin, Switzerland
  • M. Dazer
    Universität Stuttgart, Stuttgart, Germany
 
  Funding: This work has been sponsored by the Wolfgang Gentner Programme of the German Federal Ministry of Education and Research (grant no. 13E18CHA).
Machine learning algorithms enable failure prediction for large-scale, distributed systems using historical time-series datasets. Although unsupervised learning algorithms can detect an evolving variety of anomalies, they do not provide links between detected data events and system failures. Additional system knowledge is required for machine learning algorithms to determine the nature of detected anomalies, which may represent either healthy system behavior or failure precursors. However, knowledge of failure behavior is expensive to obtain and might only become available after pre-selection of anomalous system states using unsupervised algorithms. Moreover, system knowledge obtained from evaluation of system states needs to be provided to the algorithms in a suitable form to enable performance improvements. In this paper, we present an approach to efficiently configure the integration of system knowledge into unsupervised anomaly detection algorithms for failure prediction. The methodology is based on simulations of failure modes of electronic circuits. Triggering system failures from synthetically generated failure behaviors enables analysis of the detectability of failures and generation of different types of datasets containing system knowledge. In this way, the required type and extent of system knowledge from different sources can be determined, and suitable algorithms allowing the integration of additional data can be identified.
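The two-step idea (an unsupervised detector flags anomalous samples, then separately supplied system knowledge decides whether each flag is a failure precursor or benign) can be illustrated with a toy sketch. The z-score detector, threshold, and precursor labels below are illustrative assumptions, not the paper's algorithms.

```python
# Step 1: unsupervised detection flags outliers without knowing their cause.
# Step 2: system knowledge (e.g. from circuit failure-mode simulations)
# labels each flagged sample as a failure precursor or benign behavior.
import statistics

def detect_anomalies(series, z_thresh=3.0):
    """Indices whose z-score exceeds the threshold."""
    mu = statistics.fmean(series)
    sigma = statistics.pstdev(series)
    return [i for i, x in enumerate(series)
            if sigma > 0 and abs(x - mu) / sigma > z_thresh]

def classify_with_knowledge(indices, known_precursors):
    """Attach meaning to detections using externally supplied knowledge."""
    return {i: ("precursor" if i in known_precursors else "benign")
            for i in indices}

signal = [0.0] * 50 + [9.0] + [0.0] * 49      # one large excursion at index 50
flags = detect_anomalies(signal)
labels = classify_with_knowledge(flags, known_precursors={50})
```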
 
Slides TU1BCO02 [2.541 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU1BCO02  
About • Received ※ 02 October 2023 — Accepted ※ 12 October 2023 — Issued ※ 25 October 2023  
 
TU1BCO03 Systems Modelling, AI/ML Algorithms Applied to Control Systems 257
 
  • S.A. Mnisi
    SARAO, Cape Town, South Africa
 
  Funding: National Research Foundation (South Africa)
The MeerKAT radio telescope in the Karoo, South Africa, with 64 receptors (and 20 more being built), comprises a large number of devices and components connected to the Control-and-Monitoring (CAM) system via the Karoo Array Telescope Communication Protocol (KATCP). KATCP is used extensively for internal communications between CAM components and other subsystems. A KATCP interface exposes requests and sensors; sampling strategies are set on sensors, ranging from several updates per second to infrequent on-change updates. The sensor samples are of different types, from small integers to text fields. The samples and associated timestamps are permanently stored and made available for scientists, engineers, and operators to query and analyze. This presentation shows how to apply machine learning tools that use data-driven algorithms and statistical models to analyze sensor datasets and then draw inferences from identified patterns or make predictions based on them. The algorithms learn from the sensor data as they run against it, unlike traditional rules-based analytics systems that follow explicit instructions. Since this involves data preprocessing, we also describe how the MeerKAT telescope data storage infrastructure (called Katstore) manages the variety, velocity, and volume of this data.
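The two sampling strategies the abstract mentions (periodic updates vs. infrequent on-change updates) can be sketched in a few lines. The function names follow KATCP terminology only loosely; this is not the KATCP library, just an illustration of how the two strategies thin a sensor's (timestamp, value) stream differently.

```python
# Two ways to thin a sensor's sample stream, as in KATCP sampling strategies.

def sample_periodic(readings, every):
    """Emit every N-th (timestamp, value) pair."""
    return readings[::every]

def sample_on_change(readings):
    """Emit a pair only when the value differs from the last emitted one."""
    out = []
    last = object()                  # sentinel: never equal to any reading
    for t, v in readings:
        if v != last:
            out.append((t, v))
            last = v
    return out

readings = [(0, 1), (1, 1), (2, 1), (3, 2), (4, 2), (5, 3)]
periodic = sample_periodic(readings, every=2)
on_change = sample_on_change(readings)
```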
 
Slides TU1BCO03 [1.647 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU1BCO03  
About • Received ※ 06 October 2023 — Revised ※ 09 November 2023 — Accepted ※ 14 December 2023 — Issued ※ 21 December 2023
 
TU1BCO04 Laser Focal Position Correction Using FPGA-Based ML Models 262
 
  • J.A. Einstein-Curtis, S.J. Coleman, N.M. Cook, J.P. Edelen
    RadiaSoft LLC, Boulder, Colorado, USA
  • S.K. Barber, C.E. Berger, J. van Tilborg
    LBNL, Berkeley, California, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Award Number DE-SC 00259037.
High repetition-rate, ultrafast laser systems play a critical role in a host of modern scientific and industrial applications. We present a diagnostic and correction scheme for controlling and determining laser focal position, utilizing fast wavefront sensor measurements from multiple positions to train a focal position predictor. This predictor and additional control algorithms have been integrated into a unified control interface and FPGA-based controller on beamlines at the BELLA facility at LBNL. The system determines corrections for an actuator in a telescope section along the beamline and adjusts an optics section online, providing the desired focal-position correction on millisecond timescales. Our initial proof-of-principle demonstrations leveraged pre-compiled data and pre-trained networks operating ex situ from the laser system. A framework for generating a low-level hardware description of ML-based correction algorithms on FPGA hardware was coupled directly to the beamline using the AMD Xilinx Vitis AI toolchain in conjunction with deployment scripts. Lastly, we consider the use of remote computing resources, such as the Sirepo scientific framework*, to actively update these correction schemes and deploy models to a production environment.
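The shape of the correction loop (a predictor maps a wavefront measurement to focal position, and the controller commands an actuator move that cancels the predicted offset) can be sketched with a toy linear model. The closed-form 1-D least-squares fit, the calibration data, and the gain are illustrative assumptions; the paper's predictor is an ML model compiled to an FPGA.

```python
# Toy predictor + correction: fit focal offset vs. wavefront reading,
# then command the actuator move that cancels the predicted offset.

def fit_linear(xs, ys):
    """Closed-form 1-D least squares: y ~= a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Calibration data: wavefront reading -> measured focal offset (mm), made up.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 1.1, 2.1, 3.1]
a, b = fit_linear(xs, ys)

def correction(wavefront_reading, gain=1.0):
    """Actuator move that cancels the predicted focal offset."""
    predicted_offset = a * wavefront_reading + b
    return -gain * predicted_offset

move = correction(2.0)
```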
* M.S. Rakitin et al., "Sirepo: an open-source cloud-based software interface for X-ray source and optics simulations", Journal of Synchrotron Radiation 25, 1877-1892 (Nov 2018).
 
Slides TU1BCO04 [1.876 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU1BCO04  
About • Received ※ 06 October 2023 — Accepted ※ 14 November 2023 — Issued ※ 18 December 2023  
 
TU1BCO05 Model Driven Reconfiguration of LANSCE Tuning Methods 267
 
  • C.E. Taylor, P.M. Anisimov, S.A. Baily, E.-C. Huang, H.L. Leffler, L. Rybarcyk, A. Scheinker, H.A. Watkins, E.E. Westbrook, D.D. Zimmermann
    LANL, Los Alamos, New Mexico, USA
 
  Funding: National Nuclear Security Administration (NNSA)
This work presents a review of the shift in tuning methods employed at the Los Alamos Neutron Science Center (LANSCE). We explore the tuning categories and methods employed in four key sections of the accelerator: the Low-Energy Beam Transport (LEBT), the Drift Tube Linac (DTL), the Side-Coupled Cavity Linac (CCL), and the High-Energy Beam Transport (HEBT). The study also presents the findings of employing novel software tools and algorithms to enhance each domain’s beam quality and performance. It showcases the efficacy of integrating model-driven and model-independent tuning techniques, along with acceptance and adaptive tuning strategies, to improve the optimization of beam delivery to experimental facilities. The research also addresses prospective strategies for augmenting the control system and diagnostics of LANSCE.
 
Slides TU1BCO05 [2.886 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU1BCO05  
About • Received ※ 06 October 2023 — Revised ※ 08 October 2023 — Accepted ※ 12 December 2023 — Issued ※ 13 December 2023
 
TU1BCO06 Disentangling Beam Losses in The Fermilab Main Injector Enclosure Using Real-Time Edge AI 273
 
  • K.J. Hazelwood, J.M.S. Arnold, M.R. Austin, J.R. Berlioz, P.M. Hanlet, M.A. Ibrahim, A.T. Livaudais-Lewis, J. Mitrevski, V.P. Nagaslaev, A. Narayanan, D.J. Nicklaus, G. Pradhan, A.L. Saewert, B.A. Schupbach, K. Seiya, R.M. Thurman-Keup, N.V. Tran
    Fermilab, Batavia, Illinois, USA
  • J.YC. Hu, J. Jiang, H. Liu, S. Memik, R. Shi, A.M. Shuping, M. Thieme, C. Xu
    Northwestern University, Evanston, Illinois, USA
  • A. Narayanan
    Northern Illinois University, DeKalb, Illinois, USA
 
  The Fermilab Main Injector enclosure houses two accelerators, the Main Injector and the Recycler Ring. During normal operation, high-intensity proton beams exist simultaneously in both. The two accelerators share the same beam loss monitors (BLMs) and monitoring system, so deciphering the origin of any of the 260 BLM readings is often difficult. The (Accelerator) Real-time Edge AI for Distributed Systems project, or READS, has developed an AI/ML model, implemented on fast FPGA hardware, that disentangles mixed beam losses in real time, attributing to each BLM reading a probability of which machine(s) the loss originated from. The model inferences are streamed to the Fermilab accelerator controls network (ACNET), where they are available for operators and experts alike to aid in tuning the machines.
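The attribution task can be illustrated with a toy version: given each machine's expected loss pattern across a few BLMs, turn an observed reading vector into per-machine probabilities. The dot-product score and softmax below are a made-up stand-in for the paper's FPGA neural network, and the template numbers are invented.

```python
# Toy disentangling: score each machine by how well its loss template
# matches the observed BLM pattern, then softmax the scores into
# probabilities attributable to each machine.
import math

def attribute(blm_readings, templates):
    """Per-machine probability from a dot-product score through a softmax."""
    scores = {m: sum(r * t for r, t in zip(blm_readings, tpl))
              for m, tpl in templates.items()}
    z = max(scores.values())                     # stabilise the exponentials
    exps = {m: math.exp(s - z) for m, s in scores.items()}
    total = sum(exps.values())
    return {m: e / total for m, e in exps.items()}

templates = {
    "Main Injector": [1.0, 0.2, 0.0],   # expected relative response per BLM
    "Recycler":      [0.0, 0.2, 1.0],
}
probs = attribute([0.9, 0.1, 0.05], templates)   # pattern resembling MI loss
```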
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TU1BCO06  
About • Received ※ 06 October 2023 — Revised ※ 11 October 2023 — Accepted ※ 15 November 2023 — Issued ※ 06 December 2023
 
TUMBCMO13 Applications of Artificial Intelligence in Laser Accelerator Control System 372
 
  • F.N. Li, K.C. Chen, Z. Guo, Q.Y. He, C. Lin, Q. Wang, Y. Xia, M.X. Zang
    PKU, Beijing, People’s Republic of China
 
  Funding: the National Natural Science Foundation of China (Grants No. 11975037, NO. 61631001 and No. 11921006), and the National Grand Instrument Project (No. 2019YFF01014400 and No. 2019YFF01014404).
Ultra-intense laser-plasma interactions can produce TV/m acceleration gradients, making them promising for compact accelerators. Peking University is constructing a proton radiotherapy system prototype based on PW laser accelerators, but transient processes challenge stability control, which is critical for medical applications. This work demonstrates the application of artificial intelligence (AI) in laser accelerator control systems. To achieve micro-precision alignment between the ultra-intense laser and target, we propose an automated positioning program using the YOLO algorithm. This real-time method employs a convolutional neural network that directly predicts object locations and class probabilities from input images. It enables precise, automatic solid-target alignment in about a hundred milliseconds, reducing experimental preparation time. The YOLO algorithm is also integrated into the safety interlocking system for anti-tailing, allowing quick emergency response. The intelligent control system also enables convenient, accurate beam tuning. We developed high-performance virtual accelerator software using "OpenXAL" and GPU-accelerated multi-particle beam transport simulations. The software allows real-time or custom parameter simulations and features control interfaces compatible with optimization algorithms. By designing tailored objective functions, the desired beam size and distribution can be achieved in a few iterations.
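The alignment step downstream of the detector can be sketched as follows: the network returns a bounding box for the target, and the controller converts the box centre's pixel offset from the beam axis into a stage move. The pixel scale, coordinates, and function names are illustrative assumptions, not the facility's calibration or software.

```python
# From a YOLO-style bounding box to a stage correction that puts the
# target on the beam axis.

def box_center(x, y, w, h):
    """Centre of a (top-left x, y, width, height) bounding box, in pixels."""
    return (x + w / 2.0, y + h / 2.0)

def stage_move(center, beam_axis, um_per_px):
    """(dx, dy) stage correction in micrometres toward the beam axis."""
    return ((beam_axis[0] - center[0]) * um_per_px,
            (beam_axis[1] - center[1]) * um_per_px)

cx, cy = box_center(300, 220, 40, 40)            # detected target at (320, 240)
dx, dy = stage_move((cx, cy), beam_axis=(330, 250), um_per_px=0.5)
```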
 
Slides TUMBCMO13 [1.162 MB]
Poster TUMBCMO13 [1.011 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO13  
About • Received ※ 04 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 23 November 2023 — Issued ※ 23 November 2023
 
TUMBCMO14 Initial Test of a Machine Learning Based SRF Cavity Active Resonance Control 379
 
  • F.Y. Wang, J. Cruz
    SLAC, Menlo Park, California, USA
 
  We introduce a high-precision active motion controller based on machine learning (ML) and an electric piezo actuator. The controller will be used for SRF cavity active resonance control: a data-driven model of the system's motion dynamics is developed first, and a model predictive controller (MPC) is built on it. Simulation results as well as initial test results with real SRF cavities are presented in the paper.
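The structure described, a data-driven dynamics model followed by an MPC built on it, can be sketched with a scalar toy problem. A linear model `x_next = a*x + b*u` stands in for the learned motion dynamics, and a one-step MPC picks the piezo drive `u` minimising `(x_next - target)**2 + lam*u**2`. All coefficients and data are illustrative assumptions, not the paper's model.

```python
# Identify scalar dynamics from data, then solve a one-step MPC analytically.

def fit_dynamics(xs, us, xnexts):
    """Least-squares fit of x_next = a*x + b*u via 2x2 normal equations."""
    sxx = sum(x * x for x in xs)
    suu = sum(u * u for u in us)
    sxu = sum(x * u for x, u in zip(xs, us))
    sxy = sum(x * y for x, y in zip(xs, xnexts))
    suy = sum(u * y for u, y in zip(us, xnexts))
    det = sxx * suu - sxu * sxu
    return ((sxy * suu - suy * sxu) / det,
            (suy * sxx - sxy * sxu) / det)

def mpc_step(a, b, x, target, lam=0.0):
    """Analytic minimiser of (a*x + b*u - target)**2 + lam*u**2 over u."""
    return b * (target - a * x) / (b * b + lam)

# Synthetic identification data generated from true a=0.9, b=0.5.
xs, us = [1.0, 2.0, 0.5], [0.0, 1.0, -1.0]
xnexts = [0.9 * x + 0.5 * u for x, u in zip(xs, us)]
a, b = fit_dynamics(xs, us, xnexts)
u = mpc_step(a, b, x=1.0, target=0.0)    # drive that cancels the detuning
```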
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO14  
About • Received ※ 03 October 2023 — Revised ※ 14 November 2023 — Accepted ※ 27 November 2023 — Issued ※ 09 December 2023
 
TUMBCMO15 Enhancing Electronic Logbooks Using Machine Learning 382
 
  • J. Maldonado, S.L. Clark, W. Fu, S. Nemesure
    BNL, Upton, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704
The electronic logbook (elog) system used at Brookhaven National Laboratory’s Collider-Accelerator Department (C-AD) allows users to customize logbook settings, including specification of favorite logbooks. Using machine learning techniques, customizations can be further personalized to provide users with a view of entries that match their specific interests. We will utilize natural language processing (NLP), optical character recognition (OCR), and topic models to augment the elog system. NLP techniques will be used to process and classify text entries. To analyze entries that include images with text, such as screenshots of controls system applications, we will apply OCR. Topic models will generate entry recommendations that will be compared to previously tested language processing models. We will develop a command-line interface tool to ease automation of NLP tasks in the controls system and create a web interface to test entry recommendations. This technique will create recommendations for each user, providing custom sets of entries and possibly eliminating the need for manual searching.
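The recommendation idea can be illustrated with a bag-of-words toy: score elog entries against a user's interest profile by cosine similarity and return the best matches. The real system would use NLP and topic models; the vocabulary, entries, and profile below are invented.

```python
# Toy elog recommender: bag-of-words cosine similarity between a user
# profile and each entry, highest-scoring entries first.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(entries, profile, top_k=1):
    vec = lambda text: Counter(text.lower().split())
    scored = sorted(entries, key=lambda e: cosine(vec(profile), vec(e)),
                    reverse=True)
    return scored[:top_k]

entries = ["rf cavity trip during ramp",
           "vacuum gauge replaced in sector 4",
           "cryo valve interlock reset"]
best = recommend(entries, profile="rf interlock trip", top_k=1)
```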
 
Slides TUMBCMO15 [0.905 MB]
Poster TUMBCMO15 [4.697 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUMBCMO15  
About • Received ※ 04 October 2023 — Revised ※ 12 October 2023 — Accepted ※ 24 November 2023 — Issued ※ 10 December 2023
 
TUPDP020 Summary Report on Machine Learning-Based Applications at the Synchrotron Light Source Delta 537
 
  • D. Schirmer, S. Khan, A. Radha Krishnan
    DELTA, Dortmund, Germany
 
  In recent years, several control system applications using machine learning (ML) techniques have been developed and tested to automate the control and optimization of the 1.5 GeV synchrotron radiation source DELTA. These applications cover a wide range of tasks, including electron beam position correction, working point control, chromaticity adjustment, injection process optimization, and coherent harmonic generation (CHG) spectra analysis. Various machine learning techniques have been used to implement these projects. This report provides an overview of these projects, summarizes the current results, and indicates ideas for future improvements.
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP020  
About • Received ※ 04 October 2023 — Accepted ※ 06 December 2023 — Issued ※ 13 December 2023  
 
TUPDP102 Leveraging Local Intelligence to Industrial Control Systems through Edge Technologies 793
 
  • A. Patil, F. Ghawash, B. Schofield, F. Varela
    CERN, Meyrin, Switzerland
  • D. Daniel, K. Kaufmann, A.S. Sündermann
    SAGÖ, Vienna, Austria
  • C. Kern
    Siemens AG, Corporate Technology, München, Germany
 
  Industrial processes often use advanced control algorithms such as Model Predictive Control (MPC) and Machine Learning (ML) to improve performance and efficiency. However, deploying these algorithms can be challenging, particularly when they require significant computational resources and involve complex communication protocols between different control system components. To address these challenges, we showcase an approach leveraging industrial edge technologies to deploy such algorithms. An edge device is a compact and powerful computing device placed at the network’s edge, close to the process control. It executes the algorithms without extensive communication with other control system components, thus reducing latency and load on the central control system. We also employ an analytics function platform to manage the life cycle of the algorithms, including modifications and replacements, without disrupting the industrial process. Furthermore, we demonstrate a use case where an MPC algorithm is run on an edge device to control a Heating, Ventilation, and Air Conditioning (HVAC) system. An edge device running the algorithm can analyze data from temperature sensors, perform complex calculations, and adjust the operation of the HVAC system accordingly. In summary, our approach of utilizing edge technologies enables us to overcome the limitations of traditional approaches to deploying advanced control algorithms in industrial settings, providing more intelligent and efficient control of industrial processes.  
Poster TUPDP102 [3.321 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP102  
About • Received ※ 06 October 2023 — Revised ※ 21 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 12 December 2023
 
TUPDP114 Machine Learning Based Noise Reduction of Neutron Camera Images at ORNL 841
 
  • I.V. Pogorelov, J.P. Edelen, M.J. Henderson, M.C. Kilpatrick
    RadiaSoft LLC, Boulder, Colorado, USA
  • S. Calder, B. Vacaliuc
    ORNL RAD, Oak Ridge, Tennessee, USA
  • R.D. Gregory, G.S. Guyotte, C.M. Hoffmann, B.K. Krishna
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Science under Award Number DE-SC0021555.
Neutron cameras are utilized at the HB-2A powder diffractometer to image the sample for alignment in the beam. Neutron cameras are typically quite noisy, as they are constantly being irradiated. Removing this noise is challenging due to the irregular nature of the pixel-intensity fluctuations and their tendency to change over time. RadiaSoft has developed a novel noise reduction method for neutron cameras that inscribes a lower envelope of the image signal. This process is then sped up using machine learning. Here we report the results of our noise reduction method and describe our machine learning approach for speeding up the algorithm for use during operations.
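The "lower envelope" idea, as we read the abstract, can be sketched as a running minimum: sliding a window over the signal and keeping the local minimum clips isolated positive noise spikes (hot pixels under constant irradiation) to the local baseline. The window size and data are illustrative, and the paper's ML component (an accelerated surrogate of this computation) is omitted.

```python
# Running-minimum lower envelope over a 1-D signal; isolated positive
# spikes are replaced by the surrounding baseline.

def lower_envelope(signal, half_width):
    """Running minimum over a centred window of the given half-width."""
    n = len(signal)
    return [min(signal[max(0, i - half_width): i + half_width + 1])
            for i in range(n)]

noisy = [1.0, 1.0, 5.0, 1.0, 1.0]     # isolated hot-pixel spike at index 2
cleaned = lower_envelope(noisy, half_width=2)
```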
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP114  
About • Received ※ 07 October 2023 — Revised ※ 22 October 2023 — Accepted ※ 11 December 2023 — Issued ※ 16 December 2023
 
TUPDP115 Machine Learning for Compact Industrial Accelerators 846
 
  • J.P. Edelen, J.A. Einstein-Curtis, M.J. Henderson, M.C. Kilpatrick
    RadiaSoft LLC, Boulder, Colorado, USA
  • J.A. Diaz Cruz, A.L. Edelen
    SLAC, Menlo Park, California, USA
 
  Funding: This material is based upon work supported by the DOE Accelerator R&D and Production under Award Number DE-SC0023641.
The industrial and medical accelerator industry is an ever-growing field, with advancements in accelerator technology enabling its adoption for new applications. As the complexity of industrial accelerators grows, so does the need for more sophisticated control systems to regulate their operation. Moreover, the environment for industrial and medical accelerators is often harsh and noisy, as opposed to the more controlled environment of a laboratory-based machine, which makes control more challenging. Additionally, instrumentation for industrial accelerators is limited, making it difficult at times to identify and diagnose problems when they occur. RadiaSoft has partnered with SLAC to develop new machine learning methods for control and anomaly detection for industrial accelerators. Our approach is to develop our methods using simulation models, followed by testing on experimental systems. Here we present initial results using simulations of a room-temperature S-band system.
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP115  
About • Received ※ 06 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 18 December 2023  
 
TUPDP116 Machine Learning Based Sample Alignment at TOPAZ 851
 
  • M.J. Henderson, J.P. Edelen, M.C. Kilpatrick, I.V. Pogorelov
    RadiaSoft LLC, Boulder, Colorado, USA
  • S. Calder, B. Vacaliuc
    ORNL RAD, Oak Ridge, Tennessee, USA
  • R.D. Gregory, G.S. Guyotte, C.M. Hoffmann, B.K. Krishna
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Science under Award Number DE-SC0021555.
Neutron scattering experiments are a critical tool for exploring molecular structure in compounds. The TOPAZ single-crystal diffractometer at the Spallation Neutron Source studies samples by illuminating them with neutron beams of different energies and recording the scattered neutrons. During an experiment, the user changes temperature and sample position to illuminate different crystal faces and to study the sample in different environments. Maintaining alignment of the sample during this process is key to ensuring high-quality data are collected. At present this process is performed manually by beamline scientists. RadiaSoft, in collaboration with the beamline scientists and engineers at ORNL, has developed new machine-learning-based alignment software that automates this process. We use a fully convolutional neural network configured in a U-Net architecture to identify the sample center of mass. We then move the sample using a custom Python-based EPICS IOC interfaced with the motors. In this talk we provide an overview of our machine learning tools and show our initial results aligning samples at ORNL.
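The step downstream of the network can be sketched directly: compute the intensity-weighted centre of mass of the segmented image, then derive the motor moves that bring it to the beam centre. The toy array, pixel scale, and function names are illustrative assumptions; the real pipeline uses a U-Net and an EPICS IOC.

```python
# Centre of mass of a 2-D intensity map, then the motor correction that
# centres it on the beam.

def center_of_mass(image):
    """(row, col) intensity-weighted centroid of a 2-D list of numbers."""
    total = sum(sum(row) for row in image)
    r = sum(i * sum(row) for i, row in enumerate(image)) / total
    c = sum(j * v for row in image for j, v in enumerate(row)) / total
    return r, c

def motor_moves(com, beam_center, mm_per_px):
    """Per-axis motor moves (mm) that place the centroid on the beam centre."""
    return tuple((b - x) * mm_per_px for x, b in zip(com, beam_center))

image = [[0, 0, 0],
         [0, 4, 0],
         [0, 0, 0]]
com = center_of_mass(image)
moves = motor_moves(com, beam_center=(1.0, 2.0), mm_per_px=0.1)
```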
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP116  
About • Received ※ 06 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 11 December 2023  
 
TUPDP117 Classification and Prediction of Superconducting Magnet Quenches 856
 
  • J.A. Einstein-Curtis, J.P. Edelen, M.C. Kilpatrick, R. O’Rourke
    RadiaSoft LLC, Boulder, Colorado, USA
  • K.A. Drees, J.S. Laster, M. Valette
    BNL, Upton, New York, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Award Number DE-SC0021699.
Robust and reliable quench detection for superconducting magnets is increasingly important as facilities push the boundaries of intensity and operational runtime. RadiaSoft has been working with Brookhaven National Laboratory on quench detection and prediction for superconducting magnets installed in the RHIC storage rings. This project has analyzed several years of power supply and beam position monitor data to train automated classification tools and automated quench-precursor determination based on input sequences. Classification was performed using supervised multilayer perceptron and boosted decision tree architectures, while models of the expected operation of the ring were developed using a variety of autoencoder architectures. We have continued efforts to maximize the area under the receiver operating characteristic curve for the multi-class classification problem of real-quench, fake-quench, and no-quench events. We have also begun work on long short-term memory (LSTM) and other recurrent architectures for quench prediction. We discuss future work utilizing more robust architectures, such as variational autoencoders and Siamese models, as well as methods necessary for uncertainty quantification.
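The metric the project maximizes, area under the ROC curve, can be computed for a binary quench vs. no-quench split with the rank-sum (Mann-Whitney) identity: AUC is the probability that a randomly chosen positive scores higher than a randomly chosen negative, counting ties as one half. The scores and labels below are made up for illustration.

```python
# ROC AUC via the Mann-Whitney identity, no curve construction needed.

def roc_auc(labels, scores):
    """AUC = P(score of a positive > score of a negative); ties count 1/2."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0]                 # 1 = real quench, 0 = no quench
scores = [0.9, 0.6, 0.4, 0.2]         # classifier that separates perfectly
auc = roc_auc(labels, scores)
```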
 
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP117  
About • Received ※ 08 October 2023 — Revised ※ 22 October 2023 — Accepted ※ 05 December 2023 — Issued ※ 07 December 2023
 
TUPDP138 Exploratory Data Analysis on the RHIC Cryogenics System Compressor Dataset 907
 
  • Y. Gao, K.A. Brown, R.J. Michnoff, L.K. Nguyen, A.Z. Zarcone, B. van Kuik
    BNL, Upton, New York, USA
  • A.D. Tran
    FRIB, East Lansing, Michigan, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
The Relativistic Heavy Ion Collider (RHIC) Cryogenic Refrigerator System is the cryogenic heart that allows the RHIC superconducting magnets to operate. The refrigerator includes two stages of compression, comprising ten first-stage and five second-stage compressors. Compressors are critical for operations: when a compressor faults, it can impact RHIC beam operations if a spare compressor is not brought online as soon as possible. Applying machine learning to detect compressor problems before a fault occurs would greatly enhance Cryo operations, allowing an operator to switch to a spare compressor before a running compressor fails and minimizing impacts on RHIC operations. In this work, various data analysis results on historical compressor data are presented. We demonstrate an autoencoder-based method that can catch early signs of compressor trips, so that advance notice can be given for the operators to take action.
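The reconstruction-error alarm at the heart of the autoencoder approach can be sketched with a toy stand-in: a model "trained" on healthy compressor data reconstructs incoming samples, and a large reconstruction error raises an early-warning flag. Here a per-channel mean replaces the autoencoder, and all numbers and channel names are invented.

```python
# Reconstruction-error early warning: train on healthy data, alarm when
# a new sample reconstructs poorly (i.e. deviates from learned behavior).

def fit_normal_model(samples):
    """'Train' on healthy data: remember the per-channel mean."""
    n, dims = len(samples), len(samples[0])
    return [sum(s[d] for s in samples) / n for d in range(dims)]

def reconstruction_error(means, sample):
    """Squared error between the sample and its 'reconstruction'."""
    return sum((x - m) ** 2 for x, m in zip(sample, means))

def alarm(means, sample, threshold):
    return reconstruction_error(means, sample) > threshold

# Healthy training data: (discharge pressure, motor current), made up.
means = fit_normal_model([[10.0, 1.0], [10.2, 1.1], [9.8, 0.9]])
ok = alarm(means, [10.1, 1.0], threshold=0.5)     # healthy sample
trip = alarm(means, [14.0, 2.5], threshold=0.5)   # drifting toward a trip
```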
 
Poster TUPDP138 [2.897 MB]
DOI • reference for this paper ※ doi:10.18429/JACoW-ICALEPCS2023-TUPDP138  
About • Received ※ 05 October 2023 — Revised ※ 22 October 2023 — Accepted ※ 30 November 2023 — Issued ※ 11 December 2023