Keyword: status
Paper · Title · Other Keywords · Page
MO4BCO02 Lessons from Using Python GraphQL Libraries to Develop an EPICS PV Server for Web UIs EPICS, controls, ECR, factory 191
 
  • R.J. Auger-Williams
    OSL, St Ives, Cambridgeshire, United Kingdom
  • A.L. Alexander, T.M. Cobb, M.J. Gaughran, A.J. Rose, A.W.R. Wells, A.A. Wilson
    DLS, Oxfordshire, United Kingdom
 
Diamond Light Source is currently developing a web-based EPICS control system User Interface (UI). This will replace the use of EDM and the Eclipse-based CS-Studio at Diamond, and it will integrate with future Acquisition and Analysis software. For interoperability, it will use the Phoebus BOB file format. The architecture consists of a back-end application that uses EPICS Python libraries to obtain PV data and the query language GraphQL to serve these data to a React-based front end. A prototype was developed in 2021, and we are now building on it to meet the first use cases. Our current work focuses on the back-end application, Coniql; for the query interface we have selected the Strawberry GraphQL implementation from the many GraphQL libraries available. We discuss the reasons for this decision, highlight the issues that arose with GraphQL, and outline our solutions. Using a set of performance metrics, we also demonstrate how well these libraries perform against the EPICS web UI requirements. Finally, we provide a summary of our development plans.
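
As an illustration of the query-interface choice discussed above, the following minimal sketch (not Coniql itself) shows how a PV value could be exposed through a Strawberry GraphQL subscription; the PvValue type and the placeholder update loop stand in for a real EPICS client feeding monitor callbacks.

import asyncio
from typing import AsyncGenerator

import strawberry


@strawberry.type
class PvValue:
    name: str
    value: float


@strawberry.type
class Query:
    @strawberry.field
    def ping(self) -> str:
        return "ok"


@strawberry.type
class Subscription:
    @strawberry.subscription
    async def pv(self, name: str) -> AsyncGenerator[PvValue, None]:
        # Placeholder update loop: a real back end would subscribe to the PV
        # through an EPICS client library and yield on each monitor callback.
        value = 0.0
        while True:
            yield PvValue(name=name, value=value)
            value += 1.0
            await asyncio.sleep(1.0)


schema = strawberry.Schema(query=Query, subscription=Subscription)
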
Slides: MO4BCO02 [4.243 MB]
DOI: 10.18429/JACoW-ICALEPCS2023-MO4BCO02
Received: 29 September 2023 — Accepted: 13 October 2023 — Issued: 20 October 2023
 
TUPDP014 Bluesky Web Client at BESSY II experiment, controls, interface, real-time 518
 
  • H.L. He, G. Preuß, S.S. Sachse, W. Smith
    HZB, Berlin, Germany
  • R. Ovsyannikov
    BESSY GmbH, Berlin, Germany
 
  Funding: Helmholtz-Zentrum Berlin
Building on the existing Bluesky control framework at BESSY II, a React web client based on the Bluesky HTTP Server is being developed. The aim is a cross-platform, cross-device system for remote control and monitoring of experiments. The system currently supports monitoring of the Bluesky Queue Server status, control of a Bluesky Run Engine environment, browsing of the Queue Server history, and editing and running of Bluesky plans. Challenges around the presentation of live data are explored. This work builds on the React-based web interface created at NSLS-II and implements a tool for BESSY II.
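
For context, a minimal Python sketch of the kind of requests such a web client sends to the Bluesky HTTP Server is shown below; the port, the API-key header, and the endpoint paths (/api/status, /api/queue/get) are assumptions to be checked against the deployed server's documentation.

import requests

BASE = "http://localhost:60610"               # assumed httpserver address
HEADERS = {"Authorization": "ApiKey SECRET"}  # assumed single-user API key


def queue_status() -> dict:
    """Return the Run Engine / queue status reported by the server."""
    resp = requests.get(f"{BASE}/api/status", headers=HEADERS, timeout=5)
    resp.raise_for_status()
    return resp.json()


def queue_items() -> list:
    """Return the list of queued plan items."""
    resp = requests.get(f"{BASE}/api/queue/get", headers=HEADERS, timeout=5)
    resp.raise_for_status()
    return resp.json().get("items", [])


if __name__ == "__main__":
    print(queue_status().get("manager_state"))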
 
Poster: TUPDP014 [0.311 MB]
DOI: 10.18429/JACoW-ICALEPCS2023-TUPDP014
Received: 29 September 2023 — Accepted: 01 December 2023 — Issued: 11 December 2023
 
TUPDP035 New Developments for eGiga2m Historic Database Web Visualizer database, controls, extraction, factory 588
 
  • L. Zambon, R. Passuello
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
 
eGiga has been a historic-database web visualizer since 2002. Initially it was connected to a proprietary database schema; support for other schemas, such as HDB and HDB++, was added later. eGiga was deeply refactored in 2015, becoming eGiga2m. Between 2022 and 2023 several improvements were made: optimization of large data extraction, better image and PDF export, replacement of the 3D chart library with a touch-screen-enabled one, and the addition of logger status information, a new responsive canvas chart library, an adjustable splitter, support for TimescaleDB and the HDF5 data format, correlation and time-series analysis, and an ARIMA (autoregressive integrated moving average) forecast.
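
As a pointer to the forecasting feature mentioned above, here is a minimal sketch of an ARIMA forecast with statsmodels; the series and the (p, d, q) order are illustrative placeholders, not eGiga2m's actual implementation.

import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Placeholder history: hourly samples of an archived value.
history = pd.Series(
    [1.0, 1.2, 1.1, 1.3, 1.4, 1.3, 1.5, 1.6],
    index=pd.date_range("2023-01-01", periods=8, freq="h"),
)

model = ARIMA(history, order=(1, 1, 1))   # illustrative (p, d, q)
fitted = model.fit()
forecast = fitted.forecast(steps=4)       # predict the next four samples
print(forecast)
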
Poster: TUPDP035 [0.821 MB]
DOI: 10.18429/JACoW-ICALEPCS2023-TUPDP035
Received: 05 October 2023 — Revised: 11 October 2023 — Accepted: 14 December 2023 — Issued: 17 December 2023
 
TUPDP092 Life Cycle Management and Reliability Analysis of Controls Hardware Using Operational Data From EAM operation, hardware, controls, electron 758
 
  • E. Fortescue, I. Kozsar, V. Schramm
    CERN, Meyrin, Switzerland
 
The use of operational data from Enterprise Asset Management (EAM) systems has become an increasingly popular approach for conducting reliability analysis of industrial equipment. This paper presents a case study of how EAM data was used to analyse the reliability of CERN’s standard controls hardware, deployed and maintained by the Controls Electronics and Mechatronics group. The first part of the study involved the extraction, treatment and analysis of state-transition data to detect failures. The analysis was conducted using statistical methods, including failure-rate and time-to-failure analysis, to identify trends in equipment performance and to plan for future obsolescence, upgrades and replacement strategies. The results of the analysis are available via a dynamic online dashboard. The second part of the study considers front-end computers as repairable systems composed of the previously studied non-repairable modules; their faults were recorded and analysed using the Accelerator Fault Tracking system. The study brought to light the need for high-quality data, which led to improvements in the data recording process and refinement of the infrastructure team’s workflow. In the future, reliability analysis will become even more critical for ensuring the cost-effective and efficient operation of control systems for accelerators. This study demonstrates the potential of EAM operational data to provide valuable insights into equipment reliability and to inform decision-making for repairable and non-repairable systems.
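
To make the time-to-failure step concrete, the sketch below derives failure intervals from state-transition records with pandas; the column names and states are assumptions standing in for the actual EAM export, not the CERN analysis code.

import pandas as pd

# Assumed export: one row per state transition of a hardware module.
transitions = pd.DataFrame(
    {
        "asset_id": ["M1", "M1", "M1", "M2", "M2"],
        "state": ["IN_SERVICE", "FAILED", "IN_SERVICE", "IN_SERVICE", "FAILED"],
        "timestamp": pd.to_datetime(
            ["2021-01-01", "2022-03-01", "2022-03-10", "2021-06-01", "2023-01-15"]
        ),
    }
)

transitions = transitions.sort_values(["asset_id", "timestamp"])
prev_state = transitions.groupby("asset_id")["state"].shift()
prev_time = transitions.groupby("asset_id")["timestamp"].shift()

# Time to failure = duration of each IN_SERVICE -> FAILED interval.
failures = transitions[(transitions["state"] == "FAILED") & (prev_state == "IN_SERVICE")]
ttf = (failures["timestamp"] - prev_time[failures.index]).dt.days

print("observed failures:", len(ttf))
print("mean time to failure (days):", ttf.mean())
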
Poster: TUPDP092 [40.179 MB]
DOI: 10.18429/JACoW-ICALEPCS2023-TUPDP092
Received: 04 October 2023 — Revised: 11 October 2023 — Accepted: 05 December 2023 — Issued: 12 December 2023
 
WE3BCO03 Data Management for Tracking Optic Lifetimes at the National Ignition Facility optics, database, site, laser 1012
 
  • R.D. Clark, L.M. Kegelmeyer
    LLNL, Livermore, California, USA
 
  Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
The National Ignition Facility (NIF), the most energetic laser in the world, employs over 9000 optics to reshape, amplify, redirect, smooth, focus, and convert the wavelength of laser light as it travels along 192 beamlines. Underlying the management of these optics is an extensive Oracle database storing details of the entire life of each optic, from the time it leaves the vendor to the time it is retired. This journey includes testing and verification, preparing, installing, monitoring, removing, and in some cases repairing and re-using the optics. This talk will address the data structures and processes that enable storing information about each step, such as identifying where an optic is in its lifecycle and tracking damage through time. We will describe tools for reporting status and enabling key decisions, such as which damage sites should be blocked or repaired and which optics exchanged. Managing relational information and ensuring its integrity is key to managing the status and inventory of optics for NIF.
LLNL Release Number: LLNL-ABS-847598
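
As a rough illustration of the relational structures described above, the sketch below creates three tables for optics, lifecycle steps, and damage sites; the table and column names are hypothetical and do not reflect the NIF Oracle schema.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE optic (
        optic_id     TEXT PRIMARY KEY,
        optic_type   TEXT NOT NULL,
        vendor       TEXT,
        status       TEXT NOT NULL       -- e.g. received, installed, retired
    );
    CREATE TABLE lifecycle_event (
        event_id     INTEGER PRIMARY KEY,
        optic_id     TEXT REFERENCES optic(optic_id),
        step         TEXT NOT NULL,      -- test, prepare, install, remove, repair
        event_time   TEXT NOT NULL
    );
    CREATE TABLE damage_site (
        site_id      INTEGER PRIMARY KEY,
        optic_id     TEXT REFERENCES optic(optic_id),
        first_seen   TEXT NOT NULL,
        disposition  TEXT                -- e.g. monitor, block, repair
    );
    """
)
conn.commit()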
 
Slides: WE3BCO03 [2.379 MB]
DOI: 10.18429/JACoW-ICALEPCS2023-WE3BCO03
Received: 26 September 2023 — Revised: 09 October 2023 — Accepted: 13 October 2023 — Issued: 24 October 2023
 
TH1BCO01 Five years of EPICS 7 - Status Update and Roadmap EPICS, controls, network, site 1087
 
  • R. Lange
    ITER Organization, St. Paul lez Durance, France
  • L.R. Dalesio, M.A. Davidsaver, G.S. McIntyre
    Osprey DCS LLC, Ocean City, USA
  • S.M. Hartman, K.-U. Kasemir
    ORNL, Oak Ridge, Tennessee, USA
  • A.N. Johnson, S. Veseli
    ANL, Lemont, Illinois, USA
  • H. Junkes
    FHI, Berlin, Germany
  • T. Korhonen, S.C.F. Rose
    ESS, Lund, Sweden
  • M.R. Kraimer
    Self Employment, Private address, USA
  • K. Shroff
    BNL, Upton, New York, USA
  • G.R. White
    SLAC, Menlo Park, California, USA
 
  Funding: Work supported in part by the U.S. Department of Energy under contracts DE-AC02-76SF00515 and DE-AC05-00OR22725.
After its first release in 2017, EPICS version 7 has been introduced into production at several sites. The central feature of EPICS 7, the support of structured data through the new pvAccess network protocol, has been proven to work in large production systems. EPICS 7 facilitates the implementation of new functionality, including developing AI/ML applications in controls, managing large data volumes, interfacing to middle-layer services, and more. Other features like support for the IPv6 protocol and enhancements to access control have been implemented. Future work includes integrating a refactored API into the core distribution, adding modern network security features, as well as developing new and enhancing existing services that take advantage of these new capabilities. The talk will give an overview of the status of deployments, new additions to the EPICS Core, and an overview of its planned future development.
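
For readers new to EPICS 7, a minimal sketch of reading and monitoring a structured PV over pvAccess with the Python p4p client follows; the PV name is a placeholder.

from p4p.client.thread import Context

ctx = Context("pva")

# Read the full structure once (assumes the PV exists and is reachable).
value = ctx.get("DEMO:STRUCTURED:PV")
print(value)

# Monitor: the callback receives each update of the whole structure.
def on_update(update):
    print("update:", update)

sub = ctx.monitor("DEMO:STRUCTURED:PV", on_update)
# ... run for as long as updates are needed, then clean up:
sub.close()
ctx.close()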
 
Slides: TH1BCO01 [0.562 MB]
DOI: 10.18429/JACoW-ICALEPCS2023-TH1BCO01
Received: 04 October 2023 — Revised: 12 October 2023 — Accepted: 19 November 2023 — Issued: 24 November 2023
 
TH1BCO04 Asynchronous Execution of Tango Commands in the SKA Telescope Control System: An Alternative to the Tango Async Device TANGO, controls, GUI, network 1108
 
  • B.A. Ojur, A.J. Venter
    SARAO, Cape Town, South Africa
  • D. Devereux
    CSIRO, Clayton, Australia
  • D. Devereux, S.N. Twum, S. Vrcic
    SKAO, Macclesfield, United Kingdom
 
Equipment controlled by the Square Kilometre Array (SKA) Control System will have a TANGO interface for control and monitoring. Commands on TANGO device servers have a 3000-millisecond window to complete their execution and return to the client. This timeout places a limitation on some commands used on SKA TANGO devices which take longer than 3000 milliseconds to complete; the threshold is stricter still in the SKA Control System (CS) Guidelines. Such a command, identified as a Long Running Command (LRC), needs to be executed asynchronously to circumvent the timeout. TANGO supports asynchronous devices, which allow commands to run for longer than 3000 milliseconds by using a coroutine to put the task on an event loop. During the exploration of this, a decision was made to implement a custom approach in our base repository, on which all devices depend. In this approach, every command annotated as "long running" is handed over to a thread to complete the task, and its progress is tracked through attributes. These attributes report the queued commands along with their progress, status and results. The client is provided with a unique identifier which can be used to track the execution of the LRC and take further action based on the outcome of that command. LRCs can be aborted safely using a custom TANGO command. We present the reference design and implementation of Long Running Commands for the SKA Control System.
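
The sketch below illustrates the general pattern described above with PyTango: a command hands its work to a worker thread, returns a unique identifier immediately, and exposes progress through an attribute. It is a simplified stand-in for, not a copy of, the SKA base-class implementation.

import threading
import time
import uuid

from tango.server import Device, attribute, command, run


class LrcDemo(Device):
    """Toy device showing the long-running-command pattern."""

    def init_device(self):
        super().init_device()
        self._progress = {}     # command ID -> progress string
        self._last_id = ""

    @attribute(dtype=str)
    def last_progress(self):
        # Report the most recently started command and its progress.
        if not self._last_id:
            return "no commands queued"
        return f"{self._last_id}: {self._progress[self._last_id]}"

    @command(dtype_out=str)
    def SlowTask(self):
        # Return a unique ID immediately; the work continues in a thread.
        cmd_id = str(uuid.uuid4())
        self._progress[cmd_id] = "QUEUED"
        self._last_id = cmd_id
        threading.Thread(target=self._do_work, args=(cmd_id,), daemon=True).start()
        return cmd_id

    def _do_work(self, cmd_id):
        for pct in (25, 50, 75, 100):
            time.sleep(2)       # stands in for work exceeding the 3 s window
            self._progress[cmd_id] = f"{pct}%"
        self._progress[cmd_id] = "COMPLETED"


if __name__ == "__main__":
    run((LrcDemo,))
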
Slides: TH1BCO04 [0.674 MB]
DOI: 10.18429/JACoW-ICALEPCS2023-TH1BCO04
Received: 06 October 2023 — Revised: 24 October 2023 — Accepted: 20 December 2023 — Issued: 22 December 2023
 
TH2AO02 High Availability Alarm System Deployed with Kubernetes monitoring, interface, feedback, site 1134
 
  • J.J. Bellister, T. Schwander, T. Summers
    SLAC, Menlo Park, California, USA
 
To support multiple scientific facilities at SLAC, a modern alarm system designed for availability, integrability, and extensibility is required. The new alarm system deployed at SLAC fulfills these requirements by blending the Phoebus alarm server with existing open-source technologies for deployment, management, and visualization. To deliver a high-availability deployment, Kubernetes was chosen for orchestration of the system. By deploying all parts of the system as containers with Kubernetes, each component becomes robust to failures, self-healing, and readily recoverable. Well-supported Kubernetes Operators were selected to manage Kafka and Elasticsearch in accordance with current best practices, using high-level declarative deployment files to shift deployment details into the software itself and facilitate nearly seamless future upgrades. A git-sync-based process restarts the alarm server automatically when configuration files change, eliminating the need for sysadmin intervention. To encourage increased accelerator operator engagement, multiple interfaces are provided for interacting with alarms. Grafana dashboards offer a user-friendly way to build displays with minimal code, while a custom Python client allows for direct consumption from the Kafka message queue and access to any information logged by the system.
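
As an example of the kind of custom Python client mentioned above, the following sketch consumes alarm messages from Kafka with kafka-python; the topic name, broker address, and message fields are assumptions about the local deployment.

import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "Accelerator",                          # assumed alarm topic name
    bootstrap_servers="localhost:9092",     # assumed broker address
    value_deserializer=lambda v: json.loads(v) if v else None,
)

for message in consumer:
    if message.value is None:
        continue                            # compacted/tombstone record
    severity = message.value.get("severity")
    if severity and severity != "OK":
        print(message.key, severity, message.value.get("message"))
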
Slides: TH2AO02 [0.798 MB]
DOI: 10.18429/JACoW-ICALEPCS2023-TH2AO02
Received: 06 October 2023 — Revised: 09 October 2023 — Accepted: 14 December 2023 — Issued: 18 December 2023
 
THMBCMO10 SECoP Integration for the Ophyd Hardware Abstraction Layer hardware, interface, controls, EPICS 1212
 
  • P. Wegmann, K. Kiefer, O. Mannix, L. Rossa, W. Smith
    HZB, Berlin, Germany
  • E. Faulhaber
    MLZ, Garching, Germany
  • M. Zolliker
    PSI, Villigen PSI, Switzerland
 
At the core of the Bluesky experimental control ecosystem, the ophyd hardware abstraction layer provides a consistent high-level interface that is extremely powerful for complex device integration. It introduces the device data model to EPICS and eases the integration of alien control protocols. This paper focuses on the integration of the Sample Environment Communication Protocol (SECoP)* into the ophyd layer, enabling seamless incorporation of sample environment hardware into beamline experiments at photon and neutron sources. The SECoP integration was designed to have a simple interface and provide plug-and-play functionality while preserving all metadata and structural information about the controlled hardware. Leveraging the self-describing characteristics of SECoP, ophyd devices are generated and configured automatically upon connecting to a Sample Environment Control (SEC) node. This work builds upon a modified SECoP client provided by the Frappy framework**, intended for programming SEC nodes with a SECoP interface. This paper presents an overview of the architecture and implementation of the ophyd-SECoP integration and includes examples for better understanding.
*Klaus Kiefer et al. "An introduction to SECoP - the sample environment communication protocol".
**Markus Zolliker and Enrico Faulhaber url: https://github.com/sampleenvironment/Frappy.
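
To give a flavour of the integration, the sketch below wraps a single SECoP parameter as an ophyd Signal; the SecopClient object and its read_parameter/change_parameter calls are hypothetical stand-ins for the Frappy client API used in the actual work.

from ophyd import Signal


class SecopSignal(Signal):
    """Expose one SECoP module parameter to Bluesky via ophyd."""

    def __init__(self, secop_client, module, parameter, **kwargs):
        super().__init__(name=f"{module}_{parameter}", **kwargs)
        self._client = secop_client        # hypothetical SECoP client object
        self._module = module
        self._parameter = parameter

    def get(self, **kwargs):
        # Hypothetical call: ask the SEC node for the parameter's value.
        return self._client.read_parameter(self._module, self._parameter)

    def put(self, value, **kwargs):
        # Hypothetical call: request a parameter change on the SEC node.
        self._client.change_parameter(self._module, self._parameter, value)
        super().put(value, **kwargs)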
 
Slides: THMBCMO10 [0.596 MB]
Poster: THMBCMO10 [0.809 MB]
DOI: 10.18429/JACoW-ICALEPCS2023-THMBCMO10
Received: 05 October 2023 — Accepted: 08 December 2023 — Issued: 14 December 2023
 
THPDP037 The Alarm System at HLS-II monitoring, controls, EPICS, distributed 1399
 
  • S. Xu, X.K. Sun
    USTC/NSRL, Hefei, Anhui, People’s Republic of China
 
The control system of the Hefei Light Source II (HLS-II) is a distributed system based on the Experimental Physics and Industrial Control System (EPICS). The alarm system of HLS-II is responsible for monitoring the alarm state of the facility and distributing alarm messages in a timely manner. Its monitoring range covers the devices of the HLS-II technical groups and the server platform. Zabbix, an open-source monitoring tool, is used to monitor the server platform: custom metrics are collected by external scripts written in Python, and automated agent deployment discovers the monitored servers running Zabbix agents. The alarm distribution strategy for front-end devices is designed to prevent alarm floods. The alarm system provides multiple messaging channels to notify the responsible staff, including WeChat, SMS and a web-based GUI. The alarm system of HLS-II has been in operation since December 2022. The results show that it helps operators troubleshoot problems efficiently and improves the availability of HLS-II.
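
As an example of the custom metrics collection mentioned above, a Zabbix external check can be a small Python script whose stdout becomes the item value; the metric shown here (free disk space on a path passed as an argument) is purely illustrative.

#!/usr/bin/env python3
# Zabbix runs this script and reads the single value printed to stdout.
import shutil
import sys


def main() -> int:
    path = sys.argv[1] if len(sys.argv) > 1 else "/"
    usage = shutil.disk_usage(path)
    print(usage.free)        # the item value collected by Zabbix
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
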
Poster: THPDP037 [0.653 MB]
DOI: 10.18429/JACoW-ICALEPCS2023-THPDP037
Received: 30 September 2023 — Accepted: 08 December 2023 — Issued: 13 December 2023
 
THPDP090 LCLS-II Accelerator Vacuum Control System Design, Installation and Checkout vacuum, controls, PLC, interface 1564
 
  • S. Saraf, S.C. Alverson, S. Karimian, C. Lai, S. Nguyen
    SLAC, Menlo Park, California, USA
 
  Funding: Work supported by the U.S. Department of Energy under contract number DE-AC02-76SF00515
The LCLS-II Project at SLAC National Accelerator Laboratory has constructed a new superconducting accelerator which occupies the first kilometer of SLAC’s original 2-mile-long linear accelerator tunnel. The LCLS-II vacuum system consists of a combination of particle-free (PF) and non-particle-free (non-PF) areas and multiple independent and interdependent systems, including the beamline vacuum, RF system vacuum, cryogenic system vacuum and support systems vacuum. The vacuum control system incorporates controls and monitoring of a variety of gauges, pumps, valves and Hiden RGAs. The design uses a Programmable Logic Controller (PLC) to perform valve interlocking functions to isolate bad vacuum areas. In PF areas, a voting scheme has been implemented for the slow and fast shutter interlock logic to prevent spurious trips. Additional auxiliary control functions and high-level monitoring of vacuum components are reported to the global control system via an Experimental Physics and Industrial Control System (EPICS) input output controller (IOC). This paper discusses the design as well as the phased approach to installation and the successful checkout of the LCLS-II vacuum control system.
https://lcls.slac.stanford.edu/lcls-ii
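
The sketch below illustrates the idea behind the voting scheme in plain Python (the real logic lives in the PLC): the shutter interlock trips only when enough gauges agree that the vacuum is bad, so a single noisy reading does not cause a spurious trip. The gauge count and vote threshold are assumptions.

def vacuum_votes(pressures, setpoint):
    """Return one boolean vote per gauge: True means 'vacuum is bad'."""
    return [p > setpoint for p in pressures]


def shutter_should_close(pressures, setpoint, votes_needed=2):
    """Close the fast shutter only if enough gauges agree."""
    return sum(vacuum_votes(pressures, setpoint)) >= votes_needed


# Example: one noisy gauge alone does not trip the interlock.
print(shutter_should_close([1e-9, 5e-6, 2e-9], setpoint=1e-7))  # False
print(shutter_should_close([5e-6, 4e-6, 2e-9], setpoint=1e-7))  # True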
 
Poster: THPDP090 [1.787 MB]
DOI: 10.18429/JACoW-ICALEPCS2023-THPDP090
Received: 06 October 2023 — Revised: 10 October 2023 — Accepted: 19 December 2023 — Issued: 21 December 2023
 
THSDSC03 Integrate EPICS 7 with MATLAB Using PVAccess for Python (P4P) Module EPICS, controls, interface, experiment 1580
 
  • K.T. Kim, J.J. Bellister, K.H. Kim, E. Williams, S. Zelazny
    SLAC, Menlo Park, California, USA
 
  MATLAB is essential for accelerator scientists engaged in data analysis and processing across diverse fields, including particle physics experiments, synchrotron light sources, XFELs, and telescopes, due to its extensive range of built-in functions and tools. Scientists also depend on EPICS 7* to control and monitor complex systems. Since Python has gained popularity in the scientific community and many facilities have been migrating towards it, SLAC has developed matpva, a Python interface to integrate EPICS 7 with MATLAB. Matpva utilizes the Python P4P module** and EPICS 7 to offer a robust and reliable interface for MATLAB users that employ EPICS 7. The EPICS 7 PVAccess API allows higher-level scientific applications to get/set/monitor simple and complex structures from an EPICS 7-based control system. Moreover, matpva simplifies the process by handling the data type conversion from Python to MATLAB, making it easier for researchers to focus on their analyses and innovative ideas instead of technical data conversion. By leveraging matpva, researchers can work more efficiently and make discoveries in diverse fields, including particle physics and astronomy.
* See https://epics-controls.org/resources-and-support/base/epics-7/ to learn more about EPICS 7
** Visit https://mdavidsaver.github.io/p4p/ to learn more about P4P
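
For orientation, the P4P calls that an interface such as matpva wraps look like the following minimal sketch; the PV name is a placeholder and a numeric NTScalar PV is assumed.

from p4p.client.thread import Context

ctx = Context("pva")

# Read: for a numeric NTScalar PV the returned value converts to a float.
value = ctx.get("DEMO:PV")
print(float(value))

# Write: plain Python numbers/strings are wrapped into the PV's structure.
ctx.put("DEMO:PV", 3.14)

ctx.close()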
 
Poster: THSDSC03 [0.865 MB]
DOI: 10.18429/JACoW-ICALEPCS2023-THSDSC03
Received: 06 October 2023 — Revised: 10 October 2023 — Accepted: 06 December 2023 — Issued: 15 December 2023
 
FR1BCO03 SKA Project Status Update software, site, MMI, controls 1610
 
  • N.P. Rees
    SKAO, Macclesfield, United Kingdom
 
  The SKA Project is a science mega-project whose mission is to build the world’s two largest radio telescopes with sensitivity, angular resolution, and survey speed far surpassing current state-of-the-art instruments at relevant radio frequencies. The Low Frequency telescope, SKA-Low, is designed to observe between 50 and 350 MHz and will be built at Inyarrimanha Ilgari Bundara, the CSIRO Murchison Radio-astronomy Observatory in Western Australia. The Mid Frequency telescope, SKA-Mid, is designed to observe between 350 MHz and 15 GHz and will be built in the Meerkat National Park, in the Northern Cape of South Africa. Each telescope will be delivered in a number of stages, called Array Assemblies. Each Array Assembly will be a fully working telescope which will allow us to understand the design and potentially improve the system to deliver a better scientific instrument for the users. The final control system will consist of around 2 million control points per telescope, and the first Array Assembly, known as AA0.5, is being delivered at the time of ICALEPCS 2023.  
Slides: FR1BCO03 [38.177 MB]
DOI: 10.18429/JACoW-ICALEPCS2023-FR1BCO03
Received: 06 October 2023 — Accepted: 19 November 2023 — Issued: 05 December 2023