Application of Machine Learning in Thermodynamic System Modeling and Optimization

Li Wang, Chang Liu, Ran Huo, Xin Lu, Yue Yang*

Department of Software Engineering, Shijiazhuang Information Engineering Vocational College, Shijiazhuang 050000, China

Department of Media Arts, Shijiazhuang Information Engineering Vocational College, Shijiazhuang 050000, China

Corresponding Author Email: yyshangke@163.com

Page: 1613-1621 | DOI: https://doi.org/10.18280/ijht.420515

Received: 6 April 2024 | Revised: 12 August 2024 | Accepted: 9 September 2024 | Available online: 31 October 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

With the growing demand for energy and increasing environmental concerns, accurate modeling and efficient optimization of thermodynamic systems have become key research areas in engineering. In recent years, the application of machine learning techniques in thermodynamic systems has demonstrated significant potential, greatly improving modeling precision and optimization efficiency. However, existing methods face challenges in handling temperature delay phenomena, limiting the accuracy and effectiveness of system modeling and control. This paper introduces a temperature delay identification algorithm to enhance the accuracy of delay representation. Additionally, it develops a predictive control model for thermodynamic systems that incorporates temperature delay. These two components aim to improve the precision and efficiency of thermodynamic system modeling, providing new insights and methods for the intelligent management of complex thermodynamic systems.

Keywords: 

machine learning, thermodynamic system, temperature delay, identification algorithm, predictive control, modeling and optimization

1. Introduction

With the continuous increase in energy demand and the growing severity of environmental issues, the modeling and optimization of thermodynamic systems have become important research areas in the field of engineering [1-4]. Accurate thermodynamic system modeling and efficient optimization methods are of great significance for improving energy utilization efficiency and reducing energy consumption [5, 6]. In recent years, machine learning, with its advantages in data processing and pattern recognition, has shown broad application prospects in thermodynamic system research.

Relevant studies have shown that the use of machine learning techniques can significantly enhance the accuracy of thermodynamic system modeling and the efficiency of optimization, enabling effective prediction and control of complex thermodynamic processes [7-10]. This is of great theoretical and practical importance for improving system stability and reliability while reducing energy waste [11-15]. Meanwhile, with the development of intelligent technologies, machine-learning-based optimization methods for thermodynamic systems can better adapt to dynamically changing operating conditions, enhancing the system’s adaptability and intelligence.

However, current research methods have certain limitations in addressing the temperature delay problem in thermodynamic systems [16, 17]. Many traditional algorithms struggle to maintain modeling accuracy and control performance when dealing with long or highly variable temperature delays [18, 19]. Furthermore, existing predictive control schemes often fail to fully consider the impact of temperature delay on system performance, resulting in less-than-satisfactory outcomes in practical applications. Therefore, it is crucial to propose more effective temperature delay identification algorithms and optimized predictive control modeling methods to address these issues.

The primary research content of this paper consists of two parts: First, a temperature delay identification algorithm for thermodynamic systems is proposed to improve the accuracy of describing temperature delay phenomena. Second, based on this identification algorithm, a predictive control model for thermodynamic systems that considers temperature delay is developed. Through the study of these two aspects, this paper aims to enhance the modeling accuracy and optimization efficiency of thermodynamic systems, providing new ideas and methods for the intelligent management of complex thermodynamic systems. The research not only enriches the application of machine learning in thermodynamic systems but also offers important technical support for the efficient operation of energy systems and the goals of energy conservation and emission reduction.

2. Temperature Delay Identification Algorithm for Thermodynamic Systems

Temperature delay is a common phenomenon in thermodynamic systems. Temperature delay refers to the time lag between the input of energy into the system and the corresponding temperature change. This phenomenon occurs in practical thermodynamic systems, such as heating systems, cooling systems, and heat exchangers, and has a significant impact on system performance and efficiency. If this delay phenomenon cannot be accurately identified and described, it becomes difficult to achieve precise modeling and effective control of the system. Therefore, developing an algorithm that can accurately identify temperature delay is of great significance for improving the modeling accuracy of thermodynamic systems. Predictive control is an important direction for modern thermodynamic system optimization. Predictive control optimizes system operation by forecasting its future state and taking control actions in advance. However, existing predictive control schemes often fail to fully account for the impact of temperature delay on system performance, resulting in unsatisfactory control effects. Based on the development of the temperature delay identification algorithm, this paper further studies predictive control modeling that incorporates temperature delay, ensuring that the control strategy fully considers the delay phenomenon. This will help improve the control effect and operational efficiency of thermodynamic systems, ensuring that the system can achieve optimal performance under various operating conditions. Figure 1 illustrates the framework of temperature delay identification and predictive control for thermodynamic systems.

Figure 1. Framework of temperature delay identification and predictive control for thermodynamic systems

Specifically, temperature delay in thermodynamic systems refers to the phenomenon where the temperature change does not occur immediately after the system receives external energy input but manifests gradually over time. This delay is particularly evident in heating systems. For example, when a heating system starts operating, the boiler ignites and heats the water, which is then delivered through pipes to radiators in various rooms. Although the boiler begins working and heats the water, it takes some time for the hot water to travel through the pipes and reach the radiators, meaning the room temperature does not rise immediately. This lag in time constitutes the temperature delay. The length of the temperature delay depends not only on the pipe length and the flow rate of the hot water but also on the heat conduction efficiency of the radiators and the thermal capacity of the room. Similarly, temperature delay is a crucial factor in industrial refrigeration systems. For example, in large cold storage facilities, when the refrigeration units start operating, the temperature inside the storage space does not immediately drop to the set value. Cold air must pass through a complex piping system and circulate within the storage area to gradually lower the temperature. In this case, the temperature delay is influenced not only by the performance of the refrigeration equipment but also by the structure of the storage facility, the quality of insulation, and the thermal capacity of the stored goods. Temperature delay also affects the performance and efficiency of heat exchangers, which are widely used in various industrial processes such as chemical, petroleum, and energy sectors. The working principle of heat exchangers involves the transfer of heat between a hot fluid and a cold fluid. During this process, the heat transfer between the two fluids is not instantaneous but takes time to reach thermal equilibrium. This time interval is the temperature delay, which depends on factors such as the flow rate of the fluids, the heat exchange area, and the heat transfer coefficient. Figure 2 illustrates the basic working principle of thermodynamic systems.

Figure 2. Basic working principle of thermodynamic systems

This paper proposes a temperature delay identification algorithm for thermodynamic systems. To ensure the effectiveness and accuracy of the algorithm, the following assumptions are made:

(1) It is assumed that temperature changes in the thermodynamic system exhibit a time delay. In other words, after receiving heat input, the temperature change does not respond immediately but appears gradually after a certain period.

(2) It is assumed that there is a high correlation between the heat input and the temperature output in the system. Specifically, in heating systems, there is usually a strong relationship between the supply water temperature and the return water temperature. Similarly, in refrigeration systems, the temperature change of the refrigerant is closely related to the cooling performance. Temperature delay identification is based on this correlation, using input-output data analysis to determine the time-lag relationship between temperature changes.

(3) It is assumed that the temperature data in the thermodynamic system is collected at consistent intervals, meaning that the time between each sample point is fixed, with no irregular sampling.

(4) It is assumed that the temperature delay time is greater than the data collection time interval. This assumption ensures that the temperature delay phenomenon can be captured during data collection without missing critical temperature change information due to a low sampling frequency.

The following section outlines the specific steps of the proposed temperature delay identification process for thermodynamic systems.

Step 1: Set the time window and divide time intervals

First, the temperature delay identification algorithm requires the setting of a time window, denoted as Δs, to capture periods of temperature change within the thermodynamic system. Since temperature changes do not occur instantaneously but rather develop over time, selecting a reasonable time window helps capture the temperature response during heating or cooling. After setting the time window Δs, it is divided into two equal sub-intervals, Δsu1 and Δsu2, representing the temperature state before and after the heat input, respectively. This division helps identify the trend of temperature changes following heat input, facilitating the analysis of temperature delay.

Taking a heating system as an example, Δsu1 captures the temperature when the system is in a steady state, while Δsu2 records the temperature changes during the heating process. By comparing the temperature variations in these two intervals, an initial assessment can be made to determine if there is a significant temperature delay. Assuming that the u-th sample point of the day is denoted as su, we have:

$\Delta {{s}_{u1}}=\left( {{s}_{u}},{{s}_{u}}+\frac{\Delta s}{2} \right)$          (1)

$\Delta {{s}_{u2}}=\left( {{s}_{u}}+\frac{\Delta s}{2},{{s}_{u}}+\Delta s \right)$              (2)

Step 2: Sliding time window and calculating the average supply water temperature

Along the timeline, this paper applies a sliding time window approach to incrementally move forward, calculating the average temperature within each window to detect the operational change points in the system. The sliding time window helps capture the dynamic process of temperature changes and avoids missing critical fluctuations. In practice, the average temperature calculated within each window reflects how the temperature evolves over time in the thermodynamic system.

Taking the heating system as an example, the average value of the secondary supply water temperature is calculated within each sliding window Δs to capture the temperature trend within that window. If the average temperature within a sliding window changes significantly, it can be inferred that the system's operating conditions have shifted, likely due to heat input or environmental influences affecting the temperature state. That time can then be identified as an operational change point. The formulas to compute the average secondary supply water temperature within the time windows Δsu1 and Δsu2 are as follows:

$\overline{{{S}_{\Delta {{s}_{u1}}}}}=\frac{{{S}_{{{s}_{u}}}}+{{S}_{{{s}_{u}}+1}}\cdots +{{S}_{\left( {{s}_{u}}+\frac{\Delta s}{2}-1 \right)}}}{\frac{\Delta s}{2}}$               (3)

$\overline{{{S}_{\Delta {{s}_{u2}}}}}=\frac{{{S}_{\left( {{s}_{u}}+\frac{\Delta s}{2} \right)}}+{{S}_{\left( {{s}_{u}}+\frac{\Delta s}{2}+1 \right)}}\cdots +{{S}_{\left( {{s}_{u}}+\Delta s-1 \right)}}}{\frac{\Delta s}{2}}$                   (4)
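For illustration, the sketch below implements Steps 1 and 2 under the stated assumptions (uniform sampling, an even window length in samples); the function name detect_change_points and the threshold parameter are illustrative choices, not quantities defined in the paper.

```python
import numpy as np

def detect_change_points(supply_temp, window, threshold):
    """Slide a window of `window` samples (an even number) along the supply
    water temperature series, compare the mean of the first half-interval
    (Eqs. (1), (3)) with that of the second half-interval (Eqs. (2), (4)),
    and flag indices where the difference exceeds `threshold`."""
    supply_temp = np.asarray(supply_temp, dtype=float)
    half = window // 2
    change_points = []
    for u in range(len(supply_temp) - window):
        mean_first = supply_temp[u:u + half].mean()             # average over Δs_u1
        mean_second = supply_temp[u + half:u + window].mean()   # average over Δs_u2
        if abs(mean_second - mean_first) > threshold:
            change_points.append(u)
    return change_points
```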

Step 3: Analyzing the supply and return water temperature sequences to identify the temperature delay

Once the operational change point su is identified, further analysis is conducted on the supply and return water temperature sequences from the beginning of the change to the end of the time window. By comparing the changes in the supply and return water temperatures and analyzing their correlation, the temperature delay within the thermodynamic system can be determined. In thermodynamic systems, the supply water temperature reflects the temperature changes after heat input, while the return water temperature represents the state after heat transfer. Typically, changes in the supply water temperature trigger a delayed response in the return water temperature, which is the manifestation of temperature delay.

In practice, statistical analysis of the correlation between the supply and return water temperature sequences can determine the length of the temperature delay. In a heating system, as the supply water temperature changes, the return water temperature does not follow immediately but gradually aligns after a certain period. By analyzing the time difference between the changes in supply water temperature and the response of return water temperature, the length of the temperature delay can be identified. The following expressions describe the relationship between the supply water temperature sequence and the return water temperature sequence, based on su and Δs:

$S{{T}_{u}}={{\left[ \begin{matrix}   {{S}_{tu}} & {{S}_{t\left( u+1 \right)}} & \cdots  & {{S}_{t\left( u+\Delta s \right)}}  \\\end{matrix} \right]}^{T}}$           (5)

$S{{E}_{u}}={{\left[ \begin{matrix}   {{S}_{eu}} & {{S}_{e\left( u+1 \right)}} & \cdots  & {{S}_{e\left( u+\Delta s \right)}}  \\\end{matrix} \right]}^{T}}$           (6)

Step 4: Constructing the return water temperature lag matrix

Given that temperature responses in thermodynamic systems often exhibit time lags, the return water temperature typically shows a delayed change relative to the supply water temperature. To identify the specific time point of this delay, the return water temperature sequence can be shifted forward incrementally, simulating different delay times, to generate a set of new return water temperature sequences. Specifically, the original return water temperature sequence SEu is incrementally shifted forward by different steps, forming a set of new sequences such as SEu+1, SEu+2, ...SEu+j, where j is the maximum number of shifts. This shifting operation corresponds to adjusting the return water temperature under different assumed delay times, generating a matrix where each column represents the values of the return water temperature sequence under a specific delay time. This matrix is called the Return Water Temperature Lag Matrix N, which is used to compare the alignment between the return water temperature sequences and the supply water temperature sequence under different delay times.

$\begin{matrix}  N=\left[ \begin{matrix}   S{{E}_{u}} & S{{E}_{u+1}} & \cdots  & S{{E}_{u+j}}  \\\end{matrix} \right] \\  =\left[ \begin{matrix}   {{S}_{eu}} & {{S}_{e\left( u+1 \right)}} & \cdots  & {{S}_{e\left( u+j \right)}}  \\   {{S}_{e\left( u+1 \right)}} & {{S}_{e\left( u+2 \right)}} & \cdots  & {{S}_{e\left( u+j+1 \right)}}  \\   \vdots  & \vdots  & \ddots  & \vdots   \\   {{S}_{e\left( u+\Delta s \right)}} & {{S}_{e\left( u+1+\Delta s \right)}} & \cdots  & {{S}_{e\left( u+j+\Delta s \right)}}  \\\end{matrix} \right] \\\end{matrix}$                (7)
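A minimal sketch of the lag matrix construction of Eq. (7), assuming the return water temperature is a uniformly sampled 1-D array long enough for the largest shift; the helper name build_lag_matrix is illustrative.

```python
import numpy as np

def build_lag_matrix(return_temp, u, window, max_shift):
    """Construct the return water temperature lag matrix N of Eq. (7):
    column j holds the segment SE_{u+j} of length window+1, i.e. the return
    water temperature series shifted forward by j samples."""
    return_temp = np.asarray(return_temp, dtype=float)
    columns = [return_temp[u + j: u + j + window + 1] for j in range(max_shift + 1)]
    return np.column_stack(columns)        # shape: (window + 1, max_shift + 1)
```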

Step 5: Calculating Pearson correlation coefficient and determining the optimal delay time

After constructing the time delay matrix for the return water temperature in Step 4, the next step is to calculate the Pearson correlation coefficient between each column of the matrix and the supply water temperature series. The Pearson correlation coefficient is a statistical measure of the linear correlation between two variables, ranging from -1 to 1. A value closer to 1 indicates a stronger positive correlation between the two variables, while a value closer to -1 indicates a stronger negative correlation. A value near 0 suggests a weaker or no correlation. In a thermodynamic system, the correlation between the supply water temperature and the return water temperature can be measured using the correlation coefficient. By calculating the Pearson correlation coefficient between the supply water temperature series STu and each column of the return water temperature time delay matrix, we can identify the time point with the highest correlation coefficient, thereby determining the optimal temperature delay time. In other words, the delay time corresponding to the maximum correlation coefficient represents the time lag in the response between the supply water temperature and the return water temperature in the thermodynamic system. Assuming that the mean value of variable A is denoted as $\bar{A}$ and the mean value of variable B as $\bar{B}$, the calculation formula is as follows:

$e=\frac{\sum\limits_{u=1}^{v}{\left( {{A}_{u}}-\bar{A} \right)\left( {{B}_{u}}-\bar{B} \right)}}{\sqrt{\sum\limits_{u=1}^{v}{{{\left( {{A}_{u}}-\bar{A} \right)}^{2}}}}\sqrt{\sum\limits_{u=1}^{v}{{{\left( {{B}_{u}}-\bar{B} \right)}^{2}}}}}$           (8)
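Steps 3-5 can then be summarized as the following sketch, which correlates the supply water segment with each shifted return water segment (the columns of N) and reports the shift with the largest Pearson coefficient; the function name identify_delay and the sampling-interval argument dt are assumptions made for illustration.

```python
import numpy as np

def identify_delay(supply_temp, return_temp, u, window, max_shift, dt):
    """Correlate the supply water segment ST_u (Eq. (5)) with return water
    segments shifted by 0..max_shift samples (the columns of N, Eq. (7)) and
    return the shift of maximum Pearson correlation (Eq. (8)), converted to
    seconds via the sampling interval `dt`."""
    supply_temp = np.asarray(supply_temp, dtype=float)
    return_temp = np.asarray(return_temp, dtype=float)
    st = supply_temp[u: u + window + 1]                     # ST_u
    corrs = []
    for j in range(max_shift + 1):
        se_j = return_temp[u + j: u + j + window + 1]       # column j of N
        corrs.append(np.corrcoef(st, se_j)[0, 1])           # Pearson r
    best_shift = int(np.argmax(corrs))
    return best_shift * dt, corrs[best_shift]
```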

3. Predictive Control Modeling of Thermodynamic Systems Considering Temperature Delay

In thermodynamic systems, temperature changes are influenced by both internal and external factors. The presence of temperature delay complicates real-time control, making it challenging to achieve optimal performance. To address this issue, this paper proposes a hybrid HHO-PSO optimization algorithm, which combines the Harris Hawks Optimization (HHO) algorithm with the Particle Swarm Optimization (PSO) algorithm. This hybrid approach aims to overcome the limitations of individual optimization algorithms, striking a balance between global search and local search to enhance the efficiency and precision of system control.

The basic steps of the HHO-PSO hybrid optimization algorithm are illustrated in Figure 3 and detailed below:

Step 1: Initializing the particle swarm

In the predictive control model of the thermodynamic system, the first step is to initialize the particle swarm. Each particle represents a system control strategy, including the initial position and velocity of various key parameters. Initially, random positions are generated for each particle to represent different temperature control strategies. The initial positions can be generated using either a uniform distribution or a Gaussian distribution to cover the entire search space. Additionally, each particle is assigned an initial velocity that reflects the rate of change of the control strategy. Similarly, the initial velocity can also be generated using random distributions. Following these steps, an initial population is created, ensuring the diversity and broadness of the particles, which is essential for effective exploration when searching for the global optimal solution. When considering temperature delay time, the initial positions and velocities must not only reflect the current temperature state but also estimate future temperature changes, allowing for accurate adjustments of the strategies during subsequent control processes.

Figure 3. Steps of the HHO-PSO hybrid optimization algorithm

Step 2: Fitness calculation

Next, the fitness value of each particle is calculated. The fitness function is used to measure the effectiveness of the particle's current control strategy, typically based on indicators such as temperature stability, response time, and energy consumption. For each particle, the fitness value is computed based on its current strategy. The fitness function can be defined as a multi-objective optimization function that considers factors like temperature stability, energy consumption, and response time. Among all the particles, the one with the best fitness value is selected as the global optimal solution. The global optimal solution represents the current best control strategy, serving as a reference for subsequent iterations. Each particle retains its historical best fitness value and the corresponding control strategy, acting as a local optimal solution. The local optimal solution is used to guide the particle in searching and optimizing within its neighborhood. The fitness function needs to comprehensively consider the current temperature state and delay effects to ensure that the control strategy maintains system stability and efficient operation in future moments.

The global optimal solution at time s can be expressed as:

${{g}_{best}}\left( s \right)={{\left[ {{h}_{1}}\left( s \right),{{h}_{2}}\left( s \right),\cdots ,{{h}_{v}}\left( s \right) \right]}^{T}}$          (9)

The local optimal solution at time s can be expressed as:

${{p}_{best}}\left( s \right)={{\left[ {{o}_{1}}\left( s \right),{{o}_{2}}\left( s \right),\cdots ,{{o}_{v}}\left( s \right) \right]}^{T}}$         (10)

The Root Mean Square Error (RMSE), where ${{a}_{i}}$ denotes the actual value and ${{\hat{a}}_{i}}$ the simulated value, is used as the accuracy metric for predictive control in thermodynamic systems:

$RMSE=\sqrt{\frac{1}{v}\sum\limits_{i=1}^{v}{{{\left( {{{\hat{a}}}_{i}}-{{a}_{i}} \right)}^{2}}}}$             (11)
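A minimal sketch of the RMSE fitness of Eq. (11), assuming actual and simulated temperature series of equal length; rmse_fitness is an illustrative name.

```python
import numpy as np

def rmse_fitness(actual, simulated):
    """Root mean square error between measured and simulated temperatures
    (Eq. (11)); lower values indicate a better control strategy."""
    actual = np.asarray(actual, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return np.sqrt(np.mean((simulated - actual) ** 2))
```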

Step 3: Role assignment and leader introduction

Based on the fitness values of each particle, the probabilities of being selected as "eagles" or "prey" are calculated to determine each particle's role. Particles with high fitness values are more likely to become "eagles," while those with low fitness values are more likely to become "prey." This role assignment mechanism helps differentiate the tasks of different particles, enhancing the synergy between global and local searches. This paper innovatively introduces a "leader" role, assigned to the "prey" with the highest fitness function value. The leader is responsible for guiding the search direction of other "prey," assisting them in better utilizing surrounding information during their search. With the leader's guidance, the convergence speed of the "prey" can be accelerated, helping the algorithm avoid getting trapped in local optimal solutions. Depending on their roles, corresponding behavioral strategies are defined. For "eagles," the main task is to perform global search missions, exploring a wide range to find potential high-quality solutions. For "prey," guided by the leader, the task is to execute local searches, refining their strategies to improve fitness values. Notably, the leader role must fully leverage temperature delay information to guide other particles in finding optimal control strategies for future temperature changes, thereby enhancing the convergence speed and search effectiveness of the entire population. Specifically, let the fitness value of particle u be denoted as du, and let the maximum and minimum fitness values in the population be denoted as dMAX and dMIN, respectively. The probability of each particle u being selected as an "eagle" is given by:

$O_{u}^{r}=\frac{{{d}_{u}}-{{d}_{MIN}}}{2\left( {{d}_{MAX}}-{{d}_{MIN}} \right)}$         (12)

For each particle u, the probability of being selected as "Prey" can be expressed as:

$O_{u}^{o}=\frac{{{d}_{MAX}}-{{d}_{u}}}{2\left( {{d}_{MAX}}-{{d}_{MIN}} \right)}$         (13)

The total probability of a particle being classified as either "Eagle" or "Prey" is the sum of both probabilities:

${{O}_{u}}=O_{u}^{r}+O_{u}^{o}$            (14)

Assuming the current iteration number is represented by s, and the pre-defined maximum number of iterations is represented by sMAX, if Ou is greater than a calculated random threshold RANDa, then the particle will be reassigned as either an "Eagle" or "Prey." Otherwise, particle u retains its original role:

$RAN{{D}_{a}}=RANDOM\left( 1-\frac{1}{{{s}_{MAX}}} \right)$             (15)
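The role-assignment rule of Eqs. (12)-(15) might be sketched as follows; interpreting RANDOM(1 - 1/sMAX) as a uniform draw on [0, 1 - 1/sMAX], and breaking the eagle/prey choice by the larger of the two probabilities, are assumptions made here for illustration.

```python
import numpy as np

def assign_roles(fitness, previous_roles, max_iter, rng=None):
    """Role re-assignment of Eqs. (12)-(15): compute each particle's eagle and
    prey probabilities from its fitness, and re-assign the role only when
    their sum O_u exceeds the random threshold RAND_a; otherwise the particle
    keeps its previous role."""
    if rng is None:
        rng = np.random.default_rng()
    fitness = np.asarray(fitness, dtype=float)
    d_min, d_max = fitness.min(), fitness.max()
    span = max(d_max - d_min, 1e-12)                  # guard against identical fitness values
    p_eagle = (fitness - d_min) / (2.0 * span)        # Eq. (12)
    p_prey = (d_max - fitness) / (2.0 * span)         # Eq. (13)
    o_total = p_eagle + p_prey                        # Eq. (14)
    roles = []
    for pe, pp, ou, prev in zip(p_eagle, p_prey, o_total, previous_roles):
        rand_a = rng.random() * (1.0 - 1.0 / max_iter)     # Eq. (15), one reading of RANDOM(.)
        if ou > rand_a:
            roles.append("eagle" if pe >= pp else "prey")  # assumed tie-break rule
        else:
            roles.append(prev)                             # keep the original role
    return roles
```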

Step 4: Update of particle velocity and position

In this step, the roles of particles and their corresponding behavioral rules are combined to improve the update mechanism, making the control process more intelligent and efficient. If particle u is selected as an "Eagle," its velocity update according to the PSO algorithm is given by:

${{n}_{u}}\left( s+1 \right)=\mu \cdot {{n}_{u}}\left( s \right)+{{z}_{4}}\cdot RAN{{D}_{4}}\cdot \left( {{M}_{u}}\left( s \right)-{{a}_{u}}\left( s \right) \right)$         (16)

The position update formula is:

${{a}_{u}}\left( s+1 \right)={{a}_{u}}\left( s \right)+{{n}_{u}}\left( s+1 \right)$          (17)
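A sketch of the eagle update of Eqs. (16)-(17); the argument leader_position stands for M_u(s), while the acceleration coefficient z4 and inertia weight mu are supplied by the caller.

```python
import numpy as np

def update_eagle(position, velocity, leader_position, mu, z4, rng=None):
    """Eagle update of Eqs. (16)-(17): inertia plus attraction toward the
    leader strategy M_u(s), followed by the position update."""
    if rng is None:
        rng = np.random.default_rng()
    rand4 = rng.random()
    new_velocity = mu * velocity + z4 * rand4 * (leader_position - position)  # Eq. (16)
    new_position = position + new_velocity                                    # Eq. (17)
    return new_position, new_velocity
```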

For particles assigned the role of "prey," their position updates mainly consist of two types of behaviors: exploratory behavior Gu(s) and leader behavior Mu(s). Exploratory behavior represents the search process of the "prey" in the surrounding environment. The "prey" focuses on the historical control results of neighboring particles and adjusts its control strategy based on this information. In the regulation of thermodynamic systems, temperature delay times can affect the system's immediate response; therefore, when performing exploratory behavior, the "prey" must consider future temperature changes to ensure that the adjusted control strategy can accommodate the system's delay effects. This process helps the "prey" find better control solutions within a local range, improving the system's response speed and accuracy. Let the position of the selected "prey" particle u at time s be denoted as au(s), and the positions of the two neighboring "prey" particles at time s be denoted as ab(s) and ac(s). The expression for the exploratory behavior Gu(s) is given by:

${{G}_{u}}\left( s+1 \right)={{a}_{u}}\left( s \right)+RAND\left( {{a}_{b}}\left( s \right)-{{a}_{c}}\left( s \right) \right)$               (18)

Leader behavior is a significant innovation in this algorithm. The "prey" selects an optimal leader based on the fitness values of neighboring particles. The selection of the leader is crucial because it guides the search direction of other "prey." When considering temperature delay times, the choice of leader must not only be based on the current control effectiveness but also predict future temperature changes to ensure that the leader can guide the entire system toward a global optimal solution in the future. Once the "prey" selects a leader, it adjusts its control direction based on the leader's strategy, allowing the entire system to gradually converge toward a better state. Let the local optimal solution of the selected leader particle u before time s be denoted as Mu(s), and the positions of the two neighboring "prey" particles at time s be denoted as ab(s) and ac(s). The expression for leader behavior Mu(s) is given by:

${{M}_{u}}\left( s+1 \right)={{M}_{u}}\left( s \right)+RAND\left( {{a}_{b}}\left( s \right)-{{a}_{c}}\left( s \right) \right)$                (19)

After combining exploration behavior and leader behavior, each particle updates its speed and position based on these two behaviors. For the "prey," it will utilize exploration behavior to make local adjustments to its position based on the historical optimal strategies of surrounding particles. At the same time, it will also reference the leader's strategy to further optimize its control direction. In this way, each "prey" can seek the optimal solution on a global scale without falling into the trap of local optima. Specifically, if particle u is selected as "prey," the update speed formula is given by:

$\begin{align} {{n}_{u}}\left( s+1 \right)=&\mu \cdot {{n}_{u}}\left( s \right)+{{z}_{5}}\cdot RAN{{D}_{5}}\cdot \left( {{H}_{u}}\left( s \right)-{{a}_{u}}\left( s \right) \right) \\ &+{{z}_{6}}\cdot RAN{{D}_{6}}\cdot \left( {{M}_{u}}\left( s \right)-{{a}_{u}}\left( s \right) \right) \end{align}$         (20)

${{a}_{u}}\left( s+1 \right)={{a}_{u}}\left( s \right)+{{n}_{u}}\left( s+1 \right)$              (21)

The full velocity update formula, incorporating the local optimal solution, the leader strategy, and the global optimal solution, is:

$\begin{align} {{n}_{u}}\left( s+1 \right)=&\mu \cdot {{n}_{u}}\left( s \right)+{{z}_{1}}\cdot RAN{{D}_{1}}\cdot \left( {{p}_{best-u}}\left( s \right)-{{a}_{u}}\left( s \right) \right) \\ &+{{z}_{2}}\cdot RAN{{D}_{2}}\cdot \left( {{M}_{u}}\left( s \right)-{{a}_{u}}\left( s \right) \right)+{{z}_{3}}\cdot RAN{{D}_{3}}\cdot \left( {{g}_{best-u}}\left( s \right)-{{a}_{u}}\left( s \right) \right) \end{align}$             (22)

Due to the presence of temperature delay, the update process for velocity and position must fully consider future temperature variations. Particles should not adjust their strategies based solely on their current state but also need to anticipate the system's responses to ensure that the control strategies remain effective in the future. This update method guarantees that the system can quickly and effectively adjust temperatures in a complex environment with delay effects, thereby achieving stable and efficient control outcomes.
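One possible reading of the prey update is sketched below; treating H_u(s) in Eq. (20) as the exploratory target G_u(s+1) of Eq. (18) is an assumption, and the neighbour positions are passed in by the caller.

```python
import numpy as np

def update_prey(position, velocity, neighbor_b, neighbor_c, leader_best,
                mu, z5, z6, rng=None):
    """Prey update under one reading of the paper: exploratory behaviour
    (Eq. (18)) and leader behaviour (Eq. (19)) build two targets, which then
    drive the velocity of Eq. (20) and the position update of Eq. (21)."""
    if rng is None:
        rng = np.random.default_rng()
    explore = position + rng.random() * (neighbor_b - neighbor_c)     # Eq. (18): G_u(s+1)
    leader = leader_best + rng.random() * (neighbor_b - neighbor_c)   # Eq. (19): M_u(s+1)
    new_velocity = (mu * velocity
                    + z5 * rng.random() * (explore - position)        # exploratory term (H_u, assumed = G_u)
                    + z6 * rng.random() * (leader - position))        # leader term, Eq. (20)
    new_position = position + new_velocity                            # Eq. (21)
    return new_position, new_velocity
```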

Step 5: Update weights based on adaptive weight strategy

In the initial stages of the system, due to temperature delays, control strategies require substantial adjustments to accommodate the system's lag in response. Thus, larger weight values at the beginning ensure that the particle swarm maintains exploratory capabilities, helping to find broader solutions amidst the complexities introduced by temperature delays. As iterations progress, the system gradually adapts to the effects of temperature delays, and the particle swarm stabilizes. Consequently, gradually reducing the weight can prevent excessive adjustments by the particles, avoiding unnecessary oscillations in a localized area of the system. In considering the control of temperature delay, the variation of weight values not only determines the magnitude of particle updates but also how control strategies respond to the time window of delays. The adaptive weight adjustments to accommodate temperature delays help the system converge to more precise control strategies in the face of future temperature variations, ensuring stability at future time points. Let the maximum weight value be represented by μMAX and the minimum weight value by μMIN, then the weight calculation can be expressed as:

$\mu ={{\mu }_{MAX}}-\frac{\left( {{\mu }_{MAX}}-{{\mu }_{MIN}} \right)s}{{{s}_{MAX}}}$                (23)
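A one-line sketch of Eq. (23); the default bounds 0.9 and 0.4 are common PSO settings used only for illustration, not values reported in the paper.

```python
def inertia_weight(iteration, max_iter, mu_max=0.9, mu_min=0.4):
    """Linearly decreasing inertia weight of Eq. (23)."""
    return mu_max - (mu_max - mu_min) * iteration / max_iter
```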

Step 6: Update global optimal solution

After each iteration, the system evaluates the control strategies of the particles. By comparing the fitness values of all particles, the particle with the optimal fitness value is selected as the global optimal solution. In the context of predictive control for thermodynamic systems, the fitness value represents the effectiveness of the control scheme. Considering temperature delays, the fitness calculation relies not only on the current temperature control effectiveness but also on future temperature responses. This means that when selecting the global optimal solution, it is essential to analyze not only the accuracy of the current temperature control but also to evaluate the system's response during the delay period. The update of the global optimal solution must comprehensively consider temperature variations within the delay period, ensuring that the selected global optimal solution maintains good control effectiveness at future time points. Through this process, the system continuously corrects its current control strategies, ensuring that the final output of the global optimal solution is suitable not just for the current state but also meets the system's requirements under future temperature conditions.

Step 7: Check for stopping criteria

The ultimate goal of the algorithm is to find a globally optimal control strategy suitable for the thermodynamic system through multiple iterations. The algorithm concludes and outputs the global optimal solution upon reaching a specific stopping condition. If the stopping condition is not met, the process returns to Step 3 and continues updating particle velocity and position. In the context of thermodynamic system control, the stopping conditions can be defined as one of the following: reaching a predetermined number of iterations, no significant changes in the control strategies of the particle swarm, or the convergence of temperature control errors within a certain threshold. When these conditions are satisfied, the algorithm considers the control strategies sufficiently optimized, and the system can output the current global optimal solution. Given the existence of temperature delay effects, the stopping criteria must consider not only the control effectiveness at the current moment but also ensure that temperature responses during the delay period have stabilized. If significant fluctuations are still present during the delay period, the algorithm must continue to iterate, further adjusting control strategies until the entire system can effectively respond to both current and future temperature changes.

Finally, by utilizing the HHO-PSO hybrid optimization algorithm, the development coefficients in the GM(1,1) prediction model are corrected to obtain optimal values, establishing a predictive control model for thermodynamic systems.
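As a rough illustration of how an externally corrected development coefficient enters a GM(1,1) forecast, the sketch below follows the standard GM(1,1) response equation with the coefficient a supplied from outside (e.g., by the HHO-PSO search); this is a sketch under that assumption, not the paper's exact formulation.

```python
import numpy as np

def gm11_predict(series, a, steps):
    """GM(1,1) forecast with an externally supplied (e.g., HHO-PSO corrected)
    development coefficient `a` (assumed nonzero); the grey input `b` is still
    estimated from the accumulated series."""
    x0 = np.asarray(series, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    b = np.mean(x0[1:] + a * z1)                         # grey input for the fixed `a`
    k = np.arange(1, len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # response equation
    x0_hat = np.diff(np.concatenate(([x0[0]], x1_hat)))  # restore by first differencing
    return x0_hat[-steps:]                               # the `steps` forecast values
```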

4. Experimental Results and Analysis

According to the distribution data of temperature delay time in the thermodynamic system shown in Figure 4, we can observe the kernel density estimation values at different delay time points. The kernel density estimation value is zero at 0 seconds, then gradually increases, reaching a peak of 0.00065 around 4000 seconds, and then gradually decreases, approaching zero again near 6000 seconds. Notably, the kernel density estimation value starts to significantly increase from 1000 seconds and reaches its maximum between 2000 and 3000 seconds, then slowly decreases. This indicates that the distribution of temperature delay time shows a clear peak, with most temperature delay times concentrated between 1000 and 2000 seconds. In addition, the minimum delay time is 0 seconds, and the maximum delay time is 6000 seconds, indicating the wide variation of temperature delay time in the thermodynamic system under different operating conditions. Data analysis shows that more than half of the temperature delay times are concentrated between 1000 and 2000 seconds, indicating that in most cases, the temperature regulation of the thermodynamic system exhibits significant delay within this time range. This result is important for optimizing the predictive control model of the temperature control system, as it provides a primary time window to focus on studying and optimizing system responses. Meanwhile, the minimum delay time of 0 seconds and the maximum delay time of 6000 seconds indicate that the temperature delay time of the thermodynamic system can vary greatly under different operating conditions. This variability suggests that accurate identification and modeling of temperature delay phenomena are necessary to achieve precise temperature control and optimization management under various operating conditions. This provides new methods and ideas for the intelligent management of complex thermodynamic systems, further enhancing the accuracy of system modeling and optimization efficiency.

Figure 4. Distribution of temperature delay time in the thermodynamic system

According to the boxplot data of temperature delay time in the thermodynamic system shown in Figure 5, we can observe the distribution of temperature delay times for different sample numbers. By analyzing the maximum value, upper quartile, median, lower quartile, and minimum value, we can see the variation of temperature delay times for sample numbers from 0 to 57 in different ranges. The range of delay times varies considerably across samples: for example, sample number 3 has a maximum delay time of 2600 seconds and a minimum delay time of 2400 seconds, indicating that its delay time is highly concentrated, while sample number 12 has a maximum delay time of 3450 seconds and a minimum delay time of 2600 seconds, indicating larger volatility in its delay time. Overall, the median of most samples is concentrated between 1500 seconds and 2000 seconds, indicating that the temperature delay time for these samples is relatively stable in this range. Notably, some samples such as numbers 6 and 13 exhibit a significant difference between their minimum and maximum delay times, showing significant differences under different operating conditions. Data analysis indicates that there is significant variability in the distribution of temperature delay times among different samples, but the median of most samples is concentrated between 1500 seconds and 2000 seconds, suggesting that in most cases, the temperature delay time of the thermodynamic system exhibits relatively consistent characteristics within this range. This provides a primary time window for optimizing the predictive control model of the temperature control system, allowing for focused research and optimization of system responses. Meanwhile, some samples, such as numbers 6 and 13, exhibit significant volatility in delay times, indicating the need to consider significant differences under different operating conditions in the modeling process to ensure the broad adaptability of the model.

Figure 5. Boxplot of temperature delay time in the thermodynamic system

According to Figure 6, we can observe the comparison of the algorithm's predicted values and actual values across three different datasets: the industrial process control dataset, the meteorological and environmental monitoring dataset, and the building energy management system dataset. Specifically, in the industrial process control dataset, the prediction errors for temperature delay times are small, indicating high predictive accuracy; in the meteorological and environmental monitoring dataset, the error range is slightly larger; in the building energy management system dataset, although there are individual samples with larger errors, the overall prediction performance remains satisfactory, with the vast majority of samples having errors within a reasonable range. The boxplot showing the distribution of delay times indicates that the medians of most samples are concentrated between 1500 seconds and 2000 seconds, suggesting that the algorithm exhibits high stability and consistency across different datasets. From the experimental results, it can be concluded that the thermodynamic system temperature delay identification algorithm proposed in this paper performs excellently across different datasets, particularly demonstrating extremely high predictive accuracy in the industrial process control dataset. This indicates that the algorithm can effectively describe and predict temperature delay phenomena, providing a solid foundation for further constructing predictive control models. The predictive control model built on this identification algorithm shows high adaptability and accuracy in predicting temperature delay times, effectively improving the modeling accuracy and optimization efficiency of thermodynamic systems.

Figure 6. Comparison of predicted values and actual values of the algorithm on three datasets

Figure 7. Prediction error of future 24-hour thermodynamic system temperature under different temperature delay conditions

Table 1. Prediction error of thermodynamic system temperature by the model

Iteration Count     Temperature Delay Time (s)
                    1500        2500        2500
6                   0.884       --          --
12                  1.168       1.065       --
24                  1.045       1.124       1.43
48                  1.368       1.6         1.874
72                  1.147       1.589       1.689

Based on the temperature prediction error data shown in Figure 7 and Table 1, we can observe the prediction error of the model under different temperature delay times (1500 seconds, 2500 seconds, and 2500 seconds) at various iteration counts. Specifically, under a delay time of 1500 seconds, the prediction error rises overall from an initial value of 0.884 to 1.147 after 72 iterations, with some fluctuation along the way; under a delay time of 2500 seconds, the initial prediction error was not recorded, but it gradually rises from 1.065 after 12 iterations to 1.589 after 72 iterations, showing that the error increases with the number of iterations; for the second 2500-second series, the prediction error rises from 1.43 at 24 iterations to 1.874 at 48 iterations and then falls slightly to 1.689 at 72 iterations, showing some fluctuation but generally an upward trend. These data indicate that the model's prediction error shows a growing trend under different delay times and iteration counts, particularly under longer delay time conditions, where the error increases more significantly. From the above experimental results, it can be concluded that although the prediction errors of the thermodynamic system temperature delay identification algorithm proposed in this paper show an overall upward trend under different temperature delay times and iteration counts, they still remain within a relatively small range, indicating that the algorithm has a certain degree of stability and applicability. Especially under a delay time of 1500 seconds, the error increase is relatively smooth, indicating that the model has high predictive accuracy under shorter delay time conditions. However, under longer delay times (2500 seconds), as the number of iterations increases, the prediction error rises significantly, suggesting that in practical applications, the impact of delay time on model errors should be considered, and further optimization of the algorithm may be needed to reduce prediction errors under long delay time conditions.

Overall, these results validate the effectiveness of the method proposed in this paper in enhancing the modeling accuracy and optimization efficiency of thermodynamic systems, while also highlighting directions for further improvement under specific conditions, providing valuable suggestions and references for the intelligent management of complex thermodynamic systems.

5. Conclusion

This paper presents a thermodynamic system temperature delay identification algorithm and constructs a predictive control model considering temperature delay times based on this algorithm. The aim is to improve the accurate description of temperature delay phenomena and enhance modeling precision and optimization efficiency. Through experimental analysis of results under different temperature delay times and iteration counts, a series of valuable conclusions and insights have been drawn. The experimental results indicate that while the prediction errors of the proposed algorithm exhibit a certain upward trend under different delay times and iteration counts, the overall error range remains small, demonstrating the model's stability and applicability. Particularly under shorter delay time conditions, the model shows high predictive accuracy, with relatively smooth error growth, validating the algorithm's effectiveness under shorter delay times. However, under longer delay time conditions, as the number of iterations increases, the prediction error rises significantly, highlighting the impact of delay time on model accuracy and indicating the need for further optimization of the algorithm under long delay time conditions.

The research provides new ideas and methods for the intelligent management of complex thermodynamic systems, especially holding important application value in enhancing system modeling precision and optimization efficiency. However, this study also has certain limitations: primarily reflected in the larger prediction errors under long delay time conditions, with the model's adaptability and generalization ability across different datasets needing further validation. Additionally, future research can focus on several improvement and expansion directions: firstly, optimizing the algorithm to reduce prediction errors under long delay times; secondly, increasing the variety of thermodynamic system datasets for validation to enhance the model's generalization ability; and thirdly, integrating real-time data and dynamic adjustment mechanisms to further improve the model's practicality and flexibility in real-world applications.

In summary, this paper, through innovative algorithms and model construction, provides strong support for the intelligent control and accurate prediction of thermodynamic systems. Despite certain limitations, the research findings significantly contribute to the development of this field and lay a solid foundation and clear direction for subsequent studies.

References

[1] Izumida, Y. (2023). Non-quasistatic response coefficients and dissipated availability for macroscopic thermodynamic systems. Journal of Physics Communications, 7(12): 125002. https://doi.org/10.1088/2399-6528/ad1597

[2] Rezaei, R.A. (2023). Energy and exergy evaluation of a dual fuel combined cycle power plant: An optimization case study of the Khoy plant. Power Engineering and Engineering Thermophysics, 2(2): 97-109. https://doi.org/10.56578/peet020204

[3] Zhang, M. (2023). Enhanced estimation of thermodynamic parameters: A hybrid approach integrating rough set theory and deep learning. International Journal of Heat and Technology, 41(6): 1587-1595. https://doi.org/10.18280/ijht.410621

[4] Dai, X.Y., Li, T.Y. (2024). Real-time remote monitoring and overheating early warning of thermodynamic state of complex equipment systems based on computer network technology. International Journal of Heat and Technology, 42(1): 111-120. https://doi.org/10.18280/ijht.420112

[5] Cafaro, C., Luongo, O., Mancini, S., Quevedo, H. (2022). Thermodynamic length, geometric efficiency and Legendre invariance. Physica A: Statistical Mechanics and its Applications, 590: 126740. https://doi.org/10.1016/j.physa.2021.126740

[6] Xiong, W., Hao, L. (2022). Fundamental issues identified for thermodynamic description of molten salt systems. Journal of Phase Equilibria and Diffusion, 43(6): 894-902. https://doi.org/10.1007/s11669-022-01018-8

[7] Hylton, T. (2022). Thermodynamic state machine network. Entropy, 24(6): 744. https://doi.org/10.3390/e24060744

[8] Malik, H., Chaudhry, M.U., Jasinski, M. (2022). Deep learning for molecular thermodynamics. Energies, 15(24): 9344. https://doi.org/10.3390/en15249344

[9] Arróyave, R. (2022). Phase stability through machine learning. Journal of Phase Equilibria and Diffusion, 43(6): 606-628. https://doi.org/10.1007/s11669-022-01009-9

[10] Guan, P.W. (2022). Differentiable thermodynamic modeling. Scripta Materialia, 207: 114217. https://doi.org/10.1016/j.scriptamat.2021.114217

[11] Boyd, A.B., Crutchfield, J.P., Gu, M. (2022). Thermodynamic machine learning through maximum work production. New Journal of Physics, 24(8): 083040. https://doi.org/10.1088/1367-2630/ac4309

[12] Sun, G., Zhao, Z., Sun, S., Ma, Y., Li, H., Gao, X. (2023). Vapor-liquid phase equilibria behavior prediction of binary mixtures using machine learning. Chemical Engineering Science, 282: 119358. https://doi.org/10.1016/j.ces.2023.119358

[13] Chen, M. (2021). Collective variable-based enhanced sampling and machine learning. The European Physical Journal B, 94: 211. https://doi.org/10.1140/epjb/s10051-021-00220-w

[14] Alghamdi, H., Maduabuchi, C., Mbachu, D.S., Albaker, A., Alatawi, I., Alsenani, T.R., Alsafran, A.S., AlAqil, M. (2023). Machine learning model for transient exergy performance of a phase change material integrated-concentrated solar thermoelectric generator. Applied Thermal Engineering, 228: 120540. https://doi.org/10.1016/j.applthermaleng.2023.120540

[15] Cho, H., Dong, J.G., Ha, S.Y. (2022). Emergent behaviors of a thermodynamic Cucker-Smale flock with a time-delay on a general digraph. Mathematical Methods in the Applied Sciences, 45(1): 164-196. https://doi.org/10.1002/mma.7771

[16] Wu, C., Dong, J.G. (2023). Discrete thermodynamic Cucker–Smale model with time-delay on a general digraph. Journal of Mathematical Physics, 64(4): 042707. https://doi.org/10.1063/5.0095621

[17] Liu, J., Wang, G., Wang, X., Sun, Y., Zhou, B., Zou, Y., Wang, B., Zhang, K. (2021). Manipulation of organic afterglow by thermodynamic and kinetic control. Chemistry–A European Journal, 27(67): 16735-16743. https://doi.org/10.1002/chem.202103020

[18] Liu, X., Song, E., Zhang, L., Luan, Y., Wang, J., Luo, C., Xiong, L., Pan, Q. (2024). Design and implementation for the state time-delay and input saturation compensator of gas turbine aero-engine control system. Energy, 288: 129934. https://doi.org/10.1016/j.energy.2023.129934

[19] Zhang, H.C., Chen, H., Xiang, L., Zuo, Z.G., Liu, S.H. (2021). Instabilities of blow-down type Venturi cavitation considering thermodynamic effect. Thermophysics and Aeromechanics, 28(4): 563-576. https://doi.org/10.1134/S0869864321040107