The following is a list of master's dissertations that were recently completed in this Department.

All these dissertations can be found on UPSpace, the University of Pretoria Institutional Repository.

The abstracts of older master's dissertations from the Department are summarized in the following archive pages:

**Johann Möller, "Numerical Optimization of a Finned Cavity Latent Thermal Energy Storage Enclosure for Solar Power Production"**

The utilization of thermal solar energy has increased significantly in recent times. However, due to the daily temporal nature of solar irradiation, which is affected by, for instance, cloud coverage, efficient thermal energy storage (TES) techniques are needed. Latent heat energy storage using phase change materials (PCMs) is a promising technology for concentrated solar power (CSP), but owing to the low thermal conductivity of many PCMs, careful geometric design is required to sustain acceptable energy charging and discharging rates. In this numerical investigation, the heat transfer rate in a latent heat thermal energy storage enclosure containing sodium nitrate PCM and horizontal high-conductivity aluminium plate fins was considered. An enthalpy-porosity technique was used to model the phase change process in a two-dimensional domain while also accounting for buoyancy-driven flow. The influence of the fin pitch on the heat rate during energy discharge, when the PCM solidifies, was studied. The width of the enclosure and the thickness of the fins relative to the enclosure volume were kept constant. Two thermal boundary condition cases were investigated, in which the outer wall was 10 K and 5 K colder than the phase change temperature, respectively. The results revealed that a definite optimum fin pitch exists when the wall temperature is 10 K colder than the phase change temperature.

**Supervisor: Prof. J. Dirker**

**Co-supervisor: Prof. J.P. Meyer**

**Ivan Ackermann, "Towards the discovery of boiler physics in a Kraft recovery boiler using sparse identification of non-linear dynamics"**

A common problem found in boilers across many industries is fouling accumulation on the heat transfer surfaces. Recovery boilers in certain paper and pulp plants are especially vulnerable to ash fouling due to the composition of the fuel they burn. Typically, soot blowers (long tubes through which pressurised steam or air is blown) are employed in boilers to deal with fouling accumulation. The high-pressure steam or air knocks off deposits that have formed on the heat transfer surfaces in the boiler. Recently, it has become necessary to optimise these soot blowing strategies to increase boiler efficiency and longevity. Many methodologies have been implemented that attempt to optimise the soot blowing schedule and the duration of soot blowing in different boilers. Typically, traditional machine learning models, such as artificial neural networks, support vector machines and long short-term memory networks, have been used to predict the fouling levels in boilers as well as the fouling change during soot blowing. However, these models all share the same flaw: a lack of interpretability. They are typically described as 'black-box' models and, while their predictions are generally accurate, they are very difficult to interpret. It is often impossible to determine where the errors of these machine learning models come from.

In this study, a novel approach to machine learning modelling is implemented that attempts not merely to fit a dataset with a model, but rather to extract the underlying physics equations from sensor measurement data obtained from the recovery boiler at SAPPI's Ngodwana mill. A thermodynamic model is first proposed that takes sensor measurements and boiler parameters into account to calculate an estimated level of fouling in the boiler. Previously, no such metric for fouling existed for the boiler, and the thermodynamic model alone is already a positive result, since the fouling in the boiler can now be monitored more accurately and the estimate can be used in different machine learning applications. The thermodynamic model also allows one to understand which physics equations drive the processes within the boiler and which input parameters are important. Once the fouling levels are determined for the boiler over a fixed period, the model is validated using the soot blowing schedule and observing the changes in the estimated fouling levels.

The sparse identification of non-linear dynamics (SINDy) algorithm, developed by Brunton et al. (2016), is then introduced; it is an algorithm that can extract the underlying physics equations from measurement data. The SINDy algorithm is first applied to the Ngodwana dataset with a default setup (algorithm parameters were not specifically selected) to establish the basic model forms typically found in the dataset. The idea was not to test the algorithm on the real dataset yet, but rather to extract a few default models that could be used for setting up verification problems. These verification models are used to test the identifiability of the underlying physics as well as the recovery ability of the SINDy algorithm.
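The core SINDy idea can be sketched in a few lines: build a library of candidate terms, regress a measured time derivative onto it, and repeatedly threshold away small coefficients (sequential thresholded least squares). The sketch below is a minimal illustration on a synthetic first-order decay, not the dissertation's boiler setup; all variable names, the threshold value and the toy system are illustrative.

```python
import numpy as np

# Toy data: trajectory of dx/dt = -2x, standing in for "sensor measurements".
t = np.linspace(0.0, 2.0, 201)
x = 3.0 * np.exp(-2.0 * t)
dxdt = np.gradient(x, t)              # numerical derivative of the data

# Candidate function library with columns [1, x, x^2].
theta = np.column_stack([np.ones_like(x), x, x**2])

# Sequential thresholded least squares (the STLSQ step of SINDy).
xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.1          # thresholding enforces sparsity
    xi[small] = 0.0
    big = ~small
    if big.any():                     # refit only the surviving terms
        xi[big] = np.linalg.lstsq(theta[:, big], dxdt, rcond=None)[0]

print(xi)  # expect approximately [0, -2, 0], i.e. dx/dt ≈ -2x recovered
```

The threshold plays the role of the "optimiser threshold parameter" tuned later in the study: too small and noise terms survive, too large and true physics terms are discarded.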

At first, the basic models' coefficients are sequentially changed to determine how they affect the curve shape of the simulated models. Hence, one can determine in which coefficient ranges the physics are more identifiable. The recovery ability of the SINDy algorithm is subsequently tested by varying the noise and initial conditions of the artificial dataset curves, which were generated by the verification models. It was seen that the SINDy algorithm was sensitive to noise and that noise could cause the algorithm to extract incorrect models. Further, it was seen that the SINDy algorithm would not extract consistent models when the curve shapes of the data being fitted changed, or when the initial values of these curves shifted. In a final verification problem, first- and second-order models were fitted, using the SINDy algorithm, to data that was generated by a second-order model. The lower-order models were naturally not the optimal fit for the data; however, the results were still interpretable and conveyed basic information regarding the soot blowing curves.

The SINDy algorithm was then applied to the Ngodwana measurement dataset once the verification problem results had been interpreted. The optimal polynomial model orders were initially determined using the results from the actual dataset as well as knowledge gained through the verification problems. A baseline algorithm setup was established and the soot blowing sequence models were extracted for a specific soot blower pair in the boiler. It was noted that the model coefficients fluctuated severely from one sequence to the next, even though the sequences came from the same soot blower pair. This was expected, since there was considerable inconsistency in the soot blowing curve shapes and initial values, and noise was inevitably present in the data that could further impact the algorithm's performance. A positive result from the baseline implementation was that interpretable models could already be seen, even if they were not consistent. The model forms obtained were logical and showed that one could potentially extract working physics models from the Ngodwana boiler.

In an attempt to circumvent the data inconsistency, additional measurement inputs were given to the algorithm to increase the possible model complexity. In some cases, the additional inputs improved the model consistency; however, general models could still not be extracted. Some of the additional inputs did improve the prediction accuracy of the extracted models dramatically and showed which sensor measurements were important. This was promising, since accurate predictions in a physics environment as complex as the recovery boiler are difficult to come by. Several additional methods were tested to try to overcome the data inconsistency, including sequentially optimising the threshold parameter of the optimiser algorithm, scaling the fouling factor and normalising the dataset. None of these proved very successful, and they only slightly improved model consistency in some cases. Finally, soot blowing sequences that had the same basic curve shape and initial values were manually selected, to introduce some consistency into the data. The models that were then extracted were slightly more consistent, and it was shown that a general model could be found for the simple models with the fouling factor as the only input. The more complex models, with additional inputs, were however too sensitive to even slight data variation. The experiment also showed that sensor measurement errors were likely one of the causes of the data inconsistency driving the coefficient fluctuation, and it further highlighted the complexity of the dataset being worked with.

Overall, the results of this study were positive, since a measurement of fouling was introduced for the boiler that had not been available before. Further, the SINDy algorithm had no trouble extracting models that were interpretable, and these models were able to predict the soot blowing sequences with a high degree of accuracy despite the complex nature of the data. Lastly, the study has established how one should approach measurement datasets from the boiler in future, especially regarding data processing and sequence selection; this information is invaluable if the extracted models are to be implemented in a more predictive capacity, for which more general models would have to be extracted. Overall, the study has laid a strong foundation for future research on the use of SINDy for the extraction of boiler physics, which could potentially be used in soot blowing optimisation.

**Supervisor: Prof. P.S. Heyns**

**Co-supervisor: Prof. D.N. Wilke**

**L. Lombaard, "Numerical investigation into the effects of multiple bubbles in microchannel flow boiling"**

Recent developments in microelectronics have produced heat fluxes that are beyond the capabilities of current heat exchangers. An increase in computing power coupled with decreasing processor size demands effective thermal management over a smaller contact area. Microchannel heat sinks utilising flow boiling have been shown to produce heat fluxes orders of magnitude higher than those of their macroscale counterparts. Several factors contribute to the high heat transfer capability of these systems, such as exploiting both the sensible and latent heat of the working fluid and the evaporation of the thin liquid film present between the channel walls and the vapour bubbles. Many researchers have investigated a wide range of microchannel geometries, orientations, working fluids and applied heat fluxes. The correlations developed between confined boiling, heat flux and pressure drop are for macroscale flow and are ill-suited to microscale analysis. Heat transfer correlations are generally derived from experimental results conducted over a range of parameters and from evaluating the influence of these varying parameters on the system. Because the scales of these phenomena are extremely small, visualisation and measurement during experimentation are difficult and inaccurate. Numerical modelling through computational fluid dynamics allows researchers to simulate and investigate these small-scale phenomena.

This study focused on numerically modelling the interaction between multiple bubbles during flow boiling of refrigerant R245fa. The two-dimensional numerical domain had a length of 36 mm, consisting of three sections, and a height of 0.5 mm. The first section was adiabatic to allow the patched bubbles to develop in shape before phase change occurred. The middle section had an applied heat flux of 5 kW/m² and was the main focus. The last section was also adiabatic and was used to retain the leading bubbles. An interface-tracking mesh refinement method was used in all the cases. This method refined the liquid-vapour interface and a set distance around it, reducing the computational cost of the simulations.

The benchmark results were recreated with less than 4% of the mesh elements required by the benchmark. A set of three-dimensional simulations was attempted using the same method, but these simulations have not yet been completed. The bubbles were patched into the domain, instead of simulating bubble departure, to allow better control over the bubble positions.

In all the cases, the heat flux improved from the first to the second bubble by at least 25%. A further 20% improvement was observed from the second to the third bubble at the end of the heated section. An increase in phase change was observed as the distance between bubbles was decreased, suggesting better heat transfer. This study illustrated the advantages of flow boiling over single-phase cooling, and the results corresponded to the findings in the literature.

**Supervisor: Dr M. Moghimi Ardekani**

**Co-supervisors: Prof. J.P. Meyer, Prof. P. Valluri**

**Rajesh Singh Padiyaar, "Influential cooling of the free surface by jet impingement of aqueous nanodispersion dominant with hybridized nanoparticles"**

The sustenance of industries depends on implementing energy conservation and sustainability concepts. Industries partially or wholly rely on thermal energy to fabricate the products necessary for modern life. Among thermal techniques, jet impingement cooling is one of the fastest emerging due to its capability of producing very efficient localised heat transfer. Impinging jet cooling technology has been extensively employed in various industrial systems, such as the cooling of gas turbine systems and components, rocket launchers and high-power-density electrical machines, to remove a considerable quantity of heat. However, it is vital to remember that increasing the impingement cooling magnitude is not always the best design decision when trying to control an engine's thermal conditions and responses. The bulk metal temperature and local thermal stresses have the most significant impact on the life expectancy of hot gas path components. A component will fail sooner if the bulk temperature is too high, while cracks will form, spread and eventually cause failure (in cycles) if the thermal stress is too high. A high bulk temperature may be reduced by increasing the cooling flow; however, this might worsen the problem of thermal stress. To eliminate local gradients, surface roughness or the orientation of the impingement jets may be used to increase the heat transfer coefficient around a lower starting magnitude. Hence, the current study investigates free surface cooling by jet impingement of hybrid nanofluids (HNFs) prepared by dispersing MWCNT (5 nm) and Al₂O₃ (<7 nm) in DI water in a 90:10 ratio. The fluid was characterised by TEM and DLS to understand the dispersion and hydrodynamic size. The fluid stability was evaluated and quantified by visual inspection, zeta potential and a transient viscosity approach.
The nanofluid properties, such as viscosity, thermal conductivity and surface tension, were measured at different volume concentrations (0.025, 0.05, 0.1 and 0.15%) and temperatures (10 to 60°C). The heat transfer experiments focused on cooling a targeted copper round surface (D = 42 mm) using a jet nozzle (D_j = 1.65 mm inner diameter) to impinge the HNFs at a constant jet-surface distance of H/D_j = 4, subject to a turbulent flow regime. Numerical studies using the Eulerian-Eulerian approach were also carried out, assuming that both phases are interpenetrating continua, with the k-ω SST model evaluating the changing velocity and turbulent viscosity. Furthermore, the transient cooling rate was investigated for varying particle volume concentrations. The effect of flow rate, advection and surface tension on the thermal performance of the nanofluid in cooling the surface was measured by relating the Nusselt number to the Reynolds number (6000 ≤ Re ≤ 16500), Peclet number (80000 ≤ Pe ≤ 205000) and Weber number (1000 ≤ We ≤ 7000), respectively. The maximum augmentation in Nu is at 0.05% HNF, with a 17% increase compared to DI water. From the CFD study, a maximum improvement of 19.7% in Nu is seen for the 0.15% HNF, and the improvement for the 0.05% particle concentration fluid is 13.7%. The impinging Nu for the 0.05% HNF stands at 180, an 18.7% augmentation compared to DI water and the maximum improvement of all the particle volume concentration fluids. It was concluded that the 0.15% HNF is the worst-performing fluid at the jet-surface domain and degrades the Nu number. However, this trend was not shown in the CFD analysis and could be caused by other factors in the experiment, such as the size and shape of the nanoparticles, mixing methods, surfactant amount and heat loss.
In terms of the transient cooling rate investigated, the best-performing fluid is the 0.15% particle concentration fluid, with relaxation times of 1 second and 1.75 seconds in the CFD and the experiment, respectively. The steady time (the time at which the cooling curve approaches its asymptote) is 23 seconds in the CFD and 33 seconds in the experiment. A correlation for the Nusselt number as a function of Re and volume concentration was also proposed.

**Supervisor: Professor M. Sharifpur**

**Co-supervisor: Professor J.P. Meyer**

**M.A. Swart, "A Computational Fluid Dynamics Approach to Selecting a Concentrating Solar Thermal Plant Site Location Around a Ferromanganese Smelter Based on Heliostat Soiling Potential"**

There is a need to reduce the share of process heat generated by fossil fuels in energy-intensive industries. One proposed solution in the iron and steel sector is to introduce high temperature solar thermal heat energy into a pre-heating stage of the ferromanganese smelting process. In principle, this is an idea that can work, but there are unknowns related to concentrating solar thermal (CST) solar field performance in the vicinity of an industrial smelting operation. This dissertation adopts a two-part approach to addressing the unknowns related to solar field performance.

First, a field experimental campaign is carried out at a ferromanganese smelter in South Africa, where mirror soiling data, dust characterisation data and on-site meteorological data are collected. A clear difference in rainfall was observed between the summer and winter periods, with the dry winter period being when the most mirror soiling was observed. Results from the 8-month mirror soiling measurement campaign showed that the proximity of a mirror sampling set to the smelter dust source is the primary driver of mirror soiling rates, with dust concentrations decreasing further away from the source. The secondary drivers of mirror soiling rates were observed to be wind direction and wind speed, for reflectance sampling locations at roughly equal distances from the smelter dust source. A 13% relative improvement in the mirror reflectance loss rate was observed by simply considering an adjacent mirror sampling location through the dry season.

The second part of this dissertation demonstrates the use of a large-scale atmospheric flow computational fluid dynamics (CFD) modelling approach to selecting an appropriate CST solar field site in the vicinity of an industrial smelter. The *k-ε* turbulence model is adapted, along with other modelling strategies, to be more suitable for modelling neutral atmospheric boundary layer (ABL) flows. The tailored modelling approach is validated against wind tunnel and on-site wind mast data. On-site wind mast data are also used to derive priority wind speed and direction simulation cases. The discrete phase method (DPM) is used to simulate dust dispersion and deposition based on the results of the full-scale neutral ABL CFD simulations for the priority wind cases. The dust deposition results for the individual cases are then combined and weighted using the on-site wind data for a given sampling period, yielding a dust deposition map that shows the deposition hot spots around the smelter for that period. The weighted dust deposition pattern is validated against experimental mirror soiling data for the same period. Some minor discrepancies are observed, but the simulation approach correctly predicted the experimentally observed soiling pattern for the studied period. The CFD-based CST solar field site-selection approach is thus successfully demonstrated and validated as an approach that can be used to identify a candidate solar field site relative to an industrial dust source.

**Supervisor: Professor K.J. Craig**

**Co-supervisors: Q.G. Reynolds, S.A.C. Hockaday**

**Sidhant Kumar, "The coupled effect of surface roughness and nanoparticle size on the heat transfer enhancement of nanofluids for pool boiling"**

In the present work, the combined effect of surface roughness and nanoparticle size, expressed as the surface-particle interaction parameter (SPIP) and defined as the ratio of the surface roughness to the particle size, was investigated numerically by simulating nanofluid/vapour two-phase pool boiling inside an unsteady 2-D symmetric chamber with a heat sink as the heated wall. New correlations for bubble departure diameter and nucleation site density were implemented as a user-defined function to account for the SPIP. The bubble waiting time coefficient was corrected at different nucleation site densities during the validation study, where good agreement was found, and the same bubble waiting time coefficients were then used in the remaining investigations. The effects of nanoparticle concentration, fin aspect ratio, number of fins and different base fluids were also investigated. Aluminium oxide was used as the nanoparticle throughout this study. The results showed that when the SPIP is near 1, the lowest heat flux is achieved, and the nanofluid will thus always show inferior heat transfer performance compared to pure water. As the SPIP increases past 1, a higher heat transfer coefficient and heat flux are achieved, showing an enhancement in heat transfer performance compared to water at appropriate concentrations. When the SPIP is lower than 1, the heat flux is lower than when the SPIP is higher than 1, but still higher than when the SPIP is near 1. It was also found that the heat transfer coefficient increases as the number of fins and the fin aspect ratio increase. There is, however, a deterioration in heat transfer when the nanoparticle concentration increases. It was found that at an SPIP close to 1, water-based nanofluid always shows far better heat transfer capabilities than refrigerant-based nanofluids. However, at an SPIP of 16, R245fa-based nanofluid achieves a higher heat flux than water-based nanofluid at higher wall superheat temperatures.
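The parameter driving these regimes reduces to a single ratio. As a hedged illustration (the helper name and example values below are hypothetical, not from the dissertation):

```python
def spip(surface_roughness_nm: float, particle_diameter_nm: float) -> float:
    """Surface-particle interaction parameter: roughness divided by particle size."""
    return surface_roughness_nm / particle_diameter_nm

# Illustrative values only: 320 nm roughness with 20 nm alumina particles
print(spip(320.0, 20.0))  # → 16.0, the high-SPIP regime discussed above
```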

**Supervisor: Prof. M. Sharifpur**

**Co-supervisor: Prof. J.P. Meyer**

**Luke van Eyk, "A hybrid gearbox condition monitoring methodology using transfer learning calibration"**

Gearboxes are widely utilised as critical components in a large number of engineering applications. Gearboxes are prone to failures, and it is therefore advantageous to utilise a condition-based maintenance (CBM) framework to infer the condition of their components. Various data-driven and physics-driven approaches have been developed for the CBM task. In this work, a hybrid approach is proposed in which a data-driven and a physics-driven approach are combined to infer the condition of the gearbox. The hybrid approach combines the advantages of both approaches and aims to overcome their respective limitations. For the physics-driven approach, a numerical gearbox model is developed. The modelling procedure introduced a novel approach to gear fault modelling that aims to generalise the introduction of gear faults into a simpler, unified framework. For the data-driven approach, a supervised convolutional neural network (CNN) is utilised to extract features from vibration signals and classify them simultaneously. By generating synthetic data from the physical model and feeding this to the CNN, a hybrid model is developed that may offer the potential for fault identification on the real asset. There is, however, no guarantee that the features learned from the synthetic data (the source domain) are transferable to a new domain of signals (the target domain), such as those from the real asset. Two transfer learning methods are utilised to calibrate the hybrid model for a change in input data. To investigate the efficacy of transfer learning calibration, two numerical experiments are constructed in which the hybrid model is trained on perfect synthetic data (the source domain) and applied to noisy synthetic data with different vibration signatures (the target domain). The results show that an uncalibrated hybrid model fails to transfer to the target domain, but that the calibrated methods perform well on this transfer task.
This work highlights the potential of transfer learning-calibrated hybrid methods for condition monitoring of gearboxes.

**Supervisor: Prof. P.S. Heyns**

**Co-supervisor: Dr S. Schmidt**

**Andre Van Zyl, "Soot blowing optimization on thermal heat transfer surfaces in a black liquor fired kraft recovery boiler through interpretative machine learning"**

Kraft recovery boilers are used globally in the paper mill industry to produce steam. This steam is used in the digesters to separate the paper fibres from the inorganic matter, and some of the steam is used for power generation. The inorganic matter is then mixed with “white liquor” (fuel consisting of low-concentration inorganic matter) to form “black liquor”. Black liquor is vaporized and burned in the boiler as the heat source to produce steam. The high concentration of inorganic matter in the black liquor causes higher fouling accumulation rates compared to pulverized fuel boilers, as ash fouling is encouraged by the inorganics. Moreover, the development of fouling leads to higher levels of non-linearity in its characteristics. The major problem with fouling is that it forms an insulating layer on the heat transfer surfaces, seriously reducing the efficiency. Since the digesters and the turbines require steam at specific temperatures and pressures, higher flue gas temperatures, able to compensate for the accumulation of fouling, are then required, and higher flue gas temperatures are reached by using more fuel. This solution may work for a while, but eventually one of two things will happen: either the fouling becomes so extensive that the airways of the flue gas get completely blocked, known as “plugging”, or the limitations of the boiler are met and damage could occur. Thus, this is not a viable solution.

Modern kraft recovery boilers employ a mitigation process known as soot blowing to reduce the effect of fouling accumulation. Soot blowing is a process through which fouling is removed by steam: the steam is used to blast off the accumulated deposits, which increases the heat transfer between the flue gas and the water/steam in the tubes. Soot blowing itself also poses some problems. First, the process uses steam, which needs to be generated; the more steam used in soot blowing, the less efficient the process becomes. Secondly, soot blowing is a corrosive process and erodes the heat transfer surface if the surface is exposed to soot blowing for extended periods. Modern soot blowing operations entail pre-scheduling the soot blowers to fire, based on visual inspection when the boiler is shut down for maintenance. Although this method of soot blowing seems to work well, it lacks data support, even though such data is already available and being used to monitor the conditions in the boiler. Since the operation lacks data support, the boiler still suffers from plugging, which can only be removed by shutting it down. The kraft recovery boiler at Sappi’s Ngodwana mill is shut down at an expense of R20 million per day, and a shutdown typically lasts 3 days. Thus, reducing the number of annual shutdowns by only one might save the company approximately R60 million per annum.

Soot blowing optimization is a field of study in which one aims to optimize the soot blowing process by incorporating data support. Modern soot blowing optimization algorithms make use of machine learning models that follow a “black box” approach. This results in a model able to predict the state of fouling in the boiler accurately. However, in some cases, the predictions made by the model are faulty. This might be due to a sample lying outside the parameter boundaries the model was trained on; however, since these models follow an uninterpretable approach, it is difficult to conclude why a model is performing as it is and why it sometimes makes errors. Hence the need for an interpretable model.

All machine learning models require a dataset for training, and in order to build a labeled dataset for a system such as a boiler, a physics model able to capture the underlying thermodynamics is proposed. The thermodynamic model closely follows the approach suggested by Shi et al. (2019), with slight variations to fit the boiler under consideration. This model was built to calculate the state of fouling, referred to as the Fouling Factor (FF). The FF is simply the ratio of the overall heat transfer coefficient when the boiler is clean to the actual overall heat transfer coefficient at any state of operation. With a process as complex as a boiler, it is very difficult to build a precise physics model, and thus some assumptions need to be made. These assumptions, however, still result in a model able to estimate the level of fouling accurately enough to build a soot blowing optimization algorithm.
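The FF definition above reduces to a one-line ratio. The sketch below is only an illustration of that definition; the helper name and the example coefficient values are hypothetical, not the dissertation's thermodynamic model:

```python
def fouling_factor(u_clean: float, u_actual: float) -> float:
    """FF = U_clean / U_actual: equals 1 for a clean boiler and
    rises above 1 as deposits degrade the actual heat transfer."""
    return u_clean / u_actual

# Illustrative values only: a fouled surface transferring heat at 80 W/m²K
# against a clean-boiler coefficient of 120 W/m²K
print(fouling_factor(120.0, 80.0))  # → 1.5
```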

A Sparse Partial Least Squares (SPLS) model is proposed as the basis for a soot blowing optimization algorithm, as this model is interpretable and gives valuable insight into how the FF is estimated. The model’s predecessor, PLS regression, and its shortcomings are discussed, along with the reasons why an SPLS model is suited to this study. This is followed by a discussion of the SPLS model and how its algorithm allows for interpretability.

Several SPLS models are built to perform soot blowing optimization in this study. The first is an SPLS model able to predict the FF from DCS data for a single sample/measurement, since the thermodynamic model relies on lengthy calculations to find the FF. This is because some of the required properties can only be found through interpolation of tables and reading off graphs, which makes the calculation computationally expensive, as it cannot be vectorized over all the samples. The SPLS model predicting a single sample’s FF gives insight into which of the prediction variables are the most informative predictors. It is shown that the flue gas temperatures and the differences in the steam temperatures appear to be the most informative. The model also shows that the amount of information captured by the predictors changes during the operating cycle of the boiler. For instance, when the boiler is clean, the differences in steam temperatures are less informative than when the boiler has significant levels of deposition. Furthermore, since fouling is an accumulation process, the current state of fouling depends on the previous states of fouling. To add this information, the model is supplied with the previous three FF measurements. The added temporal information drastically improved the predictive power and robustness of the model, and the model is used to build a labeled dataset for the optimization algorithm.

The main pillar of the optimization algorithm is a model that can accurately predict the fouling development for different scenarios, two in particular: 1) the FF development if soot blower X is activated, and how this compares to all other soot blowers should they be activated, and 2) the FF development if no soot blowing takes place for the same duration. Three SPLS models were built to predict these FF developments. The first predicts the FF development when a soot blower is activated, the second predicts the development after soot blowing, and the third predicts the development should no soot blowing take place. All three models perform well and can accurately predict how the FF would change for these scenarios.

Up to this point, the models predict the FF change for a soot blowing cycle, which is adequate to optimize the timing of soot blowing. To optimize the duration of soot blowing, however, an additional model is required to predict how the FF would develop if the soot blowing duration were increased. The reason is that SPLS models have a fixed input-output structure: if the model was trained to take X predictors and produce Y responses, it can only be used as such. Thus, if the model is trained to predict the FF development for 3 minutes of soot blowing, and a prediction for 4 minutes is required, an additional model is needed. A Bayes regression model was therefore proposed and fitted on the predictions made by the SPLS models. The SPLS models predict the FF during soot blowing, and the Bayes regression model is used for extrapolation. A kernel function with a model order of 4 (highest polynomial term x^3) was found to be optimal.
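A minimal sketch of such a polynomial-basis Bayesian (MAP) regression, assuming a model order of 4 (terms up to x^3); the function names, prior settings and data below are illustrative, not the dissertation's actual implementation:

```python
import numpy as np

def bayes_poly_fit(x, y, order=4, prior_prec=1e-6, noise_prec=1.0):
    """MAP weights for Bayesian linear regression on a polynomial basis
    of the stated model order (order 4 -> terms 1, x, x^2, x^3)."""
    X = np.vander(x, N=order, increasing=True)        # design matrix
    A = prior_prec * np.eye(order) + noise_prec * X.T @ X
    return np.linalg.solve(A, noise_prec * X.T @ y)   # posterior mean

def bayes_poly_predict(w, x):
    return np.vander(x, N=len(w), increasing=True) @ w

# Fit on hypothetical FF-development samples, then extrapolate past them.
t = np.linspace(0.0, 3.0, 20)
ff = 1.0 + 0.2 * t - 0.05 * t**3
w = bayes_poly_fit(t, ff)
ff_at_4min = bayes_poly_predict(w, np.array([4.0]))[0]
```

Because the fitted curve is a plain cubic in time, it can be evaluated past the training window, which is exactly the extrapolation role the Bayes regression plays here.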

Once the FF could be predicted if the duration was increased, the SPLS models were combined with the Bayes regression model to form a single FF predictive model. This model can predict the FF for a complete soot blowing sequence, and in this study, it is shown that the model is robust and can accurately predict the FF.

The last part of the study presents the development of the optimization algorithm and how it should be implemented and used. Some of the soot blowers show little to no effect on the FF development, and it was decided that these soot blowers should be kept on a pre-scheduled cycle until sufficient data is available. The optimization algorithm also has to perform within constraints provided by the plant engineers, and these constraints are listed. Finally, the algorithm is validated using simulation. In this simulation, the performance of one soot blowing pair is first tested, to evaluate a soot blower's individual performance. This is followed by two soot blowers, to evaluate a soot blower's performance in the presence of another, and finally all the informative soot blowing pairs are combined to find the overall performance of the model. It was found that in most cases the model performs well, with an overall accuracy of approximately 80%.

**Supervisor: Prof. P.S. Heyns**

**Co-supervisor: Prof. D.N. Wilke **

**Ankit Sharma, "Investigation of a new UAV configuration with non-elliptic lift distribution" **

The development of an alternative wing-body-tail configuration is investigated using the AREND UAV as a baseline. The proposed configuration, inspired by bird wings, assumes that all stability requirements are achieved through the main wing and that there is no need for an empennage. The potential gain is not only a reduction in structural weight, but also a reduction in drag that leads to an increase in fuel efficiency. This study uses the AREND wing with elliptic loading and compares its characteristics to a new wing design with a non-elliptic lift distribution (NELD), using the method developed by Prandtl in 1933. Both the AREND and NELD wings are analysed using computational fluid dynamics to investigate the aerodynamic and flight mechanics benefits of the NELD configuration. The comparison shows that the NELD configuration increases the aerodynamic efficiency by 9.87% due to the higher lift-to-drag ratio and the removal of the empennage. Further benefits include weight reduction and a wider CG range. A smaller wake region was also found due to the wing being fully blended with the fuselage. Upwash was seen to occur at 2y/b = 0.85, indicating the presence of induced thrust as predicted and shown by Prandtl and the Horten brothers. The change in lift distribution over the wing due to sideslip also shows that the NELD configuration exhibits proverse yaw and can therefore indeed perform a coordinated turn.

**Supervisor: Dr. L. Smith**

**Co-supervisor: Dr. M.M. Lone **

**Lionel Dongmo Fouellefack, "Development of a Novel Supervisory Controller on a Parallel-Hybrid Powertrain for Small Unmanned Aerial Systems"**

A Hybrid-Electric Unmanned Aerial Vehicle (HE-UAV) model has been developed to address the low endurance of small electric UAVs. Electric-powered UAVs are not capable of achieving high range and endurance due to the low energy density of their batteries. Alternatively, conventional UAVs (cUAVs), which use at least one fuel-based power source with an internal combustion engine (ICE), produce more noise and thermal signature, which is undesirable, especially if the air vehicle is required to patrol at low altitudes and remain undetected by ground patrols. This work investigates the impact of implementing hybrid propulsion technology to improve the endurance of the UAV (based on a 13.6 kg UAV). This was done by creating a HE-UAV model to analyze the fuel consumption of the UAV for given mission profiles, which was then compared to that of a cUAV. Although this UAV size was used as a reference case study, the model can potentially be used to analyze the fuel consumption of any fixed-wing UAV of similar take-off weight. The subsystems of the hybrid powertrain, consisting of the ICE, electric motor, battery, DC-DC converter, fuel system and propeller system, together with the aerodynamic system of the UAV, were modeled in a Matlab-Simulink environment using Simulink built-in functionalities. Additionally, a rule-based supervisory control strategy was implemented to characterize the power split between the two propulsive components (ICE and electric motor) during the UAV mission. Finally, an electrification scheme was implemented to account for the hybridization of the UAV during certain stages of flight. The electrification scheme was then varied by changing the duration of these flight stages, and comparisons were made between the UAV in electric mode and the cUAV on fuel consumption during each mission. Based on simulation, it was observed that a HE-UAV could achieve a fuel saving of 33% compared to the cUAV.
A validation was also done using the Aerosonde UAV as a case study, for which the model predicted an improved fuel consumption of 9.5%.

**Supervisor: Dr. L. Smith**

**Co-supervisor: ****Dr. M. Kruger **

**Ngonidzashe Mutangara, "Numerical implementation of the power balance method for aerodynamic performance assessment"**

**Supervisor: Dr Lelanie Smith**

**Co-supervisors: Prof Ken Craig & Prof D Sanders**

The numerical implementation of the power balance method within the Computational Fluid Dynamics code STAR-CCM+ was investigated to ascertain the method's viability as a means to analyse highly integrated airframe-propulsor configurations. For configurations of this nature, conventional approaches to defining thrust and drag become difficult, if not impossible, due to the complex physical coupling of the airframe and propulsors. The power balance method attempts to address this by recasting the problem from an energy-based perspective, setting itself apart from the more traditional force-momentum techniques. This approach allows regions within a flowfield to be classified according to their relative energy contributions, whether energy supply or consumption. To that end, aircraft performance can be quantified via an assessment of power without having to resort to a formal definition of thrust and drag. By doing so, the power requirements of integrated airframes can be compared to those of conventional podded variants.

For unpowered geometries, verification of the power balance method was done by comparing its solutions against commonly used performance bookkeeping schemes, i.e. near- and far-field momentum methods. The configurations analysed covered flat plates, a NACA 0012 airfoil, as well as the Myring and F-57 Low Drag bodies. For these cases, the power balance method performed well, giving solutions agreeing within 2% of the alternative techniques.

Later, for the powered studies, the unpowered cases were modified to include propulsion, modelled using actuator disk theory. Here, the power requirements of podded and boundary layer ingesting propulsion configurations were compared, to analyse the potential power savings obtainable from utilising boundary layer ingestion (BLI) as opposed to conventional podded propulsors. This potential power saving was quantified using the power saving coefficient. The case studies showed that boundary layer ingestion could provide power reduction benefits in the range of 4% to 11%. This demonstrated the potential benefits of BLI, albeit modelled using simplified propulsor models. Considering this, it is useful to highlight that the studies only provide a theoretical estimate of the maximum possible savings, neglecting real-world elements such as three-dimensional viscous flow phenomena and the effects of ingesting boundary layer flow on the propulsion system.
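The power saving coefficient referred to above is commonly defined in the energy-based bookkeeping literature as the fractional reduction in required flow power relative to the non-BLI baseline; the dissertation's exact bookkeeping may differ:

```latex
\mathrm{PSC} = \frac{P'_{K} - P_{K}}{P'_{K}}
```

where \(P'_{K}\) is the mechanical flow power required by the conventional podded configuration and \(P_{K}\) that required by the BLI configuration.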

**Jan Swart, "Subgradient Optimization of Variable Hydraulic Network Operation"**

**Supervisor: Prof Schalk Kok**

**Co-supervisor: Prof. Nico Wilke**

Pumping accounts for between 2% and 3% of global energy consumption, representing large potential energy cost savings. This work aims to optimize the operation of variable hydraulic networks, specifically water distribution networks, by optimizing pump scheduling to exploit variations in energy tariffs and ensure good pump efficiency. This enables network operators to save money without altering the network characteristics, which would require large capital investment. The optimization problem is often tackled with heuristic optimization methods coupled with a hydraulic solver such as the commonly used Epanet software. Another approach that has gained popularity is mixed integer programming, which approximates the hydraulic equations by linear or non-linear approximations using continuous variable formulations. This approach is not compatible with hydraulic solvers and therefore disallows the use of Epanet. Instead of duplicating current research efforts to solve water distribution network operation optimization, the work presented here focuses on less well-known formulations and algorithms. The main aim is to identify promising alternatives that might mature in future to provide competitive performance. Hence this work is intended as a proof of concept for new methods, rather than a refinement of methods that are known to work. This work employs subgradient optimization algorithms to find optimal pump schedules. This class of algorithm, specifically steepest descent with a constant step size, has the advantage of using gradient information to find the minimum. Since gradient-based optimization algorithms scale well with problem dimensionality, it is worthwhile to investigate whether the subgradient optimization algorithm is suitable for solving the variable hydraulic network operation problem. The implementation uses Epanet to formulate the cost and constraint functions.
The required analytical sensitivities were derived for the design functions (cost function and constraint function). These sensitivities were extracted from Epanet along with the zeroth order information of the cost- and constraint function values. The analytical gradients compared favourably to finite difference gradients. The algorithms were tested on small two-dimensional test networks, which allowed for the functions and solutions to be plotted in order to visually confirm the algorithm behaviour. They were also tested on a three-dimensional network to demonstrate scalability of the implementation. From these plots and examination of the solutions, the algorithms were seen to be effective and efficient in finding the theoretical minimum of the test problems from various starting points. Since the subgradient optimization method shows promise, the hydraulic solver community is urged to add sensitivity information to their solver outputs.
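The constant-step-size subgradient method described above can be sketched in a few lines; the toy nonsmooth cost below is a stand-in for the Epanet-derived cost and constraint functions:

```python
def subgradient_descent(f, subgrad, x0, step=0.05, iters=200):
    """Constant-step-size subgradient method; the best iterate is
    tracked because subgradient steps do not decrease f monotonically."""
    x, best_x, best_f = list(x0), list(x0), f(x0)
    for _ in range(iters):
        g = subgrad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
        if f(x) < best_f:
            best_f, best_x = f(x), list(x)
    return best_x, best_f

# Toy nonsmooth cost standing in for pump-schedule cost plus constraint
# penalties: f(x) = |x0 - 1| + |x1 + 2|, minimised at (1, -2).
f = lambda x: abs(x[0] - 1.0) + abs(x[1] + 2.0)
sg = lambda x: [(x[0] > 1.0) - (x[0] < 1.0), (x[1] > -2.0) - (x[1] < -2.0)]
x_best, f_best = subgradient_descent(f, sg, [0.0, 0.0])
```

With a constant step size the iterates oscillate around the minimizer within a band proportional to the step, which is why the best iterate, not the last, is returned.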

**M Vermaak, "Experimental investigation of microchannel flow boiling heat transfer with non-uniform circumferential heat flux at various gravitational orientations"**

**Supervisor: Prof Jaco Dirker **

**Co-supervisor: Prof Josua P. Meyer**

**Co-supervisor: Prof Khellil Sefiane**

Flow boiling of Perfluorohexane (FC-72) in rectangular microchannels with one-sided uniform heating was studied experimentally at different rotational orientations, ranging from 0° (bottom-heating) to 90° (side-heating) in increments of 30°, as well as 180° (top-heating).

The channels had a relatively high aspect ratio of 10 (5 mm x 0.5 mm), a hydraulic diameter of 909 μm and a heated length of approximately 78 mm. Mass fluxes of 10 kg/m2s, 20 kg/m2s and 40 kg/m2s were considered at several heat flux values at a saturation temperature of 56°C. For these conditions, in-channel flow visualisations and heated surface temperature distributions were recorded; fluid temperature and pressure readings were taken, and heat transfer coefficients were determined from subcooled conditions, through the onset of nucleate boiling, to near-dryout conditions within the channel.
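The quoted hydraulic diameter follows directly from the channel cross-section, as a quick check shows:

```python
# Quick check of the quoted hydraulic diameter for the 5 mm x 0.5 mm channel.
w, h = 5.0e-3, 0.5e-3                # channel width and height in metres
D_h = 4 * (w * h) / (2 * (w + h))    # 4 * area / wetted perimeter
# D_h is about 9.09e-4 m, i.e. roughly 909 micrometres
```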

The 0° orientation (bottom-heating) produced the best results: it had the highest heat transfer coefficient at all mass flux and heat flux combinations tested, and the lowest cross-sectional temperature variation of all rotations, minimizing the probability of warping electronic components. At 0° the flow was nucleate boiling dominated, resulting in improved heat transfer performance with increasing heat flux. The 180° orientation experienced heat transfer coefficients greater than those at 30°, 60° and 90° at various vapour qualities up to 0.3, where the vapour slug became confined and the heat transfer coefficient decreased rapidly. The 90° orientation had the lowest heat transfer coefficients in most mass flux and heat flux test cases. The 0° orientation had the highest pressure drop, while 180° had the lowest.

**D.G. Marx , "Towards a hybrid approach for diagnostics and prognostics of planetary gearboxes."**

**Supervisor: Prof P.S. Heyns**

**Co-supervisor: Dr S. Schmidt**

The reliable operation of planetary gearboxes is critical for the sustained operation of many machines such as wind turbines and helicopter transmissions. Hybrid methods that make use of the respective advantages of physics-based and data-driven models can be valuable in addressing the unique challenges associated with the condition monitoring of planetary gearboxes.

In this dissertation, a hybrid framework for diagnostics and prognostics of planetary gearboxes is proposed. The proposed framework aims to diagnose and predict the root crack length in a planet gear tooth from accelerometer measurements. Physics-based and data-driven models are combined to exploit their respective advantages, and it is assumed that no failure data is available for training these models. Components required for the implementation of the proposed framework are studied separately and challenges associated with each component are discussed.

The proposed hybrid framework comprises a health state estimation and health state prediction part. In the health state estimation part of the proposed framework, the crack length is diagnosed from the measured vibration response. To do this, the following model components are implemented: A first finite element model is used to simulate the crack growth path in the planet gear tooth. Thereafter, a second finite element model is used to establish a relationship between the gearbox time varying mesh stiffness, and the crack length in the planet gear tooth. A lumped mass model is then used to model the vibration response of the gearbox housing subject to the gearbox time varying mesh stiffness excitation. The measurements from an accelerometer mounted on the gearbox housing are processed by computing the synchronous average. Finally, these model components are combined with an additional data-driven model for diagnosing the crack length from the measured vibration response through the solution of an inverse problem.

After the crack length is diagnosed through the health state estimation model, the Paris crack propagation law and Bayesian state estimation techniques are used to predict the remaining useful life of the gearbox.
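As a sketch of how the Paris law yields a remaining-life estimate, the law da/dN = C (ΔK)^m can be integrated numerically from the diagnosed crack length to a critical length; the constants and stress-intensity model below are hypothetical, not the dissertation's values:

```python
import math

def paris_cycles(a0, ac, C, m, dK, steps=10000):
    """Numerically integrate the Paris law da/dN = C * (dK(a))**m from the
    diagnosed crack length a0 to a critical length ac, returning cycles."""
    da = (ac - a0) / steps
    N, a = 0.0, a0
    for _ in range(steps):
        N += da / (C * dK(a) ** m)   # cycles spent growing this increment
        a += da
    return N

# Hypothetical stress-intensity range for a small crack, in MPa*sqrt(m).
dK = lambda a: 80.0 * math.sqrt(math.pi * a)
N_remaining = paris_cycles(a0=1e-3, ac=5e-3, C=1e-11, m=3.0, dK=dK)
```

In the framework described above, Bayesian state estimation would update the uncertain parameters of such a model as new crack-length diagnoses arrive.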

To validate the proposed hybrid framework, an experimental setup is developed. The experimental setup allows for the measurement of the vibration response of a planetary gearbox with different tooth root crack lengths in the planet gear. However, challenges in reliably detecting the damage in the experimental setup lead to the use of simulated data for studying the respective components of the hybrid method.

Studies conducted using simulated data highlighted interesting challenges that need to be overcome before a hybrid diagnostics and prognostics framework for planetary gearboxes can be applied in practice.

**RPJ Ludeke, "Towards a Deep Reinforcement Learning based approach for real-time decision making and resource allocation for Prognostics and Health Management applications"**

**Supervisor: Prof. P.S. Heyns**

Industrial operational environments are stochastic and can have complex system dynamics which introduce multiple levels of uncertainty. This uncertainty leads to sub-optimal decision making and resource allocation. Digitalisation and automation of production equipment and the maintenance environment enable predictive maintenance, meaning that equipment can be stopped for maintenance at the optimal time. Resource constraints in maintenance capacity could however result in further undesired downtime if maintenance cannot be performed when scheduled.

In this article the applicability of using a Multi-Agent Deep Reinforcement Learning based approach for decision making is investigated to determine the optimal maintenance scheduling policy in a fleet of assets where there are maintenance resource constraints. By considering the underlying system dynamics of maintenance capacity, as well as the health state of individual assets, a near-optimal decision making policy is found that increases equipment availability while also maximising maintenance capacity.

The implemented solution is compared to a run-to-failure corrective maintenance strategy, a constant interval preventive maintenance strategy and a condition based predictive maintenance strategy. The proposed approach outperformed traditional maintenance strategies across several asset and operational maintenance performance metrics. It is concluded that Deep Reinforcement Learning based decision making for asset health management and resource allocation is more effective than human based decision making.

**B.D. Collins, "Insights into the use of Linear Regression Techniques in Response Reconstruction"**

**Supervisor: Prof. P.S. Heyns **

**Co-supervisor: Prof. S. Kok **

Response reconstruction is used to obtain accurate replication of vehicle structural responses of field recorded measurements in a laboratory environment, a crucial step in the process of Accelerated Destructive Testing (ADT). Response reconstruction is cast as an inverse problem whereby the desired input is inferred using the measured outputs of a system. ADT typically involves large shock loadings resulting in a nonlinear response of the structure. A promising linear regression technique known as Spanning Basis Transformation Regression (SBTR) in conjunction with non-overlapping windows casts the low dimensional nonlinear problem as a high dimensional linear problem. However, it is determined that the original implementation of SBTR struggles to invert a broader class of sensor configurations. A new windowing method called AntiDiagonal Averaging (ADA) is developed to overcome the shortcomings of the SBTR implementation. ADA introduces overlaps within the predicted time signal windows and averages them. The newly proposed method is tested on a numerical quarter car model and is shown to successfully invert a broader range of sensor configurations as well as being capable of describing nonlinearities in the system.

**R. Balshaw, "Latent analysis of unsupervised latent variable models in fault diagnostics of rotating machinery under stationary and time-varying operating conditions"**

**Supervisor: Prof. P.S. Heyns**

**Co-supervisor: Prof. D.N. Wilke**

**Co-supervisor: Dr. S. Schmidt **

Vibration-based condition monitoring is a crucial element for asset longevity and for avoiding unexpected financial losses. Currently, data-driven methodologies often require significant investments in data acquisition and a large amount of operational data for both healthy and unhealthy cases. The acquisition of unhealthy fault data is often financially infeasible, with the result that most methods detailed in the literature are not suitable for critical industrial applications.

In this work, unsupervised latent variable models negate the requirement for asset fault data. These models operate by learning a representation of healthy data and utilise health indicators to track deviation from this representation. A variety of latent variable models are compared, namely Principal Component Analysis, Variational Auto-Encoders and Generative Adversarial Network-based methods. This research investigated the relationship between time-series data and latent variable model design, examined the influence of model complexity on performance across different datasets, and shows that the latent manifold, when untangled and traversed in a sensible manner, is indicative of damage.

Three latent health indicators are proposed in this work and utilised in conjunction with a proposed temporal preservation approach. The performance is compared over the different models. It was found that these latent health indicators can augment standard health indicators and benefit model performance. This allows one to compare the performance of different latent variable models, an approach that has not been realised in previous work as the interpretation of the latent manifold and the manifold response to anomalous instances had not been explored. If all aspects of a latent variable model are systematically investigated and compared, different models can be analysed on a consistent platform.

In the model analysis step, a latent variable model is used to evaluate the available data such that the health indicators used to infer the health state of an asset are available for analysis and comparison. The datasets investigated in this work consist of stationary and time-varying operating conditions. The objective was to determine whether deep learning is on par with state-of-the-art signal processing techniques. The results showed that damage is detectable in both the input space and the latent space and can be trended to identify clear condition deviance points. This highlights that both spaces are indicative of damage when analysed in a sensible manner. A key takeaway from this work is that for data containing impulsive components that manifest naturally and not due to the presence of a fault, the anomaly detection procedure may be limited by inherent Gaussianity assumptions made in the model formulations.

This work illustrates how the latent manifold is useful for the detection of anomalous instances, how one must consider a variety of latent-variable model types and how subtle changes to data processing can benefit model performance analysis substantially. For vibration-based condition monitoring, latent variable models offer significant improvements in fault diagnostics and reduce the requirement for expert knowledge. This can ultimately improve asset longevity and the investment required from businesses in asset maintenance.

**Z. Dlamini, "Experimental Investigation of Film-Cooling Hole Performance"**

**Supervisor: Dr. G. Mahmood**

Film cooling has, over the years, allowed modern gas turbines to operate at temperatures far exceeding the limits of the material properties of the turbine components, resulting in increased power output and efficiency. However, more than 40 years of research has not yet achieved the goal of ideal cooling films, such as those from two-dimensional (2D) continuous slots.

This study employed a curvature in the forward diffuser section of the film cooling hole; these holes are referred to as cases 1 to 4 in this study. This was expected to improve the performance of the hole. The performance parameters investigated and reported were the discharge coefficient of the holes, the flowfield downstream of the hole exit trailing edge, the temperature field downstream of the hole exit trailing edge and the effectiveness.

The effects of pressure ratio, mainstream crossflow, compound angle, hole geometry, manufacturing method, 3D print build orientation, and inclination angle, on the discharge coefficient were investigated.

The effects of blowing ratio, hole geometry, compound angle, turbulence intensity and downstream distance from hole exit trailing edge, on the flowfield, temperature field and effectiveness were also investigated.

The hole geometries had a diameter of 8 mm and a length-to-diameter ratio of 7.5. The compound angle was varied from 0° to 60°. The inclination angles of the holes were either 30° or 40°.

The effect of the compound angle, manufacturing method and 3D print build orientation on the discharge coefficient was found to be negligible, but these parameters had a significant effect on the adiabatic film cooling effectiveness.

Cases 1 to 4 holes showed higher discharge coefficient values as compared to the cylindrical and the laidback fan-shaped holes. This was a result of the development of the flow inside the hole and the resulting exit coolant jet velocity profile and its interaction with the mainstream crossflow.

From the flow structure and temperature field measurements it was determined that employing the curvature and the lateral expansion of the cases 1 to 4 holes decreases the height and trajectory of the jet on exit. The decreased height is due to the decreased vertical momentum content of the coolant jet. The decreased trajectory positions the longitudinal vortices closer to the wall which results in better lateral spread of the coolant.

From the effectiveness measurements it was found that increasing the compound angle decreases the laterally averaged effectiveness, and that the laterally averaged effectiveness also decreases as the blowing ratio is increased.

The case 2 hole geometry resulted in a low jet height in the mainstream, meaning the jet stayed closer to the surface that requires cooling. It also produced a relatively good lateral spread of the coolant on the surface, and the highest laterally averaged effectiveness at most of the compound angles and blowing ratios tested.

**M.K. Seal, "The prediction of condensation flow patterns by using artificial intelligence techniques"**

**Supervisor: Dr. M. Mehrabi**

**Co-supervisor: Prof J.P. Meyer**

Multiphase flow provides a solution to the high heat flux and precision demands of modern devices. An application commonly used in industry is the condensation of refrigerants in inclined tubes, where the prediction of multiphase flow patterns is fundamental to successful design and subsequent optimization, given that the performance of such thermo-hydraulic systems is strongly dependent on the local flow patterns (or flow regimes), which affect heat transfer efficiency and pressure gradients.

In this study, it is shown that with the use of visualization data and artificial neural networks (ANN), a machine can learn and subsequently classify the separate flow patterns of condensation of R-134a refrigerant in inclined smooth tubes with more than 98% accuracy. The study considers ten classes of flow pattern images acquired from previous experimental works, covering a wide range of flow conditions and the full range of tube inclination angles. Two types of classifiers are considered, namely multilayer perceptrons (MLP) and convolutional neural networks (CNN). Although not the focus of this study, the use of principal component analysis (PCA) allows feature dimensionality reduction, dataset visualization, and decreased computational cost when used together with MLP neural networks. The superior two-dimensional spatial learning capability of convolutional neural networks allows improved image classification and generalization performance across all ten flow pattern classes. In either case, it is shown that the prediction can be performed fast enough to enable real-time execution and analysis in two-phase flow systems. The analysis sequence leads to the development of an online tool for the classification of in-tube flow patterns in inclined tubes, with the goal that the features learned through visualization will be applicable to a broad range of flow conditions, fluids, tube geometries and orientations, and will even generalize well to predicting adiabatic and boiling two-phase flow patterns. The method is validated with the prediction of flow pattern images found in the existing literature.
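The PCA dimensionality-reduction step can be sketched with a plain eigendecomposition of the covariance matrix; the random data below is a stand-in for the flattened flow-pattern images:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project samples onto the top principal components of their
    covariance matrix to reduce feature dimensionality."""
    Xc = X - X.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(eigval)[::-1][:n_components]  # largest variance first
    return Xc @ eigvec[:, order]

# Stand-in data: 100 flattened "images" with 20 features reduced to 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
Z = pca_reduce(X, 2)
```

Keeping only the leading components is what makes the subsequent MLP training cheaper and the dataset easy to visualize in two dimensions.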

**C. Roosendaal, "Analysis of a novel low-cost solar concentrator using lunar flux mapping techniques and ray-tracing models"**

**Supervisor: Dr WG le Roux**

**Co-supervisor: Prof JP Meyer**

Concentrated solar power is a growing but expensive alternative energy resource. One of the most common issues faced when it comes to solar dish design is the complex trade-off between cost and optical quality. A novel solar dish reflector setup that makes use of low-cost, commercial television satellite dishes to support aluminised plastic membranes in a multifaceted vacuum-membrane concentrator was investigated in this work. The design aims to reduce costs while maintaining high optical accuracy with the added benefit of optical adjustability. The flux distribution of the novel solar dish reflector setup had to be determined to make recommendations on the feasibility of the design. This research presents a method to determine the expected solar flux distribution from lunar tests using a Canon EOS 700D camera.

Experimental tests and different pollution treatment methods were conducted using lunar flux mapping techniques. A numerical model of the experimental setup, based on photogrammetry results of the membrane surface, was also developed in SolTrace to ascertain the sources of error and allow for further design improvements. Preliminary testing proved that JPEG image formats yielded insufficient accuracy in capturing the incident flux when compared to RAW images. Based on the flux ratio maps, the intercept factor for a large multifaceted dish setup was calculated as 88.6% for an aperture size of 0.25 m × 0.25 m, with a maximum solar flux of 1 395 kW/m2 for a 1 000 W/m2 test case.

The numerical model showed that the experimental setup had a total optical error of 17.5 mrad with a comparable intercept factor of 88.8%, which was mainly due to facet misalignment and not reflector surface inaccuracies. The results suggest that large performance improvements can be gained through a more accurate aiming strategy. It is recommended that a more durable membrane material be used, along with an automated vacuum control system that can account for membrane leaks and temperature swings during operation. Correlations between the optical behaviour and geometrical features of elliptically supported facets can be investigated further to develop a design tool that aids the design and development of high-performance systems. Overall, the setup proved to be a viable alternative for point-focus solar concentrators, reducing costs while maintaining optical accuracy. The lunar flux mapping techniques proved effective and safe, relying only on the incident light from the moon and standard camera equipment.

**K.A Goddard, "Investigation of wind patterns on Marion Island using Computational Fluid Dynamics and measured data"**

**Supervisor: Prof K.J. Craig**

**Co-supervisor: Mrs. J. Schoombie**

Countless research investigations, both ecological and geological, have taken place on Marion Island (MI), and their conclusions have necessarily neglected the impact of wind on the systems under study. Since only the dominant direction of the general atmospheric wind is known from weather and satellite data, little can be said about local wind conditions at ground level. Therefore, a baseline Computational Fluid Dynamics (CFD) model has been developed for simulating wind patterns over Marion and Prince Edward Islands, a South African territory lying in the subantarctic Indian Ocean.

A review of the current state of the art of Computational Wind Engineering (CWE) revealed that large-scale Atmospheric Boundary Layer (ABL) simulations have been performed before with varying degrees of success. With ANSYS Fluent chosen as the numerical solver, the Reynolds-Averaged Navier-Stokes (RANS) equations were set up to simulate a total of 16 wind flow headings approaching MI, one from each of the 16 compass directions. The standard k-ε turbulence closure scheme with modified constants was used to approximate the atmospheric turbulence numerically. A strategy was devised for generating a reusable mesh system to simulate multiple climatic conditions and wind directions around MI.

In conjunction with the computational simulations, a wind measurement campaign was executed to install 17 wind data logging stations at key locations around MI. Raw data outputs from the stations were cleaned and converted into an easily accessible MySQL database format using the Python scripting language. The resulting Marion Island Recorded Experimental Dataset (MIRED) database contains all wind measurements gathered over the span of two years. It was decided to validate only three of the 16 simulated wind directions against the measured wind data: north-westerly, westerly and south-westerly winds.
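The clean-and-load step described above can be sketched as follows. The record format, station name, and validity thresholds are hypothetical, and stdlib sqlite3 stands in for the MySQL database actually used by MIRED, so the example is self-contained.

```python
import sqlite3

# Illustrative sketch (hypothetical record format; sqlite3 stands in
# for the MySQL database used by MIRED). Raw logger lines:
# station,timestamp,speed [m/s],direction [deg]. Malformed or
# out-of-range rows are dropped during cleaning.
raw_lines = [
    "ST04,2019-06-01T12:00:00,14.2,271",
    "ST04,2019-06-01T12:10:00,-999,271",   # logger error code
    "ST04,2019-06-01T12:20:00,16.8,405",   # impossible direction
    "garbled line",
    "ST04,2019-06-01T12:30:00,15.1,265",
]

def clean(line):
    """Return a validated (station, ts, speed, direction) tuple or None."""
    parts = line.split(",")
    if len(parts) != 4:
        return None
    station, ts, speed, direction = parts
    try:
        speed, direction = float(speed), float(direction)
    except ValueError:
        return None
    if not (0 <= speed < 100 and 0 <= direction < 360):
        return None
    return (station, ts, speed, direction)

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE wind (station TEXT, ts TEXT, speed REAL, direction REAL)"
)
rows = [r for r in map(clean, raw_lines) if r is not None]
conn.executemany("INSERT INTO wind VALUES (?, ?, ?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM wind").fetchone()[0]
print(count)  # 2 valid records survive cleaning
```

The same pattern — parse, validate against physical bounds, bulk-insert — scales to two years of logger output once pointed at the real files and database.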

An initial interrogation of the simulation results showed that island-to-island wake interactions could not be ignored, as the turbulent stream from MI could be intercepted by its neighbour under the right conditions, and vice versa. An underestimation of the true strength of the Coriolis effect led to larger wind deflection in the simulations than originally expected, resulting in the wind flow at surface level having an entirely different heading to what was intended. The westerly and south-westerly validation cases were not severely affected by this shortcoming, but the north-westerly case suffered a marked loss of accuracy.

Significant effort was put into quantifying the error present in the simulations. After a full validation exercise, it was resolved to apply a conservative uncertainty factor of 35% when using these simulations to predict actual wind speed conditions. Similarly, the predicted wind direction can only be trusted within the bounds of a 35° prediction uncertainty. Under these circumstances, the baseline CFD model was successfully validated against the measured wind data and can thus be used in further research. In terms of post-processing, all the wind direction simulations have been combined into a single wind velocity map, generated by weighting each simulation by the frequency of wind prevalence measured in the corresponding wind sector. A second combined map, for turbulence intensity, has been produced using the same technique. These maps, as well as the individual wind maps for each compass direction, are expected to be helpful to many future biological studies on MI as well as any possible forays into wind energy generation on the island.
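The frequency-weighted combination of the per-direction simulations can be sketched compactly. All numbers here are synthetic placeholders (random fields and frequencies, not the actual CFD results or measured sector frequencies):

```python
import numpy as np

# Illustrative sketch (synthetic data): combine per-direction CFD
# wind-speed maps into one map by weighting each simulation with the
# measured frequency of wind from that sector.
rng = np.random.default_rng(1)
n_dirs, ny, nx = 16, 4, 5

# One simulated wind-speed field per compass direction (m/s).
speed_maps = rng.uniform(5.0, 25.0, size=(n_dirs, ny, nx))

# Measured sector frequencies, normalised so they sum to one.
freq = rng.uniform(size=n_dirs)
freq /= freq.sum()

# Frequency-weighted mean wind-speed map over all 16 directions.
combined = np.tensordot(freq, speed_maps, axes=1)
print(combined.shape)
```

Replacing the wind-speed fields with turbulence-intensity fields and reusing the same weights yields the second combined map mentioned above.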

Despite the encountered deficiencies, this project offers significant value by providing a reliable method of predicting fine-scale wind patterns in a location previously devoid of accurate data. Furthermore, it has highlighted where future CFD attempts can be improved in order to produce a more compelling approximation of the atmospheric phenomena occurring in the Marion Island territory. While error cannot be avoided when modelling such complex systems, it has been well quantified and discussed here so that further research can make informed judgements.

- Master's degrees completed 2004
- Master's degrees completed 2005
- Master's degrees completed 2006
- Master's degrees completed 2007
- Master's degrees completed 2008
- Master's degrees completed 2009
- Master's degrees completed 2010
- Master's degrees completed 2011
- Master's degrees completed 2012
- Master's degrees completed 2013
- Master's degrees completed 2014
- Master's degrees completed 2015
- Master's degrees completed 2016

Copyright © University of Pretoria 2024. All rights reserved.
