These are some of the projects offered to prospective PhD students at the Department of Physics. If you are interested in any of them, please contact Dr Ekkehard Ullner. You can also contact one of the supervisors of the project you are interested in. Their contact details are at the top of each project. Our typical start dates for this programme are February or October.
FindAPhD also lists some of these PhD projects.
Note: Unless stated in the description, there is currently no funding available to support PhD students on these projects. However, we will do our best to help interested students apply for fellowships and other funding opportunities. Please see information about fees and living costs so you can get a realistic idea about what is required financially to be a self-funded PhD student.
Astrophysics and Planetary Science
- Loop quantum gravity with conformal and scale invariance
Main supervisor: Dr Charles Wang
A new theoretical framework of loop quantum gravity that incorporates conformal and scale invariance has recently been constructed. It supports a large class of theories of gravity and gravity-matter systems, including general relativity and scale-invariant scalar-tensor and dilaton theories. Its consistent matter coupling must also be conformal or scale invariant, which importantly includes standard-model-type systems. The aim of this PhD project is to develop the new theory further in two significant directions, as follows.
- Mathematical foundation. The new loop quantization is based on an extended conformal or scaling symmetry carried by scale fields. In this project, new polymer structures will be developed to provide a rigorous mathematical foundation for the new theory.
- Loop quantum cosmology. In standard loop quantum cosmology, the Big Bang scenario is replaced by the so-called Big Bounce. The new theory, however, is expected to lead to a radically new quantum cosmological model. Mathematical modelling with computer simulations of the new model will be performed, including the very early Universe stage, where different scenarios in connection with the Big Bang or Big Bounce are expected.
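As a point of reference for the modelling work, the sketch below integrates the effective Friedmann equation of standard loop quantum cosmology, whose ρ(1 − ρ/ρc) correction replaces the Big Bang singularity with a bounce at the critical density ρc. All units and parameter values are illustrative, and the conformal/scale-invariant theory developed in this project is expected to modify this picture.

```python
# Minimal sketch: post-bounce expansion in effective loop quantum cosmology.
# H^2 = (8 pi G / 3) rho (1 - rho/rho_c); illustrative units, not physical.
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0        # gravitational constant (illustrative units)
rho_c = 1.0    # critical density at which the bounce occurs

def rhs(t, y):
    a, rho = y
    H2 = (8 * np.pi * G / 3) * rho * max(1.0 - rho / rho_c, 0.0)
    H = np.sqrt(H2)                  # expanding (post-bounce) branch
    # massless scalar field: rho ~ a^-6, hence drho/dt = -6 H rho
    return [H * a, -6.0 * H * rho]

sol = solve_ivp(rhs, [0.0, 10.0], [1.0, 0.999], rtol=1e-8)
print(f"scale factor grew from {sol.y[0, 0]:.3f} to {sol.y[0, -1]:.3f}")
```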
- Exploring close binary stars: Using nonlinear time series analysis and machine learning for analysing stellar light curves
Main supervisor: Dr Sandip George
Full details: FindAPhD
Close binary stars are systems of two stars whose components are close enough to exchange matter with each other. This mass transfer can result in irregularities in the light variation of the stars (the time series of which is called a light curve) and a loss of total angular momentum in the system. The latter can result in the eventual collapse of the system, leading to a red nova merger. Studying the nonlinear dynamics of such irregular light curves forms the basis of this PhD proposal.
The light curves of close binary stars have been found to exhibit chaotic properties. Moreover, nonlinear measures estimated from their light curves, such as fractal dimensions and recurrence properties, are closely related to the astrophysical properties of these star systems. Despite some promising early results, nonlinear analysis of close binary stars remains largely underexplored.
The availability of big data consisting of long, high-quality, evenly sampled light curves from space telescopes such as Kepler and TESS, together with the recent boom in computational power, makes it possible to explore these questions with greater ease than ever before. To handle such large volumes of data, machine learning methods, which can classify data into subclasses and predict astrophysical parameters such as those that determine how mass is exchanged, become essential.
Through the course of this PhD, we broadly aim to study the nonlinear dynamics of different classes of close binary stars, using data from the Kepler and TESS telescopes. We will then combine these with methods from machine learning and statistics which, apart from the classification and prediction problems mentioned above, can also help us gain insights into the astrophysics of the variability in these star systems. Hence, it is envisaged that by the end of the PhD we will have developed methods for the analysis of close binary light curves, used these methods for the analysis of big data from space telescopes, and interpreted the results to gain deeper insights into the nonlinear dynamics of close binary stars.
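To give a flavour of the nonlinear measures involved, the sketch below estimates a correlation dimension from a scalar time series via time-delay embedding and the Grassberger-Procaccia correlation sum. A chaotic logistic-map series stands in for a real Kepler/TESS light curve; embedding parameters and sample sizes are illustrative.

```python
# Minimal sketch: correlation dimension of a time series via delay embedding.
import numpy as np

# surrogate "light curve": logistic map in its chaotic regime
x = np.empty(2000)
x[0] = 0.4
for i in range(1999):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

def embed(series, dim, tau):
    """Time-delay embedding of a scalar series into dim dimensions."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])

def correlation_sum(pts, r):
    """Fraction of point pairs closer than r (Grassberger-Procaccia)."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    iu = np.triu_indices(len(pts), k=1)
    return np.mean(d[iu] < r)

pts = embed(x, dim=3, tau=1)[:500]            # subsample to keep it cheap
radii = np.logspace(-2, -0.5, 8)
C = np.array([correlation_sum(pts, r) for r in radii])
# the slope of log C(r) against log r estimates the correlation dimension
slope = np.polyfit(np.log(radii), np.log(C), 1)[0]
print(f"estimated correlation dimension ~ {slope:.2f}")
```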
- Giant planet atmospheric modelling
Main supervisor: Dr Roland Young
Full details: FindAPhD
Fifty years on from the first spacecraft fly-by of a giant planet in 1973, many aspects of giant planet atmospheres remain poorly understood. Recent discoveries by NASA's Juno and Cassini spacecraft dominate our current understanding of Jupiter and Saturn. Missions such as ESA's JUICE, recently launched to Jupiter, and potential ice giant missions, which have recently been given high priority by NASA’s Decadal Survey, mean the coming years will be crucial for our understanding of giant planet atmospheres. These planets are archetypes for gaseous planets around other stars but can be studied in much more detail than any extrasolar planet and are some of the best natural examples of turbulence on a rotating sphere.
Numerical simulations are key to our understanding of giant planets. They help test ideas in the absence of complete observations, identify the smallest set of processes that explain specific phenomena, and allow the effects of individual physical processes to be isolated. In recent decades significant progress has been made simulating giant planet atmospheres using numerical simulations of varying complexity. The Jason General Circulation Model (GCM), based on the widely used MITgcm ocean model, simulates Jupiter's upper troposphere and lower stratosphere. It qualitatively reproduces several of the major features of Jupiter's weather layer, such as the banded jet structure, eastward equatorial jet, typical zonal jet speeds, and a variety of turbulent vortices.
This PhD project will study the dynamics of giant planet atmospheres using the Jason GCM. The student will have some flexibility to choose which areas to explore, depending on their personal interests and experience. Some possibilities include:
- Modelling the strange distribution of ammonia revealed by the Juno spacecraft. Most of the atmosphere near the cloud level is depleted, apart from a thin column near the equator where there is enhanced ammonia.
- Studying the stratospheric general circulation on Jupiter and Saturn and the links between their stratospheres and tropospheres. This will require extending Jason into the stratosphere and adding gravity wave drag and stratospheric hazes.
- Extending the model to the ice giants Uranus and Neptune, in view of a potential future mission to visit one of those planets.
- Modelling Jupiter’s polar regions, which Juno has revealed to contain regularly spaced and long-lasting cyclonic vortex structures. These should be unstable, yet they have persisted throughout the mission. The GCM can be used to try to understand the physical processes underlying these phenomena.
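As a back-of-envelope illustration of the banded jet dynamics the GCM reproduces, the sketch below evaluates the Rhines scale, which sets the characteristic width of zonal jets on a rotating planet. The Jupiter values used are approximate.

```python
# Rough estimate of the Rhines scale for Jupiter (approximate values).
import numpy as np

U = 50.0                # characteristic jet speed (m/s), approximate
Omega = 1.76e-4         # Jupiter's rotation rate (rad/s)
R = 7.0e7               # Jupiter's radius (m), approximate
lat = np.deg2rad(30.0)  # reference latitude

beta = 2 * Omega * np.cos(lat) / R       # meridional gradient of the Coriolis parameter
L_rhines = np.pi * np.sqrt(2 * U / beta)
print(f"Rhines scale ~ {L_rhines / 1e3:.0f} km")   # ~10^4 km, comparable to jet spacing
```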
This project will suit a keen computationally minded and mathematically strong student with an interest in planetary science or astrophysics, who is excited about using state-of-the-art numerical simulations and recent spacecraft observations of the giant planets. The work will make use of High Performance Computing facilities where appropriate, either within the University of Aberdeen or at a national centre. Prior programming experience is essential, and Linux experience will be helpful. There may be opportunities for collaboration with colleagues in the UK, France, and the USA.
- Measuring wind on Mars using orbital imaging data
Main supervisor: Dr Roland Young
Full details: FindAPhD
The motion of air in Mars’ atmosphere is characterized by storms, fronts, and rising and falling airmasses, like on Earth. These are all part of Mars’ complicated general circulation, based on the atmosphere’s interaction with the Sun, surface, and planetary rotation.
To understand the general circulation fully, we need to be able to measure wind. However, this is very difficult on other planets, as methods for doing so on Earth (surface anemometers, radiosondes, aircraft, etc.) generally do not exist. On Mars, there are some in-situ lander measurements, but these only sample at one location, most have had technical difficulties, and surface measurements are not representative of the bulk atmosphere. Missions proposed for the next decade may carry instruments to measure winds directly from orbit, but even on Earth this is a challenge. The nature of Mars’ atmospheric wind structure is an open and pressing question in planetary science.
At present, the best method for measuring winds on Mars is to track water ice clouds as they move between images taken of the same location some time apart. This project will use existing datasets to measure wind on Mars and study its atmospheric circulation. Some spacecraft currently take observations suitable for this purpose, including:
- Emirates Mars Mission - Emirates eXploration Imager ultraviolet and visible imaging. This spacecraft is unique among Mars orbiters as it takes regular observation sequences designed for tracking clouds. Its orbit allows it to see the full disk of Mars at a time.
- Mars Reconnaissance Orbiter - Mars Color Imager ultraviolet imaging. There are nearly 20 years of data from this instrument, which is suitable for measuring winds near the poles.
- Mars Express - Visual Monitoring Camera. This was originally an engineering camera but has been recently repurposed for atmospheric monitoring. It takes nearly full-disk images showing clouds and dust.
The student will start with a well-established cloud-tracking method based on Correlation Imaging Velocimetry (CIV), originally designed for laboratory experiments but with heritage in planetary data analysis. However, they will also explore the applicability of such techniques to planetary images in a more general sense. These methods require significant care to distinguish between real atmospheric motions and other sources of changes between images. There will be scope for analysing whether tracking dust or clouds is better for measuring wind, developing more tailor-made methods, experimenting with other existing methods such as optical flow, or even applying machine learning techniques.
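The core of such cloud tracking is a cross-correlation between image patches taken some time apart: the location of the correlation peak gives the displacement, hence the wind. The sketch below illustrates this on synthetic periodic data; real images need windowing, quality control, and sub-pixel peak fitting, which is where methods such as CIV add their value.

```python
# Minimal sketch of correlation-based cloud tracking on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
cloud = rng.random((64, 64))                 # synthetic "cloud field"
shift = (3, -5)                              # true displacement (rows, cols)
later = np.roll(cloud, shift, axis=(0, 1))   # second image, some time later
# np.roll wraps around, so the FFT correlation below is exact here

# cross-correlate via FFT; the correlation peak sits at the displacement
xc = np.fft.ifft2(np.fft.fft2(cloud).conj() * np.fft.fft2(later)).real
peak = np.unravel_index(np.argmax(xc), xc.shape)
# indices above N/2 correspond to negative displacements
est = [p - n if p > n // 2 else p for p, n in zip(peak, xc.shape)]
print("estimated displacement:", est)        # expect [3, -5]
```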
This project will suit a keen computationally minded and mathematically strong student with an interest in planetary science or astrophysics, who is excited about using recent spacecraft observations of Mars. The work will make use of High Performance Computing facilities where appropriate, either within the University of Aberdeen or externally. Prior programming experience is essential, and Linux experience will be helpful. There may be opportunities for collaboration with colleagues in the UK, USA, and the UAE.
- Exploring Venus’ atmosphere using data assimilation
Main supervisor: Dr Roland Young
Full details: FindAPhD
The space agencies NASA and ESA both plan to send spacecraft to explore Venus in the early 2030s. One of the goals of these missions is to understand Venus’ dynamic atmosphere, and they will carry instruments to make measurements of atmospheric properties. A powerful way to combine our best theoretical understanding of a planetary atmosphere with observations made remotely or in situ is to use data assimilation. This is a rigorous statistical technique for combining a forecast model with observations in a way that takes into account the uncertainties in both, producing a result that is closer to the true (unknowable) atmospheric state than either the forecast or the observations by themselves. Data assimilation is a crucial part of weather forecasting on Earth and has also been applied to other planets, particularly Mars, where it has been in use since the 1990s. Some work has been done assimilating observations from Venus, and this is a growing field.
We already have a data assimilation system set up for Mars’ atmosphere, using the Mars Planetary Climate Model (Mars PCM) as the forecast model and the Local Ensemble Transform Kalman Filter (LETKF) for assimilation. We have significant expertise in both, and collaborate directly with the developers of the PCM. The student will start with this system and adapt it for Venus using the Venus version of the PCM, with the initial goal of assimilating observations from Venus into the Venus PCM. Venus’ atmosphere has unique properties and behaviour that will provide challenges quite different from those found at Mars. Its atmosphere is in a fundamentally different flow regime from Mars’ atmosphere, and different dynamical processes dominate its climate.
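The statistical heart of such a scheme is the ensemble Kalman update, sketched below in a stochastic (perturbed-observation) variant; the LETKF used in our Mars system adds localisation and a deterministic ensemble transform on top of the same idea. The toy three-variable state and all numbers are synthetic.

```python
# Minimal sketch of an ensemble Kalman filter analysis update.
import numpy as np

rng = np.random.default_rng(1)
n, m, N = 3, 2, 20                    # state dim, obs dim, ensemble size
H = np.array([[1., 0., 0.],           # observation operator: we observe
              [0., 1., 0.]])          # the first two state variables
R = 0.1 * np.eye(m)                   # observation error covariance
y = np.array([1.0, -0.5])             # the observations

X = rng.normal(0.0, 1.0, (n, N))      # forecast ensemble (columns = members)
A = X - X.mean(axis=1, keepdims=True) # ensemble anomalies
Pf = A @ A.T / (N - 1)                # forecast error covariance

K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)    # Kalman gain
# perturbed-observation update of each ensemble member
Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, N).T
Xa = X + K @ (Y - H @ X)
print("analysis mean:", Xa.mean(axis=1))
```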
Once the scheme is up and running, the student will have some freedom in the direction of the research. Possibilities include reanalysis of existing observations, such as from Venus Express or Akatsuki, or using synthetic observations in an Observing System Simulation Experiment to evaluate the impact of future observations on our ability to constrain Venus’ atmospheric state. It may also be possible to investigate alternative methods for assimilation, depending on how they might be applicable to the atmosphere of Venus, or apply machine learning techniques, where relevant.
This is a great opportunity for a computationally minded and mathematically strong student with an interest in planetary science or astrophysics to become involved in an area of planetary science that will develop a high profile over the coming decade. The work will make use of High Performance Computing facilities where appropriate, either within the University of Aberdeen or at a national centre.
Dynamical Systems and Chaos
- Chaos-Driven Optomechanical Systems for High-Speed, Secure Optical Communication
Main supervisor: Dr Kapil Debnath
Full details: FindAPhD
Optical communication technologies have transformed global data transmission, underpinning the infrastructure of modern telecommunication networks. With the exponential increase in data volume, securing these communications has become a top priority. Traditionally, data encryption occurs at the higher software layers of a communication system, relying on complex software algorithms to ensure the confidentiality and integrity of transmitted data. However, such methods introduce significant computational overhead, increase energy consumption, and add latency, which can limit their effectiveness in high-speed networks.
A novel approach to overcoming these challenges is physical layer encryption, which secures communication by exploiting the physical properties of the communication channel itself. This method is inherently resistant to a wide range of attacks, including emerging quantum-based threats, without the computational complexity associated with traditional encryption techniques. Physical layer encryption offers several advantages, such as lower energy requirements, reduced latency, and seamless integration with existing network infrastructure, making it an attractive solution for next-generation communication systems.
One of the most promising techniques within physical layer encryption is chaos-based communication. Chaotic systems are characterized by their extreme sensitivity to initial conditions and unpredictable behaviour, which makes them ideal for securing optical data transmissions. By encoding data into chaotic signals, chaos-based communication ensures that only trusted synchronised receivers can decode the information, making it exceedingly difficult for eavesdroppers to intercept without being noticed or decipher the transmitted data within acceptable time limits. Additionally, chaos-based communication supports high-speed data transmission over long distances, providing both security and performance.
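The classic illustration of this principle is chaotic masking with synchronised chaotic oscillators: the receiver, driven by the transmitted signal, locks onto the transmitter's chaos, and subtracting its own reconstruction reveals the hidden message. The sketch below uses Lorenz systems in the Cuomo-Oppenheim style; this only illustrates the synchronisation idea, whereas the project itself targets optomechanical implementations.

```python
# Minimal sketch of chaotic masking with synchronised Lorenz systems.
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0   # classic Lorenz parameters
dt, steps = 1e-3, 60000

tx = np.array([1.0, 1.0, 1.0])    # transmitter state
rx = np.array([5.0, 5.0, 5.0])    # receiver state (different initial condition)
recovered = []
for i in range(steps):
    msg = 0.05 * np.sin(2 * np.pi * 2.0 * i * dt)   # small message signal
    s = tx[0] + msg                                 # transmitted: chaos + message
    # transmitter: standard Lorenz system (Euler step for brevity)
    dtx = np.array([sigma * (tx[1] - tx[0]),
                    tx[0] * (rho - tx[2]) - tx[1],
                    tx[0] * tx[1] - beta * tx[2]])
    # receiver: same equations, but driven by the received signal s
    drx = np.array([sigma * (rx[1] - rx[0]),
                    s * (rho - rx[2]) - rx[1],
                    s * rx[1] - beta * rx[2]])
    tx += dt * dtx
    rx += dt * drx
    recovered.append(s - rx[0])   # after synchronisation, rx[0] ~ tx[0]

print("recovered message amplitude ~", float(np.std(recovered[-10000:])))
```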
Despite its potential, current chaos-based systems typically rely on optoelectronic devices, such as chaotic semiconductor lasers and optoelectronic oscillators (OEOs), which introduce complexity due to the need for optical-electronic conversions and external perturbations. These requirements limit the scalability, stability, and integration of such systems into compact, efficient architectures.
This PhD project will aim to overcome these limitations by exploring cavity optomechanical systems (OMCs) for secure optical communication. OMCs leverage the intrinsic nonlinear dynamics of light interacting with mechanical elements within an optical cavity to generate and synchronise chaotic signals. Unlike conventional optoelectronic systems, OMCs are purely optical, eliminating the need for complex feedback loops and external perturbations. This enables a more stable, efficient, and simplified system architecture for chaos-based communication. Moreover, OMCs are highly suitable for chip-scale integration, offering extreme miniaturisation, compatibility with CMOS processes, and potential for monolithic integration. These characteristics make OMCs a viable option for large-scale production and deployment in modern communication networks. By enhancing both the stability and scalability of chaos-based communication, this project seeks to provide a robust and efficient alternative for securing high-speed optical networks. This PhD will not only advance the field of secure communication but also contribute to the development of next-generation optical technologies with the potential for real-world applications in sectors like telecommunications, data centres, and quantum networks.
- Chaos and fractals in fluid motion
Main supervisor: Dr Alessandro Moura
The advection of particles and fields by fluid flows is a problem of great interest for both fundamental physics and engineering applications. This area of research encompasses phenomena such as the dispersion of pollutants in the atmosphere and oceans, the mixing of chemicals in the chemical and pharmaceutical industries, and many others. The dynamics of these flows is characterised by chaotic advection, which means that particles carried by the flow have complex and unpredictable trajectories; this is an example of the phenomenon of chaos. One consequence of chaotic advection is that any given portion of the fluid is deformed by the flow into a complicated scale-invariant shape with fractal geometry. The exotic geometric properties of this fractal set lead to anomalous behaviour in important dynamical properties of the flow, such as its mixing rates and the rates of chemical reactions and other processes taking place in the flow.
The goal of this project is to investigate the mixing and transport properties of open chaotic flows and develop a general theory capable of predicting and explaining the transport properties of these systems. The theory will be based on the advection-diffusion partial differential equation. The main idea is that the main eigenvalues and eigenmodes of the advection-diffusion operator describe the long-time transport properties of the system. The scaling and behaviour of the eigenmodes will be estimated by developing approximations based on the fractal geometrical properties of the chaotic advection, and will also be calculated numerically for some simple flows. Mixing and reaction dynamics will then be expressed in terms of the eigenmodes and eigenvalues. To test the theory, we will apply it to the flow configuration describing an experiment performed to study geophysical transport mechanisms, and we will compare the theoretical predictions to the experimental findings.
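The stretching and folding at the heart of chaotic advection is easy to demonstrate numerically. The sketch below advects a blob of tracer particles in the time-periodic double gyre, a standard kinematic test flow (not one of the project's target flows); the blob is rapidly drawn into the filamentary, fractal-like patterns described above.

```python
# Minimal sketch: chaotic advection of tracers in the time-periodic double gyre.
import numpy as np

A, eps, om = 0.1, 0.25, 2 * np.pi / 10   # amplitude, perturbation, frequency

def velocity(x, y, t):
    st = np.sin(om * t)
    f = eps * st * x**2 + (1 - 2 * eps * st) * x
    dfdx = 2 * eps * st * x + (1 - 2 * eps * st)
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

# advect a small blob of tracer particles (domain [0,2] x [0,1])
rng = np.random.default_rng(2)
x = 0.3 + 0.02 * rng.standard_normal(5000)
y = 0.5 + 0.02 * rng.standard_normal(5000)
dt = 0.01
for n in range(4000):                    # simple Euler time stepping
    u, v = velocity(x, y, n * dt)
    x, y = x + dt * u, y + dt * v

# stretching of the initially tiny blob is the signature of chaotic advection
print(f"blob extent grew to {x.max() - x.min():.2f} x {y.max() - y.min():.2f}")
```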
- Fast, reliable and secure wireless IoT chaos-based communication
Main supervisor: Dr Murilo Baptista
This PhD project tackles scientific issues aimed at creating, and laying the mathematical foundations of, an innovative wireless IoT communication system that is reliable (information arrives), fast and light (less power, less hardware, less computation, higher bit rates), universal (functioning mainly underwater, but also appropriate to other wireless media), and secure (not disclosing information to untrusted agents). The project is separated into two main scientific challenges. The first is to understand under which conditions and configurations chaotic signals propagating in non-ideal channels can naturally support networked underwater communication systems involving several agents. The innovation here is to extend previous work, which showed the use of chaos for communication between two agents in ideal channels, to networked communication in non-ideal channels.

The second challenge focuses on network communication in non-ideal channels with trusted agents, knowledge required for the creation of the proposed chaos-based IoT communication system. In a non-ideal channel, the received signal is a composition, formed by interference, of strongly distorted signals coming from several transmitters and propagating over several paths. The goal is to show that fast and light neural networks can be trained to recover information from a unique trusted transmitter, potentially enabling “smart” data analytics about the information received. Communication can be carried out only by the trusted agents who know the specifics of the training, knowledge that will provide support for the creation of a secure IoT communication system.
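As a toy version of the second challenge, the sketch below encodes bits on a chaotic carrier, distorts the signal with a multipath channel and noise, and trains a lightweight linear (ridge-regression) readout to recover the bits. The readout stands in for the "fast and light neural networks" of the project; the encoding, channel, and all parameters are invented for illustration.

```python
# Minimal sketch: trained readout recovering bits from a distorted chaotic carrier.
import numpy as np

rng = np.random.default_rng(3)
bits = rng.integers(0, 2, 400)

# chaotic template: one burst of the logistic map, reused for every symbol
x, template = 0.37, []
for _ in range(20):
    x = 4 * x * (1 - x)
    template.append(x - 0.5)
template = np.array(template)

# antipodal encoding: each bit flips the sign of the chaotic burst
carrier = np.concatenate([(2 * int(b) - 1) * template for b in bits])

# non-ideal channel: multipath echoes plus additive noise
h = np.array([1.0, 0.5, 0.25])                       # channel impulse response
received = np.convolve(carrier, h)[:carrier.size]
received += 0.1 * rng.standard_normal(carrier.size)

# lightweight trained readout: ridge regression on received symbol windows
W = received.reshape(len(bits), 20)
targets = 2.0 * bits - 1.0
G, yt = W[:300], targets[:300]                       # training symbols
w = np.linalg.solve(G.T @ G + 1e-2 * np.eye(20), G.T @ yt)
pred = np.sign(W[300:] @ w)
print("bit error rate:", np.mean(pred != targets[300:]))
```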
- Chaos-based cryptography
Main supervisor: Dr Murilo Baptista
Full details: FindAPhD
We all live today in a cyber world. Much of the world's data traffic is encrypted because of security threats occurring among different societies and within several societal levels. The threat is real, and it is escalating. The ability to connect, communicate with, and remotely manage an incalculable number of networked, automated devices via the Internet is becoming pervasive, from the factory floor to the hospital operating room to the residential basement. As we become increasingly reliant on intelligent, interconnected devices in every aspect of our lives, we require a new type of cryptographic method to protect our sensitive personal and institutional data: one that is sufficiently efficient in terms of security (passing standard statistical battery tests as well as causal tests, and with a large key space), but that is also light (low computational cost), fast (suitable for real-time applications), and of low algorithmic complexity. Recent works have demonstrated that chaotic systems are key to creating a cyber-secure world for our present and future needs.
The application of chaos to cryptography has entered a new era, with applications spanning the protection of deep layers of industrial networks, tag generation for physical-layer authentication, encryption of 3D image objects, and pseudo-random number generation. Yet there is still much scope to better understand the limits of chaos-based cryptography. This is the main goal of this project.
To that end, we aim to break this complex problem into a set of smaller problems. First, our goal will be to understand the crucial dynamical requirements for a chaotic system to support cryptographic systems that are the lightest, then the fastest, and then the least complex. We hope this knowledge can provide clues for the discovery of a class of chaotic systems that provide nearly perfectly secure chaos-based cryptosystems while operating under a chosen set of constraints: for example, being light or fast but not complex, being light only, or being simultaneously light, fast, and of low complexity.
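The basic ingredient of many such schemes is a chaotic pseudo-random bit generator, sketched below with the logistic map. This only illustrates the idea: it is emphatically not a secure design, and the project's questions about key space, finite-precision effects, and statistical and causal testing are exactly what separates such toys from a usable cryptosystem.

```python
# Minimal sketch: pseudo-random bits from a chaotic map (NOT secure as-is).
import numpy as np

def chaotic_bits(key, nbits, burn=1000):
    """key in (0,1) seeds the logistic map; bits come from thresholding."""
    x = key
    for _ in range(burn):             # discard the transient so the output
        x = 4.0 * x * (1.0 - x)       # does not trivially leak the key
    out = np.empty(nbits, dtype=int)
    for i in range(nbits):
        x = 4.0 * x * (1.0 - x)
        out[i] = 1 if x > 0.5 else 0
    return out

bits = chaotic_bits(key=0.123456789, nbits=10000)
print("bit bias:", bits.mean())       # should be close to 0.5
```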
- Heat conduction in classical one-dimensional systems
Main supervisor: Prof. Antonio Politi
Heat transport in classical one-dimensional systems is a long-standing problem, which goes back to the beginning of the 19th century, when Fourier formulated the famous heat diffusion law. In the last two decades, much progress has been made thanks to numerical simulations and, more recently, to the application of fluctuating-thermodynamics arguments [1,2]. The resulting message is that whenever a one-dimensional system has only internal forces (i.e. momentum is conserved), heat conductivity diverges in the thermodynamic limit (the limit of infinitely long chains). On the other hand, numerical simulations suggest that the above scenario does not arise in some setups. Is this evidence of strong finite-size corrections, or even of the need to revisit the theory? The starting point of the project is a simple model, where hard-core collisions combine with harmonic interactions. As a result, it is found that heat conductivity remains, unexpectedly, finite in the thermodynamic limit. The plan consists of generalizing the model to test the robustness of the observed anomaly and eventually to understand its origin. The model is simple enough to allow for extensive simulations. Furthermore, I have some ideas for a novel computational approach, which would further help in performing simulations. Finally, more realistic systems will be explored in a second stage, with the goal of testing the degree of universality of the “anomaly”.
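The standard numerical setup for such questions is a chain coupled to two heat baths at different temperatures, with the stationary heat flux measured as a function of chain length. The sketch below implements the purely harmonic version with Langevin baths; probing the anomalous conductivity at the centre of this project requires adding the hard-core collisions (or other anharmonicity) and scanning the system size, which this toy omits.

```python
# Minimal sketch: harmonic chain between two Langevin heat baths.
import numpy as np

rng = np.random.default_rng(4)
N, k, dt = 64, 1.0, 0.01
T_left, T_right, gamma = 1.5, 0.5, 1.0   # bath temperatures and coupling
steps = 200000

q = np.zeros(N)
p = np.zeros(N)
flux, mid = 0.0, N // 2
for _ in range(steps):
    # harmonic nearest-neighbour forces with fixed walls at both ends
    qpad = np.concatenate(([0.0], q, [0.0]))
    F = k * (qpad[2:] - 2.0 * q + qpad[:-2])
    # Langevin heat baths act on the first and last particle
    F[0] += -gamma * p[0] + np.sqrt(2 * gamma * T_left / dt) * rng.standard_normal()
    F[-1] += -gamma * p[-1] + np.sqrt(2 * gamma * T_right / dt) * rng.standard_normal()
    p += dt * F                  # symplectic Euler step
    q += dt * p
    # local heat current: rate of work done by particle mid on particle mid+1
    flux += k * (q[mid] - q[mid + 1]) * p[mid + 1]

print("mean heat flux:", flux / steps)
```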
Altogether, the research project will allow the student to become familiar with different numerical methods that can be used outside the specific field addressed by the Thesis work. Additionally, the supervisor is connected with many scientists all over the world, thus offering the chance to get in contact with other approaches and research environments. Last but not least, the selected topic is very challenging: meaningful progress would be very welcome in the community.
Machine Learning and AI
- Omics, statistics and machine learning to understand human ageing
Main supervisor: Dr Francisco Perez-Reche
Full details: FindAPhD
Background: Global ageing is accelerating, with the number of individuals over 60 expected to double and those over 80 to triple by 2050 [WHO (2021)]. Despite increased life expectancy, years gained are often spent in poor health, posing economic and healthcare challenges. Therefore, a central goal in ageing research is to find ways to extend healthy lifespan and reduce morbidity.
Ageing is a complex process marked by accumulating cellular damage, which diminishes physical and mental capabilities and increases susceptibility to disease. Currently, there is no single diagnostic tool to assess frailty. Instead, phenotypic measures like the frailty index (FI) and frailty phenotype (FP) are used, derived from health deficits, physical characteristics, and clinical tests [Livshits et al. (2018)]. Although these measures indicate frailty, their biological underpinnings remain poorly understood, limiting early identification and intervention.
Research Aims: This project seeks to identify biological pathways associated with ageing and frailty by leveraging high-throughput “omics” data. By enhancing machine learning methods previously developed in related research [Perez-Reche et al. (2020), Perez-Reche et al. (2024)], the project aims to:
- Identify biomarkers (e.g., metabolites, proteins) associated with ageing and frailty indices.
- Examine biomarker trajectories to better understand the development of frailty.
- Develop predictive models for frailty assessment and early detection.
Data and Methodology: The study will use comprehensive datasets, including health and multi-tissue omics data, from the Department of Twin Research and Genetic Epidemiology at King’s College London and the UK Longitudinal Linkage Collaboration (UK-LLC). The initial focus will be on metabolomics and proteomics, given their direct links to phenotypes and frailty indicators [Zierer et al. (2015)]. Other omics data may also be integrated as time permits. To achieve the objectives, several statistical and computational approaches will be applied. Associations between biomarkers and frailty will be assessed using univariate statistical analyses and multiple logistic regression (Objective 1). Biomarker correlations will be used to construct networks reflecting age-related changes and to simplify models by removing redundant information (Objective 3). Longitudinal metabolomic data will be analysed using linear mixed-effect models and functional data analysis to capture temporal trends (Objective 2).
A combination of supervised and unsupervised machine learning techniques will be used to classify individuals by frailty level and predict frailty scores (Objective 3). Feature reduction methods and techniques such as elastic net and neural networks will be explored to handle the high dimensionality of omics data.
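The sketch below illustrates one such step: an elastic-net regression selecting a sparse set of biomarkers predictive of a continuous frailty score. Synthetic data stand in for the real omics, and scikit-learn is assumed to be available; the point is only to illustrate the high-dimensional, features-exceed-samples setting typical of omics data.

```python
# Minimal sketch: elastic-net biomarker selection on synthetic omics data.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n_samples, n_features = 200, 500        # typical omics: features >> samples
X = rng.standard_normal((n_samples, n_features))
true_coef = np.zeros(n_features)
true_coef[:10] = rng.uniform(0.5, 1.5, 10)    # only 10 "real" biomarkers
frailty = X @ true_coef + 0.5 * rng.standard_normal(n_samples)

X = StandardScaler().fit_transform(X)
model = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.9], cv=5).fit(X, frailty)
selected = np.flatnonzero(model.coef_)
print(f"selected {selected.size} biomarkers;",
      f"true biomarkers recovered: {np.isin(np.arange(10), selected).sum()}/10")
```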
Impact: This project addresses fundamental questions in healthy ageing, with potential benefits across academia, healthcare, and public policy. Insights will be shared with clinicians, policymakers, and the public, at events such as Techfest or the May Festival and in publications aimed at general audiences. The project’s outcomes may influence policies aimed at reducing the healthcare burden associated with ageing in the UK.
- The mathematics behind the smartness of neural networks
Main supervisor: Dr Murilo Baptista
Intelligence is one of the pillars that allow animals to master their environment. Recently proposed scientific approaches have been capable of simulating networked systems that reproduce emergent manifestations of behaviour similar to those observed in the brain. This broad scientific area came to be known as Artificial Intelligence (AI). Our society is today intrinsically connected to it. Why and how a neural network can be trained to process information and produce a logically intelligent output is still a big mystery, despite the explosive growth in this area. Its success in solving tasks cannot today be fully explained in physical or mathematical terms. Contributing to this challenge is the grand goal of this PhD project: the creation of a general theory that describes the fundamental mathematical rules and physical laws, relevant properties, and features behind the “intelligent” functioning of trained neural networks. To this end, the project will focus on a simpler but also successful type of machine learning approach, named Reservoir Computing (RC).
However, other more popular approaches will also be considered. In RC, the learning phase that trains a dynamical neural network to process information about an input signal deals only with the much easier task of understanding how the neural network needs to be observed, without the more difficult task of making structural changes to it (as in deep learning). We aim to show with mathematical rigour how the configuration and emergent behaviour of a dynamical network contribute to the processing of information about an input signal, leading to an intelligent response to it. We want to show why chaos in neural networks can enhance the smart behaviour of trained neural networks. Another goal will be to determine how “intelligence” depends on the particular way a network is observed to construct the output functions. Today, output functions are constructed from randomly chosen observations of some neurons in the network.
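The sketch below shows the RC idea in its simplest echo-state-network form: a fixed random recurrent network is driven by an input signal, and only a linear readout of the reservoir states is trained, here for one-step-ahead prediction. All sizes and parameters are illustrative.

```python
# Minimal sketch of reservoir computing: fixed random reservoir, trained readout.
import numpy as np

rng = np.random.default_rng(6)
N, T = 200, 5000                          # reservoir size, number of time steps
t = np.arange(T + 1)
u = np.sin(0.2 * t) + 0.5 * np.sin(0.0311 * t)   # quasi-periodic input signal

Win = rng.uniform(-0.5, 0.5, N)           # input weights (fixed, random)
W = rng.standard_normal((N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W))) # spectral radius below 1 (echo state)

x = np.zeros(N)
states = np.empty((T, N))
for k in range(T):                        # drive the reservoir; nothing is trained here
    x = np.tanh(W @ x + Win * u[k])
    states[k] = x

# train only the linear readout (ridge regression): state -> next input value
wash = 100                                # discard the initial transient
S, y = states[wash:], u[wash + 1:T + 1]
Wout = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)
pred = S @ Wout
print("one-step prediction RMS error:", float(np.sqrt(np.mean((pred - y) ** 2))))
```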
The outputs of this project will potentially contribute to a better understanding of how our own brain computes. They will also contribute towards the industrial exploitation of neural networks, by developing a mathematical formalism to create simpler but smarter neural networks that can process more information, more quickly, with fewer computational resources.
- Machine learning approaches to advance clinical decision making
Main supervisor: Prof. Bjoern Schelter
Massive amounts of data related to long-term health conditions are routinely collected by professionals. The analysis of these data sets is challenging not only because of the volume of the data, but especially because of its variety. Standard techniques fail to combine and exploit all the information contained therein within one coherent framework that is amenable to statistical analysis.
In this project we will build on our previous research to devise such a framework. Approaches such as machine learning (artificial neural networks, support vector machines, decision trees, etc) and state space modelling will provide the basis for this demanding project.
Efficient visualisation and presentation of the results will be used to communicate the findings to medical professionals. Involvement of various stakeholders will guarantee a tailored solution maximising the benefit for patients.
Young researchers will be embedded in an active and thriving environment, with state-of-the-art facilities and strong connections to local stakeholders and industry.
Photonics
- Development of on-chip spectrometer
Main supervisor: Dr Kapil Debnath
Full details: FindAPhD
Spectrometry is a widely used technique in various fields, including physics, chemistry, and biomedical sensing. Spectrometers have been employed in both industrial and fundamental research applications. In recent years, the need for compact and low-cost spectrometers has increased, with a focus on reducing device footprint, cost, and power consumption. As a result, the trend has shifted from huge instruments to smaller, more cost-effective, and user-friendly units. Handheld or portable spectral analysis devices are in demand, resulting in the miniaturization of spectrometers to centimeter-scale footprints. The further miniaturization of these devices to submillimeter scale may open up new opportunities, such as integration into lab-on-a-chip systems, smartphones, and spectrometer-per-pixel snapshot hyperspectral imaging devices.
In this project, you will design a compact and integratable spectrometer on an integrated photonics platform using 'Reconstructive Spectroscopy'. Reconstructive spectroscopy is a novel approach that uses computational techniques to reconstruct the spectral distribution of the input light from pre-calibrated information stored within a set of detectors. Your objectives will be to design a dispersive optical element suitable for reconstructive spectroscopy using integrated photonic technology, and to develop supervised or unsupervised reconstruction algorithms.
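The computational half of the problem can be sketched as a linear inverse problem: with pre-calibrated detector responses stacked in a matrix, the input spectrum is recovered from the detector readings, here by non-negative least squares on synthetic data. Real reconstructive spectrometers often have fewer detectors than spectral points and lean on sparsity priors or trained models, which is where the project's algorithm development comes in.

```python
# Minimal sketch of reconstructive spectroscopy as a linear inverse problem.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
n_wl, n_det = 100, 120                     # spectral points, detectors
wl = np.linspace(400, 700, n_wl)           # wavelength grid (nm)

# pre-calibrated responses: random broadband filters standing in for the
# transmission spectra of the dispersive photonic element
centres = rng.uniform(400, 700, n_det)
widths = rng.uniform(10, 60, n_det)
A = np.exp(-((wl[None, :] - centres[:, None]) / widths[:, None]) ** 2)

# unknown input spectrum: two peaks
s_true = np.exp(-((wl - 500) / 10) ** 2) + 0.6 * np.exp(-((wl - 620) / 15) ** 2)
d = A @ s_true + 0.01 * rng.standard_normal(n_det)   # noisy detector readings

s_rec, _ = nnls(A, d)                      # non-negative least-squares recovery
err = np.linalg.norm(s_rec - s_true) / np.linalg.norm(s_true)
print(f"relative reconstruction error: {err:.2f}")
```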
- Developing optical beamshaping approaches for optical trapping and optical tweezering of aerosols
Main supervisor: Prof. David McGloin
Full details: FindAPhD
Optical tweezers are typically built around sophisticated and bulky microscope systems. These are powerful tools that allow individual microscopic particles, and how they interact with their surroundings, to be studied with high precision. They are, however, very much a laboratory-based tool.
In this project the idea is to explore how optical tweezers can be developed around integrated optical approaches, to make them both smaller and more suitable for applications outside highly controlled laboratory environments. The work will be based around studying the behaviour of airborne aerosols in optical traps, which has environmental sampling and analysis applications, through the use of optical beamshaping approaches to control optical propagation down optical fibres and optical waveguides. As part of the goal is miniaturisation, the work will also examine the viability of planar metasurfaces in such experiments. As another goal is to reduce the cost of the devices and improve their adaptability, approaches using low-cost waveguides, such as polymer waveguides, will also be studied.
The work will be experimentally based and offers flexibility in terms of the ultimate research questions. There is also scope, for example, to work on studying aerosol trapping in acoustic traps and acoustic beamshaping. The work is based in a newly established photonics lab housing a range of optical sources, beamshaping technologies, and spectroscopic analysis tools.
Physics of Biological Systems
- Statistical Data Analysis of medical health records
Main supervisor: Prof. Bjoern Schelter
Massive amounts of data related to long-term health conditions are routinely collected by professionals. The analysis of these data sets is challenging not only because of the volume of the data, but especially because of its variety. Standard techniques fail to combine and exploit all the information contained therein within one coherent framework that is amenable to statistical analysis.
In this project we will build on our previous research to devise such a framework. Approaches such as mixed models for repeated measures, generalised linear models, state space modelling, and non-parametric statistics will provide the basis for this demanding project.
Efficient visualisation and presentation of the results will be used to communicate the findings to medical professionals. Involvement of various stakeholders will guarantee a tailored solution maximising the benefit for patients.
Young researchers will be embedded in an active and thriving environment, with state-of-the-art facilities and strong connections to local stakeholders and industry.
- Learning from the past to predict and reduce the risk of infectious disease pandemics
Main supervisor: Dr Francisco Perez-Reche
The World Health Organisation reports that infectious diseases cause 63% of childhood deaths and 48% of premature deaths. There is an ongoing risk of epidemics and pandemics that can cause widespread morbidity and mortality (Spanish flu, Ebola, SARS, E. coli, Listeria, etc.).
Infectious diseases can reach humans in many different ways: they can be transmitted between healthy and infected people, through consumption of infected food or water, through contact with animals, etc. The world is massively interconnected enabling people, animals and food to move rapidly between continents along with infectious disease agents.
Tracing the origin and spread of infectious diseases has never been more challenging or more important. The spectacular developments in the detection and whole-genome sequencing of disease agents, together with the computational power that enables timely processing of big data, offer the opportunity to tackle this problem.
For example, information on the geographical spread of an infectious disease is generally insufficient to trace back the labyrinth of possible pathways through which humans become infected. However, combining geographical information on disease spread with information on the genetic evolution of the infectious disease agents holds promise [1-3]. Methodologies to achieve this are still in their infancy, and this project is an opportunity to make a significant contribution to tackling this critical problem. The project will:
- Use computer workstations to simulate the spread and evolution of infectious disease agents. Simulations will be based on geographical and whole-genome sequence datasets of real pathogens and will generate virtual histories of their spread and evolution (a minimal simulation sketch follows this list).
- Use these results to develop methods that explain how epidemics occurred.
- Use this knowledge to predict future epidemics and to develop and simulate strategies to reduce infectious disease risk.
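As a starting point for such simulations, the sketch below runs a stochastic SIR epidemic with the Gillespie algorithm, the elementary building block that, once extended with spatial structure and pathogen genome evolution, generates the virtual histories described above. Rates and population size are illustrative.

```python
# Minimal sketch: stochastic SIR epidemic via the Gillespie algorithm.
import numpy as np

rng = np.random.default_rng(8)
beta, gamma = 0.3, 0.1           # transmission and recovery rates
S, I, R = 990, 10, 0             # susceptible, infected, recovered
t = 0.0

while I > 0:
    N = S + I + R
    rate_inf = beta * S * I / N  # infection event: S + I -> 2I
    rate_rec = gamma * I         # recovery event:  I -> R
    total = rate_inf + rate_rec
    t += rng.exponential(1.0 / total)     # waiting time to the next event
    if rng.random() < rate_inf / total:
        S, I = S - 1, I + 1
    else:
        I, R = I - 1, R + 1

print(f"epidemic died out at t = {t:.1f} with {R} total infections")
```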
- Modelling the deformation of cells
Main supervisor: Dr Francisco Perez-Reche
Cells are the elementary building blocks of living organisms. The correct functioning of living organisms is therefore conditional on the ability of living cells to withstand forces and deformations and to promptly adapt to their mechanical environment. Alteration of the mechanical properties of cells can contribute to diseases such as cancer. It is, therefore, crucial to identify the conditions that compromise the mechanical resilience of cells.
We recently observed that cells poked by a microscopic cantilever respond with sudden avalanche-like events [1]. Avalanche dynamics have been observed in solids (e.g. during fracture or deformation of shape-memory alloys) but are more surprising for living cells, which are regarded as a soft material. The behaviour is, however, not completely unexpected, since cells exhibit both solid- and liquid-like properties.
Another intriguing manifestation of the solid-like behaviour of cells is the recently observed degradation of cytoskeleton integrity under cyclic loading [2]. This behaviour is reminiscent of the degradation in solids, which has been explained in terms of deformation-induced dislocations [3]. The mechanism behind degradation in cells, however, is unknown.
This project will study avalanches and degradation of cells under mechanical load. The study will be based on network models inspired by models of avalanches in solids [3,4]. Predictions of the model will be validated through comparison with experimental data. After validation, the models will be used to make new predictions that can motivate new experiments.
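A classic minimal model of such avalanches is the equal-load-sharing fibre bundle, sketched below: fibres with random failure thresholds share an external load, and each failure redistributes stress onto the survivors, triggering avalanches with a power-law size distribution. It is offered here only as the kind of statistical-physics starting point that the project's cell-network models would build on.

```python
# Minimal sketch: avalanches in the equal-load-sharing fibre bundle model.
import numpy as np

rng = np.random.default_rng(9)
n = 100000
thresholds = np.sort(rng.random(n))   # random failure thresholds, sorted

i, avalanches = 0, []                 # i = index of the weakest intact fibre
while i < n:
    # raise the external load F just enough to break the weakest intact fibre
    F = thresholds[i] * (n - i)
    size = 0
    # each failure spreads the load over fewer fibres, possibly breaking
    # more of them: one avalanche at fixed external load
    while i < n and thresholds[i] * (n - i) <= F:
        i += 1
        size += 1
    avalanches.append(size)

sizes = np.array(avalanches)
# the avalanche size distribution follows a power law (exponent ~ 5/2)
print("avalanches:", sizes.size, "largest:", sizes.max())
```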
- Neuronal Dynamics from a complex network perspective
Main supervisor: Dr Ekkehard Ullner
The human brain is possibly the most intriguing complex system we know. The combination of experimental work and theoretical approaches has yielded many insights over the last century, but we are still far away from a real understanding.
The goal of the project is to combine dynamical-systems and statistical-mechanics tools to understand the functioning of neural networks. While most of computational neuroscience focuses on rate models, neurons in fact communicate by emitting spikes, and it is therefore worthwhile, if not necessary, to explore more realistic setups involving pulse-coupled neurons. In this context, we wish to make use of concepts such as synchronization, phase transitions, and response theory to improve our comprehension. On a more specific level, the project unfolds by means of direct numerical simulations of the "microscopic" equations, combined with the analysis of "macroscopic" equations describing suitable probability densities. Depending on the interests of the potential applicant, the focus can be adjusted to be more theoretically or more numerically oriented.
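The sketch below shows the kind of "microscopic" model meant here: a network of leaky integrate-and-fire neurons with instantaneous (delta-pulse) excitatory synapses. Network size, coupling, and drive are illustrative, and real studies would track synchrony measures and Lyapunov exponents rather than just the firing rate.

```python
# Minimal sketch: a pulse-coupled network of leaky integrate-and-fire neurons.
import numpy as np

rng = np.random.default_rng(10)
N, steps, dt = 200, 5000, 0.1
tau, v_th, v_reset = 20.0, 1.0, 0.0   # membrane time constant, threshold, reset
I_ext = 1.05                          # constant suprathreshold drive
# sparse excitatory coupling: 10% connectivity, small pulse amplitude
J = 0.02 * (rng.random((N, N)) < 0.1)

v = rng.uniform(0.0, 1.0, N)          # random initial membrane potentials
spikes = 0
for _ in range(steps):
    v += dt * (I_ext - v) / tau       # leaky integration of the drive
    fired = v >= v_th
    if fired.any():
        v[fired] = v_reset                # reset neurons that spiked
        v += J @ fired.astype(float)      # deliver instantaneous pulses
        spikes += int(fired.sum())

print(f"mean firing rate ~ {spikes / (N * steps * dt):.3f} spikes per unit time")
```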
The student will learn techniques to characterise dynamical systems by means of chaotic measures (e.g. Lyapunov exponents, entropy, dimension), network measures, and universal approaches to the analysis of large data sets. The project will familiarise the student with model building in a neuronal context and beyond, transferable to other natural and man-made complex networks. This includes understanding different levels of abstraction and the necessary and meaningful choice of simplifications for model building in the context of a scientific question. Programming competence is welcome.
- Statistical physics of DNA replication
Main supervisor: Dr Alessandro Moura
The goal of this project is to apply statistical physics and probability theory to the problem of DNA replication in living cells. The replication of DNA is one of the most important processes in all of biology. DNA encodes all the information that is passed on to the next generation of cells, and it must be rapidly and faithfully copied when the time comes for cells to divide. Dramatic advances in sequencing technology and microscopy in the last two decades have allowed unprecedented experimental access to the inner workings of cells. We now have quantitative measurements of the dynamics of DNA replication in populations and even in individual cells. This means that the approaches of physics and applied mathematics can be applied to the study of DNA replication, and can be used to gain better understanding of, and new insights into, this crucial phenomenon.
The replication of DNA is executed by molecular machines called DNA polymerases, which travel along the DNA molecule as they replicate it. The DNA polymerases must be assembled from several molecules at the starting points of replication – specific locations on the DNA called replication origins. These precursor molecules move in the nucleus through Brownian motion, making this a stochastic process. Once assembled, the polymerases travel through the DNA, “unzipping” it into its component strands and performing the replication as they go. Because the DNA polymerase is a molecular machine, it is subject to thermal fluctuations from the environment, which affect how it moves. To further complicate things, the DNA is a busy place: many processes take place on it at the same time, especially protein synthesis. So the DNA polymerases can collide with other molecules bound to the DNA and get stuck for a while. For all these reasons, DNA replication is expected to be highly stochastic. However, most models assume that the DNA polymerases travel at constant speed along the DNA. We will formulate mathematical models taking the stochastic nature of the movement of DNA polymerases into account. We will create numerical simulations to test the predictions of our theory, and compare our results to experimental data available from collaborators. We will also examine the assembly of the DNA polymerases from their component molecules, and model the waiting-time statistics.
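The sketch below shows the flavour of such a simulation: origins fire stochastically (exponential waiting times) and the resulting forks advance with fluctuating speed rather than at a constant rate, until converging forks annihilate. The genome length, origin positions, and rates are invented for illustration.

```python
# Minimal sketch: stochastic origin firing and fluctuating replication forks.
import numpy as np

rng = np.random.default_rng(11)
L = 10000                        # genome length (lattice sites)
origins = [2000, 5000, 8000]     # candidate replication origins
fire_rate = 1e-3                 # firing probability per origin per time step
v_mean, v_sd = 1.0, 0.5          # fluctuating fork speed (sites per step)

replicated = np.zeros(L, dtype=bool)
forks = []                       # active forks as (position, direction)
t = 0
while not replicated.all():
    t += 1
    # stochastic origin firing launches two diverging forks
    for o in origins:
        if not replicated[o] and rng.random() < fire_rate:
            replicated[o] = True
            forks += [(o, +1), (o, -1)]
    # forks advance by a fluctuating number of sites, then die when they
    # run off the genome or into already-replicated DNA
    surviving = []
    for pos, d in forks:
        step = max(0, int(round(rng.normal(v_mean, v_sd))))
        new = pos + d * step
        a, b = sorted((pos, new))
        replicated[max(a, 0):min(b, L - 1) + 1] = True
        ahead = new + d
        if 0 <= ahead < L and not replicated[ahead]:
            surviving.append((new, d))
    forks = surviving

print("genome fully replicated after", t, "time steps")
```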
- Noisy translation: modelling of stochastic effects on protein production
Main supervisor: Prof. M. Carmen Romano
This project will focus on the mathematical modelling of a fundamental process in the cell: translation of the messenger RNA into a protein. A messenger RNA (mRNA) contains the sequence of nucleotides transcribed from the DNA that encodes a certain protein. Molecular machines called ribosomes bind to the mRNA sequence and move along the nucleotide sequence, thereby translating the sequence of codons (groups of 3 consecutive nucleotides) into the sequence of amino acids that form the protein. Like cars on a narrow countryside road, ribosomes cannot overtake each other, so that queues of ribosomes can form on mRNAs. In this project we will develop a mathematical model to describe how ribosomes move along the mRNA sequence, thereby predicting protein production rates. This is a fundamental problem in molecular biology, as the amounts of the different kinds of proteins produced largely determine the behaviour of a cell.
In particular, we will focus on stochastic effects of the process of translation, predicting the extent of fluctuations in protein production expected from different mRNAs, taking into account effects such as codon composition, mRNA secondary structures and global competition for translation resources. Model predictions will be directly compared with experimental results so that a series of rounds of model refinement and validation can be performed. The PhD student will work in a dynamic and interdisciplinary team of researchers working at the interface between physics and biology, integrating theoretical and experimental results.
- How to make molecular machines: modelling ribosome biogenesis
Main supervisor: Prof. M. Carmen Romano
Ribosomes are arguably the most important biological molecular machines. Cells can be seen as factories of proteins, with ribosomes being the machines producing them. Therefore, understanding how ribosomes themselves are made, and how their production is controlled depending on the environment of the cell, is a fundamental question in cell biology. Despite extensive information on the process of ribosome biogenesis gathered in recent years, the regulation of ribosome production upon changes in external cellular conditions remains an outstanding open question. The main aim of this project is to develop a mathematical model of ribosome biogenesis taking into account the current knowledge about the biochemical pathways. In particular, we will aim at identifying the rate-limiting steps and the most crucial mechanisms determining the production rate of ribosomes, as well as how this production can be finely tuned depending on external cellular resources and environmental conditions. We will also explore the links between the cell cycle and ribosome biogenesis, as well as metabolism. In contrast to a large mathematical model comprising a very high number of components, our objective is to develop a model as simple as possible, but nevertheless predictive, so that it allows us to gain insight into the fascinating process of ribosome biogenesis. The PhD student will work in a dynamic and interdisciplinary team of researchers working at the interface between physics and biology, integrating theoretical and experimental results.
- Modelling of the self-feedback control of the androgen receptor under the influence of testosterone
Main supervisor: Dr Ekkehard Ullner
Sex steroids, including androgens, are master regulators of cell function, with well-established roles in the regulation of anabolic metabolism, reproduction, and cancer. The androgen testosterone acts through the androgen receptor (AR). The response of different tissues to testosterone depends on the presence of the receptor protein. The consequences for the expression of the AR are poorly understood but are crucial for the overall health benefits for ageing men and women of maintaining muscle and bone integrity and cardiovascular health. How changes in hormone levels, for example during ageing, affect receptor expression, dimerisation, DNA binding, and cell-specific expression is poorly understood. We aim to develop and validate mathematical models that integrate these key steps in the pathway into a model that can predict gene expression in response to changes in hormone levels.
The mathematical model will be based on ordinary differential equations (ODEs) and laboratory studies in prostate cells. The student will learn mathematical biology to understand the construction of ODEs, different gene-regulatory mechanisms and how they relate to the underlying biological processes, the numerical solution of ODEs, the basics of dynamical systems theory, and methods to analyse response and sensitivity. The work will be developed in close collaboration with a biologist and offers some flexibility for the student to tailor the project to their own research interests. The mathematical model will consider different binding sites and binding mechanisms and compare these through the responses of the different mathematical models, both against each other and against experimental results. The project can develop in several directions, e.g. looking further down to the single-cell response with stochastic models, or investigating the response of the downstream regulated genes, leading to a network approach to understanding the interaction between the self-regulated AR and the genes responding to AR.
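A minimal sketch of the kind of ODE model meant here is shown below: AR mRNA and protein, with transcription up-regulated by the hormone-bound receptor itself through a Hill function. The network structure and every parameter value are illustrative placeholders, not the model to be built in the project.

```python
# Minimal sketch: self-feedback ODE model of androgen receptor expression.
import numpy as np
from scipy.integrate import solve_ivp

k_basal, k_fb, K, h = 0.1, 1.0, 0.5, 2.0   # basal rate, feedback strength, Hill constants
d_m, k_p, d_p = 0.5, 1.0, 0.2              # mRNA decay, translation, protein decay

def rhs(t, y, T):
    m, p = y                   # AR mRNA and AR protein
    active = T * p             # hormone-bound (active) receptor at testosterone level T
    dm = k_basal + k_fb * active**h / (K**h + active**h) - d_m * m
    dp = k_p * m - d_p * p
    return [dm, dp]

for T in [0.1, 1.0]:           # low versus high testosterone
    sol = solve_ivp(rhs, [0.0, 100.0], [0.1, 0.1], args=(T,), rtol=1e-8)
    print(f"T = {T}: steady-state AR protein ~ {sol.y[1, -1]:.2f}")
```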
As an interdisciplinary project, the student will receive training in the different areas. It is desirable for the student to have knowledge in one or two of the following areas and a strong interest in learning the other techniques required. Desirables are: theoretical biology, bioinformatics, physics, applied mathematics, numerical solution of differential equations, programming (C, Matlab, Python, Mathematica or any other programming language), parameter fitting, dynamical systems theory and complex networks.
- Modelling competition inside the cell: understanding how exogenous and endogenous mRNAs compete for translational resources
Main supervisor: Prof. M. Carmen Romano
Full details: FindAPhD
Cells in any living system depend on the regulated production of proteins using the information encoded in genes at DNA level. How cells regulate the amount of proteins that they produce is crucially important in biotechnology, during production of therapeutic proteins, and in medicine, when infection of a cell by a virus can change the functioning of the host gene expression system.
The main objective of this project is to apply physics to develop a mathematical model that predicts how protein production is regulated within cells, and specifically, how protein production is affected when foreign mRNAs (e.g. viral) are introduced into a cell. This exciting application of physics will for the first time allow understanding of the highly complex processes that underpin cell functioning in health and disease, as well as every biotechnological process.
To make proteins in the cell, mRNAs (the cell’s own or foreign ones) are translated into proteins using molecular machines called ribosomes. The ribosomes bind to the mRNA and advance through a series of 3-nucleotide (codon) steps, in doing so adding one amino acid at each step, until the mature protein has been assembled. This process can be mathematically described using a well-established model of transport in statistical physics called the Totally Asymmetric Simple Exclusion Process (TASEP). In this model, the mRNA is described by a one-dimensional lattice along which particles (ribosomes) can hop stochastically from site to site.
We have now extended the model to consider other translation resources, such as the small diffusing molecules called transfer RNAs that supply the amino acids to the ribosome. Cellular mRNAs compete for the same pool of translational resources with viral or other foreign RNAs, such as those introduced in a biotechnology process where cells are used as factories to produce proteins of interest for pharmaceutical or food industry purposes. In doing so, the balance of translational resources is distorted. Understanding how the balance between demand and supply for translational resources is distorted is critically important not only for understanding gene expression regulation, but also for the optimisation of heterologous gene expression.
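The basic TASEP is easy to simulate, as sketched below: ribosomes enter at rate alpha, hop forward when the next site is free, and exit at rate beta, with the steady-state current serving as a proxy for the protein production rate. Lattice length and rates are illustrative, and the project's models add codon-dependent rates and shared tRNA pools on top of this skeleton.

```python
# Minimal sketch: TASEP as a model of ribosome traffic on an mRNA.
import numpy as np

rng = np.random.default_rng(12)
L, alpha, beta = 200, 0.3, 0.6       # lattice length, entry and exit rates
steps = 200000
site = np.zeros(L, dtype=bool)       # True = site occupied by a ribosome
exits = 0

for _ in range(steps):
    i = int(rng.integers(-1, L))             # random-sequential update
    if i == -1:                              # initiation at site 0
        if not site[0] and rng.random() < alpha:
            site[0] = True
    elif i == L - 1:                         # termination at the last site
        if site[-1] and rng.random() < beta:
            site[-1] = False
            exits += 1
    elif site[i] and not site[i + 1]:        # hop forward if the next site is free
        site[i], site[i + 1] = False, True

current = exits * (L + 1) / steps            # exits per Monte Carlo sweep
print(f"steady-state current (protein production rate) ~ {current:.3f}")
```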
Using physics and sophisticated mathematical models of gene expression, the PhD student will work in a highly interdisciplinary team led by two supervisors (Physics and Molecular Biology), who have a long-established collaboration on this topic. Training in mathematical modelling and molecular biology will be provided, relevant for a wide range of research and other career destinations.
- Reduction of cardiac complexity as a general marker of disease
Main supervisor: Dr Sandip George
Full details: FindAPhD
The heartbeat is often thought of as a periodic oscillation of the heart. However, we know that this is not true, with cardiac activity changing according to a number of physical and mental factors. Sleep results in a lower heart rate, for example, while exercise and anxiety can raise it. Even at rest, the exact timing and amplitudes of heartbeats differ from beat to beat.
This complexity of the cardiac rhythm is related to its nonlinearity and the coupling between the heart and other parts of the body.
Complexity is a broad term used to describe a number of related concepts. In the context of time series, complexity can refer to a few different ideas, including the information content of a system or the nature of its nonlinear dynamics. These are measured using quantifiers such as entropy, fractal dimensions, and recurrence-based measures.
A reduction in the complexity of cardiac dynamics, as measured using time series analysis, is known to be correlated with disease. This is true in the case of cardiac illnesses such as congestive heart failure, fibrillation, cardiac autonomic neuropathy, and post-myocardial-infarction conditions, among others. But this reduction in complexity is also seen in ECGs measured from individuals suffering from other illnesses that are not directly related to the heart, including diabetes and depression.
In this PhD we will explore the reduction in complexity of the ECG using ideas from networks of dynamical systems and real ECG time series. The first part will explore reductions in complexity in ECG time series as a result of disease. The second part will develop a mechanism for understanding the reduction in complexity observed in ECG using complex networks and nonlinear dynamical systems. Finally, the developed model will be validated by comparing observables from the model system and real ECG data.
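One of the standard quantifiers involved is sample entropy, sketched below on synthetic RR-interval series: lower values indicate a more regular, less complex rhythm. The series, embedding dimension, and tolerance are illustrative.

```python
# Minimal sketch: sample entropy of an RR-interval series.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn: -log of the fraction of m-matches that remain (m+1)-matches."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=-1)
        iu = np.triu_indices(len(templ), k=1)
        return np.sum(d[iu] < tol)
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(13)
irregular = 0.8 + 0.05 * rng.standard_normal(500)      # variable RR intervals
regular = 0.8 + 0.01 * np.sin(0.3 * np.arange(500))    # nearly periodic rhythm
print("SampEn, irregular series:", round(sample_entropy(irregular), 2))
print("SampEn, regular series:  ", round(sample_entropy(regular), 2))
```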
Plasma Physics
- Propellant Ambiguity for Radio-Frequency Plasma Micro-Propulsion (AMBI-RF) [Funded]
Main supervisor: Dr Scott Doyle
Full details: FindAPhD
Important Note: This project is exclusively available to UK nationals and US nationals, with full funding provided only to those eligible for the Home/UK fee rate. We cannot consider non-UK / non-US nationals for this project.
Recently, there has been a growing interest in alternative propellants for electric propulsion systems. For high-power, deep-space satellites, this search has focused on a replacement for xenon, with leading contenders being iodine and bismuth. For ‘off-the-shelf’, academic, and commercial satellites (particularly micro-satellites) the search for alternative propellants is driven by a requirement for safety, affordability, and simplicity, and includes many molecular substitutes such as ammonia, water, peroxide, ethanol, nitrous oxide, and carbon dioxide. Notably, such volatile species are also abundant in comets and asteroids, raising the possibility of In-Situ Resource Utilisation (ISRU) refuelling platforms capable of extracting and employing such molecular propellants. While numerous studies have addressed the need to replace xenon, there have been significantly fewer studies into molecular propellants in electric propulsion (EP) systems. Ammonia and water in particular have not been well studied, despite presenting a storage-dense, cheap, and abundant (both terrestrially and in-situ) propellant solution for satellite operations.
To address these knowledge gaps, AMBI-RF will perform detailed predictive modelling to numerically investigate the feasibility of ambiguous molecular propellants in RF micropropulsion systems; specifically addressing the role(s) that vibrational modes and molecular fragments play in propulsive efficiencies and the associated effectiveness of existing electromagnetic RF control schemes. Assessing the ionisation, direct and vibrational dissociation, and power coupling pathways facilitated by thermal (bulk) and non-thermal (secondary) electrons in RF-coupled nitrogen, hydrogen and ammonia discharges represents the primary aim of the project. Numerical modelling will be undertaken by the PhD candidate using the 2D fluid/Monte-Carlo Hybrid Plasma Equipment Model (HPEM6), under the supervision of the primary PI, Dr Scott J. Doyle. Investigations into the feasibility of off-world refuelling and the application of “dirty”, CO2/H2O impurity containing, propellant mixtures will also be addressed.
This 3-year (36-month) PhD studentship will address this shortfall in the literature by compiling a portable ammonia/nitrogen/hydrogen reaction mechanism, building upon prior work via the inclusion of vibrational ammonia and nitrogen states. Exploring how such complex vibrationally excited molecular species interact with, and alter, existing multi-harmonic and magnetised control techniques in radio-frequency driven plasmas is of critical importance to fundamental plasma science and a wide range of applications from materials processing to chemical catalysis. The successful candidate will contribute to the field of electric spacecraft propulsion by facilitating a broader understanding of the pros and cons of molecular, nitrogen-containing propellants within high-powered air-breathing and low-to-mid powered electric propulsion environments.
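To indicate the kind of balance the full 2D HPEM simulations resolve in far greater detail, the sketch below solves a zero-dimensional global-model particle balance (ionisation equals wall loss) for the electron temperature and compares illustrative dissociation and ionisation channels. Every rate coefficient, threshold, and geometric factor is a placeholder, not data for any real propellant.

```python
# Minimal sketch: 0D global-model particle balance for a molecular discharge.
import numpy as np
from scipy.optimize import brentq

ng = 1e20                     # neutral gas density (m^-3), illustrative
ub, A, V = 3e3, 1e-3, 1e-4    # Bohm speed (m/s), loss area (m^2), volume (m^3)

def k_ion(Te):                # electron-impact ionisation rate coefficient
    return 1e-14 * np.exp(-15.0 / Te)     # Arrhenius form, placeholder values

def k_diss(Te):               # electron-impact dissociation, lower threshold
    return 1e-14 * np.exp(-8.0 / Te)      # placeholder values

# particle balance: k_ion(Te) * ng * ne * V = ne * ub * A  =>  solve for Te
Te = brentq(lambda T: k_ion(T) * ng - ub * A / V, 1.0, 10.0)
print(f"self-consistent electron temperature ~ {Te:.2f} eV")
print(f"dissociation/ionisation rate ratio ~ {k_diss(Te) / k_ion(Te):.1f}")
```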
- Design and Optimisation of RF Wave-Coupled Plasma Propulsion Systems
Main supervisor: Dr Scott Doyle
Full details: FindAPhD
Increasing available onboard electrical power in telecommunication satellites has driven interest in a new wave of high-power electric propulsion systems. Similar to a tokamak, transformer-coupled propulsion sources employ primary coil antennae to induce a secondary current in a toroidal plasma column. Ferrites are employed to enhance the coupling efficiency and control the discharge topology. Low radio frequency (250 – 1000 kHz) power is ohmically coupled within the plasma column, facilitating efficient and homogeneous neutral gas heating. Transformer-coupled propulsion sources offer a scalable, robust, and cost-effective method of coupling high-wattage electrical power into a wide range of propellants, offering high-thrust (>1 N), mid-specific-impulse (100 – 300 s) solutions for LEO, GSO, Lunar transfer orbits, and beyond.
To address these knowledge gaps, this project will perform detailed predictive modelling to numerically investigate the feasibility of wave-coupled plasma propulsion systems as a replacement for existing propulsion systems on mid-to-large-scale satellite platforms. Numerical modelling will be undertaken by the PhD candidate using the 2D fluid/Monte-Carlo Hybrid Plasma Equipment Model (HPEM), under the supervision of the primary PI, Dr Scott J. Doyle. The key ionisation, neutral gas heating, and power coupling pathways facilitated by thermal (collisional) and non-thermal (kinetic) processes will first be assessed over a wide range of propellant flow rates and discharge regimes. Heating mechanisms and efficiencies will then be investigated for continuous-wave and pulsed discharge conditions, over a range of applied voltage frequencies, and the prospect of hybrid transformer-coupled / inductively-coupled modes will be considered. Finally, thruster geometry, nozzle design, and the differences in propulsive characteristics between open and closed toroidal geometries will be assessed. Investigations into the feasibility of off-world refuelling and the application of “ambiguous” and “dirty” propellant mixtures may also be addressed.