Modelling animal perception of camouflaged prey using artificial observers

About the Project

Background:

Animals display a remarkable diversity of colouration, a topic that has fascinated scientists since Darwin and Wallace. Despite decades of research, significant gaps remain in understanding why certain animals look the way they do. Most studies have focused on narrowing down the functions of particular strategies, such as disruptive colouration, which helps conceal animals by breaking up their shape. Others have explored general correlations between colouration and ecology, such as the prevalence of spots or stripes in more closed environments. The core challenge in understanding colouration is complexity: animals exist in intricate ecosystems where they interact with various observers, including predators, prey, potential mates and competitors (1). Modelling optimal colouration for concealment or signalling is hard because of the sheer number of variables, including the possible colours, patterns, and shapes of animals as well as the environments they inhabit. Studying these systems with biological observers is constrained by time, complexity, and ethical considerations (2,3,4).

Advances in machine learning now make it possible to create artificial agents that mimic biological behaviour. Deep neural networks have demonstrated success in target detection across various contexts, but these tasks are typically performed without integrating biological vision: networks determine the best way to detect targets limited only by their mathematical complexity rather than by biological input (5). However, we have previously shown that networks can model human performance in detecting camouflaged targets, effectively replicating aspects of human vision for certain tasks. Networks were used to predict human reaction times to colours (6) and textures (7), allowing us to test and validate vast parameter spaces to establish optimal concealment. Notably, the networks were not presented with images containing targets, only with parameters describing those targets, which makes the approach a practical starting point for the work proposed here.

We envisage that this PhD will expand on this approach in two ways. First, models will be created based on data obtained from humans with both the targets and their backgrounds included, enabling artificial observers to process the complete visual scene rather than focusing solely on the target. Second, the project will extend the methodology to non-human animals with different visual systems, including fish (sticklebacks, Gasterosteidae) and domestic chickens (Gallus domesticus), with an option to include rats (Rattus norvegicus).

The project will establish you as an expert in visual ecology at the intersection of sensory biology and machine learning. You will acquire transferable skills in programming (Python / R / Matlab), animal behaviour, and mathematical modelling, and you will join a dynamic, fun, and interdisciplinary team with opportunities to tailor your own research plans.

Aims and objectives:

1.        Create deep neural network models with both targets and their backgrounds included, enabling artificial observers to process the complete visual scene rather than focusing solely on target parameters.

2.        Establish a framework for comparing how biological observers (humans) and artificial agents perform target detection, in order to develop valid digital twins; for example, understanding which parts of targets observers detect first.

3.        Extend the methodology to non-human animals with different visual systems, in particular sticklebacks, domestic chickens, and potentially rats, with a view to estimating optimal presentation contexts (8) such as lighting conditions.

Methods:

1.        Use texture-generating algorithms, e.g. based on reaction-diffusion equations (7,9), to create large pattern spaces and parameterize them to control for visual similarity (10) (a minimal generator is sketched after this list).

2.        Run computer-based psychophysics experiments on human participants to collect reaction times to a large set of targets, recording where participants click on each target, with potential extension to eye tracking (see the trial sketch after this list).

3.        Develop convolutional deep neural network architectures that predict reaction times to targets presented in particular contexts. Networks will also output weightings of image areas indicating their relative importance for target detection, e.g. using Class Activation Mapping (11) (sketched after this list).

4.        Develop and implement a paradigm to present a large number of experimental trials to non-human animals, with a focus on domestic chickens. These trials could be based on physical objects or be computer-based, as long as targets can be represented in a parameter space and reaction times to detection are measurable. The paradigm should also accommodate changes in the environment, for example adjustments in light levels.

5.        Implement a system to efficiently validate the predictions using a limited number of biological observers, e.g. using Genetic Algorithms (4,7) (a minimal sketch follows this list).
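
To illustrate Method 1, the following is a minimal sketch of a Gray-Scott reaction-diffusion texture generator in Python. Gray-Scott is one common formulation of reaction-diffusion pattern formation; the grid size, number of steps, and feed/kill parameter values here are illustrative only and are not those used in the cited studies.

```python
# Minimal Gray-Scott reaction-diffusion sketch for generating texture patterns.
# The feed (F) and kill (k) rates control the pattern type and could serve as
# axes of a parameterized pattern space.
import numpy as np

def gray_scott(size=128, F=0.037, k=0.060, Du=0.16, Dv=0.08, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    U = np.ones((size, size))
    V = np.zeros((size, size))
    # Seed a small noisy patch in the centre to kick off pattern formation.
    c = size // 2
    V[c-5:c+5, c-5:c+5] = 0.25 + 0.05 * rng.random((10, 10))
    U[c-5:c+5, c-5:c+5] = 0.50

    def laplacian(Z):
        # 5-point stencil with periodic (wrap-around) boundaries.
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

    for _ in range(steps):
        uvv = U * V * V
        U += Du * laplacian(U) - uvv + F * (1 - U)
        V += Dv * laplacian(V) + uvv - (F + k) * V
    return V  # pattern channel; rescale to 0-255 for display

pattern = gray_scott()  # the resulting pattern type depends on F and k
```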
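
For Method 2, the sketch below shows how a single visual-search trial might record a reaction time and click position. PsychoPy is an assumed toolkit choice (not specified in the project description), and the image file name, output path, and trial structure are placeholders.

```python
# Hedged sketch of one search trial: show a scene, wait for a mouse click on the
# target, and log the reaction time and click coordinates to a CSV file.
from psychopy import visual, core, event
import csv

win = visual.Window(size=(1280, 720), units='pix', color='grey', fullscr=False)
mouse = event.Mouse(win=win)
clock = core.Clock()

def run_trial(image_path, trial_id, writer):
    stim = visual.ImageStim(win, image=image_path)
    stim.draw()
    win.flip()                      # stimulus onset
    clock.reset()
    mouse.clickReset()
    while True:
        buttons = mouse.getPressed()
        if buttons[0]:              # left click = "target found"
            rt = clock.getTime()
            x, y = mouse.getPos()   # click position, to relate to target location
            writer.writerow([trial_id, image_path, rt, x, y])
            break
        if event.getKeys(['escape']):
            core.quit()
    win.flip()                      # blank screen between trials
    core.wait(0.5)

with open('reaction_times.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['trial', 'image', 'rt_s', 'click_x', 'click_y'])
    run_trial('scene.png', 0, writer)   # placeholder stimulus

win.close()
core.quit()
```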
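
For Method 3, this is a hedged PyTorch sketch of a small convolutional network that regresses reaction time from an image, together with a gradient-based class-activation-style map (in the spirit of Grad-CAM) highlighting which image regions drive the prediction. The architecture, layer sizes, and input resolution are illustrative assumptions, not the project's final design.

```python
# Small CNN regressing reaction time from an image, plus a coarse importance map
# computed from gradients of the prediction with respect to the last feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RTNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),   # last conv block used for the map
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

    def forward(self, x):
        fmap = self.features(x)          # keep feature maps for the activation map
        rt = self.head(fmap)             # predicted reaction time (scalar per image)
        return rt, fmap

def activation_map(model, image):
    """Return a coarse importance map for a single image tensor (1, 3, H, W)."""
    model.eval()
    rt, fmap = model(image)
    fmap.retain_grad()
    rt.sum().backward()                  # gradients of predicted RT w.r.t. feature maps
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)   # channel-wise importance
    cam = F.relu((weights * fmap).sum(dim=1))            # weighted sum over channels
    cam = cam / (cam.max() + 1e-8)
    return cam.detach()                  # low-resolution map; upsample to overlay on the image

model = RTNet()
dummy = torch.rand(1, 3, 128, 128)       # placeholder input
predicted_rt, _ = model(dummy)
importance = activation_map(model, dummy)
```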
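
For Method 5, the following sketches a simple genetic algorithm over pattern parameters. The fitness function here is a placeholder standing in for detection times measured from (or predicted for) biological observers; the parameter count, population size, and operators are illustrative assumptions.

```python
# Minimal genetic algorithm over pattern parameters: truncation selection,
# one-point crossover, and Gaussian mutation. Longer detection time would mean
# better camouflage, so in practice fitness comes from observer data.
import numpy as np

rng = np.random.default_rng(1)
N_PARAMS = 4            # e.g. feed/kill rates, pattern scale, contrast
POP_SIZE = 20
N_GENERATIONS = 30

def fitness(params):
    # Placeholder: replace with mean detection time for a pattern generated
    # from these parameters, obtained from observer trials or a trained network.
    return -np.sum((params - 0.5) ** 2)

def evolve():
    pop = rng.random((POP_SIZE, N_PARAMS))
    for _ in range(N_GENERATIONS):
        scores = np.array([fitness(ind) for ind in pop])
        # Truncation selection: keep the top half as parents.
        parents = pop[np.argsort(scores)[::-1][:POP_SIZE // 2]]
        children = []
        while len(children) < POP_SIZE - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_PARAMS)          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child += rng.normal(0, 0.05, N_PARAMS)   # Gaussian mutation
            children.append(np.clip(child, 0, 1))
        pop = np.vstack([parents, np.array(children)])
    return pop[np.argmax([fitness(ind) for ind in pop])]

best_params = evolve()   # parameters of the hardest-to-detect pattern found
```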

Key references:

1. Cuthill et al. (2017), https://doi.org/10.1126/science.aan0221

2. Bond & Kamil (2002), https://doi.org/10.1038/415609a

3. Bond & Kamil (2006), https://doi.org/10.1073/pnas.0509963103

4. Hancock & Troscianko (2022), https://doi.org/10.1111/evo.14476

5. Talas et al. (2019), https://doi.org/10.1111/2041-210X.13334

6. Fennell et al. (2019), https://doi.org/10.1098/rsif.2019.0183

7. Fennell et al. (2021), https://doi.org/10.1111/evo.14162

8. Lambton et al. (2010), https://doi.org/10.1016/j.applanim.2009.12.010

9. Turing (1952), https://doi.org/10.1098/rstb.1952.0012

10. Talas, Baddeley & Cuthill (2017), https://doi.org/10.1098/rstb.2016.0351

11. Minh (2023), https://doi.org/10.48550/arXiv.2309.14304

Supervisors:

Dr Laszlo Talas (Bristol Veterinary School)

Dr John Fennell (Bristol Veterinary School)

Dr Sarah Lambton (Bristol Veterinary School)

Dr Vikki Neville (Bristol Veterinary School)

Professor Christos Ioannou (School of Biological Sciences)

Professor Nick Scott-Samuel (School of Experimental Psychology)

Start date: Sept 2025

How to apply: See How to apply – SWBiosciences Doctoral Training Partnership

Candidate requirements:

See Eligibility – SWBiosciences Doctoral Training Partnership.

Standard University of Bristol eligibility rules for PhD admissions also apply. Please visit the PhD Veterinary Sciences page.

Contacts: Contact the lead supervisor if you have queries about the project. For queries about the SWBio DTP scheme contact
