Trevor L. McDonell

Lambda scientist, physicist at heart. Runner, cyclist, coffee enthusiast.

About Me

I am currently a Research Associate with the Programming Languages and Systems group in the School of Computer Science and Engineering at the University of New South Wales, Australia. My interests include parallel programming (in particular, data parallelism), functional programming languages, and using graphics processors and other compute accelerators for high-performance computing.

I previously held a Postdoctoral Researcher position at the School of Informatics and Computing and the Center for Research in Extreme Scale Technologies at Indiana University Bloomington, USA.

I completed my PhD with the Programming Languages and Systems group at the University of New South Wales, Australia.

I was a student at the University of Sydney, Australia, where I studied Mechatronics (Space) with honours at the Australian Centre for Field Robotics (ACFR), together with Physics and Computational Science at the School of Physics. I also had a brief encounter with ViSLAB and the Centre for Quantum Computer Technology (CQCT) during this time.

After graduating, I held a brief internship at the Andøya Rocket Range (Andøya, Norway). Before returning to academia, I worked as a software engineer for Canon Information Systems Research Australia (CiSRA) (Sydney, Australia). I also took some time out of my PhD to intern at the National Institute of Informatics (NII) (Tokyo, Japan) as well as the compilers group of NVIDIA (Seattle, USA).

I spend most of my time implementing functional programming languages, which I use to program both multicore SMP systems and CUDA graphics cards for general-purpose computation (GPGPU).

I am a regular at FP-Syd.

I have been spotted at various cycling and running events.

When not doing the above I (occasionally) practice martial arts.

Publications

Projects

Accelerate

Data.Array.Accelerate defines an embedded language of array computations for high-performance computing in Haskell. Computations on multi-dimensional, dense, regular arrays are expressed in the form of parameterised collective operations, such as maps, reductions, and permutations. These computations may be compiled online and executed on a range of architectures, such as GPUs.
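To give a flavour of the language, here is a minimal sketch of a dot product written as collective operations (elementwise multiplication followed by a parallel reduction), run here with the reference interpreter backend; the concrete input vectors are invented for illustration:

```haskell
import Data.Array.Accelerate              as A
import Data.Array.Accelerate.Interpreter  (run)  -- reference backend

-- Dot product as collective array operations:
-- zipWith performs the elementwise multiply, fold the parallel reduction.
dotp :: Acc (Vector Float) -> Acc (Vector Float) -> Acc (Scalar Float)
dotp xs ys = A.fold (+) 0 (A.zipWith (*) xs ys)

main :: IO ()
main = do
  let xs = fromList (Z :. 3) [1, 2, 3] :: Vector Float
      ys = fromList (Z :. 3) [4, 5, 6] :: Vector Float
  -- 1*4 + 2*5 + 3*6 = 32
  print (run (dotp (use xs) (use ys)))
```

The same `dotp` expression can be executed unchanged by other backends, such as the CUDA backend targeting GPUs.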

Teaching