
Prof. Dr. Rob V. van Nieuwpoort

I am the director of technology at the Netherlands eScience Center and full professor at the University of Amsterdam in the Systems and Network Engineering group.


Program committees

If you would like to do an interesting and challenging bachelor or master project with me, please look here.


I started as the director of eScience Technology at the Netherlands eScience Center (NLeSC) in 2012. I previously worked as assistant professor at VU University Amsterdam in the Computer Systems research group, and as a researcher at ASTRON, the Netherlands Institute for Radio Astronomy.
I form a link between the research being undertaken at the University of Amsterdam and at the eScience Center. The NLeSC promotes the use of digital technology in science. It brings together IT, data science, e-infrastructure and data- and computation-intensive research across all domains of research, from physics to the humanities. The applications in the NLeSC’s research portfolio offer a unique opportunity for researchers and students to apply their expertise in scientific issues beyond the domain of IT.

More efficient use of large-scale computing power

I research ways in which large-scale computing power can be used more efficiently in achieving scientific breakthroughs in various scientific fields. Over the past few decades, computers have changed fundamentally, and a shift has taken place in the balance between computing power and data transport. Computer processing speeds are increasing, but computers can’t feed the relevant data into the processors quickly enough. In addition, computers have become highly parallel in their operation: they carry out a lot of calculations simultaneously. Many scientific applications have been unable to keep up with these developments. As a result, much scientific software remains sub-optimal. Improving this software will result in faster large-scale data processing and enhanced scientific tools such as telescopes, climate simulations, particle accelerators, etc.
I develop new programming models and techniques that make the use of large-scale systems (so-called exascale computers) simpler and more efficient. Energy efficiency also plays a crucial part: for large-scale scientific experiments such as the Square Kilometre Array (SKA) telescope, energy use is a limiting factor and a major expense. In these cases, software that uses energy more efficiently has the immediate effect of increasing the sensitivity of the instruments.


I started the GPU Computing Center at VU University Amsterdam, which has come to play a vital role in the Master programme in Computer Science taught jointly by VU and UvA.

For ASCI PhD students, I teach the GPU part of the A24 course, A Programmer's Guide for Modern High-Performance Computing, together with Ben van Werkhoven (NLeSC). The other parts are taught by Ana Lucia Varbanescu and Clemens Grelck (UvA), and Alexandru Iosup (VU). The course will be given from December 12 to 16; registration is open now.


My current research interests focus on developing radio astronomy and signal processing algorithms for very large radio telescopes, such as LOFAR (operated by ASTRON, the Netherlands Institute for Radio Astronomy) and the Square Kilometre Array (SKA). I implement these algorithms on accelerators: multi- and many-core architectures such as graphics processing units (GPUs) from NVIDIA and AMD. For instance, I developed a software correlator on five different multi-core architectures. Per chip, the implementations on NVIDIA GPUs and the Cell are more than 20 times faster than the LOFAR production correlator on our IBM Blue Gene/P supercomputer, and the power efficiency of the many-core architectures is much better. I have worked on correlators, beamforming, poly-phase filters, and gridding (imaging). I am also working on real-time Radio Frequency Interference (RFI) mitigation for exascale instruments such as the SKA.
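To illustrate the core operation (a minimal sketch, not the LOFAR production code): for each antenna pair (baseline), a correlator accumulates over time the product of one signal with the complex conjugate of the other.

```java
// Minimal sketch of the heart of a software correlator for one baseline:
// accumulate sum over time of x[t] * conj(y[t]).
public class Correlator {
    // Signals are interleaved (re, im) arrays; returns {re, im} of the sum.
    public static double[] correlate(double[] x, double[] y) {
        double re = 0, im = 0;
        for (int t = 0; t < x.length; t += 2) {
            double xr = x[t], xi = x[t + 1];
            double yr = y[t], yi = y[t + 1];
            // complex multiply x * conj(y)
            re += xr * yr + xi * yi;
            im += xi * yr - xr * yi;
        }
        return new double[] { re, im };
    }

    public static void main(String[] args) {
        double[] a = { 1, 0, 0, 1 };   // samples: 1, i
        double[] b = { 1, 0, 0, 1 };
        double[] v = correlate(a, b);
        System.out.println(v[0] + " " + v[1]); // prints "2.0 0.0"
    }
}
```

In a real correlator this inner loop runs for every baseline and frequency channel, which is why the arithmetic intensity and memory bandwidth of many-core chips matter so much.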

My research interests also include parallel programming in Java, in particular on distributed systems such as grids, clouds and clusters. I have worked on the Virtual Laboratory for e-Science (VL-e) project. Before that, I worked on the GridLab project, where I developed adaptive grid middleware such as Delphoi, an information system that provides information about the grid and can also predict future conditions, such as anticipated network and CPU load. Using this information, it can, for instance, set the optimal number of parallel data streams for large data transfers.

I have developed the Java implementation of the Grid Application Toolkit (JavaGAT). The JavaGAT offers a set of coordinated, generic and flexible APIs for accessing grid services from application codes, portals, and data management systems. It sits between grid applications and numerous types of grid middleware, such as Globus, Unicore, SSH and Zorilla, and lifts the burden from grid application programmers by providing a uniform interface for file access, job submission, monitoring, and access to information services. As a result, grid application programmers need to learn only a single API to access the entire grid. Thanks to its modular design, the JavaGAT can easily be extended with support for other grid middleware layers. The JavaGAT has been standardized within the Open Grid Forum (OGF) under the name SAGA (Simple API for Grid Applications); the Java reference implementation of SAGA is built by our group on top of the JavaGAT software.
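The modular design can be sketched roughly as an adaptor pattern. The names below are purely illustrative, not the real JavaGAT API: one uniform interface for the application, with per-middleware adaptors plugged in behind it.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the adaptor idea behind JavaGAT (illustrative names,
// NOT the actual JavaGAT API): applications program against one uniform
// interface, and per-middleware adaptors are registered behind it.
interface FileAdaptor {
    String scheme();              // middleware this adaptor handles, e.g. "ssh"
    String read(String path);     // the uniform file-access operation
}

class SshAdaptor implements FileAdaptor {
    public String scheme() { return "ssh"; }
    public String read(String path) { return "contents via ssh of " + path; }
}

public class Gat {
    private final Map<String, FileAdaptor> adaptors = new HashMap<>();

    public void register(FileAdaptor a) { adaptors.put(a.scheme(), a); }

    // Application code calls this single method regardless of the middleware;
    // supporting a new middleware layer means only registering a new adaptor.
    public String read(String scheme, String path) {
        return adaptors.get(scheme).read(path);
    }

    public static void main(String[] args) {
        Gat gat = new Gat();
        gat.register(new SshAdaptor());
        System.out.println(gat.read("ssh", "/data/obs.raw"));
    }
}
```

The point of the pattern is that the application-facing method never changes when a new middleware backend is added, which is what lets programmers learn a single API.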

Together with the HPDC group at the VU, I designed and implemented Ibis. Ibis consists of a communication library for communication on the grid, and a set of high-level programming models for writing parallel and distributed (grid) applications. These models include Satin (divide-and-conquer and master-worker), MPJ (an MPI-like message-passing interface for Java), GMI (Group Method Invocation, an object-oriented MPI-like model), and a highly efficient RMI implementation that can be up to ten times faster than the standard one.

I designed and developed Satin, one of the programming models of Ibis. With Satin, you can write divide-and-conquer programs in Java: applications that recursively divide a problem into smaller pieces. The resulting application can be deployed on a multi-core machine, a cluster, a grid or a cloud. The programming model is extremely high-level: the programs are essentially sequential, contain no communication code, and have no concept of remote machines. Still, they run highly efficiently on a grid and support speculative parallelism, transparent fault tolerance and malleability, while adapting to changes in CPU load and network performance.
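In plain Java, a divide-and-conquer program in this style looks like ordinary recursion. This is only a sketch of the programming model: Satin's actual mechanics differ (recursive calls are marked as spawnable and awaited with a sync), and the class name here is illustrative.

```java
// Plain-Java sketch of the divide-and-conquer style that Satin targets.
// In Satin itself, the recursive calls below would be marked as spawnable
// (so they can be stolen and run on remote machines) and followed by a
// sync before the results are combined.
public class Fib {
    public static long fib(int n) {
        if (n < 2) {
            return n;                // base case: solve directly
        }
        long left = fib(n - 1);      // in Satin: spawned, may run remotely
        long right = fib(n - 2);     // in Satin: spawned
        // in Satin: sync() here waits for the spawned results
        return left + right;         // combine the sub-results
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // prints "55"
    }
}
```

Note that nothing in the program mentions machines or messages; that is exactly the property that lets the runtime deploy it unchanged on a multi-core machine, a cluster, a grid or a cloud.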

PhD candidates I supervise(d):

NLeSC, UvA, LinkedIn, ResearchGate, Google Scholar, Academia.edu, ORCID, Twitter.