2019

Oxford Programming Language Seminar - PL and Emerging Hardware

Jeremy Gibbons and I decided to run a seminar series in Oxford. We invited people to speak about new developments in hardware and their relationship with or requirements from programming languages.

We had fantastic outside speakers:

  • Peter Braam - PL and Emerging Hardware overview (slides)

  • John Gustafson (NUS) spoke about posits

  • Steve Pawlowski (Micron) - described the tremendous impact of ML on hardware

  • Paul Kelly (Imperial College London) spoke about Firedrake (slides)

  • Hironori Kasahara (Waseda University) - the OSCAR parallel compiler and its applications (slides)

  • Simon Knowles (Graphcore) - spoke about the Graphcore architecture

  • Priyanka Raina (Stanford) - the AHA project (slides)

  • Albert Cohen (Google) - MLIR (slides)






Oxford Seminar Series on Machine Learning and Physics

In the autumn term of 2019, I started a seminar series in Oxford on the application of machine learning in physics. Miha Zgubic helped a lot in bringing it together, and Philipp Windischhofer will do so for next year. The seminar was very well attended, and we really learned fantastic things about how ML has become a new scientific method for physics.

Often machine learning is described as the answer to the big-data challenge, but I think it goes well beyond that. In particular, it has been shown to be a viable supplement or alternative to simulation, and it might possibly lead to major breakthroughs in hard computational problems, for example those surrounding turbulence, where little progress has been made for decades.

I gave the opening seminar (slides here), giving a survey of some of the interactions, and some of the relationships with developments in computer science. The entire series can be found on our YouTube channel.





Keynote - Conference on Next Generation Arithmetic

John Gustafson developed the new unum and posit formats for floating-point numbers, which appear mathematically and computationally very attractive. Background can be found on posithub.org and in his wonderful book, The End of Error: Unum Computing.
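To make the format concrete, here is a minimal sketch of decoding a standard 8-bit posit (with es = 2): sign, then a variable-length "regime" run, then exponent bits, then fraction. This is a toy decoder for illustration only, not Gustafson's reference implementation, and the function name is my own.

```python
def decode_posit(bits, nbits=8, es=2):
    """Decode an nbits-wide posit bit pattern (given as an int) to a float.

    Toy decoder for illustration; not a reference implementation
    of the posit standard.
    """
    mask = (1 << nbits) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (nbits - 1):           # NaR ("not a real")
        return float("nan")
    negative = bool(bits >> (nbits - 1))
    if negative:                           # posits negate by two's complement
        bits = -bits & mask
    # Regime: run of identical bits following the sign bit.
    s = format(bits & ((1 << (nbits - 1)) - 1), f"0{nbits - 1}b")
    run = len(s) - len(s.lstrip(s[0]))
    k = run - 1 if s[0] == "1" else -run
    tail = s[run + 1:]                     # skip the regime-terminating bit
    exp_bits, frac_bits = tail[:es], tail[es:]
    # Truncated exponent bits count as the high-order bits.
    e = int(exp_bits, 2) << (es - len(exp_bits)) if exp_bits else 0
    f = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0
    value = (1.0 + f) * 2.0 ** (k * (1 << es) + e)
    return -value if negative else value

print(decode_posit(0b01000000))   # → 1.0
print(decode_posit(0b01100000))   # → 16.0 (one regime step: useed = 2**2**es)
print(decode_posit(0b11000000))   # → -1.0
```

The tapered layout is what gives posits their appeal: values near 1 get many fraction bits, while very large and very small magnitudes trade fraction bits for dynamic range.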

John and I have been discussing applications of this to grand challenge compute problems like those encountered by the SKA radio telescope. In outline, the idea is to compress floating-point data to reduce the required memory bandwidth. For example, one could contemplate storing numbers with lower precision in memory, yet performing computations with higher precision before sending data back to memory.
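The store-low, compute-high idea can be sketched in NumPy, with IEEE float16 standing in for the compressed in-memory format and float64 for the working precision (the actual proposal concerns posit-based compression, not half precision; the numbers here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal(1_000_000)   # float64 "truth"

# Store in low precision: 4x less memory traffic than float64.
stored = data.astype(np.float16)

# Compute in high precision: widen on load, accumulate in float64.
total = stored.astype(np.float64).sum()

print(stored.nbytes, data.nbytes)       # 2000000 8000000
print(abs(total - data.sum()))          # small rounding error from storage
```

The residual error comes only from the one-time rounding to the storage format; the accumulation itself stays in high precision, which is exactly the trade-off that matters when memory bandwidth, not arithmetic, is the bottleneck.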

I gave a keynote at the CONGA conference in March 2019 explaining the challenges and the ideas. The slides and a video of my presentation are online.

At the conference, I mentioned the importance of not looking at this problem one float at a time but instead considering the compression and decompression of arrays of floating-point numbers. I learned at the conference that very interesting work is being done by Milan Klöwer in the Physics Department at Oxford (one feels somewhat embarrassed to discover such things 5,000 miles away from our common department). Milan looks at the information content of arrays before using them in computations. Perhaps this is the way forward.

There are steep obstacles to creating good software for decompressing data in CPU caches, because few instructions and even fewer APIs are available to manage data while controlling its write-back to RAM.
