The 2018 ACM International Conference on Computing Frontiers took place from 8 to 10 May on the beautiful island of Ischia, 30km from the Italian city of Naples. The conference explores novel and innovative approaches to the design of many types of computing system (embedded, mobile, high-performance, and more) in order to address the increasing complexity and performance needs of current and future applications. The boundaries between the state of the art and revolutionary innovation must be pushed forward to provide the computational support required to advance both science and engineering.
The call for papers states: “Computing Frontiers is an eclectic, collaborative community of researchers who investigate emerging technologies in the broad field of computing: our common goal is to drive the scientific breakthroughs that transform society”. As such, the conference focuses on a wide spectrum of advanced technologies and radically new solutions relevant to the development of computer systems, and aims to foster communication among scientists and engineers to achieve this.
CF2018 certainly lived up to the brief of bringing together researchers from many different areas of technological innovation. Alongside the main conference programme, there were also four co-located workshops covering a range of topics.
One aspect of the conference that I particularly enjoyed was the 'short paper' sessions. Short paper submissions had to be between two and four pages, with the limit including figures, tables, and references. Short papers covered ongoing research seeking early feedback, and each was presented both as an oral presentation in a dedicated slot and at a poster session. This format worked well: the presentations were necessarily short and pithy, while the poster session provided ample opportunity to ask questions and discuss the work in greater detail.
CF2018 Conference Venue
One of my favourite papers from the conference, 'Adaptive and Polymorphic VLIW Processor to Optimize Fault Tolerance, Energy Consumption and Performance', also deservedly won the Best Paper Award (Authors: Anderson Luiz Sartor, Arthur F. Lorenzon, Sandip Kundu, Israel Koren and Antonio Carlos S. Beck).
This paper describes augmenting a VLIW processor so that it can be partitioned, and adding a modest amount of result-comparison hardware. Fault tolerance is enabled by duplicating instructions and executing the copies separately (at a later time or in a separate partition); the results are then compared, and if they are not identical an exception can be raised. The authors also describe applying power-gating to the VLIW processor, using block execution history to decide whether it is worth powering down a particular functional unit or an entire pipeline: if units are turned off too frequently, there is a performance hit while waiting for them to be re-enabled. These features can be combined to provide fault tolerance and/or to optimise energy and performance by varying the processor's configuration while applications are running. Their results were impressive, despite the applications used being small and data-intensive.
View the paper
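The paper proposes hardware support, but the core duplicate-execute-compare idea is easy to illustrate in software. The C sketch below is purely my own illustration, not the authors' design: execute_dmr and square are hypothetical names, and the second call to the operation stands in for execution in a separate partition or a later issue slot.

/* Illustrative sketch only (not the authors' hardware scheme):
 * a software analogue of duplicate-execute-compare fault tolerance.
 * The operation is run twice and the results compared before being
 * committed; a mismatch plays the role of the exception the
 * duplicated VLIW executions would raise. */
#include <stdio.h>
#include <stdlib.h>

typedef long (*op_fn)(long);

/* Execute op twice (in hardware: once per partition, or in a later
 * slot) and compare the results before returning them. */
static long execute_dmr(op_fn op, long input)
{
    long first  = op(input);   /* primary execution   */
    long second = op(input);   /* duplicate execution */
    if (first != second) {
        fprintf(stderr, "fault detected: %ld != %ld\n", first, second);
        exit(EXIT_FAILURE);    /* hardware would raise an exception */
    }
    return first;
}

static long square(long x) { return x * x; }

int main(void)
{
    printf("result = %ld\n", execute_dmr(square, 12));
    return 0;
}

In the real processor this comparison is done by the added result-comparison hardware before results are committed, which is what keeps the cost modest.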
Delegates enjoying the sunshine and networking between sessions
Another interesting paper was 'FreeDA: Deploying Incompatible Stock Dynamic Analyses in Production via Multi-Version Execution' (Authors: Luís Pina, Anastasios Andronidis, and Cristian Cadar).
Multi-version execution in this context means running several versions of the same program, each compiled to use a different dynamic analysis tool, at the same time. Normally these analyses cannot run simultaneously within a single binary: they interfere with each other, and the combined overhead would be prohibitive. This approach captures syscall information from the initial binary as it runs and replays it into the instrumented binaries running alongside. The paper deals with some awkward syscall cases (rewrite rules can be applied for particular tools), uses a ring buffer, and can also perform sampling; execution is throttled to the slowest version so that the ring buffer does not overflow. Combining sampling with a load balancer is a novel approach to lowering the overhead of the dynamic analysis. Results are presented for a number of different benchmarks, including server applications such as memcached and redis.
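To make the record-and-replay mechanism concrete, here is a minimal C sketch of the ring-buffer idea described above. It is my own illustration, not FreeDA's code: a 'leader' records syscall results into a ring buffer, 'followers' replay them, and the leader is throttled whenever the slowest follower is a full buffer behind. All names (ring_t, leader_record, follower_replay) are hypothetical.

/* Illustrative sketch only (not FreeDA's implementation): a leader
 * records syscall results into a ring buffer and followers replay
 * them, so every version observes identical kernel behaviour. The
 * leader blocks when the slowest follower lags a full buffer behind,
 * which is the throttling behaviour described above. */
#include <stdio.h>

#define RING_SIZE   8
#define N_FOLLOWERS 2

typedef struct {
    long entries[RING_SIZE];           /* recorded syscall results      */
    unsigned long head;                /* next slot the leader writes   */
    unsigned long tails[N_FOLLOWERS];  /* next slot each follower reads */
} ring_t;

/* The leader may only record if no follower is a full buffer behind. */
static int leader_can_record(const ring_t *r)
{
    for (int i = 0; i < N_FOLLOWERS; i++)
        if (r->head - r->tails[i] >= RING_SIZE)
            return 0;  /* throttle to the slowest version */
    return 1;
}

static void leader_record(ring_t *r, long result)
{
    r->entries[r->head % RING_SIZE] = result;
    r->head++;
}

/* A follower replays the recorded result instead of re-issuing the
 * syscall itself. */
static long follower_replay(ring_t *r, int id)
{
    long result = r->entries[r->tails[id] % RING_SIZE];
    r->tails[id]++;
    return result;
}

int main(void)
{
    ring_t ring = {0};
    for (long sys_result = 100; sys_result < 104; sys_result++) {
        while (!leader_can_record(&ring))
            ;  /* in a real system: block or yield until followers catch up */
        leader_record(&ring, sys_result);
        for (int f = 0; f < N_FOLLOWERS; f++)
            printf("follower %d replays %ld\n", f, follower_replay(&ring, f));
    }
    return 0;
}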
I participated in a panel session on the final day entitled 'Post-Moore Computing: a Hype?'. The other panellists were two of the keynote speakers, Rosa M. Badia (Barcelona Supercomputing Center) and Arvind Mithal (MIT), along with John Feo (Pacific Northwest National Laboratory). There was much discussion about when the Post-Moore Computing era started, or indeed whether we have reached it at all! Amongst the panel there was broad agreement that there is no single 'silver bullet' technology that will sustain the historical rate of improvement in computing performance and energy efficiency. New fields such as Quantum Computing hold promise but cannot directly replace what we have now. As the cost of moving data (especially for Big Data) now dominates the cost of compute, we expect schemes that move the compute to where the data lives to start bringing tangible improvements. The final trend is towards heterogeneity: using the right compute tool for the job. This is seen in the proliferation of specialised hardware for tasks such as neural network processing.
All in all, Computing Frontiers 2018 was well worth attending and I would wholeheartedly recommend it to anyone wanting a broad overview of research taking place on the many frontiers in computing and technology.
Find out more about Computing Frontiers