Final pre-workshop version. There were some last-minute changes at the workshop.

Monday, September 16th

8.30 - 9.00 Registration

9.00 Welcome

Session 1: Abstract Machines

9.00 - 9.30 Matthias Horn
A Different Integration of Multithreading into the STGM
9.30 - 10.00 Gerald Ostheimer
Pi Calculus as an Intermediate Language for Parallelizing Compilers
10.00 - 10.30 John Glauert
Parallel Implementation through Object Graph Rewriting

10.30 - 11.00 Coffee

Session 2: Implementation Aspects

11.00 - 11.30 Kei Davis
11.30 - 12.00 Hans Wolfgang Loidl
Making a Packet: Cost-Effective Communication for a Parallel Graph Reducer

12.00 - 13.30 Lunch

Session 3: Language Design

13.30 - 14.00 Claus Aßmann
Coordinating Functional Processes using Petri Nets
14.00 - 14.30 Kevin Hammond
Algorithm + Strategy = Parallelism
14.30 - 15.00 Rita Loogen
Implementation Aspects of Eden (tentative title)

15.00 - 15.30 Coffee

Session 4: Implementation of Data-Flow Languages

15.30 - 16.00 Stuart Maclean
Distributing Prograph Using Lisp
16.00 - 16.30 Shigeru Kusakabe
Towards Practical Implementation of a Dataflow-based Functional Language on Stock Machines

16.30 Demonstrations

Tuesday, September 17th

Session 5: Profiling and Debugging

8.45 - 9.15 Stephen Jarvis
The Results of: Profiling Large-Scale Lazy Functional Programs
9.15 - 9.45 Colin Runciman
Two-pass heap profiling: a matter of life and death
9.45 - 10.15 Jan Sparud
Tracing functional computations with redex trails

10.15 - 10.45 Coffee

Session 6: Program Analysis and Optimization

10.45 - 11.15 M. Debbabi
A Type-Based Algorithm for the Control-Flow Analysis of Higher-Order Concurrent Programs
11.15 - 11.45 Urban Boquist
Interprocedural optimizations for lazy evaluation
11.45 - 12.15 Thomas Johnsson
Analysis and Transformation of Graph Reduction Code

12.15 - 13.30 Lunch

Session 7: Scientific Computing with Arrays

13.30 - 14.00 John van Groningen
The implementation and efficiency of arrays in Clean 1.1
14.00 - 14.30 Pascal Serrarens
14.30 - 15.00 Sven-Bodo Scholz
On Programming Scientific Applications in SAC - a Functional Language Extended by a Subsystem for High-Level Array Operations

15.00 - 15.30 Coffee

Session 8: Program Derivation

15.30 - 16.00 Walter Dosch
Calculating a Functional Module for Binary Search Trees
16.00 - 16.30 Stephen McKeever
Call by Value, Call by Name, Call by Need and the Automatic Generation of Abstract Machines

19.00 Social Event

Wednesday, September 18th

Session 9: Persistence

8.45 - 9.15 Marco Pil
First Class File I/O
9.15 - 9.45 Tony Davie
Functional Hypersheets

9.45 - 10.00 Break

Session 10: New Language elements

10.00 - 10.30 Markus Mohnen
10.30 - 11.00 Martin Erwig
Active Patterns

11.00 - 11.30 Coffee

11.30 - 12.00 Chris Clack
Introducing CLOVER: an Object-Oriented Functional Language
12.00 - 12.15 Fergus Henderson
Mercury, a new functional/logic language (10min announcement)

12.15 - 13.30 Lunch

### Abstracts:

A Different Integration of Multithreading into the STGM
Matthias Horn
A variant of the Spineless Tagless G-Machine (STGM) which contains explicit support for multithreading is introduced in [1]. The main design decisions are the separation of the demand for evaluation from case selection, and the introduction of an abstract notion of thread boundaries and thread synchronisation. This article proposes an alternative solution which does not separate demand from selection. Instead, case selections are extended with additional alternatives which handle the appearance of long-latency operations. The overhead necessary to control multithreading is reduced, and sequentially evaluated parts of a program run more efficiently.

Parallel Implementation through Object Graph Rewriting
John Glauert
The Object Graph Rewriting (OGRe) model is an abstract machine for implementing multi-paradigm languages. For example, an implementation has been developed for Facile, which combines functional and concurrent programming. The model is inherently concurrent, but fine-grained, so distributed implementation is not necessarily beneficial. We will introduce OGRe with some simple examples and explain how primitives are added which can trigger concurrent execution when appropriate.

Kei Davis
MPP Haskell is a parallel implementation of the Haskell functional language for the Thinking Machines Inc. CM-5 supercomputer, a large-scale distributed-memory multiprocessor. MPP Haskell is a derivative of GUM, a message-based parallel implementation of Haskell. GUM was carefully designed to minimise performance loss from low-bandwidth, high-latency communications; as an apparent consequence MPP Haskell, with the benefit of high-bandwidth, low-latency communications, achieves remarkably scalable performance.

Making a Packet: Cost-Effective Communication for a Parallel Graph Reducer
Hans Wolfgang Loidl, Kevin Hammond
This paper studies issues related to packing data for transmission in a lazy, parallel graph reduction system. We report performance results for a number of packing and fetching schemes. In particular, we compare bulk fetching, a scheme aimed at high-latency systems in which many closures are packed into a single packet, with incremental fetching as used in GRIP, in which only a single closure is packed in each packet. Results obtained using our simulator, GranSim, show that for low latencies bulk fetching and incremental fetching are equally effective, while for high latencies bulk asynchronous fetching yields better speedups.

We also give results concerning latency hiding, a technique which aims to reduce the impact of high-latency networks by ensuring an adequate supply of work. Somewhat surprisingly, our results show that more aggressive work-creation strategies are less effective for latency hiding, especially at very high latencies.

Coordinating Functional Processes using Petri Nets
Claus Aßmann
The coordination language K2 proposed in this paper provides one viable solution to the problem of specifying concurrent process systems. K2 is based on a variant of colored Petri nets. It primarily defines process systems with deterministic behavior but also allows for controlled forms of nondeterminism.

Algorithm + Strategy = Parallelism
P. W. Trinder, Kevin Hammond, H.-W. Loidl, S. L. Peyton Jones
The importance of separating logic from control has long been recognised for conventional programming languages. In this paper, we introduce a technique that uses higher-order evaluation strategies to separate control from logic in a parallel setting. Evaluation strategies blur the distinction between explicit and implicit parallel programming.
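
The core of the strategies idea can be sketched in a few lines of Haskell. This is a hedged approximation using the names the paper introduces (Strategy, using, rwhnf); the parallel combinator `par` requires a parallel runtime, so the sketch below substitutes `seq` and shows only the sequential machinery that separates an algorithm from its evaluation order.

```haskell
-- A minimal sketch of evaluation strategies (names follow the paper's
-- presentation; `par` from the parallel runtime is replaced by `seq`
-- so that the sketch runs on any Haskell system).

-- A strategy says *how* to evaluate a value, and returns no result.
type Strategy a = a -> ()

-- Attach a strategy to a value: evaluate as the strategy demands,
-- then return the value itself.
using :: a -> Strategy a -> a
using x s = s x `seq` x

-- Reduce a value to weak head normal form.
rwhnf :: Strategy a
rwhnf x = x `seq` ()

-- Apply a strategy to every list element, sequentially; a parallel
-- variant would spark each element with `par` instead of `seq`.
seqList :: Strategy a -> Strategy [a]
seqList _ []     = ()
seqList s (x:xs) = s x `seq` seqList s xs

-- The algorithm (a map) is written once; the evaluation behaviour is
-- attached separately by the strategy.
squares :: [Int]
squares = map (^ 2) [1 .. 10] `using` seqList rwhnf
```

Swapping `seqList` for a `par`-based `parList` would then parallelise the map without touching the algorithm itself, which is the separation the paper advocates.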

Distributing Prograph Using Lisp
Stuart Maclean, David DeRoure
Graphical dataflow programming gives programmers a clear and immediate view of the potential for parallel execution. The graphical programming language Prograph is thus a useful vehicle in the investigation of parallel and distributed programming systems.

Though the existing Prograph environment offers concurrent programming in the form of a threads package, incorporating a seamless mechanism for true distributed computation would require a change to the Prograph system, which is currently infeasible. To overcome this problem, we have constructed a translation system which exports Prograph programs to another representation; the parallel features are then added at that point.

We describe a mechanism for distributing Prograph programs via a translation to Lisp. The effort in distributing the program is then in the domain of the Lisp environment. This paper discusses the work-in-progress regarding the distribution of Prograph programs using Lisp as the parallel engine.

Towards Practical Implementation of a Dataflow-based Functional Language on Stock Machines
Shigeru Kusakabe, Tetsuro Morimoto, Kentaro Inenaga, Makoto Amamiya
Dataflow-based non-strict languages provide high expressiveness, exposing a high degree of concurrency. However, their performance on stock machines is poor. We are developing an efficient implementation of a dataflow-based non-strict language on EWS and a distributed-memory parallel machine, incorporating program analysis and information management at every stage from the compiler to the runtime system.

The Results of: Profiling Large-Scale Lazy Functional Programs
Stephen A. Jarvis, Richard G. Morgan
At the High Performance Functional Computing conference in Denver last year I presented a new approach to profiling which allowed the complete set of program costs to be recorded in so-called cost centre stacks. It was proposed that these program costs could then be manipulated using a post-processor which would speed up the task of profiling a program and would also produce more accurate profiling results.

This paper presents the results of using this new profiling tool in the analysis of a number of Haskell programs. The overheads of the scheme are discussed and the benefits of this new system are considered. The paper also outlines how this approach can be modified to trace and debug Haskell programs.

Two-pass heap profiling: a matter of life and death
Colin Runciman, Niklas Röjemo
A heap profile is derived by interpolation of a series of censuses of heap contents at different stages. The obvious way to obtain census data is to traverse the live heap at intervals throughout the computation, but some attributes of cells can only be determined retrospectively by post-mortem inspection as a cell is over-written or garbage-collected. It is not obvious how to profile sections of heap memory specified in terms of both live-heap and post-mortem attributes, nor how to generate live-attribute profiles of heap sections specified by post-mortem attributes (or vice versa). The paper describes a two-pass profiling technique that solves these problems.

Tracing functional computations with redex trails
Colin Runciman, Jan Sparud
Functional systems have not been widely adopted by software professionals. One reason is the lack of the kind of debugging and tracing tools familiar to those who work with more conventional systems. It is harder to trace evaluation by normal-order graph reduction than to follow a sequence of commands already explicit in a source program. The goal of this work is to extend a compiled graph-reduction system with effective new ways of tracing computations. The emphasis is on 'backward' traces from errors, making it possible to retrace the computational history, for example to locate the cause of a fault.

A Type-Based Algorithm for the Control-Flow Analysis of Higher-Order Concurrent Programs
M. Debbabi, A. Faour, Nadia Tawbi
We address, in a type-based framework, the problem of control-flow analysis for concurrent and functional languages. We present an efficient algorithm that automatically propagates types, communication effects and call graphs. The algorithm comes with a logical characterization that consists of a type proof system. The latter operates on a Concurrent ML core-syntax: a strongly typed, polymorphic kernel that supports higher-order functions and concurrency primitives. Effects are represented as algebraic terms that record communication effects resulting from channel creation, sending and receiving. Call graphs record function calls and are captured by a term algebra that is close to usual process algebras. Types are annotated with effects and call graphs. For the sake of flexibility, a subtyping relation is considered on the type algebra. We present the language syntax together with its dynamic and static semantics. The dynamic semantics is operational and comes as a two-layered labeled transition system. The static semantics consists of the typing rules and an inference algorithm. The latter is proved to be consistent and complete with respect to the typing rules.

The implementation and efficiency of arrays in Clean 1.1
John H. G. van Groningen
We describe the implementation of arrays in the lazy functional programming language Clean 1.1. The performance of two sorting algorithms and a fast Fourier transform written in Clean using arrays is compared with that of similar programs written in C.

Pascal R. Serrarens
This paper shows the use of functional programming in the area of scientific computing. This is an important area because time and space efficiency are crucial: many scientific programs work with large data sets and run for a long time. The paper shows that functional programming can be a real alternative to imperative methods. The example used in this paper is the conjugate gradient algorithm, which shows that the functional version is cleaner, easier to adapt, and almost as quick and lean as imperative implementations.

On Programming Scientific Applications in SAC - a Functional Language Extended by a Subsystem for High-Level Array Operations
L.R. Mullin, W.E. Kluge, S.-B. Scholz
This paper discusses some of the pros and cons of extending a simple functional language called SAC (for Single Assignment C) by array operations similar to those that are available in APL. These array operations are based on the $\psi$-calculus, an algebra of arrays which provides a formalism for specifying and simplifying array operations in terms of index set manipulations. The programming techniques made possible by SAC are demonstrated by means of a functional program for the approximation of numerical solutions of partial differential equations by multigrid relaxation. This application is not only of practical relevance but also fully exposes the flavors of using high-level array operations. In contrast to specifications in other languages, the SAC program is well structured, reasonably concise, and - what is most important - invariant against dimensionalities and shapes. However, some problems arise with the mapping of arrays of different shapes into each other. Also, sophisticated compilation (reduction) techniques are necessary to avoid, whenever possible, the creation of temporary arrays and to eliminate redundant operations. Performance figures for a SAC implementation of the NAS-mgrid-benchmark are competitive with those of a SISAL implementation.

Calculating a Functional Module for Binary Search Trees
Walter Dosch, Bernhard Möller
We formally derive a functional module for binary search trees comprising search, insert, delete, minimum and maximum operations. The derivation starts with an extensional specification that refers only to the multiset of elements stored in the tree.
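
The flavour of the derived module can be suggested by a small Haskell fragment. The code below is our own illustrative sketch of a binary search tree with search, insert and minimum; the names and representation are assumptions for illustration, not necessarily those derived in the paper.

```haskell
-- Illustrative binary search tree module fragment (sketch, not the
-- paper's derived module).  Invariant: in every Node l x r, all
-- elements of l are smaller than x and all elements of r are larger.
data Tree a = Leaf | Node (Tree a) a (Tree a) deriving Show

-- Search descends along the ordering invariant.
member :: Ord a => a -> Tree a -> Bool
member _ Leaf = False
member x (Node l y r)
  | x < y     = member x l
  | x > y     = member x r
  | otherwise = True

-- Insert preserves the invariant; duplicates are ignored, matching a
-- specification in terms of the set of stored elements.
insert :: Ord a => a -> Tree a -> Tree a
insert x Leaf = Node Leaf x Leaf
insert x t@(Node l y r)
  | x < y     = Node (insert x l) y r
  | x > y     = Node l y (insert x r)
  | otherwise = t

-- The minimum is the leftmost element.
treeMin :: Tree a -> Maybe a
treeMin Leaf            = Nothing
treeMin (Node Leaf x _) = Just x
treeMin (Node l _ _)    = treeMin l
```

Delete and maximum follow the same pattern, descending along the ordering invariant; the point of the paper is that such a module can be calculated from the extensional specification rather than written by hand.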

Call by Value, Call by Name, Call by Need and the Automatic Generation of Abstract Machines
Stephen McKeever
Our research considers the problem of automatically deriving correct compilers from Natural Semantics specifications. We propose a framework that abstracts the compile-time behaviour of a Natural Semantics description and then refines its underlying evaluation model so as to create language-specific abstract machines. We demonstrate our approach on the call-by-value, call-by-name and call-by-need lambda calculus.

First Class File I/O
Marco Pil
We intend to store (a representation of) functions in files and, as a prerequisite, to type the file system. File I/O is dynamic in nature, so we cannot do this without (a limited amount of) dynamic typing, which must be integrated into the otherwise completely statically typed language. In my talk I would like to pay attention to the theoretical framework as well as the implementation.

Functional Hypersheets
Tony Davie, Kevin Hammond
We propose to integrate lazy functional programming, hyperprogramming and persistence into a program development environment that permits value and program construction. The views of the system that users will see will be via generalised spreadsheets containing formulae for the development of values of any type. These hypersheets will constitute a local or distributed network of hyperlinks.

Markus Mohnen
In modern functional languages, pattern matching is used to define functions or expressions by analysing the structure of values. We extend Haskell with a more general form of patterns called context patterns, which allow the matching of subterms at an arbitrary, rather than fixed, distance from the root of the whole term. The semantics of context patterns is defined by transforming them to standard Haskell programs. Typical applications of context patterns are functions which search a data structure and possibly transform it. The concept can easily be adopted for other languages with pattern matching, such as ML or Clean.
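
To suggest what matching at an arbitrary depth means, here is a hypothetical example in plain Haskell: a context pattern could match a `Neg (Neg e)` subterm anywhere in a term, and the transformation to standard Haskell would produce an explicit recursive traversal of roughly this shape. Both the example and its translation are our own illustration, not taken from the paper.

```haskell
-- A small expression type for the illustration.
data Expr = Lit Int | Add Expr Expr | Neg Expr deriving (Show, Eq)

-- "Simplify every Neg (Neg e) subterm, wherever it occurs": with a
-- context pattern this could be one equation; in standard Haskell it
-- becomes an explicit search over the whole term.
simplify :: Expr -> Expr
simplify (Neg (Neg e)) = simplify e              -- the interesting match
simplify (Neg e)       = Neg (simplify e)        -- search below Neg
simplify (Add l r)     = Add (simplify l) (simplify r)
simplify e             = e                       -- leaves stay as they are
```

The traversal boilerplate (the last three equations) is exactly what the proposed transformation would generate from a single context pattern.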

Active Patterns
Martin Erwig
Active patterns apply preprocessing functions to data type values before they are matched. This is useful for unfree data types, where more than one representation exists for an abstract value: in many cases there is a specific representation for which function definitions become very simple, and active patterns allow exactly this representation to be assumed in function definitions. We define the semantics of active patterns and describe their implementation.
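
Standard Haskell has no active patterns, but their effect can be imitated with an explicit view function. The sketch below is our own illustration of the idea, not the paper's notation: the unfree type is a multiset represented as an unordered list, and the "active" step rotates the sought element to the front, after which the function definition becomes trivial.

```haskell
import Data.List (delete)

-- A multiset as an unordered list: many lists represent the same
-- abstract value, so the type is unfree.
newtype MultiSet = MS [Int] deriving (Show, Eq)

-- The preprocessing step an active pattern would perform: produce the
-- representation with the sought element at the front, if it occurs.
viewElem :: Int -> MultiSet -> Maybe MultiSet
viewElem x (MS xs)
  | x `elem` xs = Just (MS (x : delete x xs))  -- delete removes one copy
  | otherwise   = Nothing

-- With the element rotated to the front, removal is a trivial match;
-- this is the simplification active patterns buy in definitions.
removeElem :: Int -> MultiSet -> MultiSet
removeElem x ms = case viewElem x ms of
  Just (MS (_:rest)) -> MS rest
  _                  -> ms
```

With real active patterns, `removeElem` could be a single equation matching the preprocessed representation directly, with `viewElem` applied implicitly by the pattern.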

Introducing CLOVER: an Object-Oriented Functional Language
Chris Clack, Lee Braine
The search for a language which combines both the functional and object-oriented paradigms has a long and distinguished history. The aim is to combine the formal-methods benefits of functional programming with the software-engineering benefits of both paradigms. However, we know of no language which can claim to be both purely functional and purely object-oriented. We present CLOVER, a new language which is 100% functional and 99.99% object-oriented. We explain the design issues and how CLOVER achieves its aim. We also explain the "missing" 0.01%, discuss its relevance, and illustrate how its loss may be mitigated through the use of a new visual programming notation.

Mercury, a new functional/logic language (10min announcement)
Fergus Henderson
As of version 0.6, Mercury now supports functional programming. This announcement will very briefly outline Mercury's major features, explaining why you may want to use it, and where you can get hold of it.

cr@informatik.uni-kiel.de, Fri Sep 6 13:59:51 MET-DST 1996