FUSE Memos

Here are the abstracts of all 15 FUSE memos up to 93-15; four of them are actually present as compressed PostScript files. The abstracts are taken directly from the README file on quilty and only minimally marked up. One of the prime perpetrators of the FUSE project, Daniel Weise, now works for Microsoft,¹ where he maintains a WWW page.

FUSE MEMO 90-1
GRAPHS AS AN INTERMEDIATE REPRESENTATION FOR PARTIAL EVALUATION
Daniel Weise
March 1990
(Also Stanford Computer Systems Laboratory Technical Report CSL-TR-90-421)
(This paper is partially obsoleted by FUSE MEMO 90-6.)

We have developed a fully automatic partial evaluator, called FUSE, for the functional subset of the Scheme programming language. FUSE operates differently from other partial evaluators: it transforms programs into graphs rather than into program text. A separate program transforms the graphs into executable code. This paper shows how using graphs solves difficult problems encountered with conventional partial evaluation technology, such as code duplication, type propagation, and premature decision making. The major benefit of graphs is the ability both to reduce an expression at partial evaluation time and to have the expression appear in the transformed program. Because FUSE uses graphs, it has been able to do more thorough partial evaluation than its predecessors, and to perform optimizations previously unattainable by partial evaluation. (Note: 2 figures of this paper were inserted using paper technology, so they do not appear in the PostScript, and are missing from the ftp'd paper.)
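For readers new to the technique, here is a minimal sketch of the general idea of partial evaluation, in Python rather than Scheme and using text generation rather than FUSE's graph representation: given a known exponent, a specializer can reduce away the recursion and conditionals of a generic power function, leaving a residual program that only multiplies. The function names here are invented for illustration.

```python
# Illustrative sketch of partial evaluation (NOT FUSE's graph-based
# machinery): specializing power(x, n) on a known exponent n yields
# a residual program in which the loop over n is already unrolled.

def specialize_power(n):
    """Build the source text of a power function specialized on n."""
    expr = "1"
    for _ in range(n):
        expr = f"(x * {expr})"      # unroll one multiplication per step
    return f"def power_{n}(x):\n    return {expr}\n"

src = specialize_power(3)           # residual program, as program text
namespace = {}
exec(src, namespace)                # "compile" the residual program
power_3 = namespace["power_3"]
print(power_3(2))                   # → 8
```

A graph-based specializer like FUSE represents the residual computation as a graph instead of text, which is what lets it share subexpressions and defer reduce/residualize decisions.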

FUSE MEMO 90-2
COMPILING SCIENTIFIC CODE USING PARTIAL EVALUATION
Andrew Berlin and Daniel Weise
March 1990
(Also Stanford Computer Systems Laboratory Technical Report CSL-TR-90-422)
(Revised version published in COMPUTER 23(12), pp. 25-37, Dec 1990)

Partial evaluation converts a high-level program into a low-level program that is specialized for a particular application. We describe a compiler that uses partial evaluation to dramatically speed up programs. We have measured speedups over conventionally compiled code that range from seven times faster to ninety-one times faster. Further experiments have also shown that by eliminating inherently sequential data structure references and their associated conditional branches, partial evaluation exposes the low-level parallelism inherent in a computation. By coupling partial evaluation with parallel scheduling techniques, this parallelism can be exploited for use on heavily pipelined or parallel architectures. We have demonstrated this approach by applying a parallel scheduler to a partially evaluated program that simulates the motion of a nine body solar system.

FUSE MEMO 90-3
COMPUTING TYPES DURING PROGRAM SPECIALIZATION
Daniel Weise and Erik Ruf
October 1990
(Also Stanford Computer Science Laboratory Technical Report CSL-TR-90-441)
(Revised as FUSE MEMO 90-3-revised.)

We have developed techniques for obtaining and using type information during program specialization (partial evaluation). Computed along with every residual expression and every specialized program is type information that bounds the possible values that the specialized program will compute at run time. The three keystones of this research are Symbolic Values that represent both the set of values that might be computed at runtime and the code for creating the runtime value, generalization of symbolic values, and the use of online fixed-point iterations for computing the type of values returned by specialized recursive functions. The specializer exploits type information to increase the efficiency of specialized functions. This research has two benefits, one anticipated and one unanticipated. The anticipated benefit is that programs that are to be specialized can now be written in a more natural style without losing accuracy during specialization. The unanticipated benefit is the creation of what we term concrete abstract interpretation. This is a method of performing abstract interpretation with concrete values where possible. The specializer abstracts values as needed, instead of requiring that all values be abstracted prior to abstract interpretation.
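The "symbolic value" and "concrete abstract interpretation" ideas can be illustrated with a toy model (an assumption for exposition, not FUSE's actual data structures): each value carries residual code, a type bound, and, when available, a concrete constant, and a primitive like addition computes concretely where both inputs are known and abstracts only where it must.

```python
# Toy model of a symbolic value: residual code for the runtime
# computation plus a type bound on what that code can produce.
# (Hypothetical sketch; FUSE's real representation is richer.)

class Sym:
    def __init__(self, code, typ, const=None):
        self.code = code      # residual expression, as text
        self.typ = typ        # bound on the possible runtime values
        self.const = const    # concrete value, if known at specialization

def add(a, b):
    # "Concrete abstract interpretation": compute with concrete
    # values where possible, abstract (emit residual code + type)
    # only where an operand is unknown.
    if a.const is not None and b.const is not None:
        v = a.const + b.const
        return Sym(repr(v), "int", v)
    return Sym(f"({a.code} + {b.code})", "int")

x = Sym("x", "int")                           # unknown runtime input
r = add(add(Sym("1", "int", 1), Sym("2", "int", 2)), x)
print(r.code)                                 # → (3 + x)
print(r.typ)                                  # → int
```

Note how `1 + 2` folds to the constant `3` during specialization, while the addition involving the unknown `x` is residualized, yet still carries a type the specializer can exploit.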

FUSE MEMO 90-3-revised
COMPUTING TYPES DURING PROGRAM SPECIALIZATION
Daniel Weise and Erik Ruf
December 1990

We have developed techniques for obtaining and using type information during program specialization (partial evaluation). Computed along with every residual expression and every specialized program is type information that bounds the possible values that the specialized program will compute at run time. The three keystones of this research are Symbolic Values that represent both the set of values that might be computed at runtime and the code for creating the runtime value, generalization of symbolic values, and the use of online fixed-point iterations for computing the type of values returned by specialized recursive functions. This work differs from previous specializers in computing type information for all residual expressions, including residual if expressions, and residual calls to specialized user functions. The specializer exploits the type information it computes to increase the efficiency of specialized functions. In particular, this research allows the class of recursive descent programs that use data structures to be more efficiently specialized.

FUSE MEMO 91-4
GENERATING COMPILED SIMULATIONS USING PARTIAL EVALUATION
Wing-Yee Au, Daniel Weise, and Scott Seligman
June 1991
(Published in the Proceedings of the 28th Design Automation Conference, pp. 205-210, IEEE, June, 1991)

Using a program specializer, we automatically generated high-performance digital simulation algorithms from a simple interpreter-based simulator. By making simple changes in the simulator and the specializer, we generated four types of compiled simulations: the PC-set algorithm, an improvement on the PC-set algorithm, and two compiled versions of the BACKSIM algorithm. An analysis of our experiments indicates that large improvements to the PC-set method are possible, and that compiled simulation based on the pure BACKSIM algorithm should not be further investigated, as the overhead of the algorithm is larger than the time it saves by pruning gate evaluations.

FUSE MEMO 91-5
USING TYPES TO AVOID REDUNDANT SPECIALIZATION
Erik Ruf and Daniel Weise
June 1991
(Published in the proceedings of the 1991 ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Directed Program Manipulation, ACM Press, pp. 321-333, June, 1991)
(The online version corrects a couple small errors in Section 2.3 of the published paper.)

Existing partial evaluators use a strategy called Polyvariant Specialization, which involves specializing program points on the known portions of their arguments, and re-using such specializations only when these known portions match exactly. We show that this re-use criterion is overly restrictive, and misses opportunities for sharing in residual programs, thus producing large residual programs containing redundant specializations. We develop a criterion for re-use based on computing the domains of specializations, and describe an approximate implementation of this criterion based on types, and its implementation in our partial evaluation system, FUSE. Finally, we relate our algorithm to existing work in partial evaluation and machine learning.
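The re-use idea can be sketched as follows (a hypothetical illustration, not the paper's type system): if a specialization's behavior depends only on some property of its known argument, such as its type, then caching under that property lets calls with different exact values share one residual function, where exact-match re-use would build a redundant specialization per value.

```python
# Hypothetical sketch of type-based re-use of specializations.
# Two calls whose known arguments differ in value, but not in the
# property the specialization actually depends on, share one entry.

spec_cache = {}

def specialize_format(sample):
    """Specialize a generic formatter on the *type* of its known input."""
    key = type(sample).__name__        # domain of the specialization
    if key not in spec_cache:          # miss: build a new residual function
        if isinstance(sample, int):
            spec_cache[key] = lambda v: f"int:{v:d}"
        elif isinstance(sample, float):
            spec_cache[key] = lambda v: f"float:{v:.2f}"
        else:
            spec_cache[key] = lambda v: f"other:{v}"
    return spec_cache[key]             # hit: re-use, no redundant copy

f1 = specialize_format(3)
f2 = specialize_format(7)              # same type, different value
print(f1 is f2)                        # → True: one shared specialization
print(f1(5))                           # → int:5
```

An exact-match cache keyed on the values 3 and 7 would have produced two identical residual functions; the type-based key captures the actual domain of the specialization.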

FUSE MEMO 91-6
AUTOMATIC ONLINE PARTIAL EVALUATION
Daniel Weise, Roland Conybeare, Erik Ruf, and Scott Seligman
June 1991
(Published in the Proceedings of the Conference on Functional Programming Languages and Computer Architecture)

We have solved the problem of constructing a fully automatic online program specializer for an untyped functional language (specifically, the functional subset of Scheme). We designed our specializer, called Fuse, as an interpreter that returns a trace of suspended computations. The trace is represented as a graph, rather than as program text, and each suspended computation indicates the type of its result. A separate process translates the graph into a particular programming language. Producing graphs rather than program text solves problems with code duplication and premature reduce/residualize/generalize decisions. Fuse's termination strategy, which employs online generalization, specializes conditional recursive function calls, and unfolds all other calls. This strategy is shown to be both powerful and safe.

FUSE MEMO 92-7
OPPORTUNITIES FOR ONLINE PARTIAL EVALUATION
Erik Ruf and Daniel Weise
April, 1992
(Also Stanford Computer Systems Laboratory Technical Report CSL-TR-92-516)

Partial evaluators can be separated into two classes: offline specializers, which make all of their reduce/residualize decisions before specialization, and online specializers, which make such decisions during specialization. The choice of which method to use is driven by a tradeoff between the efficiency of the specializer and the quality of the residual programs that it produces. Existing research describes some of the inefficiencies of online specializers, and how these are avoided using offline methods, but fails to address the price paid in specialization quality. This paper motivates research in online specialization by describing two fundamental limitations of the offline approach, and explains why the online approach does not encounter the same difficulties.

FUSE MEMO 92-8
PRESERVING INFORMATION DURING ONLINE PARTIAL EVALUATION
Erik Ruf and Daniel Weise
April, 1992
(Also Stanford Computer Systems Laboratory Technical Report CSL-TR-92-517)

The degree to which a partial evaluator can specialize a source program depends on how accurately the partial evaluator can represent and maintain information about runtime values. Partial evaluators always lose some accuracy due to their use of finite type systems; however, existing partial evaluation techniques lose information about runtime values even when their type systems are capable of representing such information. This paper describes two sources of such loss in existing specializers, solutions for both cases, and the implementation of these solutions in our partial evaluation system, FUSE.

FUSE MEMO 92-9
AVOIDING REDUNDANT SPECIALIZATION DURING PARTIAL EVALUATION
Erik Ruf and Daniel Weise
April, 1992
(Also Stanford Computer Systems Laboratory Technical Report CSL-TR-92-518)

Existing partial evaluators use a strategy called polyvariant specialization, which involves specializing program points on the known portions of their arguments, and re-using such specializations only when these known portions match exactly. We show that this re-use criterion is overly restrictive, and misses opportunities for sharing in residual programs, thus producing large residual programs containing redundant specializations. We develop a criterion for re-use based on computing the domains of specializations, describe an approximate implementation of this criterion based on types, and show its implementation in our partial evaluation system FUSE. In addition, we describe several extensions to our mechanism to make it compatible with more powerful specialization strategies and to increase its efficiency. After evaluating our algorithm's usefulness, we relate it to existing work in partial evaluation and machine learning.

FUSE MEMO 92-10
ACCELERATING OBJECT-ORIENTED SIMULATION VIA AUTOMATIC PROGRAM SPECIALIZATION
Daniel Weise and Scott Seligman
April, 1992
(Also Stanford Computer Systems Laboratory Technical Report CSL-TR-92-519)

Object-oriented simulations in an object-oriented environment are easier to construct and maintain than conventionally programmed simulations. Unfortunately, they are also slower because of message passing and other runtime overhead. We have developed an automatic program transformer that solves the efficiency problem for a large class of simulation programs. It automatically constructs an efficient program from the inefficient simulation program and the objects it will receive as input. Depending on the object-oriented language used, and the application, the new program can be more than an order of magnitude faster than the original program. In this paper we describe the benefits of object-oriented simulation, our transformer, and how such dramatic speedups are possible.

FUSE MEMO 92-11
IMPROVING THE ACCURACY OF HIGHER-ORDER SPECIALIZATION
USING CONTROL FLOW ANALYSIS
Erik Ruf and Daniel Weise
June, 1992
(Published in the proceedings of the 1992 ACM SIGPLAN Workshop on Partial Evaluation and Semantics-Directed Program Manipulation, pp. 67-74, San Francisco, June, 1992. These proceedings are available as Yale Technical Report YALEU/DCS/RR-909)

We have developed a new technique for computing the argument vectors used to build specializations of first-class functions. Instead of building these specializations on completely dynamic actual parameters, our technique performs a control flow analysis of the residual program as it is constructed during specialization, and uses the results of this analysis to compute more accurate actual parameter values. As implemented in the program specializer FUSE, our technique has proven useful in improving the specialization of several realistic programs taken from the domains of interpreters and scientific computation. Also, it extends the utility of the continuation-passing-style (CPS) transformation for binding time improvement to programs with non-tail-recursive residual loops.

FUSE MEMO 92-12
TOWARDS A NEW PERSPECTIVE ON PARTIAL EVALUATION
Morry Katz and Daniel Weise
June, 1992
(Published in the proceedings of the 1992 ACM SIGPLAN Workshop on Partial Evaluation and Semantics-Directed Program Manipulation, San Francisco, June, 1992)

We have designed a new method for performing partial evaluation. This method divides specialization into two phases: a polyvariant analysis phase and a code generation phase. The analysis phase does not differentiate between unfolding and specializing. Symbolic execution is the sole operation performed during the analysis phase. Symbolic execution of an expression yields a characterization of the value that would be returned by the expression if it were executed at runtime, a description of the residual operation(s) that must be performed to generate the runtime value, and a record of the information utilized in performing symbolic execution. Only delta-reductions are performed during symbolic execution. No beta-substitution of user functions occurs during the analysis phase and no values are in-lined. Consequently, symbolic execution yields extremely polymorphic, and highly reusable, specialized function bodies. The code generation phase constructs the residual program from the specialized function bodies. It is responsible for all beta-substitution. Delaying the beta-substitution decisions until the code generation phase allows us to construct a partial evaluator that both produces highly specialized code and makes intelligent decisions regarding code size versus lower function call overhead.
Termination decisions are made in the analysis phase based on lazy use-analysis, an extension of eager use-analysis. Lazy use-analysis considers both the information used by symbolic execution in performing delta-reductions and how information about the values returned by delta-reductions is used. Information is only used in a fundamental sense if there is a causal chain between its use in performing some delta-reduction and the production of the final runtime answer returned by a program. The less information used in creating a specialized function body that contributes to the return value of the function, the greater the number of contexts in which the specialization can be reused. Furthermore, the less information used about the value returned by a function call, the fewer restrictions are placed on the characteristics of the specialization to be called. A partial evaluator based on lazy use-analysis promises to produce a better combination of quality residual code and termination than previous alternatives. For example, a partial evaluator constructed using lazy use-analysis safely handles the standard ``counter'' problem and properly unfolds Mogensen's (unpublished) regular expression accepter. No automatic partial evaluator to date has been able to do both.

FUSE MEMO 92-13
ON THE SPECIALIZATION OF ONLINE PROGRAM SPECIALIZERS
Erik Ruf and Daniel Weise
July, 1992
(Also Stanford Computer Systems Laboratory Technical Report CSL-TR-92-53)
(An updated version of this report appeared in the Journal Of Functional Programming (Special Issue on Partial Evaluation) 3(3):251-281, July 1993)

Program specializers improve the speed of programs by performing some of the programs' reductions at specialization time rather than at runtime. This specialization process can be time-consuming; one common technique for improving the speed of the specialization of a particular program is to specialize the specializer itself on that program, creating a custom specializer, or program generator, for that particular program.
Much research has been devoted to the problem of generating efficient program generators, which do not perform reductions at program generation time that could instead have been performed when the program generator was constructed. The conventional wisdom holds that only offline program specializers, which use binding time annotations, can be specialized into such efficient program generators. This paper argues that this is not the case, and demonstrates that the specialization of a nontrivial online program specializer similar to the original ``naive MIX'' can indeed yield an efficient program generator.
The key to our argument is that, while the use of binding time information at program generator generation time is necessary for the construction of an efficient custom specializer, the use of explicit binding time approximation techniques is not. This allows us to distinguish the problem at hand (i.e., the use of binding time information during program generator generation) from particular solutions to that problem (i.e., offline specialization). We show that, given a careful choice of specializer data structures, and sufficiently powerful specialization techniques, binding time information can be inferred and utilized without the use of explicit binding time approximation techniques. This allows the construction of efficient, optimizing program generators from online program specializers.

FUSE MEMO 93-14
TOPICS IN ONLINE PARTIAL EVALUATION
Erik Ruf (Ph.D. dissertation)
March, 1993
(Also Stanford Computer Systems Laboratory Technical Report CSL-TR-93-563)
(This report supersedes Memos 90-3, 91-5, 92-7, 92-8, 92-9, 92-11, and 92-13)
(NOTE: The figures in this document will break many printers with imperfect PostScript implementations. Printers with Rev. 2 PostScript (e.g. HPIIISi/4mx) seem to fare better. If you find yourself completely unable to print a copy, contact Erik Ruf (ruf@research.microsoft.com) to see if he still has any spare copies)

Partial evaluation is a performance optimization technique for computer programs. When a program is run repeatedly with only small variations in its input, we can profit by taking the program and a description of the constant portion of the input, and producing a "specialized" program that computes the same input/output relation as the original program, but only for inputs satisfying the description. This program runs faster because computations depending only on constant inputs are performed only once, when the specialized program is constructed. This technique has not only proven useful for speeding up "interpretive" programs such as simulators, language interpreters, and pattern matchers, but also encompasses many "traditional" compiler optimizations.
Contemporary work in partial evaluation has concentrated on "offline" methods, where high-speed specialization is achieved at the cost of slower specialized programs by limiting the sorts of decision-making that can occur at specialization time. This dissertation investigates "online" methods, which impose no such restrictions, and demonstrates the benefits of specialization-time decision-making in FUSE, a partial evaluator for a functional subset of Scheme. We describe two new methods, return value analysis and parameter value analysis, for computing more accurate estimates of runtime values at specialization time, enabling more optimizations, and yielding better specialized programs with less "hand-tweaking" of the source. We develop a re-use analysis mechanism for avoiding redundancies in specialized code, improving the efficiency of both the specializer and the programs it produces. Finally, we show how to produce efficient, optimizing program generators by using our techniques to specialize an online program specializer.

FUSE MEMO 93-15
TOWARDS A NEW PERSPECTIVE ON PARTIAL EVALUATION: RESULTS, NEW IDEAS, AND FUTURE DIRECTIONS
Morry Katz
June 1993

This paper is an update to the ideas presented in ``Towards a New Perspective on Partial Evaluation'' (FUSE-MEMO-92-12). The reader is assumed to be familiar with the previous paper, as the ideas presented therein are not repeated in this document. The major advances since the publication of that paper that will be discussed in this document fall into three categories: changes to the use lattice to improve termination, design of a lazier analysis, and addition of a new mechanism to the termination strategy based on searching for base cases of recursions.

¹Infiltrator's boots, I can hear you coming ;-)

This page was last changed on Wed Nov 2 19:03:10 MET 1994 by mfx@pobox.com.