
From: Claus Reinke <claus.reinke_at_talk21.com>

Date: Sat, 12 May 2007 01:38:26 +0100

> Claus Reinke wrote:
>> assuming standard (normal-order) definitions of normal form (nf),
>> head nf (hnf), weak hnf (whnf), the typical reduction to whnf
>> reduces less than it could, but any reduction beyond hnf, towards
>> full nf, implies reductions that might not be needed in context.
>> what is the rationale for this unusual choice?
>
> Could you provide a reference to a "standard" definition of
> normal form? The current handling of normal form comes
> from the area of term rewriting (which is somehow the basis of Curry)
> where a term is in normal form if it does not contain a redex.
> This implies reductions at arbitrary places.

i can try ;-) first, some context (even if it is probably already
known here), then some references.

the handling of normal forms in programming calculi differs only
in that it tends to be relative to a set of rewrite rules and possibly
a rewrite strategy: for a set of rewrite rules X, there are
X-normal-forms, in which no X-rule is applicable anywhere,
and XS-normal-forms, in which no X-rule is applicable
according to strategy S.
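
to make that concrete, here is a minimal Haskell sketch (my own
names and types, nothing from the Curry report): rules as partial
functions on terms, a strategy as a redex selector, and the two
notions of normal form defined relative to them.

  import Data.Maybe (isNothing)

  -- a tiny first-order term language
  data Term = Con String [Term]

  -- a rule is a partial function: Nothing = not applicable at this term
  type Rule = Term -> Maybe Term

  -- X-normal form: no rule of X applies at any position in the term
  isXNormal :: [Rule] -> Term -> Bool
  isXNormal rules t@(Con _ ts) =
    all (\r -> isNothing (r t)) rules && all (isXNormal rules) ts

  -- a strategy selects at most one step for a given rule set
  type Strategy = [Rule] -> Term -> Maybe Term

  -- XS-normal form: the strategy S offers no step, even though
  -- X-rules might still be applicable elsewhere in the term
  isXSNormal :: Strategy -> [Rule] -> Term -> Bool
  isXSNormal s rules t = isNothing (s rules t)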

another difference is that in term rewriting, the rewrite system itself

is the program, whereas in programming calculi, the rewrite system

is the meta-program that interprets object-level programs.

what i was referring to were beta-normal forms for the lambda
calculus under the normal-order reduction strategy: reducing until
no beta-redexes are left (nf), reducing until no head redexes are
left (hnf), or not reducing under lambdas at all (whnf).
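
in code, the three notions might look as follows (a sketch with my
own names, using Haskell as the meta-language):

  -- lambda terms
  data Exp = Var String | Lam String Exp | App Exp Exp

  -- unwind the application spine: every term is a head applied to arguments
  spine :: Exp -> (Exp, [Exp])
  spine (App f a) = let (h, as) = spine f in (h, as ++ [a])
  spine e         = (e, [])

  -- whnf: a lambda, or a variable applied to arbitrary arguments
  isWhnf :: Exp -> Bool
  isWhnf (Lam _ _) = True
  isWhnf e         = case spine e of (Var _, _) -> True; _ -> False

  -- hnf: lambdas over a variable in head position, with arbitrary arguments
  isHnf :: Exp -> Bool
  isHnf (Lam _ b) = isHnf b
  isHnf e         = case spine e of (Var _, _) -> True; _ -> False

  -- nf: as hnf, but the arguments must be in normal form as well
  isNf :: Exp -> Bool
  isNf (Lam _ b) = isNf b
  isNf e         = case spine e of (Var _, as) -> all isNf as; _ -> False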

normalising (or standard) reduction strategies are those that are
guaranteed to produce a normal form if one exists; for the lambda
calculus, that is normal-order reduction (also known as
leftmost-outermost reduction).

because it always reduces the leftmost outermost redex next,
there is at each step at most one normal-order redex (the head
redex, as long as one exists); normal-order reduction will first
reach weak hnf, then hnf, then normal form, provided that all
of these exist.
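
a one-step normal-order reducer over the Exp type from the previous
sketch could look like this (again just my sketch; substitution is
the naive version, which assumes distinct bound names and does not
handle variable capture):

  -- as in the sketch above
  data Exp = Var String | Lam String Exp | App Exp Exp

  -- naive substitution of s for x; assumes all bound names are distinct,
  -- so variable capture is not handled (enough for small closed examples)
  subst :: String -> Exp -> Exp -> Exp
  subst x s (Var y)   = if x == y then s else Var y
  subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
  subst x s (App f a) = App (subst x s f) (subst x s a)

  -- one leftmost-outermost (normal-order) step;
  -- Nothing means a normal form has been reached
  step :: Exp -> Maybe Exp
  step (App (Lam x b) a) = Just (subst x a b)      -- contract the head redex
  step (App f a)         = case step f of
                             Just f' -> Just (App f' a)
                             Nothing -> App f <$> step a
  step (Lam x b)         = Lam x <$> step b        -- go under lambdas, towards nf
  step (Var _)           = Nothing

  -- iterate to beta-normal form (diverges if no normal form exists);
  -- making the Lam case return Nothing instead would stop at whnf
  normalize :: Exp -> Exp
  normalize t = maybe t normalize (step t)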

any lambda-abstraction is in whnf, no matter what beta-redexes
might occur in the body; whnf originated in implementations,
and was long considered of no theoretical interest.

a head normal form has no more beta-redexes in head position,
so an hnf consists of a prefix of lambdas, followed by a variable
in head position, followed by a postfix of arbitrary expression
parameters (pre- and postfix possibly empty):

hnf = \v1..vn -> v e1..em    -- n,m >= 0
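
for instance (examples of mine, in the same notation):

  (\x -> x) y              -- in none of the three forms: a head redex
  \x -> (\y -> y) x        -- whnf (a lambda), but not hnf: head redex in the body
  \x -> x ((\y -> y) z)    -- hnf, but not nf: a redex among the parameters
  \x -> x z                -- nf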

there may be redexes in the parameters, but until the variable in
head position is substituted, we cannot know whether any of
those parameters will be needed (if v is later replaced by a
constant function, the parameters are discarded; if by the
identity, the first one is needed). so it is, in general, not safe
to go any further; hnf is the most useful and sadly the least used
of those three normal forms.

a beta-normal form has no beta-redexes left anywhere, so
taking nf as the target for reduction runs the danger of attempting
to reduce redexes that wouldn't be needed in context. it mostly
makes sense if one wants to see every subterm fully evaluated.
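
the same distinction is directly observable in Haskell (a small demo
of mine, using seq from the Prelude and deepseq from the 'deepseq'
package): demanding whnf of a structure is harmless, while forcing
it all the way to normal form evaluates components that were never
needed.

  import Control.DeepSeq (deepseq)   -- from the 'deepseq' package

  pair :: (Int, Int)
  pair = (1, error "this component is never needed")

  main :: IO ()
  main = do
    pair `seq` print (fst pair)    -- whnf of the pair suffices: prints 1
    pair `deepseq` putStrLn "ok"   -- full nf forces both components: raises the error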

as for references:

for normal-order reduction, normal form and head normal form,

i'd suspect Barendregt's book, which is "the" standard reference

on lambda-calculus anyway (i don't have my copy here, but i

think head normal form is defined in the context of solvability

of lambda-terms):

Barendregt, H. P.

The Lambda Calculus: Its Syntax and Semantics.

North-Holland, Amsterdam (1984).

the origin of weak head normal form is more difficult to pin down,
as it goes back to implementations simply not reducing under
lambdas, but perhaps the first to consider the resulting normal
forms as more than an accident was Plotkin, who defined a
call-by-value calculus whose values are of the form
'\var -> <expression>', to correspond to Landin's ISWIM
languages and SECD machine:

Gordon Plotkin.
Call-by-Name, Call-by-Value and the Lambda Calculus.
Theoretical Computer Science, Vol. 1, pp. 125-159 (1975).
http://homepages.inf.ed.ac.uk/gdp/publications/

completing the triangle of call-by-name, call-by-value, and

call-by-need, and closest to the deterministic cores of Haskell

and Curry, is

Zena M. Ariola, Matthias Felleisen, John Maraist,
Martin Odersky, Philip Wadler.
A Call-By-Need Lambda Calculus.
Proceedings of the 22nd Annual ACM SIGACT-SIGPLAN
Symposium on Principles of Programming Languages (POPL), 1995.
http://citeseer.ist.psu.edu/132534.html

and they take whnf as their notion of value.

as we have seen, head normal forms under normal-order and

call-by-need reductions don't quite coincide, due to observable

sharing, and reducing all the way to normal form might lose

such sharing.
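
the sharing in question can be made visible with Debug.Trace (a tiny
demo of mine): under call-by-need, a shared thunk is evaluated at
most once, and a transformation that duplicated it, e.g. by
substituting it unevaluated on the way to full normal form, would
evaluate it twice.

  import Debug.Trace (trace)

  main :: IO ()
  main = do
    let x = trace "evaluating x" (2 + 3 :: Int)
    print (x + x)   -- call-by-need: "evaluating x" is printed once, the result is shared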

hth,

claus

> On the other hand, the notion of normal form is not really defined
> in the Curry report, so it is more or less up to the individual
> implementations what they show the user at the top level.

true (and Haskell isn't any more precise in its report; it even tries
to ignore sharing). it is just the inconsistencies that i find confusing.

_______________________________________________

curry mailing list

curry_at_lists.RWTH-Aachen.DE

http://MailMan.RWTH-Aachen.DE/mailman/listinfo/curry

Received on Mon May 14 2007 - 08:49:55 CEST
