
From: Michael Hanus <hanus_at_I2.Informatik.RWTH-Aachen.DE>

Date: Wed, 2 Jul 1997 18:57:23 +0200

Dear John,

Thanks for starting the new discussion in this mailing list.

Of course, I am biased, but I would nevertheless like to offer some
remarks that may help to explain some of the design decisions I made.

*> 1. A longstanding difference between Curry and Escher is that the operational
*> semantics of Curry is based on narrowing (plus a little more to subsume
*> what Escher does), while Escher is based on rewriting. I think there is
*> agreement that the Escher model is simpler, but it's been argued that it
*> is not sufficiently flexible for various reasons.

As you can imagine, I do not agree that the Escher model is simpler.
At first sight, it seems simpler since Escher performs only rewrite
steps. However, Escher also deals with logical variables, and in order
to understand the processing of these variables, one also has to
consider all the rewrite rules in the module Booleans. Thus, the
operational model of Escher can be characterized as "outermost
reduction + all rewrite rules of Booleans". Since the rewrite rules in
Booleans describe the treatment of logical variables, the consequences
of applying the different rules to a concrete goal are not obvious, at
least to me. For instance, the goal "exists [x] x == head []" is
reduced to "True", whereas the goals "exists [x] head [] == head []"
and "x == head []" are not reducible. Moreover, parallel outermost or
leftmost outermost reduction may also lead to unexpected results.

For instance, consider the functions

    f [] X = []
    loop = loop

and the expression "f (head []) loop". Since "loop" is the only
outermost redex, parallel or leftmost outermost reduction runs into an
infinite loop, whereas an appropriate rewrite strategy (as specified
in Curry by definitional trees) stops immediately. Maybe these
examples are artificial, but they show that the behavior of a pure
rewrite model is not obvious.
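The point can be illustrated in Haskell, which uses a demand-driven
strategy (this is only an analogue, not Escher or Curry semantics; the
names are taken from the example above):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Demand-driven evaluation forces only the argument that the pattern
-- of f actually needs, so "loop" is never touched.
f :: [a] -> b -> [a]
f [] _ = []

loop :: a
loop = loop

main :: IO ()
main = do
  -- "head []" fails immediately; the computation stops with an error
  -- instead of running into the infinite reduction of "loop".
  r <- try (evaluate (f (head []) loop)) :: IO (Either SomeException [Int])
  putStrLn (case r of
              Left _  -> "stopped immediately"
              Right _ -> "unexpected")
```

Running this prints "stopped immediately": the evaluation of "loop" is
never demanded, in contrast to a parallel or leftmost outermost strategy.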

Moreover, I think that narrowing and Escher's rewrite model are not so
different. The only essential difference is that Escher binds
variables only via explicit equations, whereas narrowing also provides
implicit bindings where they are necessary to evaluate a function
call. If you forbid such bindings, you are forced to formulate inverse
functions as predicates (see your "split" example), but if you execute
these predicates with the rewrite rules for Booleans, you get the same
behavior as narrowing on the original functions. Thus, I cannot see
why the rewriting model should be simpler.
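As a hypothetical Haskell sketch of this point (the function name
"split" follows the example under discussion): the solutions that
narrowing would compute for a constraint like "ys ++ zs =:= xs" can
equally be enumerated by a predicate-style search over all split
points.

```haskell
-- Enumerate all decompositions of xs into a prefix and a suffix,
-- mimicking what narrowing computes for "ys ++ zs =:= xs".
split :: [a] -> [([a], [a])]
split xs = [ (take n xs, drop n xs) | n <- [0 .. length xs] ]

main :: IO ()
main = print (split [1, 2 :: Int])
-- prints [([],[1,2]),([1],[2]),([1,2],[])]
```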

*> 2. Escher emphasises set processing partly because I think set processing
*> facilities have been sadly neglected in language design (usually
*> programmers are expected to do everything with lists even when they are
*> not at all appropriate) and partly because I use sets to "encapsulate"
*> logic programming computations. I would like to see set processing added
*> to Curry. What do you think?

Here I fully agree with Sergio. What is the advantage of adding sets
to the kernel of Curry in comparison to a module implementing sets?
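To sketch the library view in Haskell (using Data.Set from the
standard containers package): ordinary set processing needs no kernel
support, only a module.

```haskell
import qualified Data.Set as Set  -- from the containers library

-- Set construction and intersection as plain library operations.
evens, smallNums, common :: Set.Set Int
evens     = Set.fromList [0, 2, 4, 6]
smallNums = Set.fromList [0, 1, 2, 3]
common    = Set.intersection evens smallNums

main :: IO ()
main = print (Set.toList common)  -- prints [0,2]
```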

*> 3. Curry will have concurrency. My preferred model for this is based on the
*> concurrent Haskell proposal which uses monads (Peyton Jones, Gordon,
*> Finne, POPL'96), except I've replaced their forkIO function by
*> something more "symmetric". Unfortunately, the write-up of this isn't
*> in the Escher paper, but one can see enough from the POPL paper to know
*> what it would look like. Is this the way to do concurrency in Curry?

I am open to suggestions, but the current form of concurrency is
identical to the Goffin approach, which is considered more high-level
than Concurrent Haskell. Moreover, if we add a monadic form of
concurrency, we have two forms of concurrency in the language (an
explicit one via forkIO and an implicit one via the delay of
insufficiently instantiated redexes).

*> 4. The Curry document introduces the type Constraint and the value Valid.
*> I do not understand the need for these. Certainly, one can write
*> constraint systems in the usual mathematical way and solve them without the
*> need for such things and, logically, Constraint is just Bool. In my view,
*> constraint solving is primarily about building clever constraint solvers -
*> it shouldn't have a significant effect on the look or facilities of the
*> language. Is the current Curry proposal the best way to handle constraints
*> or can we find a way to make constraint solving blend more seamlessly into
*> the language?

Maybe mentioning the value "Valid" was a bad idea. Forget about this
value; it is not accessible to the user and is only used to specify
the computation model. The difference between constraints and Booleans
lies in the operations which you can apply to them. Booleans are (like
in Haskell) two values which can be used arbitrarily (whether you call
them True/False or A/B is not important). Constraints contain more
information since they have a distinguished interpretation and a
constraint solver which tries to solve the constraint w.r.t. this
interpretation.

Currently, only strict equality is supported as a constraint.
Operations applied to constraints must be supported by the constraint
solver; thus it is not possible to allow arbitrary operations on
constraints. For instance, the logical negation of strict equations is
not supported (I have no idea how to do it). Nevertheless, you can
support disequations as constraints, like in Paco's Toy system, but
this is different from the logical negation of strict equality. This
is the reason why I think that constraints must be distinguished from
Booleans, as in the Goffin approach. Constraints are more closely
related to functions: functions are intended to be applied in order to
compute a value, and constraints are intended to be solved in order to
compute a solution.
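One way to picture this distinction is a hypothetical Haskell sketch
(all names here are invented for illustration): a constraint type
exposes only solver-supported operations, here strict equality and
conjunction, so logical negation is simply not constructible.

```haskell
-- Toy terms: variables and integer literals.
data Term = Var String | Lit Int deriving (Eq, Show)

-- Only solver-supported constraints can be built; there is no
-- constructor for negation.
data Constraint
  = StrictEq Term Term         -- t1 =:= t2
  | And Constraint Constraint  -- conjunction of constraints

-- A toy solver restricted to ground terms (it binds no variables):
-- it succeeds iff the constraint holds under the intended interpretation.
solve :: Constraint -> Bool
solve (StrictEq s t) = s == t
solve (And c1 c2)    = solve c1 && solve c2
```

A real solver would of course also bind logical variables; the point
of the sketch is only that the constraint interface, unlike Bool,
restricts the applicable operations.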

I hope my explanation helps a little bit, and I welcome more
discussions on these subjects.

Best regards,
Michael

Received on Wed Jul 02 1997 - 20:01:00 CEST


This archive was generated by hypermail 2.3.0 : Mon Nov 18 2019 - 07:15:04 CET