The choice
Typeclasses are brilliant in that they let us attach extra structure to existing types after the fact. We can thereby defer some design decisions rather than committing to them when a type is first conceived.
In object-oriented programming, by contrast, we are forced to anticipate up front what will be needed, and any structure that only later becomes obvious or necessary has to be bolted on in a subtype.
Each approach has its place. I wonder when it is better to do one or the other, in the context of Agda.
Contrived example
Using the following to form n-ary functions s → s → ... → s → t,
    Func : ∀ {ℓ} (a : ℕ) (s : Set ℓ) (t : Set ℓ) → Set ℓ
    Func zero    s t = t
    Func (suc a) s t = s → Func a s t
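For instance, Func 2 ℕ ℕ unfolds definitionally to ℕ → ℕ → ℕ. A quick sanity check (a sketch, assuming the standard library's ℕ and propositional equality are in scope):

    open import Data.Nat using (ℕ)
    open import Relation.Binary.PropositionalEquality using (_≡_; refl)

    -- Func peels off one `s →` per unit of arity, so this holds by refl.
    _ : Func 2 ℕ ℕ ≡ (ℕ → ℕ → ℕ)
    _ = refl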
When defining the syntax of a programming language, we can take a type symbol to be a named entity that is "formed" by taking a number of other such named entities, and whose meaning is an operation that consumes those parameters and produces a concrete Agda Set.
    record Type₁ : Set₁ where
      constructor cons₁
      field
        name₁       : String
        arity₁      : ℕ
        parameters₁ : Vec Type₁ arity₁
        semantics₁  : Func arity₁ Type₁ Set
    open Type₁
This packs the symbol and its interpretation into one type. Alternatively, we can separate the two:
    record Type₂ : Set₁ where
      constructor cons₂
      field
        name₂       : String
        arity₂      : ℕ
        parameters₂ : Vec Type₂ arity₂
    open Type₂ {{...}}

    record HasMeaning (t : Type₂) : Set₁ where
      field
        semantics₂ : Func (arity₂ {{t}}) Type₂ Set
    open HasMeaning {{...}}
(Note that Type₂ is forced to live in Set₁, even though its fields alone would fit in Set: Func requires its source and target to live at the same universe level, and HasMeaning applies Func to Type₂ with target Set.)
Which is better, and why? In particular, which is preferable for programming, and which for proving?
Semantics
Suppose we actually want to do something with our type symbols, for example interpret them as real Agda types. We can do this easily with the first definition:
    ⟦_⟧₁ : Type₁ → Set
    ⟦ cons₁ name₁ 0       []        semantics₁ ⟧₁ = semantics₁
    ⟦ cons₁ name₁ (suc n) (hd ∷ tl) semantics₁ ⟧₁ = ⟦ cons₁ name₁ n tl (semantics₁ hd) ⟧₁
Nothing clever. If there are no parameters, the semantics is necessarily already an Agda Set and we are done; otherwise we feed the first type parameter to the semantic function and recurse to obtain a concrete Set.
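Concretely, here are a couple of toy symbols (hypothetical names, assuming Data.Bool and Data.List from the standard library, with Vec's [] and _∷_ in scope):

    open import Data.Bool using (Bool)
    open import Data.List using (List)

    -- A nullary symbol: its semantics is just an Agda Set.
    bool : Type₁
    bool = cons₁ "Bool" 0 [] Bool

    -- A unary symbol applied to `bool`: its semantics consumes the parameter.
    list-of-bool : Type₁
    list-of-bool = cons₁ "List" 1 (bool ∷ []) (λ t → List ⟦ t ⟧₁)

    -- ⟦ bool ⟧₁          normalises to Bool
    -- ⟦ list-of-bool ⟧₁  normalises to List Bool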
This is rather more annoying with the second definition, since we need to attach a typeclass constraint every time we want to use a type's semantics, which is too much for me!
    ⟦_⟧₂ : (t : Type₂) {{_ : HasMeaning t}} → Set
    ⟦ cons₂ name₂ 0       []        ⟧₂ {{s}} = semantics₂ {{s}}
    ⟦ cons₂ name₂ (suc n) (hd ∷ tl) ⟧₂ {{s}} =
      ⟦ cons₂ name₂ n tl ⟧₂ {{ record { semantics₂ = semantics₂ {{s}} hd } }}
Worse, what if we have conflicting semantics instances? Perhaps we want at most one instance per symbol; how do we achieve this?
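For comparison, here is what a client of the second definition looks like (a hypothetical bool₂ symbol, again assuming Data.Bool):

    open import Data.Bool using (Bool)

    bool₂ : Type₂
    bool₂ = cons₂ "Bool" 0 []

    instance
      -- The meaning lives apart from the symbol and is found by instance search.
      boolMeaning : HasMeaning bool₂
      boolMeaning = record { semantics₂ = Bool }

    -- ⟦ bool₂ ⟧₂ normalises to Bool.

If a second, different HasMeaning bool₂ instance were brought into scope, instance resolution at ⟦ bool₂ ⟧₂ would become ambiguous and fail, which is exactly the uniqueness worry above.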
So, for my purposes, it seems the first is better... or is it?
Equality
The second definition clearly admits decidable equality, since naturals and strings have decidable equality. The first, however, does not, because of the interpretation functions. We can get around this by declaring two type symbols identical precisely when they have the same name, arity, and parameters, regardless of interpretation:
    postulate eq₁-desire : ∀ {n a p S T} → cons₁ n a p S ≡ cons₁ n a p T
Unfortunately, a postulate does not compute, but this can be fixed with trustMe.
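For reference, the standard library packages this as trustMe in Relation.Binary.PropositionalEquality.TrustMe (built on primTrustMe); it reduces to refl when the two sides are definitionally equal, and is stuck otherwise. A minimal sketch of the desired equation (eq₁-desire′ is just my name for the non-postulated version):

    open import Relation.Binary.PropositionalEquality.TrustMe using (trustMe)

    -- Identify symbols with equal name, arity, and parameters, ignoring semantics.
    eq₁-desire′ : ∀ {n a p S T} → cons₁ n a p S ≡ cons₁ n a p T
    eq₁-desire′ = trustMe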
Decision
As far as I can tell from the above, the first approach together with trustMe is the way to go. However, I am not comfortable using trustMe, since I know very little about it.
Any advice on which way to go, or reasons for preferring one over the other, would be most appreciated. Thank you!