Responsible use of operator overloading in Swift
TL;DR
- Standard or custom operators should not be used to shorten code
- Where an operator only involves your own objects:
  - Language provided operators can be overloaded if semantics are preserved
  - Custom operators must be distinct and their semantics consistent and clearly defined
- Never override or provide custom operators where any parameter or return type is an object you have not implemented
With Great Power...
In his excellent initial exploration of Swift's interesting features, Mike Ash discusses operator overloading, and in his conclusion on the topic he highlights Swift's ability to define entirely new operators.
I think Mike has captured some interesting core rules for operator overloading here:
- Overloading traditional operators (such as +, -, *, /, == and !=) for your objects is a good thing (see the sketch after this list)
- Overloading traditional operators with different semantics for any class is a bad thing
- Custom operators should be approached with care
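To make the first two of those rules concrete, here is a minimal sketch of my own (Money is a hypothetical struct, not something from Mike's article or from our code):

struct Money {
    let pence: Int
}

// Good: + keeps its traditional meaning, adding two amounts yields their sum
@infix func + (left: Money, right: Money) -> Money {
    return Money(pence: left.pence + right.pence)
}

// Good: == keeps its traditional meaning, two amounts are equal when they hold the same value
@infix func == (left: Money, right: Money) -> Bool {
    return left.pence == right.pence
}

// Usage behaves exactly as anyone would expect:
// Money(pence: 99) + Money(pence: 1) == Money(pence: 100)

// Bad: the same operator with different semantics, this version would compile but mislead
// every reader because "adding" silently discards the right-hand amount
// @infix func + (left: Money, right: Money) -> Money { return left }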
My own feelings are that we should perhaps be even more prescriptive: these are global functions, but we should treat them as we would any other "object".
- Standard or custom operators should not be used to shorten code
- Where an operator only involves your own objects:
  - Language-provided operators can be overloaded, providing the semantics of the operator are preserved (+ means add)
  - Custom operators must be distinct (that is, do not copy operators from other languages, as they may one day be utilised by core Swift) and their semantics must be clearly defined and consistent across all of your objects (see the sketch after this list)
- Never override or provide custom operators where any parameter or return type is an object you have not implemented
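Put another way, a custom operator can earn its keep when its symbol is distinct and it stays entirely within your own types. Here is a small sketch of that (Route, Stop and <+> are hypothetical, nothing to do with the tokeniser discussed below):

class Stop {
    let name: String
    init(name: String) { self.name = name }
}

class Route {
    var path = ""
}

operator infix <+> {associativity left}

// Fine under the rules above: both parameters and the return type are classes we have
// implemented, the symbol is distinct, and its meaning (extend the route with a stop)
// is defined once and applied consistently
@infix func <+> (route: Route, stop: Stop) -> Route {
    route.path += " -> " + stop.name
    return route
}

// Usage stays entirely within our own types:
// let route = Route() <+> Stop(name: "Waterloo") <+> Stop(name: "Bank")

// This overload would breach the final rule, because String is not a type we implemented
// @infix func <+> (route: Route, stop: String) -> Route { ... }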
Breaking the Rules
Of course, the first thing we then did was break the rules, but we did this for a specific case: testing. As we started to develop tests for the tokeniser and parser, we found that we were writing a lot of code that was very similar in nature, but with small differences that were required for capturing particular edge-cases. These targets are only used by us, and so it seemed acceptable to allow our tests to benefit from the ability to simplify.
One interesting side effect is that it is quite clear we have developed a "language within a language", and whilst that is technically "cool", I think it's exactly what Mike, and those of us with C++ battle scars, are afraid of.
Can you guess what this does?
tokenizer=>(*"x")=>(*"y")~>"xy"
It's a short-hand form of a tokeniser that creates "xy" tokens when it sees "xy". And here's the entire problem: will we even understand it in twelve months' time? Here's the code it replaces (or, to say it differently, the quickest way of doing the same thing):
tokenizer.chain([
    SingleCharacter(allowedCharacters: "x"),
    SingleCharacter(allowedCharacters: "y", createTokensNamed: "xy")
])
And it's achieved with a few simple lines of code:
operator infix => {associativity left}
operator prefix * {}

// Does not breach the rules, they are all our classes!
@infix func => (left: TokenizationState, right: TokenizationState) -> TokenizationState {
    left.addBranchTo(right)
    return right
}

// Breaches the rules, String is a parameter
@prefix func * (allowedCharacters: String) -> TokenizationState {
    return SingleCharacter(allowedCharacters: allowedCharacters)
}

// Breaches the rules, String is a parameter
@infix func ~> (left: TokenizationState, right: String) -> TokenizationState {
    return left.createTokensUsing() { (state: TokenizationState, controller: TokenizationController) -> Token in
        return Token(name: right, withCharacters: controller.capturedCharacters())
    }
}
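It hangs together because => is declared left-associative and returns its right-hand state: tokenizer=>(*"x")=>(*"y") therefore chains the two single-character states and evaluates to the "y" state, which ~> then instructs to create a token named with the string on its right.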
I'm still undecided on whether this breach is OK or not, but I'd love to hear your thoughts.