State machines vs willSet/didSet in Swift

I recently came across the idea of state machines, which generally seem useful, but I’m wondering if they provide any functionality that can’t be achieved using only willSet and didSet in Swift.
Seemingly, this could be replicated using 'state' and 'to' variables, with the bulk of the code in either a willSet on 'state' or a didSet on 'to' (with the end of the didSet setting the value of 'state' equal to the value of 'to').
Is there any other value that a state machine provides?
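For concreteness, a minimal sketch of the observer-based approach the question describes, using a single observed property rather than the state/to pair (the PlayerState type and its single transition rule are invented for illustration; a real state machine would guard every transition):

```swift
enum PlayerState { case stopped, playing, paused }

class Player {
    var state: PlayerState = .stopped {
        willSet {
            // willSet can only observe the incoming value; it cannot veto it.
            print("transitioning \(state) -> \(newValue)")
        }
        didSet {
            // Enforce one transition rule: you cannot pause a stopped player.
            // Assigning inside didSet does not re-trigger the observers.
            if oldValue == .stopped && state == .paused {
                state = oldValue
            }
        }
    }
}
```

Whether this scales is exactly the question: every illegal transition needs its own check inside the observer, whereas a state-machine type centralizes the transition rules in one place.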


What is the difference between Strategy design pattern and State design pattern?

What are the differences between the Strategy design pattern and the State design pattern? I was going through quite a few articles on the web but could not make out the difference clearly.
Can someone please explain the difference in layman's terms?
Honestly, the two patterns are pretty similar in practice, and the defining difference between them tends to vary depending on who you ask. Some popular choices are:
States store a reference to the context object that contains them. Strategies do not.
States are allowed to replace themselves (i.e., to change the state of the context object to something else), while Strategies are not.
Strategies are passed to the context object as parameters, while States are created by the context object itself.
Strategies only handle a single, specific task, while States provide the underlying implementation for everything (or most everything) the context object does.
A "classic" implementation would match either State or Strategy for every item on the list, but you do run across hybrids that have mixes of both. Whether a particular one is more State-y or Strategy-y is ultimately a subjective question.
The Strategy pattern is really about having a different implementation that accomplishes (basically) the same thing, so that one implementation can replace the other as the strategy requires. For example, you might have different sorting algorithms in a strategy pattern. Callers of the object do not change based on which strategy is being employed, but regardless of strategy the goal is the same (sort the collection).
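A minimal Swift sketch of that sorting example (the names are illustrative): the caller's code is identical regardless of which strategy is plugged in.

```swift
protocol SortStrategy {
    func sort(_ values: [Int]) -> [Int]
}

// One concrete strategy: a simple insertion sort.
struct InsertionSort: SortStrategy {
    func sort(_ values: [Int]) -> [Int] {
        var result: [Int] = []
        for v in values {
            let index = result.firstIndex { $0 > v } ?? result.count
            result.insert(v, at: index)
        }
        return result
    }
}

// Another strategy that reaches the same goal a different way.
struct LibrarySort: SortStrategy {
    func sort(_ values: [Int]) -> [Int] { values.sorted() }
}

// The context: it only knows it holds *some* SortStrategy.
struct Dataset {
    var values: [Int]
    var strategy: SortStrategy
    func sorted() -> [Int] { strategy.sort(values) }
}
```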
The State pattern is about doing different things based on the state, while leaving the caller relieved from the burden of accommodating every possible state. So for example you might have a getStatus() method that will return different statuses based on the state of the object, but the caller of the method doesn't have to be coded differently to account for each potential state.
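The getStatus() idea might be sketched like this (OrderState and its two states are invented for illustration): the caller never branches on the state itself.

```swift
protocol OrderState {
    func status() -> String
}

struct Pending: OrderState { func status() -> String { "awaiting payment" } }
struct Shipped: OrderState { func status() -> String { "on its way" } }

class Order {
    var state: OrderState = Pending()
    // Callers are not coded per state; the current state object answers.
    func getStatus() -> String { state.status() }
}
```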
The difference simply lies in that they solve different problems:
The State pattern deals with what (state or type) an object is (in) -- it encapsulates state-dependent behavior, whereas
the Strategy pattern deals with how an object performs a certain task -- it encapsulates an algorithm.
The constructs for achieving these different goals are however very similar; both patterns are examples of composition with delegation.
Some observations on their advantages:
By using the State pattern the state-holding (context) class is relieved from knowledge of what state or type it is and what states or types that are available. This means that the class adheres to the open-closed design principle (OCP): the class is closed for changes in what states/types there are, but the states/types are open to extensions.
By using the Strategy pattern the algorithm-using (context) class is relieved from knowledge of how to perform a certain task (the "algorithm"). This case also creates an adherence to the OCP; the class is closed for changes regarding how to perform this task, but the design is very open to additions of other algorithms for solving this task.
This likely also improves the context class' adherence to the single responsibility principle (SRP). Further the algorithm becomes easily available for reuse by other classes.
Can somebody please explain in layman's terms?
Design patterns are not really "layman" concepts, but I'll try to make it as clear as possible. Any design pattern can be considered in three dimensions:
The problem the pattern solves;
The static structure of the pattern (class diagram);
The dynamics of the pattern (sequence diagrams).
Let's compare State and Strategy.
Problem the pattern solves
State is used in one of two cases [GoF book p. 306]:
An object's behavior depends on its state, and it must change its behavior at run-time depending on that state.
Operations have large, multipart conditional statements that depend on the object's state. This state is usually represented by one or more enumerated constants. Often, several operations will contain this same conditional structure. The State pattern puts each branch of the conditional in a separate class. This lets you treat the object's state as an object in its own right that can vary independently from other objects.
If you want to make sure you indeed have the problem the State pattern solves, you should be able to model the states of the object using a finite state machine. You can find an applied example here.
Each state transition is a method in the State interface. This implies that for a design, you have to be pretty certain about state transitions before you apply this pattern. Otherwise, if you add or remove transitions, it will require changing the interface and all the classes that implement it.
I personally haven't found this pattern that useful. You can always implement finite state machines using a lookup table (it's not an OO way, but it works pretty well).
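A sketch of that table-driven alternative, assuming a toy door machine (all names invented): the transitions live in one dictionary instead of per-state classes, and missing entries mean "illegal transition".

```swift
enum DoorState: Hashable { case closed, open, locked }
enum DoorEvent: Hashable { case openIt, closeIt, lock, unlock }

struct TableMachine {
    // The entire transition function as a lookup table.
    let table: [DoorState: [DoorEvent: DoorState]] = [
        .closed: [.openIt: .open, .lock: .locked],
        .open:   [.closeIt: .closed],
        .locked: [.unlock: .closed],
    ]
    var state: DoorState = .closed

    mutating func handle(_ event: DoorEvent) {
        // Events with no table entry for the current state are ignored.
        if let next = table[state]?[event] { state = next }
    }
}
```

Adding or removing a transition means editing one dictionary entry, not changing an interface and every class that implements it.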
Strategy is used for the following [GoF book p. 316]:
many related classes differ only in their behavior. Strategies provide a way to configure a class with one of many behaviors.
you need different variants of an algorithm. For example, you might define algorithms reflecting different space/time trade-offs. Strategies can be used when these variants are implemented as a class hierarchy of algorithms [HO87].
an algorithm uses data that clients shouldn't know about. Use the Strategy pattern to avoid exposing complex, algorithm-specific data structures.
a class defines many behaviors, and these appear as multiple conditional statements in its operations. Instead of many conditionals, move related conditional branches into their own Strategy class.
The last case of where to apply Strategy is related to a refactoring known as Replace conditional with polymorphism.
Summary: State and Strategy solve very different problems. If your problem can't be modeled with a finite state machine, then the State pattern likely isn't appropriate. If your problem isn't about encapsulating variants of a complex algorithm, then Strategy doesn't apply.
Static structure of the pattern
State has the following UML class structure:
Strategy has the following UML class structure:
Summary: in terms of the static structure, these two patterns are mostly identical. In fact, pattern-detecting tools such as this one consider that "the structure of the [...] patterns is identical, prohibiting their distinction by an automatic process (e.g., without referring to conceptual information)."
There can be a major difference, however, if ConcreteStates decide themselves the state transitions (see the "might determine" associations in the diagram above). This results in coupling between concrete states. For example (see the next section), state A determines the transition to state B. If the Context class decides the transition to the next concrete state, these dependencies go away.
Dynamics of the pattern
As mentioned in the Problem section above, State implies that behavior changes at run-time depending on some state of an object. Therefore, the notion of state transitioning applies, as discussed with the relation of the finite state machine. [GoF] mentions that transitions can either be defined in the ConcreteState subclasses, or in a centralized location (such as a table-based location).
Let's assume a simple finite state machine:
Assuming the subclasses decide the state transition (by returning the next state object), the dynamic looks something like this:
To show the dynamics of Strategy, it's useful to borrow a real example.
Summary: Each pattern uses a polymorphic call to do something depending on the context. In the State pattern, the polymorphic call (transition) often causes a change in the next state. In the Strategy pattern, the polymorphic call does not typically change the context (e.g., paying by credit card once doesn't imply you'll pay by PayPal the next time). Again, the State pattern's dynamics are determined by its corresponding finite state machine, which (to me) is essential to correct application of this pattern.
The Strategy Pattern involves moving the implementation of an algorithm out of a hosting class and putting it in a separate class. This means that the host class does not need to provide the implementation of each algorithm itself, which would likely lead to unclean code.
Sorting algorithms are usually used as an example as they all do the same kind of thing (sort). If each differing sorting algorithm is put into its own class, then the client can easily choose which algorithm to use and the pattern provides an easy way to access it.
The State Pattern involves changing the behaviour of an object when the state of the object changes. This means that the host class does not have to provide the implementation of behaviour for all the different states it can be in. The host class usually encapsulates a class which provides the functionality that is required in a given state, and switches to a different class when the state changes.
Strategy represents objects that "do" something, with the same begin and end results, but internally using different methodologies. In that sense they are analogous to representing the implementation of a verb. The State pattern OTOH uses objects that "are" something - the state of an operation. While they can represent operations on that data as well, they are more analogous to representation of a noun than of a verb, and are tailored towards state machines.
Strategy: the strategy is fixed and usually consists of several steps. (Sorting constitutes only one step and thus is a very bad example, as it is too primitive to convey the purpose of this pattern.)
Your "main" routine in the strategy calls a few abstract methods. E.g. in an "Enter Room Strategy" the main method is goThroughDoor(), which looks like: approachDoor(); if (locked()) openLock(); openDoor(); enterRoom(); turn(); closeDoor(); if (wasLocked()) lockDoor();
Now subclasses of this general "algorithm" for moving from one room to another through a possibly locked door can implement the steps of the algorithm.
In other words subclassing the strategy does not change the basic algorithms, only individual steps.
THAT ABOVE is a Template Method Pattern. Now put steps belonging together (unlocking/locking and opening/closing) into their own implementing objects and delegate to them. E.g. a lock with a key and a lock with a code card are two kinds of locks. Delegate from the strategy to the "Step" objects. Now you have a Strategy pattern.
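The refactoring described above might be sketched as follows (the Lock protocol and both locks are invented for illustration): the lock-related steps become a "step object" that the room-entry routine delegates to, turning the template method into a Strategy.

```swift
protocol Lock {
    func unlock() -> String
    func lock() -> String
}

// Two kinds of locks, as in the text: a key lock and a code-card lock.
struct KeyLock: Lock {
    func unlock() -> String { "turn key" }
    func lock() -> String { "turn key back" }
}

struct CardLock: Lock {
    func unlock() -> String { "swipe card" }
    func lock() -> String { "wave card at reader" }
}

// The strategy's "main method" stays fixed; only the delegated steps vary.
struct EnterRoom {
    let lock: Lock
    func goThroughDoor() -> [String] {
        [lock.unlock(), "open door", "enter room", "close door", lock.lock()]
    }
}
```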
A State Pattern is something completely different.
You have a wrapping object and the wrapped object. The wrapped one is the "state". The state object is only accessed through its wrapper. Now you can change the wrapped object at any time, thus the wrapper seems to change its state, or even its "class" or type.
E.g. you have a log on service. It accepts a username and a password. It only has one method: logon(String userName, String passwdHash). Instead of deciding for itself whether a log on is accepted or not, it delegates the decision to a state object. That state object usually just checks if the user/pass combination is valid and performs a log on. But now you can exchange the "Checker" for one that only lets privileged users log on (during a maintenance window, e.g.) or for one that lets no one log on. That means the "checker" expresses the "log on status" of the system.
The most important difference is: when you have chosen a strategy you stick with it until you are done with it. That means you call its "main method" and as long as that one is running you never change the strategy. OTOH in a state pattern situation, during the runtime of your system you change state arbitrarily as you see fit.
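The log-on example above might look like this in Swift (all names invented for illustration): swapping the wrapped "checker" changes the service's apparent behavior at any time.

```swift
protocol LogonChecker {
    func logon(_ user: String, _ passwdHash: String) -> Bool
}

// Normal operation: accept any valid user/hash pair.
struct NormalChecker: LogonChecker {
    let valid: [String: String]
    func logon(_ user: String, _ passwdHash: String) -> Bool {
        valid[user] == passwdHash
    }
}

// Maintenance window: only privileged users get in.
struct MaintenanceChecker: LogonChecker {
    func logon(_ user: String, _ passwdHash: String) -> Bool {
        user == "admin"
    }
}

// Full lockdown: no one gets in.
struct LockedDownChecker: LogonChecker {
    func logon(_ user: String, _ passwdHash: String) -> Bool { false }
}

class LogonService {
    // The wrapped object IS the system's "log on status".
    var checker: LogonChecker = LockedDownChecker()
    func logon(_ user: String, _ hash: String) -> Bool {
        checker.logon(user, hash)
    }
}
```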
Consider an IVR (Interactive Voice Response) system handling customer calls. You may want to program it to handle customers differently on:
Work days
Holidays
To handle this situation you can use a State Pattern.
Holiday: the IVR simply responds saying that 'Calls can be taken only on working days between 9am and 5pm'.
Work days: it responds by connecting the customer to a customer care executive.
This process of connecting a customer to a support executive can itself be implemented using a Strategy Pattern where the executives are picked based on either of:
Round Robin
Least Recently Used
Other priority based algorithms
The Strategy pattern decides 'how' to perform some action, and the State pattern decides 'when' to perform it.
The Strategy pattern is used when you have multiple algorithms for a specific task and the client decides the actual implementation to be used at runtime.
UML diagram from wiki Strategy pattern article:
Key features:
It's a behavioural pattern.
It's based on delegation.
It changes the guts of the object by modifying method behaviour.
It's used to switch between family of algorithms.
It changes the behaviour of the object at run time.
Refer to this post for more info & real world examples:
Real World Example of the Strategy Pattern
State pattern allows an object to alter its behaviour when its internal state changes
UML diagram from wiki State pattern article:
If we have to change the behavior of an object based on its state, we can have a state variable in the object and use an if-else block to perform different actions based on the state. The State pattern is used to provide a systematic and loosely coupled way to achieve this through Context and State implementations.
Refer to this journaldev article for more details.
Key differences from sourcemaking and journaldev articles:
The difference between State and Strategy lies with binding time. The Strategy is a bind-once pattern, whereas State is more dynamic.
The difference between State and Strategy is in the intent. With Strategy, the choice of algorithm is fairly stable. With State, a change in the state of the "context" object causes it to select from its "palette" of Strategy objects.
The Context contains the state as an instance variable, and there can be multiple tasks whose implementation depends on that state, whereas in the Strategy pattern the strategy is passed as an argument to the method and the context object doesn't have any variable to store it.
In layman's language,
in the Strategy pattern, there are no states, or all of them have the same state.
All one has is different ways of performing a task, like different doctors treating the same disease of the same patient, in the same state, in different ways.
In the State pattern, there are states, like a patient's current state (say high temperature or low temperature), based on which the next course of action (the medicine prescription) will be decided. And one state can lead to another state, so there is state-to-state dependency (composition, technically).
If we try to understand it technically, based on a code comparison of both, we might lose the subjectivity of the situation, because both look very similar.
Both patterns delegate to a base class that has several derivatives, but it's only in the State pattern that these derivative classes hold a reference back to the context class.
Another way to look at it is that the Strategy pattern is a simpler version of the State pattern; a sub-pattern, if you like. It really depends on whether you want the derived states to hold references back to the context or not (i.e., whether you want them to call methods on the context).
For more info: Robert C Martin (& Micah Martin) answer this in their book, "Agile Principles, Patterns and Practices in C#". (http://www.amazon.com/Agile-Principles-Patterns-Practices-C/dp/0131857258)
The difference is discussed in http://c2.com/cgi/wiki?StrategyPattern. I have used the Strategy pattern for allowing different algorithms to be chosen within an overall framework for analysing data. Through that you can add algorithms without having to change the overall framework and its logic.
A typical example is that you may have a framework for optimising a function. The framework sets up the data and parameters. The strategy pattern allows you to select algorithms such as steepest descent, conjugate gradients, BFGS, etc. without altering the framework.
Both the Strategy and State patterns have the same structure. If you look at the UML class diagram for both patterns they look exactly the same, but their intent is totally different. The State design pattern is used to define and manage the state of an object, while the Strategy pattern is used to define a set of interchangeable algorithms and lets the client choose one of them. So Strategy is a client-driven pattern, while with State the object can manage its state itself.
This is a pretty old question, but still, I was also looking for the same answers and this is what I have discovered.
For the State pattern, let's consider the example of a media player's Play button. When we press Play it starts playing and makes the context aware that it is playing. Every time the client wants to perform the play operation, it checks the current state of the player. The client knows via the context object that the state is "playing", so it calls the pause state object's action method. The part where the client discovers the state and decides what action it needs to take can be automated.
In the case of the Strategy pattern, the arrangement of the class diagram is the same as in the State pattern. The client comes to this arrangement to perform some operation. That is, instead of different states there are different algorithms, say for example different analyses that need to be performed on the data. Here the client tells the context what it wants done, i.e. which (business-defined custom) algorithm, and the context then performs it.
Both implement the open/closed principle, so the developer has the capability to add new states to the State pattern and new algorithms to the Strategy pattern.
But the difference is what they are used for: the State pattern is used to execute different logic based on the state of the object, while in the case of Strategy it is simply different logic.
State comes with a few dependencies among the derived state classes: one state knows about the states coming after it. For example, Summer comes after Winter for any season state, or the Delivery state after the Deposit state for shopping.
On the other hand, Strategy has no dependencies like these. Here, any kind of strategy can be initialized based on the program/product type.

The usage of computed properties in Swift [closed]

Computed properties are evaluated every time they are accessed (I mean the getter is called every time), so why wouldn't I just use a stored property?
When you start having problems maintaining consistency between stored properties that need to be kept in sync, you will find computed properties quite useful. You probably haven't done enough object oriented design to see the benefits yet, but it will come.
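One common concrete benefit is derived values that can never drift out of sync. A toy sketch: temperatureF is computed from temperatureC, so the two always agree, which two stored properties could not guarantee.

```swift
struct Thermostat {
    var temperatureC: Double
    // Derived, never stored: reading converts on the fly, and writing
    // updates the single source of truth (temperatureC).
    var temperatureF: Double {
        get { temperatureC * 9 / 5 + 32 }
        set { temperatureC = (newValue - 32) * 5 / 9 }
    }
}
```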

weak vs unowned in Swift. What are the internal differences?

I understand the usage and superficial differences between weak and unowned in Swift:
The simplest example I've seen is that if there is a Dog and a Bone, the Bone may have a weak reference to the Dog (and vice versa) because each can exist independently of the other.
On the other hand, in the case of a Human and a Heart, the Heart may have an unowned reference to the human, because as soon as the Human becomes... "dereferenced", the Heart can no longer reasonably be accessed. That and the classic example with the Customer and the CreditCard.
So this is not a duplicate of questions asking about that.
My question is, what is the point in having two such similar concepts? What are the internal differences that necessitate having two keywords for what seem essentially 99% the same thing? The question is WHY the differences exist, not what the differences are.
Given that we can just set up a variable like this: weak var customer: Customer!, the advantage of unowned variables being non-optional is a moot point.
The only practical advantage I can see of using unowned vs implicitly unwrapping a weak variable via ! is that we can make unowned references constant via let.
... and that maybe the compiler can make more effective optimizations for that reason.
Is that true, or is there something else happening behind the scenes that provides a compelling argument for keeping both keywords (even though the slight distinction is – based on Stack Overflow traffic – evidently confusing to new and experienced developers alike)?
I'd be most interested to hear from people who have worked on the Swift compiler (or other compilers).
My question is, what is the point in having two such similar concepts? What are the internal differences that necessitate having two keywords for what seem essentially 99% the same thing?
They are not at all similar. They are as different as they can be.
weak is a highly complex concept, introduced when ARC was introduced. It performs the near-miraculous task of allowing you to prevent a retain cycle (by avoiding a strong reference) without risking a crash from a dangling pointer when the referenced object goes out of existence — something that used to happen all the time before ARC was introduced.
unowned, on the other hand, is non-ARC weak (to be specific, it is the same as non-ARC assign). It is what we used to have to risk, it is what caused so many crashes, before ARC was introduced. It is highly dangerous, because you can get a dangling pointer and a crash if the referenced object goes out of existence.
The reason for the difference is that weak, in order to perform its miracle, involves a lot of extra overhead for the runtime, inserted behind the scenes by the compiler. weak references are memory-managed for you. In particular, the runtime must maintain a scratchpad of all references marked in this way, keeping track of them so that if an object weakly referenced goes out of existence, the runtime can locate that reference and replace it by nil to prevent a dangling pointer.
In Swift, as a consequence, a weak reference is always to an Optional (exactly so that it can be replaced by nil). This is an additional source of overhead, because working with an Optional entails extra work, as it must always be unwrapped in order to get anything done with it.
For this reason, unowned is always to be preferred wherever it is applicable. But never use it unless it is absolutely safe to do so! With unowned, you are throwing away automatic memory management and safety. You are deliberately reverting to the bad old days before ARC.
In my usage, the common case arises in situations where a closure needs a capture list involving self in order to avoid a retain cycle. In such a situation, it is almost always possible to say [unowned self] in the capture list. When we do:
It is more convenient for the programmer because there is nothing to unwrap. [weak self] would be an Optional in need of unwrapping in order to use it.
It is more efficient, partly for the same reason (unwrapping always adds an extra level of indirection) and partly because it is one fewer weak reference for the runtime's scratchpad list to keep track of.
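The two capture lists side by side, in a contrived Downloader class (invented for illustration):

```swift
class Downloader {
    var completed = 0

    // With [weak self] the closure gets an Optional that must be unwrapped.
    func makeWeakHandler() -> () -> Void {
        { [weak self] in
            guard let self = self else { return }  // safe if self is gone
            self.completed += 1
        }
    }

    // With [unowned self] there is nothing to unwrap, but calling the
    // handler after self deallocates would crash.
    func makeUnownedHandler() -> () -> Void {
        { [unowned self] in
            self.completed += 1
        }
    }
}
```

The weak version pays for its safety with the guard at every use site (and a scratchpad entry for the runtime); the unowned version is terser and cheaper but trades away that safety net.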
When the referent deallocates, a weak reference is actually set to nil and you must check it; an unowned one is also set to nil, but you are not forced to check it.
You can check a weak against nil with if let, guard, ?, etc, but it makes no sense to check an unowned, because you think that is impossible. If you are wrong, you crash.
I have found that in practice, I never use unowned. There is a minuscule performance penalty, but the extra safety from using weak is worth it to me.
I would leave unowned usage to very specific code that needs to be optimized, not general app code.
The "why does it exist" that you are looking for is that Swift is meant to be able to write system code (like OS kernels) and if they didn't have the most basic primitives with no extra behavior, they could not do that.
NOTE: I had previously said in this answer that unowned is not set to nil. That is wrong; a bare unowned is set to nil. An unowned(unsafe) is not set to nil and could be a dangling pointer. This is for high-performance needs and should generally not be in application code.

State design pattern implementation query

I am developing an application that gets n PENDING records from the database and processes them. The state while processing is "PROCESSING", and each record would then be marked as either "ERROR" or "SUCCESS".
If all the records are successfully processed, the state needs to be updated to "SUCCESS" in the database.
If some of the records failed to process, I need to update those as "ERROR" and insert the reason for the error into an errorlog table, while updating the remaining records as successes.
I was thinking of implementing this using State design pattern.
My question -
I understand that it would make sense to implement it using the State design pattern if I were dealing with one record at a time. Would it make sense to implement it with the State pattern when dealing with bulk records?
If not, any alternatives?
This is actually not a good case for the State Pattern. State is about making it easy for an object to behave differently based on the state it's in. For state to apply, you want to have an object that implements a protocol, but implements it differently based on what State it's in. The example they use in The Gang of Four is for a socket class. I used State recently in a case where if a device was in use, I wanted to have it handle certain basic functions differently than if it wasn't. So in this case, what you do is you implement two state handlers (that are implementing the same interface) and you simply swap them when the event arrives saying the device either went into or out of use.
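That in-use case might be sketched like this (the handler names and the volume-button behavior are invented for illustration): two handlers implement the same interface, and they are swapped when the event arrives saying the device went into or out of use.

```swift
protocol DeviceHandler {
    func volumeButtonPressed() -> String
}

struct IdleHandler: DeviceHandler {
    func volumeButtonPressed() -> String { "adjust ringer volume" }
}

struct InUseHandler: DeviceHandler {
    func volumeButtonPressed() -> String { "adjust call volume" }
}

class Device {
    // The current handler IS the device's state; swapping it swaps behavior.
    private var handler: DeviceHandler = IdleHandler()
    func callStarted() { handler = InUseHandler() }
    func callEnded() { handler = IdleHandler() }
    func volumeButtonPressed() -> String { handler.volumeButtonPressed() }
}
```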
For your case, you need to just model the states of a given object. You should read a bit about state machines, which are making a comeback now with the rise of Reactive Programming.
Found this gentle introduction, which is quite good.

What is stale state?

I was reading about the object pool pattern on Wikipedia (http://en.wikipedia.org/wiki/Object_pool) and it mentions "dangerously stale state".
What exactly is "stale" state? I know state is variables/data, such as my fields and properties, but what does it mean by stale or dangerously stale?
Stale state is information in an object that does not reflect reality.
Example: an object's members are filled with information from a database, but the underlying data in the database has changed since the object was filled.
Dangerously stale state is stale state that might adversely affect the operation of a program, i.e. causing it to perform incorrectly due to invalid assumptions about the data's integrity.
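A toy illustration (the product and "database" are invented): the cached object keeps the value it was filled with while the underlying store moves on.

```swift
var database = ["widget": 10.0]   // stands in for the real persistent store

struct CachedProduct {
    let name: String
    let price: Double             // snapshot taken at load time
}

let cached = CachedProduct(name: "widget", price: database["widget"]!)
database["widget"] = 12.5         // the underlying data changes

// cached.price is now stale: it no longer reflects reality, and any code
// that bills or reorders from it is acting on a dangerously stale value.
```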
It happens when the value stored in the object no longer reflects the underlying persistent value. I guess "dangerously stale" is just a way to say that the value is really outdated.
"Stale state" is when an object's stored (cached) view of the rest of the system becomes out of date. E.g. an object is holding a handle to some other object, but the second object has been deleted in the meantime.
Trying to dereference a stale handle can lead to big problems.
Most systems will try to automagically protect you from various reasons for ending up with stale state, but it is not always possible to cover every case. (Depending on the system.)
Basically, it means invalid state. Usually a by-product of not notifying your instances of state change.