
I replied to the other thread .. Please change to the following ..

scala> import scalaz._
import scalaz._

scala> import Scalaz._
import Scalaz._

scala> import \/._
import $bslash$div._

scala> res0.unsafePerformAsync {
     | case -\/(ex) => ex.printStackTrace
     | case \/-(value) => println(value)
     | }

Thanks for your comments .. Here are my observations ..

1. Regarding Listing 8.6, I am not sure why updateState should be called as there is no current state. The account is being opened as a new one.

2. Regarding handleCommand being generic, the A in handleCommand has nothing to do with Account. It's the parameter that's generic to Task or Event. Note the function handleCommand is part of RepositoryBackedAccountInterpreter which is an abstraction coupled to Account. Hence I don't see any correlation between Account being used in the function and making the function generic.

3. You are correct regarding unsafePerformAsync. I missed the argument. I will update the code base and report this to the publisher to add to the errata list. You just need to do the following change ..

scala> import scalaz._
scala> import Scalaz._
scala> import \/._
scala> res0.unsafePerformAsync {
     |   case -\/(ex) => ex.printStackTrace
     |   case \/-(value) => println(value)
     | }

And the entire session runs ok. Please have a look at the gist for a session I ran just now with the above change.

Let me know if you have any more questions.

The issue with Future.sequence is that it returns a Failure if any of the futures fail. That may not be what you want in many use cases. For a slightly different technique to start futures in parallel still using for-comprehensions, see this tip ( from Viktor.
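As a sketch of one common workaround using only the standard library (a hypothetical example, not the tip referenced above): lift each Future into a Try first, so a single failure no longer aborts the whole sequence.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.util.{Success, Try}

// Hypothetical list of futures, one of which fails
val fs: List[Future[Int]] =
  List(Future(1), Future(throw new RuntimeException("boom")), Future(3))

// Future.sequence(fs) would fail as a whole; lifting each Future into a Try
// first preserves the successful results alongside the failure
val lifted: Future[List[Try[Int]]] =
  Future.sequence(fs.map(f => f.transform(t => Success(t))))

val results: List[Try[Int]] = Await.result(lifted, 5.seconds)
val successes: List[Int] = results.collect { case Success(v) => v }
// successes == List(1, 3); the failure is still available in `results`
```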
Regarding the println, I acknowledge that a redundant print is produced, but that does not affect the main topic of discussion. I will appreciate a PR and will merge it if it does not conflict with the main text.

Regarding the definition of source and sink .. I don't think there is any error in the description ..

It specifies :

A Source[+Out, +Mat] takes data from input and has a single output, Out to be written into.

So source is something that takes data from input, which is correct.

A Sink[+In, +Mat] has a single input, In, to be written into.

which is also correct.
Good catch .. you are absolutely correct! The arguments need to be flipped ..

I will take a look next week during the holidays when I have some spare time. Meanwhile I am too stressed out with work-related activities. Thanks for the gist - I will explore how we can make it more functional. Meanwhile you can take a look at - it may have something that solves your issue.

Hello -

The listing is there in the repository. Please find Listing 4.2 in

Sorry for the late reply. I need to find some time to code an example for this. Unfortunately I haven't yet been able to find that time. But it's definitely possible. You need to compose the construction of repositories and services within one monadic expression. Then finally when you run the expression you supply the values from the environment. If this helps, it's great, otherwise I will work up an example for you ..

Please let me know ..

I would not recommend making another aggregate. This is because those audit fields don't have any domain representation, and that's the reason they don't appear as part of your domain model. Typically the underlying db framework that you use can be tricked into accomplishing such use cases. You can use the trick suggested by Stephan in this issue ...

True .. in Scala 2.12, Either is right-biased. But scalaz's \/ offers lots of combinators, including itself being a Monad. Note that the Scala standard library does not have a generic Monad. So, yes, in some places you can replace \/ with Either, but not in all.

> Could you give your extended comment on Eric Evan's strong dislike of "anemic models"?

I am not sure if Eric Evans strongly dislikes anemic models. But when DDD came into existence mostly with Eric's book, all mainstream OO languages espoused domain models which tend to have state and behavior of objects within the same abstraction. This was the "class" in class based OO languages. Designers used to think that keeping state and behavior together also makes the model cohesive - I get *all* details of my abstraction within the same unit. Eric's DDD book also followed the same principle. I am not saying it's incorrect - in fact there are lots of successful instances of mainstream OO based DDD implementations running today. In the book I have just taken an orthogonal view based on my experience over the last 5 years or so. It's just an alternative view which I think is worth exploring.

> Is this a single DDD principle which we completely discard in the context of functional programming?

Not sure which principle you are talking about. I don't think DDD ever mentions anything about class-based or function-based implementation of the solution architecture. The only guideline it gives is to put enough emphasis on the domain model as it appears in the problem domain.

> FP seems to stress bare data structures as building blocks (ADT). Can Evan's position be explained by his OOP roots? Does the principle of "non-anemic domain models" be useful in any FP context? Any example of its usefulness in FP, especially if language permits OO style?

In FP the most important principle is compositionality. Functions compose in the mathematical sense of the term; objects don't. Evans has adopted an implementation strategy based on OO and I am sure he has many successful instances of DDD running today. I have just taken an orthogonal view. As they say, "all models are incorrect, some are useful" :-)
As I recommend in the book, as part of functional thinking we start with the functions or use cases. Related functions go in modules and composing modules lead to larger functionalities.

Now with this thinking, a use case like funds transfer may be conceived as a domain behavior. Depending on the complexity of the use case it can be a single function or a module. But, yes, the interface / algebra of such a function should be a ReaderT, as it expects stuff from the environment (db connection, config info etc.).

Once we have the info from the environment through a Reader monad, we need to implement transactional semantics to ensure that aggregates are consistently updated. For simplicity we can think of ACID here and use the transaction semantics of some library like Slick or doobie.

Here are some of the steps ..

1. Model anything that takes stuff from environment as a ReaderT. This decouples the environment from the domain model and allows you to build your computation separately from executing it
2. Within the function consider using monadic or applicative semantics depending on the use case. In case of funds transfer I think it can be monadic as we want a fast fail semantics
3. Ensure consistency of aggregates using transaction semantics of the underlying db access library. Both Slick and doobie offer compositional semantics for this.
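As a rough illustration of step 1, here is a minimal hand-rolled Reader with hypothetical debit/credit operations against an in-memory environment. The book uses scalaz's Reader/ReaderT and a real repository; everything here (Env, the account numbers, the mutable map) is made up for the sketch.

```scala
// A minimal hand-rolled Reader (the book uses scalaz's Reader/ReaderT)
case class Reader[R, A](run: R => A) {
  def map[B](f: A => B): Reader[R, B] = Reader(r => f(run(r)))
  def flatMap[B](f: A => Reader[R, B]): Reader[R, B] = Reader(r => f(run(r)).run(r))
}

// Hypothetical environment: an in-memory account repository
final case class Env(balances: scala.collection.mutable.Map[String, Int])

def debit(no: String, amount: Int): Reader[Env, Unit] =
  Reader(env => env.balances(no) -= amount)

def credit(no: String, amount: Int): Reader[Env, Unit] =
  Reader(env => env.balances(no) += amount)

// The use case composes monadically; nothing runs until we supply the Env
def transfer(from: String, to: String, amount: Int): Reader[Env, Unit] =
  for {
    _ <- debit(from, amount)
    _ <- credit(to, amount)
  } yield ()

val env = Env(scala.collection.mutable.Map("a-123" -> 100, "a-456" -> 0))
transfer("a-123", "a-456", 40).run(env)
// env.balances now holds a-123 -> 60, a-456 -> 40
```

The key point is that `transfer` only builds the computation; the environment is supplied at the very end via `run`.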

HTH, let me know if you need any more details on this.

We say an aggregate is a unit of consistency. Transaction is an overloaded term - in the age of distributed systems we have realized that ACID transactions don't scale. Please have a look at this awesome presentation from Jonas Boner ..

Having said that you can of course design aggregates as a unit of consistency using all the functional tools described in the book. If you want ACID, you can do that using db interfacing libraries like Slick or Doobie. I have not gone into the details of these libraries as I wanted to keep the discussion generic. Have a look at the places in the book where I discuss aggregates and elaborate on this topic.

Let me know if you need any specific pointers or help on this subject.

I mention in a footnote on page 78 (chapter 3) that Try violates one of the monad laws. I also provide a link to the SIP which discusses the details.

Storing in the repository, handling transactions etc. can be done in a referentially transparent way till you reach the end of the world. You just have to use a proper implementation of the storage layer. Have a look at Doobie ( which allows you to do so. It uses free monads and algebras and defers the implementation to the interpreter.

I have some comments on the gist as well.

You are correct as per the classical domain-driven design definition. But even in Evans' book he mentions that the line of distinction between the two can be fine.

If you take the example of funds transfer between two accounts, I am not sure what you would do in the domain service. The funds of each account need to be fetched from the database in order to validate the business rules. Trying to model this with two levels of services will lead to some indirections which I thought could be avoided by having a single layer of services. Throughout the book I have intentionally made this distinction fuzzy.

Thanks for pointing this out. I am doing proof reading right now and will change it.
I am not very sure I understand the concern. Have I mentioned anywhere in chapter 3 that repositories need to be part of domain services? It's true I said we may need to inject repositories into domain services, and I have demonstrated a use case where I injected AccountRepository into AccountService.

Maybe I am missing your point. Will you please clarify?

Indeed we had a bug there. Now fixed by the latest commit .. .. Please try again and report if you still find any problem.

Thanks a lot.
Thanks for reporting .. I will take a look soon and get back to you.
The code fragment illustrates monadic composition and not transaction handling. Typically if you want to ensure atomic execution then you need to execute the whole sequence of monadic statements within a transaction. You need a separate combinator to handle transactions. e.g. if you use Slick then you can use something like the following ..

def transfer(from: String, to: String, amount: Amount): Command[Unit] =
  (for {
    _ <- debit(from, amount)
    _ <- credit(to, amount)
  } yield ()).transactionally

where the transactionally combinator will ensure database level atomicity of operations.

I agree with your thoughts on having the `apply` and `copy` delegate to the smart constructor. In fact I may add that to the online code as well. But I think there are still some hairy cases, as in

Fixed! There was a difference in the way `Source.empty` works with streams. It used to work properly in 1.0. Now had to change it to `Source.maybe`. Konrad pointed this out to me. The `master` is now updated. I have also added some note in the respective classes on how to run the 2 programs - `TransactionProcessor` and `FrontOffice`.

Indeed this code fails after I upgraded akka-streams to the latest version. Looks like some semantics have changed. Let me work through this. Will post updates here once I fix this.

Thanks for reporting.
All of them have been cleaned up .. (hopefully :-)) .. Manning doesn't want references, hence I moved them to footnotes .. Thanks ..
Fixed in the latest version that's out for proof reading. Thanks for reporting.
I will see what I can do about this. But all chapters are now ready for proof reading - not sure if changes are allowed at this stage :-)
You are correct. What you are talking about is the functional variant of the Template Method pattern in a monadic context.
Thanks for reporting. I will have a look shortly. Maybe something related to the updated version of Akka streams .. will post my findings here.
I am planning to have some examples as part of the online repository, NOT as part of the core book. Let's see.
Using a callback can be an option. But usually what happens in a production-ready event log implementation is that the underlying library takes care of failure management by offering replication, local writes etc., thereby taking the load off the application developer. I have not discussed the details of such a distributed event logging mechanism - take a look at Eventuate ( for a sample. Especially


I am not sure I understand the question. Instead of guessing, will you please write in some more detail .. Thanks.
Hello Mutaz -

At the very outset let me congratulate you for reading the whole book .. Now to your concern ..

I have mentioned in the book that functional programming principles don't play well with distributed systems at the context boundaries. There is some research going on, but we don't have any standard, stable alternative. So we need to accept that types break down at the context boundaries, and messaging provides the right abstraction there. So while architecting, I suggest the following principles ..

1. Within a context boundary assuming no distribution is involved, use FP to the max
2. Within the context boundary assuming no distribution is involved, prefer abstractions like Future / Task since they are algebraic structures.
3. When you cross the boundaries use messages, which can be in the form of actors (think Akka). Because when you have distribution coming into play you need to be asynchronous. This helps you with backpressure management as well.

Let me know if you need more details on this.

This is the classical monadic approach towards designing a sequence of actions. It's imperative in nature, e.g. you want to do task1, followed by task2 and then task3 .. Typically in imperative logic you will invoke the 3 functions one by one - in fact you can also use the template method design pattern if you want to bring some discipline into structuring the task executions.

A monadic effect is the functional way to do the same thing. That's why a monad is also often called the semicolon of functional programming. In Scala when you use the for comprehension as ..

for {
  a <- task1
  b <- task2
  c <- task3
} yield ..

you get the same effect. As an exercise de-sugar the above using map and flatMap and you will have a clear understanding of how the sequence works.

In the above case the sequence will break if one of the functions throws an exception. As an alternative effect, have a look at applicatives, where you execute all the functions irrespective of the output. The book discusses both effects in detail.
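For illustration, here is the same shape desugared by hand, using Option as the monad (a toy example, not from the book):

```scala
// Three toy "tasks" in the Option monad
val task1 = Option(1)
val task2 = Option(2)
val task3 = Option(3)

// The sugared version, as in the for block above
val sugared =
  for {
    a <- task1
    b <- task2
    c <- task3
  } yield a + b + c

// What the compiler generates: nested flatMaps with a final map
val desugared =
  task1.flatMap(a => task2.flatMap(b => task3.map(c => a + b + c)))
// both are Some(6); if any task were None the chain would short-circuit
```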

Yes .. I think there's a name clash of items with the enclosing addToStore. Changing to items' fixes the issue.
I was going to add some examples but then I found Listing 3.8 gives an example of composing this Reader with Try. List and/or Option should work in exactly the same way.

Composing effectful operations can be of many types. If you want fail-fast composition, then you need to go for monadic ones. You can have each of your functions return the monad of your choice, e.g. Error \/ Entity. Then you can use for-comprehensions to compose them. If you want fail-slow composition, you can choose the applicative model. In Scalaz, Validation is one such abstraction which lets you compose multiple operations with error accumulation. I discuss both of these models in the book.

If you are using more than one monad (e.g. you mentioned EitherT), then monad transformers are the way to go.

Let me know if you need more specific help on this. The book actually has quite a few examples of composing domain services using repositories.
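A small standard-library sketch of the two styles (scalaz's Validation gives you the accumulating behavior directly; here it is emulated with partitionMap, and the validation function is hypothetical):

```scala
// A validation that can fail
def positive(n: Int): Either[String, Int] =
  if (n > 0) Right(n) else Left(s"$n is not positive")

// Fail-fast (monadic): stops at the first Left
val failFast: Either[String, Int] =
  for {
    a <- positive(1)
    b <- positive(-2)
    c <- positive(-3)
  } yield a + b + c
// failFast == Left("-2 is not positive")

// Fail-slow (applicative-style): run everything and accumulate the errors
val (errors, oks) = List(1, -2, -3).map(positive).partitionMap(identity)
// errors == List("-2 is not positive", "-3 is not positive"), oks == List(1)
```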

Keep them as traits within a common package that all services can access. It can be a package object in Scala as well. The advantage of keeping them as traits is that you can supply specialized implementations (think typeclass design pattern) for different services.
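A minimal sketch of that typeclass idea (all names here are hypothetical, not from the book):

```scala
// A shared trait kept in a common location
trait IdGenerator[A] {
  def nextId(seed: Int): A
}

object IdGenerator {
  // Specialized implementations, supplied per service via implicits
  implicit val accountIds: IdGenerator[String] =
    new IdGenerator[String] { def nextId(seed: Int): String = s"a-$seed" }
  implicit val numericIds: IdGenerator[Long] =
    new IdGenerator[Long] { def nextId(seed: Int): Long = seed.toLong }
}

// Any service can use the shared trait and pick up the implementation it needs
def freshId[A](seed: Int)(implicit gen: IdGenerator[A]): A = gen.nextId(seed)

val s: String = freshId[String](42)  // "a-42"
val n: Long   = freshId[Long](42)    // 42L
```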

One of the simple reasons I love to keep state and behavior decoupled is to encourage independent evolution of the two and exploit the compositionality properties of functions.

Once you define the state of your domain object as an algebraic data type, you get a host of advantages out of the box:

  • Immutability (case classes are immutable by default)
  • Built-in abstractions that let you manipulate ADTs functionally without any boilerplate, e.g. Lenses, Zippers etc.
  • With an ADT you can enforce strict separation of algebra and interpretation (have a look at the Free monads chapter of the book)

  • When you keep the state separate from the behavior, it's much easier to have pure functions to model domain behaviors. The state can be passed with the function using State monads. So we avoid the huge problem of manipulating mutable state - this is a huge plus for designing concurrent abstractions.

  • Finally, loosely coupled abstractions are easier to manage and test. With an implicit this pointer, we have an unnecessary coupling of methods with an object, which makes compositionality more difficult. Just think about why we cannot have seamless point-free implementations even in Scala while we can have them in Haskell. It has a lot to do with the this pointer.

    Let me know if this helps, or if you need more details on the subject ..


    Thanks for reporting .. I already got this feedback earlier and it has been fixed. Hopefully it will be available in the final version.
    I tend to choose sealed traits and case objects over Enumerations mainly because of compiler support for exhaustivity checking. Have a look at for a comparison .. enumerations have some advantages but quite a few drawbacks too ..

    As I discuss in the chapter on modularization, there are many ways to go about modularizing your code. Yes, with Scala, traits and objects are the primary techniques for modules. One strategy you can adopt is to have packages based on domain functionality at a higher level and then have traits within them. Design traits with an eye towards compositionality. And defer creation of objects from the composed traits till the last.

    Does that answer your question? Feel free to get back if you need more details ..

    I think I now understand your question .. let me explain with an example from the book .. Listing 5.6

    sealed trait AccountRepoF[+A]
    case class Query[+A](no: String, onResult: Account => A) extends AccountRepoF[A]
    case class Store[+A](account: Account, next: A) extends AccountRepoF[A]
    case class Delete[+A](no: String, next: A) extends AccountRepoF[A]

    Here for the first case, Query, the continuation is of the form Account => A. This is because the Query operation (or more precisely the lifted query function in Listing 5.8) will give you an Account instance within the free monad, which needs to be passed along to the continuation for further processing. Hence the continuation is of the form Account => A. A typical example would be a composite operation that queries an account and then does something with it. So in the chain we need to pass the Account instance as well.

    For Store and Delete, we typically model them as Unit methods - hence we don't need anything to pass along to the continuation. Just the monadic chaining will do. Hence we pass just the next value.

    There's a third type where we don't need the continuation at all. That's the leaf type of the AST. The tree ends there. So we don't need the continuation.
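To make the three continuation shapes concrete, here is a toy encoding: Account is simplified to a String, and a hand-rolled program type stands in for scalaz's Free. None of this is the book's code, just a sketch of the idea.

```scala
// The two continuation shapes discussed above
sealed trait AccountRepoF[+A]
case class Query[+A](no: String, onResult: String => A) extends AccountRepoF[A] // produces a value
case class Store[+A](no: String, value: String, next: A) extends AccountRepoF[A] // Unit-like: just `next`

// A tiny free-monad-like program type; Done is the leaf where the tree ends
sealed trait Prog[A]
case class Done[A](a: A) extends Prog[A]
case class Suspend[A](op: AccountRepoF[Prog[A]]) extends Prog[A]

// Interpreter over an in-memory map: the AST is built first, evaluated here
def run[A](p: Prog[A], db: Map[String, String]): A = p match {
  case Done(a)                     => a
  case Suspend(Query(no, k))       => run(k(db(no)), db)        // feed the queried value to the continuation
  case Suspend(Store(no, v, next)) => run(next, db + (no -> v)) // nothing to feed; just continue
}

// Store, then query back what was stored
val program: Prog[String] =
  Suspend(Store("a-123", "opened",
    Suspend(Query("a-123", value => Done(value)))))

val result = run(program, Map.empty)
// result == "opened"
```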

    Thanks for the PR. I was planning to add the Natural Transformation and CoYoneda bit in the next round of changes. Initially I was not sure if I should speak about NT or CY - You made my life easier.

    I am not sure I understand the first part of your question. Let me start with an explanation of the continuation part.

    The purpose of the continuation is to give you the ability to chain free monads. With the account example you can write composite actions like the following ..

    val composite =
      for {
        n <- open("a-123", "debasish ghosh", Some(today))
        _ <- credit(n, 10000)
        _ <- credit(n, 30000)
        _ <- debit(n, 23000)
      } yield ()

    because of the continuation that you pass with each of the event types. The basic idea of free monads is to decouple the AST from the evaluation. The above composite action only builds the AST and the continuation helps chaining the various nodes of the AST. So intuitively the continuation is required for the monad part of the abstraction.

    Let me know if u need more details on this.

    Thanks for the question.

    Note that I am using exceptions locally and they do not escape the function to the outside world. The function closeDateCheck returns a Try and the exceptions will be wrapped up within that Try. So the smart constructor never throws the exception - it always returns a computational structure, Try.

    In the later chapters I demonstrate other methods that use other computational structures like Either or scalaz.\/ instead of exceptions.

    Hope this helps.
    Good point. I intended the definition to appear as a footnote to the first occurrence. Will check with the publisher how it comes out in the final version. Thanks for pointing it out ..

    Yes .. all of the others sound good to me. Let me see if I can change it, as chapter 2 is already in pre-production. But I guess it's not too much of an oddity to use the word "prolific" :-)

    Exactly .. that's a convention I follow .. a for accumulator & e for element. But I agree the slightly more verbose names you suggest are better for some readers. However, keeping in mind the space I can afford in the book code snippets, I have stuck to a and e. It may not be that long in this snippet, but it often becomes difficult to fit the snippet without folding.

    The key thing to note is that in the definition of Free in scalaz we have

    abstract class Free[S[_], A] { //..

    This means S is the existential parameter and the compiler automatically puts a bound <: Any >: Nothing. An existential type means that the exact type is not known - it can be anything and code that depends on existentials also cannot assume any restriction there. Have a look here (

    I guess you cannot restrict it here ..

    Maybe I am missing something. When do you get the compilation error? When I try the following, it gives green .. Currently everything is in one project - I have plans of refactoring though ..

    Apples-MacBook-Pro:frdomain debasishg$ sbt
    [info] Loading global plugins from /Users/debasishg/.sbt/0.13/plugins
    [info] Loading project definition from /Users/debasishg/projects/frdomain/project
    [info] Set current project to FRDomain (in build file:/Users/debasishg/projects/frdomain/)
    > clean
    [success] Total time: 2 s, completed 7 Feb, 2016 12:11:49 AM
    > compile
    [info] Updating {file:/Users/debasishg/projects/frdomain/}FRDomain...
    [info] Resolving jline#jline;2.12.1 ...
    [info] Done updating.
    [info] Compiling 112 Scala sources to /Users/debasishg/projects/frdomain/target/scala-2.11/classes...
    [success] Total time: 51 s, completed 7 Feb, 2016 12:12:43 AM

    I will definitely try to have a section that touches upon some of the concerns. Still it's difficult to marry pure FP with distribution. The way I usually see it is that in most of the use cases distribution implies separate bounded context. So you can think of the entire model as a union of multiple bounded contexts. Within each BC you apply pure FP, do algebraic design, compose beautiful abstractions. But all algebra breaks down at distribution boundaries. You need to employ techniques like message passing to address this. I spoke about this in my talk at Goto Copenhagen in 2015. You can take a look at

    Let me know if this helps.

    Hello -

    Thanks for the kind words on the book. Glad that you like it.

    Consistency in distributed systems and the design of consistent distributed domain services is a topic in itself that needs a separate book altogether. I don't think I can do justice to this topic by gluing it onto this book. I am already under pressure from the publisher because the book is exceeding the prescribed page count :-) ..

    However I am thinking that I can try and add a section on this as part of chapter 8, where I discuss event sourcing. I certainly had ideas of discussing Akka persistence in depth, but could not do that for lack of space. Chapter 8 is already quite long and has lots of code associated with it. Adding an Akka persistence based implementation would add to that as well. I am also considering an implementation based on Akka persistence as part of the online code repository - but I am having second thoughts, as I have not covered any details about it (except some small stuff in the reactive streams chapter).

    Let me think ..

    Yes .. your explanation is spot on!

    You are correct! I will update the book chapter and the explanation based on scalaz's new version. Right now working on chapter 9 - will make the changes once I am done. Thanks for pointing it out.
    Thanks for your explanation, and apologies for the late response ..

    The best way to understand flatMap is to start backwards.

    We need to produce a Reader[R, B]. So we start with Reader(r => ..), where r is of type R. Note that the abstraction takes a run function which takes an R and generates an A. So we apply this r to run, and run(r) gives us an A. Now f: A => Reader[R, B] - hence we apply run(r), which gives us an A, to f. Now we get a Reader[R, B], which is an abstraction that has not yet been evaluated. This Reader[R, B] takes the same r through its own run and generates a B. The whole construction is, by definition, a Reader[R, B].
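The derivation above can be written out as a minimal hand-rolled Reader (a sketch of the idea, not scalaz's exact definition):

```scala
case class Reader[R, A](run: R => A) {
  def flatMap[B](f: A => Reader[R, B]): Reader[R, B] =
    Reader { r =>
      val a = run(r)  // run(r) gives an A
      f(a).run(r)     // f(a) is a Reader[R, B]; the same r goes through its run
    }
}

// The environment here is just an Int, and `ask` reads it directly
val ask: Reader[Int, Int] = Reader(identity)
val doubledPlus: Reader[Int, Int] = ask.flatMap(a => Reader(r => a + r))
// doubledPlus.run(21) == 42: the same environment (21) is threaded through both steps
```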
    Thanks for the suggestion. The idea is to give a macro level overview of the general concepts. But I agree Instant may be a better abstraction. However I used DateTime more for familiarity reasons.

    Hi -

    Thanks for your comments.

    First, I would like to say that in design and modeling there's hardly anything that's considered a wrong option. Stated otherwise, all modeling options suck; some just happen to work :-)

    But I follow a principle while modeling - use the option that brings in the least coupling between abstractions. In this case, if you mix the Account into the AccountService, it will work, no doubt. But it creates a coupling that could be avoided. And a self-type annotation is basically a roundabout form of inheritance, and inheriting an AccountService from Account doesn't make much sense intuitively.

    Thanks for reporting. The main idea behind the code fragments is to explore interactively. Since every chapter deals with a lot of topics, it's not always practical to have separate main programs for each. The ideal way is to use the sbt console and explore. Here's a sample fragment on how to explore the free monad based implementation in chapter 5.

    Let me know if this helps and if you need any further guidance on this.

    Welcome to Scala version 2.11.7 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_79).
    Type in expressions to have them evaluated.
    Type :help for more information.
    scala> import
    scala> import Main._
    import Main._
    scala> import app1._
    import app1._
    scala> t
    res0: scalaz.concurrent.Task[] = scalaz.concurrent.Task@764e2d3c
    res1: = Account(a-123,debasish ghosh,Fri Dec 11 21:08:38 IST 2015,None,Balance(17000))
    scala> import app2._
    import app2._
    scala> v
    res2: List[String] = List(Storing Account(a-123,John K,Fri Dec 11 21:08:38 IST 2015,None,Balance(1000)), Deleting a-123, Query for a-123)
    Thanks for reporting. Have taken note of it. Will address in an upcoming MEAP.
    Hi -

    Thanks for reporting your thoughts. The usual practice is to report any comment here in the forum. Alternatively you can also report it as an issue on the GitHub repo for the online code base accompanying the book.

    Thanks for pointing out .. here's my response ..

    (a) closeDate is never used. True. To make the example simple I assumed a close date of today. Will remove the closeDate from the argument.
    (b) The query after the close is just to return the closed account. It's just a sample DSL and I defined the semantics that way. Note it's not an error - it will return the closed account.

    Thanks for pointing out .. will correct in the next MEAP ..
    I agree that Option#get should not be used. But here it's only in a sample test and I am sure that the data set up ensures that the get actually gets a result. But I would not use it in any serious code.

    It's all a matter of programming at a higher level of abstraction. With Kleisli I can separate the evaluation of the abstraction from the building of it. Consider the following example ..

    scala> def f(i: Int) = List(i, 2 * i)
    f: (i: Int)List[Int]
    scala> def g(j: Int) = List(j * 2, j * 4)
    g: (j: Int)List[Int]

    I can always do the following ..

    scala> f(10).flatMap(g)
    res6: List[Int] = List(20, 40, 40, 80)

    and evaluate my composed abstraction. Also the above expression gets complicated when you have more functions to compose. However with Kleisli you can do the following ..

    scala> import scalaz._
    import scalaz._
    scala> import Scalaz._
    import Scalaz._
    scala> import Kleisli._
    import Kleisli._
    scala> kleisli(f) >=> kleisli(g)
    res7: scalaz.Kleisli[List,Int,Int] = Kleisli(<function1>)

    Note we have just formed an expression which is the definition of the abstraction that we would like to compose. We can reuse it later and defer the execution of the composed abstraction. Running the composed Kleisli with an input of 10 gives ..

    res8: List[Int] = List(20, 40, 40, 80)

    And another advantage of using Kleisli is that you get access to all the combinators that Kleisli offers. It's a very canonical way to compose effectful functions of the form A => M[B] where M is a monad.

    Let me know if you need any more information.

    oops .. sorry for the typo .. I am sure we will be more careful next time onwards .. Thanks.
    Thanks for all the kind words. You are correct - it was an oversight, the monoids need to be passed explicitly. I will make the change and also include the Scala Tip as you suggested.

    Thanks again for the correction ..
    - Debasish
    Thanks for the observation .. will take care in a future MEAP ..
    At least for public APIs I think it's an essential practice to include the return types. For some very obvious methods they may be omitted, but it's a must for non-trivial methods. I have a plan to include all of them before the book is released. Marking this post as an issue on GitHub.

    The core value of abstractions like Functor, Monad, Applicative etc. is parametricity, which means that they allow you to write generic functions. This is a huge value which you cannot get using only the standard Scala library. Here's a very simple example that uses Functor:

    def foo[F[_]: Functor, A, B](f: F[A])(g: A => B) = f map g

    This is a very simple example but it demonstrates the power of parametricity. The function foo is completely independent of any specific type. The only constraint it imposes is that F needs to be a Functor. So you can call this function with any F that provides an instance of a Functor. e.g.

    scala> val g: Int => Int = _ * 4
    g: Int => Int = <function1>
    scala> foo(Option(10))(g)
    res10: Option[Int] = Some(40)
    scala> foo(List(1,2,3))(g)
    res11: List[Int] = List(4, 8, 12)

    You can have this reusability only because you have the generic Functor abstraction. The book discusses many such parametric functions in domain modeling in chapters 3 and 4. Let me know if you need any more explanation.

    This is a book on domain modeling and assumes a knowledge of functional programming principles as used in Scala. It would be impossible to explain and implement all FP abstractions and then explain domain modeling. Hence I needed to use a library that implements these abstractions. And what better library is there than scalaz?

    I agree there's a learning curve involved. But this learning curve would have been much bigger if I tried to reinvent the wheel and come up with another library. One thing I believe strongly is that it's worthwhile learning scalaz if you want to do functional programming in Scala.

    When we specify an _ in a for-comprehension, it doesn't mean that the return type is changed to Unit. It means that I don't need the name explicitly in my code. The threading is done as usual through the Try monad. As an exercise, write a for-comprehension using _ and look at the de-sugared version that the compiler generates.
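    To make the exercise above concrete, here's a small self-contained sketch (the values and the `desugared` version are mine, just for illustration) showing that an `_` step is still threaded through flatMap like any other step:

```scala
import scala.util.{Try, Success}

val result: Try[Int] = for {
  a <- Try(10)
  _ <- Try(println(s"side effect with $a"))   // name discarded, value still threaded
  b <- Try(a * 2)
} yield b

// roughly what the compiler generates for the comprehension above:
val desugared: Try[Int] =
  Try(10).flatMap { a =>
    Try(println(s"side effect with $a")).flatMap { _ =>
      Try(a * 2).map(b => b)
    }
  }

assert(result == Success(20))
assert(desugared == result)
```

Note that the `_` step has type Try[Unit], but the overall result type is still Try[Int].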

    Let me know if you need more details.

    Actually I was planning to have a complete example towards the end anyway. But I think this is a good suggestion. Let me think how I can do this without disturbing the general flow of the chapter.

    Thanks for the suggestion ..

    - Debasish
    I discuss exactly this topic in chapter 6, section 6.2. Using the power of type system and monad transformers you can achieve this. Feel free to get back if you need more information on this even after reading section 6.2.

    Loose coupling doesn't imply that we cannot share anything between them. If the Account ADT is used in multiple modules, you can very well share it. This can be done either through an import or through another module, depending on how much information you would like to share.

    There's also a related concept called Bounded Context, which I discuss later in the book. Bounded Contexts are of larger granularity than modules, but even they can share information through a proper model of communication. This is described in great detail in Eric Evans' DDD book. I also discuss this in chapter 5 in quite some detail.

    Hello -

    Actually there are quite a few options. Let me discuss one of them, using which you can have a nice composition of pure computations as well as ones that can be side-effecty. The solution is to wrap your computation in a monad. In this use case I am using Try, which is a monad. Try is useful since all of the operations that we are trying to compose can fail ..

    def receiveRequest(body: Int): Try[Request]
    def audit(r: Request): Try[Unit]
    def validate(r: Request): Try[Boolean]
    def sendEvent(r: Request): Try[Unit]

    for {
      r <- receiveRequest(100)
      _ <- audit(r)
      _ <- validate(r)
      _ <- sendEvent(r)
    } yield ()

    Note the side-effecty ones have the signature Try[Unit]: the Unit indicates the side effect and the Try indicates that it can fail. There are many other advanced options, e.g. using Free Monads, that I discuss later in the book. Please beware that the above code is just a rough one and I don't have a compiler right now. Please feel free to get back if you need more details.
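    Since the sketch above is admittedly rough, here is a self-contained version with stub implementations (Request's shape and the stub bodies are mine, just to make it compile and run):

```scala
import scala.util.{Try, Success, Failure}

case class Request(body: Int)

// stub implementations, just to make the sketch executable
def receiveRequest(body: Int): Try[Request] = Try(Request(body))
def audit(r: Request): Try[Unit]            = Try(println(s"auditing $r"))
def validate(r: Request): Try[Boolean]      =
  if (r.body > 0) Success(true) else Failure(new Exception("invalid body"))
def sendEvent(r: Request): Try[Unit]        = Try(println(s"event sent for $r"))

val outcome: Try[Unit] = for {
  r <- receiveRequest(100)
  _ <- audit(r)
  _ <- validate(r)
  _ <- sendEvent(r)
} yield ()

assert(outcome.isSuccess)

// a failing step short-circuits the rest of the pipeline
assert(receiveRequest(-1).flatMap(validate).isFailure)
```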

    Hello -

    Thanks everybody for contributing towards the betterment of Functional and Reactive Domain Modeling. I have been receiving lots of feedback on how to improve the contents and have been working on them.

    Just thought of discussing one point regarding the exercises and quizzes. It would be wonderful if you could take some of your time and let me know your views on this. I have a few questions which should decide how best to have the quizzes in the book.

  • In the beginning I had some quizzes in chapter 1. There was mixed feedback on the style in which the quizzes were presented. The answers were there in the book, but after a few paragraphs so that the reader gets enough time to think of the answer before looking at it. Does this look like a viable option? Or do you have a better suggestion?

  • Besides quizzes I am plugging in some solid exercises which involve comprehensive modeling and programming. Hope you find them useful. The answers to those exercises will not be part of the book - I am planning to have them as part of the online repository though. What do you think of this approach?

    Thanks.
    We can do that. The technique is used to enforce type-level restrictions to prevent illegal states in entities. There's an entire section devoted to using this technique. Have a look at section 4.5.2, where we use Phantom Types to implement this for another use case.
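    To give a flavor of the idea (this is my own minimal sketch, not the book's listing in 4.5.2): the phantom type parameter records a state that exists only at compile time, so an operation that is illegal in a given state simply fails to type-check:

```scala
// the states exist only at the type level -- they are never instantiated
sealed trait Status
sealed trait Open   extends Status
sealed trait Closed extends Status

// the phantom parameter S tags the entity with its current state
case class Account[S <: Status](no: String, balance: BigDecimal)

// closing is only defined for an open account; the compiler enforces it
def close(a: Account[Open]): Account[Closed] = Account[Closed](a.no, a.balance)

val closed = close(Account[Open]("a-123", BigDecimal(100)))
// close(closed)   // does not compile: closed is Account[Closed], not Account[Open]

assert(closed.no == "a-123")
```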

    In functional programming the most widely used polymorphism is parametric polymorphism, where we parametrize the function with some type. This has a lot of advantages in making your code more typesafe. This property is popularly known as parametricity. I discuss this in some detail in the book. And there are some additional readings on this topic which you can go through for further details.

    (a) a presentation by Tony Morris
    (b) Theorems for Free (a paper by Philip Wadler)

    In short it constrains your type to the one which you need for your model without giving you any additional power. Hence your code automatically benefits from some free theorems which Wadler discusses in the above paper.
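    A tiny example of such a free theorem (my own illustration): a fully generic signature constrains the implementation so much that essentially only one lawful behavior remains, whereas a monomorphic signature promises nothing of the sort:

```scala
// The function knows nothing about A, so all it can lawfully do is
// return its argument -- this follows from parametricity alone.
def identityOnly[A](a: A): A = a

// A monomorphic signature gives no such guarantee: nothing stops this
// implementation from returning any Int whatsoever.
def noGuarantee(i: Int): Int = i + 42

assert(identityOnly(10) == 10)
assert(identityOnly("foo") == "foo")
```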

    Let me know if you need more clarifications on this subject. Very recently I have written a simple blog post explaining parametricity in simple terms - you can have a look at that as well ..

    Thanks for the question. With the constraints specified as part of the generic type, you get more precise returned type information. Have a look at the following example ..

    scala> :paste
    // Entering paste mode (ctrl-D to finish)
    sealed trait Shape
    case class Square(s: Int) extends Shape
    case class Rectangle(l: Int, b: Int) extends Shape
    def makeList(s1: Shape, s2: Shape) = List(s1, s2)
    def makeList1[S <: Shape](s1: S, s2: S) = List(s1, s2)
    // Exiting paste mode, now interpreting.
    defined trait Shape
    defined class Square
    defined class Rectangle
    makeList: (s1: Shape, s2: Shape)List[Shape]
    makeList1: [S <: Shape](s1: S, s2: S)List[S]
    scala> makeList(Square(12), Rectangle(13, 12))
    res5: List[Shape] = List(Square(12), Rectangle(13,12))
    scala> makeList1(Square(12), Rectangle(13, 12))
    res6: List[Product with Serializable with Shape] = List(Square(12), Rectangle(13,12))
    scala> makeList1(Square(12), Square(12))
    res7: List[Square] = List(Square(12), Square(12))

    With makeList1, the compiler infers the most specific type of the arguments (the exact type when both arguments have the same type, their least upper bound otherwise), while with makeList you always get the declared upper bound Shape.

    Thanks a lot for the PR. Yeah .. I am incorporating all the changes - will be there in the next MEAP for sure.
    Thanks for reporting .. will fix ASAP .. logging an issue on Github ..
    Thanks for reporting .. will correct in the next MEAP ..
    In the example, the trait Analytics is parametric in 3 types. To make a concrete type, you need to fill in the placeholders with meaningful concrete types. In the example the generic types are not bounded - hence you can put arbitrary types in there; in fact mocks are often set up that way for testing. The idea is to supply proper semantics through the types that you fill in.

    Another alternative would have been to define some protocols for each of the types, which the concrete types must honor, e.g. we can have a TransactionProtocol and we can make the generic type Transaction <: TransactionProtocol in the algebra of the trait. That would have constrained the parametric polymorphism of the trait that we are defining. But if you would like to apply such constraints that's perfectly valid (constrained polymorphism). Leaving it open is also a valid option - leave it up to the user's imagination to provide valid types.

    One thing is important to remember - when we define a trait (not the implementation), we are just defining the algebra. It's up to the implementor to provide the semantics.
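    Here's a minimal sketch of the two options discussed above (the trait shape, the operation name and the protocol are illustrative, not the book's actual Analytics listing):

```scala
// the algebra: parametric in three unbounded types, defining only the contract
trait Analytics[Transaction, Account, Portfolio] {
  def analyze(txns: List[Transaction], a: Account): Portfolio
}

// one possible interpretation: the implementor picks concrete types
// and supplies the semantics
object SumAnalytics extends Analytics[Int, String, (String, Int)] {
  def analyze(txns: List[Int], a: String): (String, Int) = (a, txns.sum)
}

// the constrained alternative: the transaction type must honor a protocol
trait TransactionProtocol { def amount: Int }
trait ConstrainedAnalytics[T <: TransactionProtocol] {
  def total(txns: List[T]): Int = txns.map(_.amount).sum
}

assert(SumAnalytics.analyze(List(1, 2, 3), "a-123") == ("a-123", 6))
```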


    The steps that you have listed are quite intuitive and work with minor variations across various web applications. The challenge is to model the steps that you have listed under the Domain Layer in a functional way. The fact that you have listed them in sequence implies that you are thinking of them as sequential steps. Now think about how to model them using pure expressions in a functional way. Hint: think in terms of monads and Kleislis.
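    As a nudge in that direction, here's a minimal sketch of sequential steps as Kleisli arrows composed through the Try monad (the step names, types and stub bodies are mine, just for illustration; scalaz provides a first-class Kleisli abstraction for this):

```scala
import scala.util.{Try, Success}

case class User(id: Long)
case class Order(userId: Long, amount: Int)

// each step is a Kleisli arrow, i.e. a function A => Try[B]
val authenticate: String => Try[User]  = token => Try(User(token.length.toLong))
val loadOrder:    User   => Try[Order] = u     => Try(Order(u.id, 100))
val charge:       Order  => Try[Int]   = o     => Try(o.amount)

// Kleisli composition: the Try monad threads each result into the next
// step, failing fast if any step fails
val pipeline: String => Try[Int] =
  token => authenticate(token).flatMap(loadOrder).flatMap(charge)

assert(pipeline("abc") == Success(100))
```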

    Agreed .. those are for simplicity and you can very well replace them with RT versions. But, as you rightly said, it will make implementations complex and you can question the perceived value that it adds. Of course with Haskell you will be forced to do it as the type system will not allow you otherwise.

    Regarding breaking RT, I usually follow the principle that an API has to be RT. There can be mutation as part of the implementation, but as long as I have an RT API, it's fine. In fact there are many instances where mutation is required in the implementation for performance reasons. But it never leaks out into the API.

    DayOfWeek's method toString is unsafe, and that's clearly mentioned in the text. Hence we have the smart constructor method in the companion object, named dayOfWeek, which should be used to construct a valid DayOfWeek. That's the precise purpose of a smart constructor. All other methods in the companion object are private and hence not accessible to the user.

    Also, you mention ADTs. We don't have any algebraic data type here; trait DayOfWeek is merely a module that abstracts some logic.
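    The general idiom looks roughly like this (the internals here are my own sketch, not the book's listing): the only public way to obtain a value is the validating smart constructor in the companion, while raw construction stays private:

```scala
sealed trait DayOfWeek { val value: Int }

object DayOfWeek {
  // raw construction is private -- no one outside can build an invalid value
  private def unsafe(d: Int): DayOfWeek = new DayOfWeek { val value = d }

  // the smart constructor: the only public entry point, and it validates
  def dayOfWeek(d: Int): Option[DayOfWeek] =
    if (d >= 1 && d <= 7) Some(unsafe(d)) else None
}

assert(DayOfWeek.dayOfWeek(3).map(_.value) == Some(3))
assert(DayOfWeek.dayOfWeek(9).isEmpty)
```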

    Let me know if this clarifies your question.

    Good suggestion. Could you please point out the sections where you think these should be changed?

    Visitor is one of the most convoluted patterns in GoF. Many people have expressed concerns over the complexity of using this pattern - just google it and you will find lots of links. However, a more interesting topic is to judge how pattern matching is a better alternative when structuring your code. Here are two links which you may find useful:

    1. In Defense of Pattern Matching by Martin Odersky
    2. Matching Objects with Patterns

    The second paper gives a comparative analysis of the advantages of using pattern matching over the Visitor pattern in Scala.

    Good catch, Jisoo .. Thanks for reporting. Will fix soon ..
    Hi Tim -

    Thanks for your comments.

    Regarding review of the book you need to be in touch with the publisher Manning - they decide all reviewers for the book.

    Now back to your thoughts on the mathematical concepts in domain modeling. In the book I talk about much of the mathematical material that you mentioned, but mostly without the rigor of the theory. By theory I mean category theory, which models many of the concepts that I use in the book.

    One of the things which has been emphasized a lot in the book is the concept of algebraic design - I view the model contract as an algebra based on the types and morphisms (or functions) and derive lots of structures from there. e.g. I talk about monoids and demonstrate how various domain model artifacts can be modeled as monoids. I talk about functors, applicatives and monads and map them to real world domain modeling structures.

    In brief, the book has a mathematical undercurrent, though the discussions are purely based on implementations using functional programming techniques.

    No. The book is not based on ReactiveX or any other API. It describes how to build domain models using a functional programming language like Scala and make them reactive as per the attributes laid down in the Reactive Manifesto.

    You are thinking along the right lines - the Reader monad is the way to go. And in case you want to handle reading context values and using them to update state, there's a more powerful abstraction in scalaz called ReaderWriterState. But if you just want to pass context for reading, the Reader monad should be fine.
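    To show the idea without pulling in scalaz, here's a minimal hand-rolled Reader (scalaz's is richer; Config and the fields are my illustrative stand-ins for your context):

```scala
// a minimal Reader: a computation that, given an environment R, produces an A
case class Reader[R, A](run: R => A) {
  def map[B](f: A => B): Reader[R, B]                = Reader(r => f(run(r)))
  def flatMap[B](f: A => Reader[R, B]): Reader[R, B] = Reader(r => f(run(r)).run(r))
}

// the context we want to pass around without threading it manually
case class Config(url: String, timeout: Int)

val url:     Reader[Config, String] = Reader(_.url)
val timeout: Reader[Config, Int]    = Reader(_.timeout)

// inside the for-comprehension the context parameter stays implicit
val summary: Reader[Config, String] = for {
  u <- url
  t <- timeout
} yield s"$u ($t ms)"

// the context is supplied once, at the end, when we run the Reader
assert(summary.run(Config("jdbc:h2:mem", 500)) == "jdbc:h2:mem (500 ms)")
```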

    Hi pjan -

    I am not sure what I was thinking. I should not have replied from my phone while traveling with a kid .. You are absolutely correct. The bind is over the Reader, which doesn't fail, and hence we get the failure message from the last call to balance.

    Thanks for pointing this out. I will correct it in the next MEAP - change the example to one w/o the Reader.

    - Debasish
    Let me try to explain again ..

    When you are within the Reader monad, things are handled by the Reader. After the for comprehension, if you invoke op("noNotExists"), you get a Reader - no failure till now since all we have is a computation, which has not yet been evaluated.

    The moment you do op("noNotExists") run repo, you are out of the reader monad and the run combinator asks for the evaluation of the computation, which the Reader was holding on till now. If we think of the Reader as just a Function1, it's actually a function AccountRepository => Try[Balance]. So the moment you run it by giving the argument "repo", it starts the evaluation of the for comprehension.

    And in the first step, the credit method fails, since what credit first tries to do is look up the account no. It fails, and since Try is a monad, it's a fail-fast computation. Hence the entire operation fails, and by the signature of the entire computation, which is AccountRepository => Try[Balance] (the de-sugared version of Reader[AccountRepository, Try[Balance]]), we have a Failure reported with an exception.
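    The deferral can be seen with the Reader modeled as a plain Function1, as described above (Balance, AccountRepository and the lookup are my stubs, not the book's listings):

```scala
import scala.util.{Try, Success, Failure}

case class Balance(amount: Int)
case class AccountRepository(accounts: Map[String, Balance])

// credit returns a "Reader": a function awaiting its repository
def credit(no: String): AccountRepository => Try[Balance] =
  repo => repo.accounts.get(no) match {
    case Some(b) => Success(Balance(b.amount + 10))
    case None    => Failure(new Exception(s"no account: $no"))
  }

// building the computation evaluates nothing -- it's just a function value
val op: AccountRepository => Try[Balance] = credit("noNotExists")

// only applying the repository ("running the Reader") triggers the lookup,
// and Try reports the failure
val repo = AccountRepository(Map("a-123" -> Balance(100)))
assert(op(repo).isFailure)
assert(credit("a-123")(repo) == Success(Balance(110)))
```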

    Hi pjan -

    When I do op("noNotExists") run repo for a non-existing account, I run the Reader monad. When you run the Reader monad, the for-comprehension gets executed and the Failure results. Note that when we do the comprehension, we get a composed Reader over the entire sequence (Reader[AccountRepository, Try[Balance]]), which gets executed in the run. To see the example in action, get the source code from the Github repo (link given on the Manning book page) and try executing as per the gist. Note the return types in each step and the final Failure that we get for a non-existing account.

    Hope it clarifies. Let me know if you need more help.

    I think you meant listing 4.5 .. Thanks for reporting. Will correct ..
    I think you have mixed up the book that you are reading .. This is not from my book - there's no section 1.15 in Functional and Reactive Domain Modeling ..

    Thanks a lot for the feedback. I am aware that the current format is not the most reader-friendly. And I am still experimenting with a few options. I went through your suggestion and it looks quite good. I have also started introducing a few exercises (programming problems), which you will get in the next MEAP.

    Hopefully we will have something that doesn't disturb the flow of the reader.