OOP, DCI and Ruby - what your system is vs. what your system does

If you've read my previous article on the tip of the iceberg with DCI then you might be curious to find out more about it. That article used a very simple example, just enough to make a point, so you're probably left wondering a bit:

  • Why not make some other class to manage this if we want to separate functionality?
  • Is this at odds with the program's "connascence"?
  • And another good question came up in the comments: what about performance?

All of these, and more, are good questions. But first you need to get your head around why you would use a pattern like DCI. What does the approach with Data, Context, and Interaction have to do with Object Oriented Programming?

Your first challenge

You need to unlearn what you know to be OOP.

Not quite all of it, but you need to stop and reconsider what it means to be "object oriented."

It's very likely that you've been building programs that are "class oriented" and not "object oriented." Take a moment and look at your latest project and consider if that's true.

Class Oriented Programming

You might call your program "class oriented" if your classes define both what an object is and what it does. If you need something else to happen to or with that object then what's your answer?

Do you add more methods to its class, or do you make a new class of thing to abstract the management of some task?

If you do either of those, it could be called class oriented.

Think about what this comment from Joe Armstrong might mean in Coders at Work: "The problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle."

Can that be applied to your objects?

Object Oriented Programming

When we program with OOP we care about objects, not only classes.

What DCI attempts to do is separate what the system is from what the system does. With this type of approach you'll get a truer mental model of your program. And your code concepts will be far closer to what you experience in the real world.

"But classes represent what things are! And different things behave in certain ways," you might say. This may, at the outset, seem true, but if you think about the things in the real world, they don't do anything unless they are playing some role.

Objects and Roles

The problem with relying upon classes and inheritance to define object behavior is that the behavior is fixed when the class is defined, and your objects are limited in the actions they can perform.

For example, you might have a Person class that is the basic model representation of a human being. That person is also a student, so you make a Student subclass. Eventually the student needs to get a job to pay for books and you need to make an Employee class. Should employees inherit from students? No, that's not right. Should the student inherit from employees? That's not right either.

Really, what you have is an object, a person, that needs to play many roles. With DCI, the approach is obvious: when the object needs to play a role, it is given that role.
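In Ruby, one way to sketch this is to keep Person a plain data class and mix each role in with extend only when the object needs to play it. The Student and Employee module bodies below are hypothetical stand-ins, not anything from the original:

```ruby
# Hypothetical role modules: behavior lives here, not in Person's class hierarchy
module Student
  def enroll(course)
    "#{name} enrolled in #{course}"
  end
end

module Employee
  def clock_in
    "#{name} clocked in"
  end
end

# Person stays a simple representation of the data: what the system *is*
class Person
  attr_reader :name

  def initialize(name)
    @name = name
  end
end

person = Person.new("Alice")
person.extend(Student)    # now playing the student role
person.extend(Employee)   # and the employee role, with no subclassing
person.enroll("Biology")  # => "Alice enrolled in Biology"
person.clock_in           # => "Alice clocked in"
```

No Student-inherits-from-Employee question ever comes up: the person object simply picks up each role as the situation demands.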

Obvious Code

Yeah, but how often does this happen? Isn't your example contrived?

This type of inheritance problem may not happen often in your programs, but it does happen and you're likely to run into it at some point. And when it does happen what will you do?

I'll make another object to manage the work. Maybe the Person has many tasks and I can just make a TaskPerformer object to handle that.

While making another object to handle behavior may solve the problem with better encapsulation, your code becomes less representative of how the real world actually works. By making your code less like the real world, you make it more difficult for you and others to reason about its functions. And by introducing an abstract object, you've introduced something that doesn't make sense in the real world. Does a person have a task performer, or does a person just perform a task?

The benefit in approaching objects and roles like this is that it makes your code more obvious. In the context of some event, an object is given a role and has methods defined to perform some action. By explicitly assigning roles to objects in a context, your code is instantly more decipherable.

Let's look at a simple example:

    current_user.extend Admin
    current_user.grant_permission(other_user)
    current_user.extend Notifier
    current_user.send_thank_you_to(other_user)

In the example code, we see an object get a role and then perform some action. Looking at that code, one could assume that the modules used to extend the objects define the methods being called.

Compare that code with this:

    current_user.grant_permission(other_user)
    current_user.send_thank_you_to(other_user)

Are those methods defined on the class? Are they in another module that's included in the class? Perhaps. And yet you might need to break out grep to look around for def grant_permission in your project to find out exactly what that method does.

By defining these methods directly in an object's class, we're conflating what the system is with what the system does. By separating the actions into Roles, we're drawing a clean line between our representation of data (what it is) and the use cases that our system is designed to implement (what it does).
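To make that separation concrete, the Admin and Notifier roles from the earlier snippet could live in plain modules. The method bodies here are hypothetical placeholders for illustration; only the module and method names come from the example above:

```ruby
# Plain data: what the system *is* (a hypothetical minimal User)
User = Struct.new(:name, :permissions)

# Roles: what the system *does*
module Admin
  def grant_permission(user)
    user.permissions << :granted  # placeholder for real permission logic
  end
end

module Notifier
  def send_thank_you_to(user)
    "Thanks, #{user.name}!"       # placeholder for real mail delivery
  end
end

current_user = User.new("admin", [])
other_user   = User.new("Bob", [])

current_user.extend Admin
current_user.grant_permission(other_user)

current_user.extend Notifier
current_user.send_thank_you_to(other_user)  # => "Thanks, Bob!"
```

The User definition knows nothing about granting permissions or sending thank-yous; each use case reads as data plus the role it temporarily takes on.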

Separate Object from Role

With Ruby, we can define new methods on an individual object with extend. This gives you the ability to break apart your concerns easily.
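A quick sketch of that property: extend adds a module's methods to one object's singleton class, leaving every other instance (and the class itself) untouched.

```ruby
module Greeter
  def greet
    "hello"
  end
end

a = Object.new
b = Object.new

a.extend(Greeter)      # only this one object gains the method

a.respond_to?(:greet)  # => true
b.respond_to?(:greet)  # => false
```

This per-object scoping is what lets a role apply only where, and while, it's needed.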

Here's something to try: the next time you need to implement a feature begin by writing some pseudo-code. Take the details of what needs to be done and add them as comments. Then gather your objects and assign to them some roles that make sense for what needs to be done.

    # A user submits a form to request permission to access our system
    # If there is an available opening
    team.extend AvailabilityChecker
    if team.available_opening?
      applicant = User.find(params[:id])
      applicant.extend Applicant
      # and the user has completed the application, send it to processing queue
      if applicant.completed_application?
        applicant.prepare_for_acceptance
      else
        # If the user hasn't completed the application, ask them to complete it
        render 'edit', :notice => 'Oops! Please finish this.'
      end
    else
      # If there is no available opening, display a message
      redirect_to :index, :notice => "Bummer, dude! We're all out of space."
    end

The sample I gave is completely unreal and off the top of my head; don't read too much into it. But if you take a glance you'll see that it's pretty obvious where I intend these methods to be defined. It's likely that others on my team would find it obvious too.

Another benefit is that I didn't add any new methods to my user class. My code won't leak into any other feature or test developed by others, and by not adding more methods to the user class, I don't add any overhead to understanding it.

Once I have that pseudo-code that describes the behavior, I can comment it out and start writing tests while I re-implement each tested piece.
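For instance, a first test for the AvailabilityChecker role from the pseudo-code might look like this with Minitest. Everything below (the openings attribute, the Team struct, the method body) is an assumed sketch, not something the pseudo-code specifies:

```ruby
require "minitest/autorun"

# Assumed implementation of the AvailabilityChecker role
module AvailabilityChecker
  def available_opening?
    openings > 0
  end
end

# Assumed minimal data object for the role to attach to
Team = Struct.new(:openings)

class AvailabilityCheckerTest < Minitest::Test
  def test_open_when_openings_remain
    team = Team.new(1).extend(AvailabilityChecker)
    assert team.available_opening?
  end

  def test_closed_when_full
    team = Team.new(0).extend(AvailabilityChecker)
    refute team.available_opening?
  end
end
```

Because the role is just a module, it can be tested against a throwaway struct with no database or framework in sight.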

Try this approach to see whether the experience is good or bad. (Note that this doesn't have to happen in your controller. You might implement it in a separate object, a Context, which coordinates the knowledge of these objects and roles.)
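Here is a minimal sketch of such a Context, with all the specifics (the ProcessApplication name, the openings and application attributes, the role bodies) invented for illustration. It gathers the objects, assigns their roles, and runs the use case end to end:

```ruby
# Assumed role implementations, echoing the pseudo-code above
module AvailabilityChecker
  def available_opening?
    openings > 0
  end
end

module Applicant
  def completed_application?
    !application.nil?
  end

  def prepare_for_acceptance
    self.accepted = true  # placeholder for queueing real processing
  end
end

# Assumed minimal data objects
Team      = Struct.new(:openings)
Candidate = Struct.new(:application, :accepted)

# The Context: gathers objects, assigns roles, runs one use case
class ProcessApplication
  def initialize(team, candidate)
    @team      = team.extend(AvailabilityChecker)
    @candidate = candidate.extend(Applicant)
  end

  def call
    return :no_opening unless @team.available_opening?
    return :incomplete unless @candidate.completed_application?

    @candidate.prepare_for_acceptance
    :accepted
  end
end

ProcessApplication.new(Team.new(1), Candidate.new("essay", false)).call  # => :accepted
```

The controller would then shrink to instantiating the Context and branching on its result, while the roles stay out of the data classes entirely.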

Does it make your code more obvious? Did it make testing any easier? What did others on your development team think when they saw it?