Isolate components for better testing with mockA

Introduction

This blog post is closely connected to the presentation given by Damir Majer and Martin Steinberg during SAP Inside Track 2014 in Munich.

The presentation focuses on solving a code kata using Test-Driven Development (TDD).

The purpose is to show how TDD can lead to better test coverage and thus to more robust and maintainable software components. This blog post focuses not on the solution of the code kata itself, but on how to test each component in isolation using mockA.

The Kata

Summarized, the Kata says:

Implement a simple String calculator class that accepts a flexible amount of numbers in a string. The numbers should be summed up and the sum needs to be returned.

Examples:

  • An empty string returns “0”
  • For single numbers, the number itself will be returned, e.g. for “1”, the sum 1 will be returned
  • For “1,2”, the sum 3 will be returned
  • Multiple delimiters also have to be accepted, e.g. “1;2\3;1” leads to 7
  • This also applies to line breaks like “\n”: “1\n2,3” results in 6
  • Delimiters might have a variable length. “//***\1***2***3\2***2” results in 10
  • Raise an exception in case negative numbers are passed to the method
  • Numbers bigger than 1000 should be ignored

The Kata requires you to implement the code step by step, without skipping steps. Every step should contain

  • A unit test that tests the requirement and will fail at the first run
  • An implementation that covers the requirement
  • A new unit test run that will succeed
  • Refactoring
  • Running the test again to ensure nothing broke
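For the very first step of the kata, such a red/green cycle could start with a test like the following sketch. The class and method names follow the solution described below; the test class layout assumes ABAP Unit with cl_abap_unit_assert.

```abap
CLASS ltc_string_calculator DEFINITION FOR TESTING
  DURATION SHORT RISK LEVEL HARMLESS.
  PRIVATE SECTION.
    METHODS empty_string_returns_zero FOR TESTING.
ENDCLASS.

CLASS ltc_string_calculator IMPLEMENTATION.
  METHOD empty_string_returns_zero.
    DATA lo_calculator TYPE REF TO zcl_string_calculator.
    DATA lv_sum TYPE i.
    CREATE OBJECT lo_calculator.
    " first requirement: an empty string returns 0
    lv_sum = lo_calculator->add( '' ).
    cl_abap_unit_assert=>assert_equals( act = lv_sum exp = 0 ).
  ENDMETHOD.
ENDCLASS.
```

This test fails as long as “add” is not implemented, which is exactly the starting point the kata demands.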

The Solution

The solution class can be found in the attachments (“zcl_string_calculator.txt”).

The class ZCL_STRING_CALCULATOR contains

  • One protected method that replaces all delimiters with a comma (“replace_delimiter_with_comma”)
  • One protected method that sums up the consolidated string (“compute”)
  • One public method to rule them all (“add”)
  • Several attributes

“add” basically delegates the task of replacing all delimiters with commas to a specific protected method and uses its output to sum up the values.
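A minimal sketch of this delegation could look like the following; the parameter names are assumptions for illustration, the actual signatures can be found in “zcl_string_calculator.txt”.

```abap
METHOD add.
  DATA lv_consolidated TYPE string.
  lv_consolidated = iv_numbers.
  " step 1: normalize all supported delimiters to commas
  replace_delimiter_with_comma( CHANGING cv_numbers = lv_consolidated ).
  " step 2: sum up the comma-separated values
  rv_sum = compute( lv_consolidated ).
ENDMETHOD.
```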

The Unit Test report “Unit Test v1.txt” shows the corresponding unit tests.

Isolate helper methods from the add-method

While “replace_delimiter_with_comma” and “compute” are helper methods, the public “add” method delegates its calls to them. Thus, it depends on the helper methods.

At some point it might be helpful to check whether the “add” method works as expected, that is, whether it delegates its calls correctly to the helper methods.

Consider the following unit test, which is not directly part of the code kata, but may ensure code quality:

  • Test the “add” method with the string “<1sd2,3rtrt,4”
  • Ensure that “add” calls “replace_delimiter_with_comma” with “<1sd2,3rtrt,4”
  • The call will return “1,2,3,4”
  • Ensure that “compute” is called with “1,2,3,4”
  • Ensure that the output of “compute” is returned without modification (the result will be 10)

Such a test requires you to subclass ZCL_STRING_CALCULATOR and redefine the helper methods with hard-coded return values for specific inputs. Furthermore, some logic behind “compute” should allow you to verify whether the method has been called with the correct parameters.

The subject of the test will be the subclass of ZCL_STRING_CALCULATOR, which will partly contain so-called fake functionality for “replace_delimiter_with_comma”. But it will also contain some mock features, as “compute” should not only conditionally return values based on its input, but also allow you to determine whether it has been called with the expected input.

mockA allows you to skip this subclassing and lets you focus on the test. It creates the subclass at runtime for you, following constraints such as conditional method output, which you define in the test. It also allows you to verify the method calls of mocked methods.

“Unit Test v2.txt” shows you how to do it. Take a look at the test method “test_add”.

The first call

lo_mocker = zcl_mocka_mocker=>zif_mocka_mocker~mock( 'ZCL_STRING_CALCULATOR' ).
lo_mocker->method( 'replace_delimiter_with_comma'
)->with_changing( '<1sd2,3rtrt,4'
)->changes( '1,2,3,4'
).

tells mockA to fake the method “replace_delimiter_with_comma”, while

lo_mocker->method( 'compute'
)->with( '1,2,3,4'
)->returns( 10
).

tells mockA to fake the output of “compute”.

The subject of the test will be the object generated by mockA (which is in fact a subclass of ZCL_STRING_CALCULATOR):

go_string_calculator ?= lo_mocker->generate_mockup( ).

After the call of “add”, the result is verified in the unit test. Besides this verification, you may also ensure that “compute” has been called correctly with the input value “1,2,3,4”:

DATA lv_has_been_called_correctly TYPE abap_bool.

lv_has_been_called_correctly = lo_mocker->method( 'compute' )->has_been_called_with( '1,2,3,4' ).

assert_not_initial( lv_has_been_called_correctly ).

Further information

You may find further information about the presenters at

damir-majer.com / @majcon

http://about.me/martin.steinberg / @SbgMartin

attachments: https://code.google.com/p/uwekunath-wordpress-com/source/browse/#git%2FCodeKATA

A Vise for ABAP

Introduction

This week I needed to refactor some existing code. This step had become necessary since future developments dealing with this specific part of the code grew more and more complicated. With every new change it took longer and longer to get the expected results.
In addition, the outcome was obviously getting more and more fragile: every change I made caused side effects in other parts of the code which I didn’t expect.
All of these signs typically call for a refactoring session, which I started after some weeks of hesitating.
The first goal of the refactoring was to get the source code under control: I wanted to apply unit tests as fast as possible, in order to afterwards clamp the behavior of the code with various tests.
Once the unit tests were in place, the change itself could be implemented much more safely and robustly. If the unit tests test the right things, they immediately show what went wrong with one of the last changes.
But applying unit tests to existing code which had no unit tests before is often not easy.
The reason for this is dependencies. To give you an example: every time you wrap validation logic around database calls, you couple dependencies too tightly, which means you move far away from a testable architecture.
Consider this example:
METHOD is_open.
  DATA lv_status TYPE z_doc_status.
  SELECT SINGLE status FROM z_doc INTO lv_status WHERE id = mv_id.
  IF lv_status = 'O' OR lv_status = space.
    rv_is_open = abap_true.
  ELSE.
    rv_is_open = abap_false.
  ENDIF.
ENDMETHOD.

To test such a method in a unit test, you would have to insert a corresponding record into the database beforehand, in order to have something to select from within the method implementation.
This method should be refactored before applying a unit test. One possibility is to define a dedicated class for accessing database entries. This kind of class is called a “repository”.
With an existing repository, the method could look like this:
METHOD is_open.
  DATA lv_status TYPE z_doc_status.
  lv_status = mo_doc_repository->get_status( mv_id ).
  IF lv_status = 'O' OR lv_status = space.
    rv_is_open = abap_true.
  ELSE.
    rv_is_open = abap_false.
  ENDIF.
ENDMETHOD.

The repository now takes care of selecting what we are asking for. In unit tests, you can fake this repository with a local class implementation which returns hard-coded values instead of performing database calls. This makes testing easier, more robust and repeatable.
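Such a local fake could look like the following sketch. The interface name zif_doc_repository and the method signature are assumptions for illustration; the point is that no database access happens at all.

```abap
" local test double, defined inside the unit test include
CLASS ltd_fake_doc_repository DEFINITION FOR TESTING.
  PUBLIC SECTION.
    INTERFACES zif_doc_repository.
ENDCLASS.

CLASS ltd_fake_doc_repository IMPLEMENTATION.
  METHOD zif_doc_repository~get_status.
    " hard-coded: every document is reported as open
    rv_status = 'O'.
  ENDMETHOD.
ENDCLASS.
```

The fake is passed to the class under test instead of the real repository, so the test runs without any database records.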
However, we don’t have a unit test yet. This step was just the preparation for applying a unit test to this method. How do we know that the code hasn’t already broken?

Using a vise to ensure that nothing broke

The concept of a vise is not something I invented. I just read Michael Feathers’ blog posts, which dealt with the question of how to ensure that nothing broke during one of the last changes to the source code.
The concept is quite simple: before a change to the code, you record the content of a freely chosen variable. After the change, you ensure that the value stays the same.
With the vise implementation I wrote for ABAP, capturing the value of a specific variable is quite simple: Just call ZCL_VISE=>GRIP( … ) like in this example:
METHOD is_open.
  DATA lv_status TYPE z_doc_status.
  SELECT SINGLE status FROM z_doc INTO lv_status WHERE id = mv_id.
  ZCL_VISE=>GRIP( lv_status ).
  IF lv_status = 'O' OR lv_status = space.
    rv_is_open = abap_true.
  ELSE.
    rv_is_open = abap_false.
  ENDIF.
ENDMETHOD.

Running this code for the first time causes Vise to write the content of the status value to the database. By the way, this also works if you perform a ROLLBACK WORK later on.
Running the code after the change should not cause Vise to throw an exception. But if the value of lv_status had changed, it would. Of course you should make sure to process the same entity, i.e. the same document ID, in order to get the same status.
To run the code, you could either write a unit test with a hard-coded document ID or just run it through an application UI (which is yet another form of a test, although not the same as a fine-grained unit test):
METHOD is_open.
  DATA lv_status TYPE z_doc_status.
  lv_status = mo_doc_repository->get_status( mv_id ).
  ZCL_VISE=>GRIP( lv_status ).
  IF lv_status = 'O' OR lv_status = space.
    rv_is_open = abap_true.
  ELSE.
    rv_is_open = abap_false.
  ENDIF.
ENDMETHOD.
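To drive this method with a reproducible input, a throwaway test with a hard-coded document ID could look like the following sketch (the class name ZCL_DOC and its constructor parameter are assumptions):

```abap
METHOD test_is_open_unchanged.
  DATA lo_doc TYPE REF TO zcl_doc.
  " hypothetical: document 4711 exists in the development system
  CREATE OBJECT lo_doc EXPORTING iv_id = '4711'.
  " the GRIP( ) call inside is_open( ) records the status on the
  " first run and throws on later runs if the value changed
  lo_doc->is_open( ).
ENDMETHOD.
```

Remember that this test, like the Vise call it exercises, is scaffolding and should be removed once the real unit tests are in place.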

After you have figured out that nothing has changed and the behavior of the code stayed the same, you can apply tests. As I mentioned, applying unit tests fixes the specification of your application’s behavior and makes future changes to the code easier and safer, hence less costly.
Do not forget to remove the calls to Vise after the changes. As a development tool, Vise is not intended to be shipped in productive code!

Specific considerations on ABAP

In ABAP you have work areas and internal tables, besides objects and primitive data types.
Vise can also track the contents of internal tables and work areas. In addition, it allows you to skip certain fields when validating against previous recordings.
All you have to do is specify the components which are to be ignored. You can pass this information with the GRIP( ) call:
DATA lt_ignore TYPE string_table.
APPEND 'PRICE' TO lt_ignore.
ZCL_VISE=>GRIP( i_data = lt_sflight it_ignore = lt_ignore ).

Tracking object contents is not yet supported. Feel free to join the project and implement such a comparison e.g. for all public get-methods and attributes.

Where to get Vise

Vise is currently hosted on GitHub.

Links

Vise for Java
How to write trustworthy and effective unit tests in general
How to bring legacy code under test coverage

Stories about Repositories and Number Ranges

Introduction

This week, I faced a situation where I needed to implement a new business object for one of our applications. As business objects usually need persistence, I created a new repository for it to handle the database access. This repository covered the standard database functions Create, Read, Update and Delete (CRUD).
While implementing the repository, the question came up of what should happen to new business objects during the Create operation, specifically who should create a new document number. I decided to request new numbers for new business objects within the SAVE method of the repository, and then the trouble started to grow…

Bad design

The first design was a mess:
v1
In this fictitious example, ZCL_BOOKING represents an entity, a booking record, which will be saved to the database.
ZCL_BOOKING_REPOSITORY takes care of saving this entity correctly, along with some more functions like retrieving entities from the database or deleting them.
The SAVE method looked like this:
CLASS ZCL_BOOKING_REPOSITORY IMPLEMENTATION.
...
  METHOD ZIF_BOOKING_REPOSITORY~save.
    DATA lv_bookid TYPE S_BOOK_ID.
    DATA ls_header TYPE bookings.
    lv_bookid = io_booking->get_bookid( ).
    ls_header = io_booking->get_booking( ).
    IF lv_bookid IS INITIAL.
      CALL FUNCTION 'NUMBER_GET_NEXT'
        EXPORTING
          nr_range_nr             = '01'
          object                  = 'ZFL_BOOKID'
        IMPORTING
          number                  = lv_bookid
        EXCEPTIONS
          interval_not_found      = 1
          number_range_not_intern = 2
          object_not_found        = 3
          quantity_is_0           = 4
          quantity_is_not_1       = 5
          interval_overflow       = 6
          buffer_overflow         = 7
          OTHERS                  = 8.
      " implement proper error handling here...
      io_booking->set_bookid( lv_bookid ).
      CALL FUNCTION 'Z_FM_INSERT_BOOKINGS' IN UPDATE TASK
        EXPORTING
          iv_bookid = lv_bookid
          is_header = ls_header.
    ELSE.
      CALL FUNCTION 'Z_FM_UPDATE_BOOKINGS' IN UPDATE TASK
        EXPORTING
          iv_bookid = lv_bookid
          is_header = ls_header.
    ENDIF.
  ENDMETHOD.
...
ENDCLASS.

Running out of time? I’M RUNNING OUT OF NUMBERS!!!

Everything went well until I started to write a unit test for the SAVE method.
I wrote a test which created a new instance of type ZCL_BOOKING without any BOOKID and expected the instance to be inserted into the database. This worked pretty well, but when I tried to implement the TEARDOWN method I ran into issues: unit tests need to be repeatable as often as you like, and they should leave the system in the same state it started from. This means the inserted record needed to be deleted.
As the SAVE method requests a new number for each object that does not yet have an ID, I don’t really know which ID has just been inserted.
I could have asked the instance of type ZCL_BOOKING which ID had been set for it. This would at least solve the issue that I need to clean up the database after the test insert.
But, more severely, the current number of the number range interval increased by one with each unit test run. This was not acceptable.
So the unit test revealed the bad design: in fact, the repository had a dependency on a number range object, which it should not care about.

Refactored design

This step introduces a new class to the design, ZCL_NUMBER_RANGE_REQUEST. It implements an interface ZIF_NUMBER_RANGE_REQUEST, which is now used by ZCL_BOOKING_REPOSITORY to handle its number range requests.
The number range object is created before ZCL_BOOKING_REPOSITORY is instantiated, in order to hand it over to the constructor of the repository.

v2

The result is that instead of creating new document numbers on its own, the repository asks another object for them.
This has a huge benefit: as the number range object is specified by an interface, we can fake this interface in a unit test and pass the fake to the repository’s constructor. The fake object of course does not request a real number from a real number range, but returns “1” all the time.

Unit Test Setup

So this is what the new implementation of the classes looks like:
CLASS ZCL_BOOKING_REPOSITORY IMPLEMENTATION.
...
  METHOD ZIF_BOOKING_REPOSITORY~save.
    DATA lv_bookid TYPE S_BOOK_ID.
    DATA ls_header TYPE bookings.
    lv_bookid = io_booking->get_bookid( ).
    ls_header = io_booking->get_booking( ).
    IF lv_bookid IS INITIAL.
      lv_bookid = mo_number_range->get_next_number( ).
      io_booking->set_bookid( lv_bookid ).
      CALL FUNCTION 'Z_FM_INSERT_BOOKINGS' IN UPDATE TASK
        EXPORTING
          iv_bookid = lv_bookid
          is_header = ls_header.
    ELSE.
      CALL FUNCTION 'Z_FM_UPDATE_BOOKINGS' IN UPDATE TASK
        EXPORTING
          iv_bookid = lv_bookid
          is_header = ls_header.
    ENDIF.
  ENDMETHOD.

  METHOD constructor.
    mo_number_range = io_number_range.
  ENDMETHOD.
ENDCLASS.

The implementation of the number range object requests the next number using the standard function module. The input parameters for this function module have been provided in the CONSTRUCTOR of the object.

CLASS ZCL_NUMBER_RANGE_REQUEST IMPLEMENTATION.
...
  METHOD get_next_number.
    CALL FUNCTION 'NUMBER_GET_NEXT'
      EXPORTING
        nr_range_nr             = mv_nrnr
        object                  = mv_nrobj
      IMPORTING
        number                  = rv_number
      EXCEPTIONS
        interval_not_found      = 1
        number_range_not_intern = 2
        object_not_found        = 3
        quantity_is_0           = 4
        quantity_is_not_1       = 5
        interval_overflow       = 6
        buffer_overflow         = 7
        OTHERS                  = 8.
    " implement proper error handling here...
  ENDMETHOD.
ENDCLASS.

How can I really get a fake?

Fake objects can be created using local classes in the unit test report. As an alternative, mocking frameworks help to automate this task by providing a declarative API.
In the real-life use case I created the fake object using a mocking framework with this call:
mo_number_range_request ?= /leos/cl_mocker=>/leos/if_mocker~mock( '/leosb/if_number_range_request' )->method( 'GET_NEXT_NUMBER' )->returns( 1 )->generate_mockup( ).
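As a sketch, the local-class alternative could look like this; the fake simply implements the interface ZIF_NUMBER_RANGE_REQUEST and returns a constant:

```abap
CLASS ltd_fake_number_range DEFINITION FOR TESTING.
  PUBLIC SECTION.
    INTERFACES zif_number_range_request.
ENDCLASS.

CLASS ltd_fake_number_range IMPLEMENTATION.
  METHOD zif_number_range_request~get_next_number.
    " deterministic: no real number range interval is consumed
    rv_number = 1.
  ENDMETHOD.
ENDCLASS.
```

Either way, the repository under test receives the fake through its constructor and never touches the real number range object.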

I hate puzzles

This kind of architecture eventually leads to tons of classes, each having its own responsibility. In a real-life application you would need to set up the repository instances and their dependencies, at least in some specific method at startup:
CREATE OBJECT lo_number_range TYPE ZCL_NUMBER_RANGE_REQUEST.
CREATE OBJECT lo_repository TYPE ZCL_BOOKING_REPOSITORY EXPORTING io_number_range = lo_number_range.

IoC containers help you manage these dependencies by allowing you to register a specific class for each interface in customizing. Their purpose is to resolve all dependencies of a root object, such as a repository, when it is requested by the application. The container creates this root object and hands it back to the caller with just one single line of code.

Related links

IoC Container
Mocking Framework

Namespace Refactoring in ABAP

Refactoring towards other namespaces

In the past few years, I was involved in several projects which required moving an existing ABAP application to a new namespace. There might be several reasons for that, but in most cases an application that had formerly been written in the Z* namespace needed to be moved to an SAP partner or customer namespace, which starts with “/”.
As of release 7.02, SAP provides no automatic solution for namespace refactoring. However, with the right tools and some experience you will be able to convert at least the most important development objects in a semi-automatic way.

Development objects in scope

This blog post deals with the conversion of

  • DDIC-Objects: domains, data elements, structures, tables, table types, views and F4-Helps
  • Report sources: reports, function groups, classes
  • UI components: Dynpros and Webdynpro components

Tools

For a namespace refactoring you need the following tools:

  • Advanced Find and Replace
  • Notepad++
  • SAPLink with installed plugins

Get an overview of the object list

Before you start, figure out which packages are involved in the application you would like to convert.
Get a complete list of all development objects that need to be converted, either by selecting the objects from the TADIR table (provide attribute DEVCLASS as the package name) or by choosing the option “Add Objects from a Package” in SAPLink.
Having a complete object list is the first step towards a renaming matrix. Copy all objects to an Excel sheet. Do not forget to also manually include function modules, since they do not appear in TADIR or SAPLink’s package overview.

Building the renaming matrix

Namespace refactorings are not successful if you try to replace only the prefix of all your development objects in the object list.
Once you have the complete object list from the previous step, start to manually rename the objects in a separate column of the Excel sheet.
Pay attention to:

  • No generic string replacement fits all needs! Instead, replace objects from the object list one by one – try to avoid renaming them with a generic replacement like ZWM -> /LEOS/. Otherwise this will screw up your or SAP’s naming conventions for development objects
  • Do not forget to also rename function modules one by one – as they do not have their own object entry in SAPLink or TADIR, you will need to include them manually in the renaming matrix
  • Naming conventions for development objects like function group includes may change: e.g. LZWMFG_TA turns into /LEOS/SAPLFG_TA
  • Include interfaces of webdynpro components in the renaming matrix. Even if they are not relevant for extraction later on, you might have used them in interface controller declarations in your source code, for example
  • DATA lo_interfacecontroller TYPE REF TO ziwci_wdyn_test.
    lo_interfacecontroller = wd_this->wd_cpifc_users( ).

  • If you are not sure what the name of the generated target Webdynpro component interface will be, correct these syntax errors in the postprocessing step

Do Not:

  • Rename function group includes or sources directly if you have no strong knowledge of the naming conventions – in this case, create an empty function group in the target namespace with all its includes and function modules in order to see what the naming conventions are
  • Violate typical workbench object name restrictions, such as their length

The result should look like this:
Renaming Matrix
Extract your renaming matrix to a CSV file, as this is the required input for a batch replacement.
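A few hypothetical rows of such a CSV file might look like this – old name, new name, one pair per line (the object names are made up for illustration, apart from the function group include example mentioned above):

```
ZCL_WM_BOOKING,/LEOS/CL_BOOKING
ZIF_WM_BOOKING,/LEOS/IF_BOOKING
LZWMFG_TA,/LEOS/SAPLFG_TA
```

The batch replacing tool then applies each pair over all nugget files in the workspace folder.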

Extract your development objects

SAPLink helps you to extract all development objects you want to convert. Just use the option “Add Objects from a Package” and extract each package to its own nugget file – this helps you keep a good overview of what has already been extracted and what has not.
Do Not:

  • extract Webdynpro component interfaces, since they are automatically regenerated (they have *IWCI_* in their names)
  • extract ICF nodes that have been generated for Webdynpro applications. Just like Webdynpro component interfaces, they are recreated automatically for you in the target namespace
  • extract function groups for generated maintenance views – regenerate them based on the objects in the target namespace instead

Rename the objects

Copy all the nugget files to another workspace folder.
Start Advanced Find and Replace or a similar batch replacing tool. Include *.nugg files in your file mask, upload the CSV renaming matrix and set the workspace folder which contains the copies of your nugget files.
Batch Replace

Execute the replacing loop.
Take a look at the new nugget files using Notepad++ and search for prefixes in the old namespace. If you find some, restore the nugget files in the workspace folder from the extracted versions, update your renaming matrix and restart the replacing loop. Repeat this step as often as needed, until you find no more prefixes in the old namespace.

Import the new nugget files

SAPLink helps you to import your new nugget files into the system. Pay attention to error messages that may arise because of length violations or naming conventions. Usually, SAPLink restores your objects to the $TMP package or asks for a target package.
Rebuild the object lists of the local objects package and the target packages to see all the created development objects.
Assign the objects that are assigned to the $TMP package to the target package.
Finally, activate them. Start with DDIC objects, then function modules and classes, and end with UI components such as Webdynpro components or reports that contain dynpros.

Postprocessing

Postprocessing will always be needed. If you followed this approach, you will probably need to

  • manually translate class-based texts in the new namespace
  • manually translate exception-class-based texts in the new namespace
  • regenerate maintenance views and view clusters
  • fix syntax errors caused by development objects that are generated when their main development object is created and that you missed in your renaming matrix. E.g. Webdynpro components have their own generated interface; wrong interface controller declarations in your Webdynpro component’s source code may need to be adjusted if you missed replacing the interface definition with its new version

Effort

To give you a feeling of how fast or slow such a namespace refactoring can be done, here is the experience from past refactorings:

  • one package and 55 development objects (in terms of 55 replacing pairs in the renaming matrix) took half a day for one developer
  • 6 packages with about 1100 development objects took four days for one developer

As you can see, the effort does not increase proportionally with the number of development objects. This is because the initial effort needed to get an overview of all the development objects depends more on the number of packages than on the number of development objects.
Building the renaming matrix may correlate with the number of development objects, but this is usually not the main issue. Most of the time is spent in the batch replacing procedure, which is repeated iteratively until no old-namespace object is found in the converted nugget files. The SAPLink import procedure usually also takes some time, as some development objects might have names that are too long for a specific object type. In this case you have to update your renaming matrix and redo the batch replacing procedure.
However, once you finalize the process, chances are high that you have caught all the objects and converted them consistently, without gaps.