No comment!

Are Comments overrated?

Comments are a wonderful tool to document what you intended with certain program logic, and you can place them directly where that logic resides: in the source code. As comments are non-functional, they are not compiled and hence not executed. However, I have the feeling that we should sometimes really think about how to use comments and how not to use them. These points are neither fundamentally new nor innovative to developers, but I still have the feeling that they are among the most commonly violated principles.

Antipattern #1 – Comments do not replace program structure

Usually, developers should tell the reader of their code why they are doing something, not what they are doing. Before you start a comment describing what the next few lines of code do, stop and think about it.
Consider the following example. What do you think would be easier to read?

*   parse header from excel
lv_row = 2.
lv_column = zcl_excel_common=>convert_column2int( lv_range_start_column ).
lv_current_column = 0.

WHILE lv_column <= lv_highest_column.
  lv_col_str = zcl_excel_common=>convert_column2alpha( lv_column ).

  io_excel_worksheet->get_cell(
    EXPORTING
      ip_column = lv_col_str
      ip_row    = lv_row
    IMPORTING
      ep_value  = lv_value ).

  lv_current_column = lv_current_column + 1.
  READ TABLE lt_fcat INTO ls_fcat INDEX lv_current_column.

  ASSIGN COMPONENT ls_fcat-fieldname OF STRUCTURE ls_scale TO <lv_field>.
  CHECK sy-subrc = 0.

  CASE ls_fcat-inttype.
    WHEN 'D'.
      <lv_field> = zcl_excel_common=>excel_string_to_date( lv_value ).
    WHEN OTHERS.
      <lv_field> = lv_value.
  ENDCASE.

  lv_column = lv_column + 1.
ENDWHILE.

lv_column = zcl_excel_common=>convert_column2int( lv_range_start_column ).
lv_row = lv_row + 1.

*             …Some more coding…

lv_highest_row = io_excel_worksheet->get_highest_row( ).

*   parse items from excel
WHILE lv_row <= lv_highest_row.
  lv_col_str = zcl_excel_common=>convert_column2alpha( lv_column ).

  io_excel_worksheet->get_cell(
    EXPORTING
      ip_column = lv_col_str
      ip_row    = lv_row
    IMPORTING
      ep_value  = lv_value ).

  lv_catyp = lv_value.

  lv_column = 1.
  lv_col_str = zcl_excel_common=>convert_column2alpha( lv_column ).

  io_excel_worksheet->get_cell(
    EXPORTING
      ip_column = lv_col_str
      ip_row    = lv_row
    IMPORTING
      ep_value  = lv_value ).

*                           some more coding…

  lv_row = lv_row + 1.
ENDWHILE.

Basically, the comments tell me what the next few lines of code will do. But there is no benefit: you do not immediately understand what the code does, whether comments are present or not.
Even worse, more comments would not make the code any clearer. Furthermore, as you move code to another part of your program, the comments may lose their relationship to the code, since they are non-functional and not taken into account by the compiler. This eventually leads to comments that are, at best, misleading.
Consider this refactored example:
parse_header_from_excel( io_excel_worksheet = io_excel_worksheet io_scale = ro_scale ).
parse_items_from_excel( io_excel_worksheet = io_excel_worksheet io_scale = ro_scale ).

Actually, replacing comments with dedicated, well-named methods does not reduce complexity, but it increases readability dramatically.
By the way, that’s why I think that code scanners which measure the ratio of comments to source code are useless: the goals they claim to address cannot be addressed by such simple measurements.
Normally, I try to produce code which has no comments in the middle of a logical program unit such as a method. If there are comments, they are at the top of the code, or nowhere.

Antipattern #2 – Commented code

Code that becomes useless over time reaches the end of its lifecycle. Usually, most developers simply comment this piece of code out. But how often do they ever read this program logic again?
I personally do not read it, as it may lose its relationship to the surrounding source code over time; my eyes just skip over these comments. Whenever I need to delete some code that I might miss in the future, I generate a new version of the ABAP source code and then delete the code. Version control systems are meant for exactly this kind of task. When I miss something in the code, I take a look at the versions, which immediately allows me to compare two versions by the changes that have been applied in the meantime.


Enqueues and Update Tasks

What SAP doesn’t tell us

I thought SAP was pretty clear in what they tell us about the _SCOPE parameter that can be passed to every enqueue function module.
The parameter can have three values, which have the following meaning according to the documentation:

Meaning of the _SCOPE Values

Value Description
_SCOPE = 1 The lock belongs only to the dialog owner (owner_1), and therefore only exists in the dialog transaction. The DEQUEUE call or the end of the transaction, not COMMIT WORK or ROLLBACK WORK, cancels the lock.
_SCOPE = 2 The lock belongs to the update owner (owner_2) only. Therefore, the update inherits the lock when CALL FUNCTION ‘…‘ IN UPDATE TASK and COMMIT WORK are called. The lock is released when the update transaction is complete. You can release the lock before it is transferred to the update using ROLLBACK WORK. COMMIT WORK has no effect, unless CALL FUNCTION ‘…‘ IN UPDATE TASK has been called.
_SCOPE = 3 The lock belongs to both owners (owner_1 and owner_2). In other words, it combines the behavior of both. This lock is canceled when the last of the two owners has released it.


_SCOPE Parameter (SAP Library – Architecture Manual)
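
To make the scope explicit in code: the _SCOPE value is simply passed to the generated enqueue function module along with the lock arguments. The following sketch uses a hypothetical lock object and parameter names for illustration:

```abap
CALL FUNCTION 'ENQUEUE_EZ_MY_LOCK'    "hypothetical lock object
  EXPORTING
    mandt          = sy-mandt
    keyfield       = lv_key
    _scope         = '2'              "lock belongs to the update owner only
  EXCEPTIONS
    foreign_lock   = 1
    system_failure = 2
    OTHERS         = 3.
```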

The most interesting statement is the description for _SCOPE = 2: the lock is released when the update transaction is complete.
So I understood that COMMIT WORK would implicitly release the lock. As I found out, this is not necessarily the case.

The test report

Consider the following test report.
The report requests a lock for an ABAP report, using '1' as a dummy lock argument. Keep in mind that Z_UPDATE_TASK_DUMMY is an update function module that has no execution logic in my case.

DATA lt_enq TYPE TABLE OF seqg3.
FIELD-SYMBOLS <ls_enq> TYPE seqg3.

CALL FUNCTION 'ENQUEUE_ESRDIRE'
  EXPORTING
    name           = '1'
    _scope         = '2'
  EXCEPTIONS
    foreign_lock   = 1
    system_failure = 2
    OTHERS         = 3.
IF sy-subrc <> 0.
  MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
          WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.

IF pa_upd = abap_true.
  CALL FUNCTION 'Z_UPDATE_TASK_DUMMY' IN UPDATE TASK.
ENDIF.

COMMIT WORK AND WAIT.

CALL FUNCTION 'ENQUEUE_READ'
  EXPORTING
    gclient               = sy-mandt
    guname                = sy-uname
  TABLES
    enq                   = lt_enq
  EXCEPTIONS
    communication_failure = 1
    system_failure        = 2
    OTHERS                = 3.
IF sy-subrc <> 0.
  MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
          WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.

IF lt_enq IS INITIAL.
  WRITE 'No locks existing'.
ELSE.
  LOOP AT lt_enq ASSIGNING <ls_enq>.
    WRITE: / <ls_enq>-gname,
             <ls_enq>-garg.
  ENDLOOP.
ENDIF.

The report offers a parameter PA_UPD to the user. If it is checked, the update function module will be called; if not, no update task is registered. Afterwards, COMMIT WORK AND WAIT is always triggered.
In the end, the report shows any locks that the current user has.

The ugly truth

If the report is called with the enabled checkbox, the dummy function module will be called. In the end, all locks are released.
No Locks existent

If the report does not execute any update function module (checkbox deactivated), the locks are not released at all. COMMIT WORK AND WAIT has no effect here, as no update task has been registered. I also included the output of SM12 in the screenshot.
No Locks released

The moral of the story

Locks of the update owner are not necessarily released when COMMIT WORK has been called; it depends on whether or not an update task has been registered at all. Nothing is wrong with the description of _SCOPE = 2, but someone who reads it might get a wrong idea of how ABAP works in this very case. It took me some time to figure out what happens here, so I hope this blog post helps other developers save some time.
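
A pragmatic consequence of this finding: if your code relies on COMMIT WORK releasing _SCOPE = 2 locks, you can always register a dummy update function module (such as the empty Z_UPDATE_TASK_DUMMY used in the test report above) so that an update transaction actually runs. A minimal sketch:

```abap
" ensure an update task exists so that COMMIT WORK actually triggers
" update processing and thereby releases the _SCOPE = 2 locks
CALL FUNCTION 'Z_UPDATE_TASK_DUMMY' IN UPDATE TASK.
COMMIT WORK AND WAIT.
```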

IoC implemented


Yesterday we had an interesting discussion about the following requirement: there is an application which consists of a webdynpro UI and a backend model. The backend model is described by an interface ZIF_SOME_BACKEND_MODEL which is implemented by specific classes ZCL_BACKEND_MODEL_A and ZCL_BACKEND_MODEL_B. Backend A and backend B perform their tasks quite differently; however, the UI does not really care about this fact, as it only uses the interface ZIF_SOME_BACKEND_MODEL.
IoC implemented - Initial situation

Model A and B may be switched depending on some constraints which can be defined by the user but which we are not discussing here.
In case the application uses backend model A, everything is okay and the application does what it is expected to do.
But: in case ZCL_BACKEND_MODEL_B does its work, an output stating e.g. that the backend data is derived from an external system should be shown as an info message in webdynpro.

This simple requirement led to two approaches.

Approach #01

The first approach is to add specific methods which allow the UI to get some kind of message from the backend.
IoC implemented - Approach 01
This means that method GET_INFO_MESSAGE( ) needs to be called by the UI, and depending on whether or not a message has been specified, the message has to be displayed.
This is quite a simple approach.

Disadvantage of approach #1

But there is also a disadvantage to this approach: by extending the interface with the GET_INFO_MESSAGE method, we introduce a new concern to the application which is completely useless for backend model A. Only backend model B will ever provide messages to its caller (which is the UI).
Even worse: every implementer of ZIF_SOME_BACKEND_MODEL will have to provide at least an empty implementation of that method to avoid runtime exceptions when a non-implemented method is called.
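
A sketch of what this means for backend model A (the interface body and method shown here are illustrative, not the actual project code):

```abap
INTERFACE zif_some_backend_model.
  METHODS do_work.                               "illustrative placeholder
  METHODS get_info_message
    RETURNING VALUE(rv_message) TYPE string.
ENDINTERFACE.

" backend model A is forced to carry a meaningless stub:
METHOD zif_some_backend_model~get_info_message.
  CLEAR rv_message. "nothing to report - empty on purpose
ENDMETHOD.
```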

Approach #2

The second approach makes use of the Inversion of Control principle. Instead of asking the backend for a message, the UI hands the backend its message manager and lets the backend report whatever it considers newsworthy.
How does it work? The most crucial difference is the existence of a new interface ZIF_I_AM_COMMUNICATIVE. It is implemented only by Backend Model B.
IoC implemented - Approach 02

What happens in the application? The UI tries to downcast the backend model to an instance of type ZIF_I_AM_COMMUNICATIVE. If backend model A is currently used in the application, the downcast will fail and the exception CX_SY_MOVE_CAST_ERROR can be caught silently.
In case the downcast succeeds, the webdynpro message manager of the UI component will be transferred to the backend model.
This gives backend model B the opportunity to show a warning or info message in the UI informing the user that backend model B is active. This can happen at a point in time controlled by the backend, not by the UI.
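
In the UI, the downcast could look roughly like this (the variable names and the setter method are assumptions for illustration):

```abap
DATA lo_communicative TYPE REF TO zif_i_am_communicative.

TRY.
    " only backend model B implements ZIF_I_AM_COMMUNICATIVE
    lo_communicative ?= mo_backend_model.
    lo_communicative->set_message_manager( lo_message_manager ).
  CATCH cx_sy_move_cast_error.
    " backend model A: not communicative, nothing to do
ENDTRY.
```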

Disadvantage of approach #2

But approach #2 also has its disadvantages. By introducing a webdynpro interface to the backend, we not only have a dependency of the UI on the backend, but also a dependency in the reverse direction, at least as far as ZCL_BACKEND_MODEL_B is concerned.
Usually you should avoid having such kind of dependencies as they might imply problems when working with dependency tracking techniques such as IoC Containers.
Also, the coupling between backend model B and the webdynpro runtime is increased by introducing the interface IF_WD_MESSAGE_MANAGER to the backend class.
To avoid these issues you might consider wrapping IF_WD_MESSAGE_MANAGER inside another, more abstract class, e.g. a generic message manager interface whose implementers may also work with classic dynpro screens. Instead of the webdynpro message manager, an instance of this wrapper class would then be injected into the backend. However, such an approach might be over-engineering in the first step.
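
Such a wrapper could be as small as a generic interface (a sketch; the interface and method names are made up):

```abap
INTERFACE zif_message_manager.
  METHODS report_info
    IMPORTING iv_text TYPE string.
ENDINTERFACE.

" one implementer delegates to IF_WD_MESSAGE_MANAGER,
" another one could use classic dynpro messages instead
```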

What to do?

We decided to go for approach #2 as it was the least invasive and the most flexible one. We might consider implementing the refactoring mentioned in the disadvantages section of approach #2 in the future. However, the approach works like a charm right now.

mockA tutorial – How to create mocks

The previous blog post dealt with the question of how fakes can be created using the new open source mocking framework mockA.
Fakes are mocked instances that return specific values when certain methods are called. In comparison to mocks, they do not validate whether certain methods have been called.
The creation of fakes does not differ from the creation of mocks when you use mockA. Take a look at the setup routine of the unit test report ZTEST_CL_MOCKA_FLIGHT_OBSERVER that ships with the framework.
In this unit test, a sample flight observer is to be tested. The flight observer depends on two other components: an object that provides flight information and an object that processes alerts on specific late flights. The flight observer is supposed to determine the delay status of some sample flight data and conditionally delegate the creation of alerts to the alert processor. While the flight information object serves as a fake only, as it returns only sample data for specific flights (that is, method inputs), the alert processor serves as a mock object that is subject to unit test assertions.
Please note that MO_ALERT_PROCESSOR_MOCKER is a member attribute which is kept for later use. The created mock object does not need to return any values; the object is only created, and no method output is specified.
** Member attributes
* DATA mo_alert_processor_mocker TYPE REF TO ZIF_MOCKA_MOCKER.

* create an empty alert backend (we just need to track the number of method calls)
mo_alert_processor_mocker = ZCL_MOCKA_MOCKER=>ZIF_MOCKA_MOCKER~mock( iv_interface = 'ZIF_MOCKA_FLIGHT_ALERT_PROCESS' ).
* this call creates the alert processor mockup
mo_alert_processor ?= mo_alert_processor_mocker->generate_mockup( ).

Whether a certain method has been invoked on the mock object MO_ALERT_PROCESSOR can be checked later by the following call on its creator.
DATA lv_has_been_called TYPE abap_bool.
lv_has_been_called =
mo_alert_processor_mocker->has_method_been_called( 'ALERT_DELAY' ).

This kind of verification is usually needed in the assert-section of the unit test. Alternatively, you can access the exact method call count by calling this method:
DATA lv_method_call_count TYPE i.
lv_method_call_count =
mo_alert_processor_mocker->method_call_count( 'ALERT_DELAY' ).

And that’s it. Besides verifying the method call count, no other validation can be carried out against mock objects yet. In future releases, verification that a specific method has been called with specific values could be implemented. This is one of the top-priority functional gaps of the framework right now.

mockA tutorial – How to create fakes


mockA, an open source ABAP mocking framework, has been released recently. Today’s blog post gives you a brief introduction to the features of mockA.
Mocking frameworks are usually used in unit tests. Their main task is to create test double instances more easily than, e.g., manually writing local test classes which implement an interface on which the system under test depends.
This tutorial will show you how to create so-called “fakes”. Fakes show some kind of behavior when called. In contrast, “mocks” usually also verify that some interaction with them took place. Mocks will be the subject of a future blog post.
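
For comparison, a hand-written local test double for the interface used later in this tutorial might look like the sketch below (the name of the returning parameter of GET_DELAY is an assumption):

```abap
CLASS ltd_is_in_time_info DEFINITION FOR TESTING.
  PUBLIC SECTION.
    INTERFACES zif_mocka_is_in_time_info.
ENDCLASS.

CLASS ltd_is_in_time_info IMPLEMENTATION.
  METHOD zif_mocka_is_in_time_info~get_delay.
    r_delay = 30. "hard-coded sample value
  ENDMETHOD.
ENDCLASS.
```

mockA spares you this boilerplate by generating such classes at runtime.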

Where to get mockA

You can download the mocking framework from Github.
Install the daily build using the latest SAPLink release.
If you like it, feel free to participate in the development of the tool.

Mocking methods with returning parameters

Interface ZIF_MOCKA_IS_IN_TIME_INFO is the interface to be mocked in this tutorial. It ships with the release of mockA. Please note that no global class implements this interface. Nevertheless, mockA will generate local classes at runtime which allow us to create objects that can eventually be called within the same report.
Before we can create a mock object, we need to tell mockA which interface is subject to mocking:
DATA lo_mocker TYPE REF TO zif_mocka_mocker.
lo_mocker = zcl_mocka_mocker=>zif_mocka_mocker~mock( 'zif_mocka_is_in_time_info' ).

In the second step, we need to tell the mocker which method is to be mocked. Note that you can use the fluent API by directly telling the mocker which input should lead to which method output:
lo_mocker->method( 'GET_DELAY' )->with( i_p1 = 'LH' i_p2 = '123' i_p3 = '20131022' )->returns( 30 ).

Finally, the local class implementing ZIF_MOCKA_IS_IN_TIME_INFO is generated and the mock object is instantiated.
DATA lo_is_in_time_info TYPE REF TO zif_mocka_is_in_time_info.
lo_is_in_time_info ?= lo_mocker->generate_mockup( ).

Calling the mock object with some registered method input will return the specified output.
DATA lv_delay TYPE int4.
"will return lv_delay = 30
lv_delay = lo_is_in_time_info->get_delay( iv_carrid = 'LH' iv_connid = '123' iv_fldate = '20131022' ).

Please note that registering the same method call pattern twice leads to different method outputs when the method is called multiple times with parameters that fit the registered pattern:
lo_mocker = zcl_mocka_mocker=>zif_mocka_mocker~mock( 'zif_mocka_is_in_time_info' ).
lo_mocker->method( 'GET_DELAY' )->with(
i_p1 = 'LH' i_p2 = '123' i_p3 = '20131022'
)->returns( 30 ).
lo_mocker->method( 'GET_DELAY' )->with(
i_p1 = 'LH' i_p2 = '123' i_p3 = '20131022'
)->returns( 15 ).

lo_is_in_time_info ?= lo_mocker->generate_mockup( ).
"will return lv_delay = 30
lv_delay = lo_is_in_time_info->get_delay(
iv_carrid = 'LH' iv_connid = '123' iv_fldate = '20131022' ).
"will return lv_delay = 15
lv_delay = lo_is_in_time_info->get_delay(
iv_carrid = 'LH' iv_connid = '123' iv_fldate = '20131022' ).

Mocking methods with exporting parameters

EXPORTING parameters can be returned by using the EXPORTS method:
lo_mocker = zcl_mocka_mocker=>zif_mocka_mocker~mock(
'zif_mocka_is_in_time_info' ).
lo_mocker->method( 'GET_BOTH' )->with(
i_p1 = 'LH' i_p2 = '123' i_p3 = '20131023'
)->exports( i_p1 = 2 i_p2 = abap_true ).
lo_is_in_time_info ?= lo_mocker->generate_mockup( ).

"will return lv_delay = 2, lv_is_in_time = 'X'
iv_carrid = 'LH'
iv_connid = '123'
iv_fldate = '20131023'
ev_delay = lv_delay
ev_is_in_time = lv_is_in_time

Mocking methods with changing parameters

CHANGING parameters can serve both as input and output:
lo_mocker = zcl_mocka_mocker=>zif_mocka_mocker~mock( 'zif_mocka_is_in_time_info' ).

lo_mocker->method( 'IS_IN_TIME_BY_CHANGING_PARAM' )->with(
i_p1 = 'LH' i_p2 = '123' )->with_changing( i_p1 = lv_fldate
)->changes( '20131025' )->exports( i_p1 = abap_true ).

lo_is_in_time_info ?= lo_mocker->generate_mockup( ).

"will return lv_fldate = '20131025', lv_is_in_time = 'X'
EXPORTING iv_carrid = 'LH' iv_connid = '123'
IMPORTING ev_is_in_time = lv_is_in_time
CHANGING cv_fldate = lv_fldate ).

Raise exceptions

Often unit tests also cover unusual situations which are handled by raising and catching an exception. The mocker allows you to register a to-be-raised exception using the methods RAISES( … ) or RAISES_BY_NAME( … ).
lo_mocker = zcl_mocka_mocker=>zif_mocka_mocker~mock( 'zif_mocka_is_in_time_info' ).

lo_mocker->method( 'IS_IN_TIME' )->with(
i_p1 = 'LH' i_p2 = '123' i_p3 = '20131024' )->raises_by_name( 'zcx_mocka_in_time_exception' ).

lo_is_in_time_info ?= lo_mocker->generate_mockup( ).

TRY.
    lo_is_in_time_info->is_in_time(
      iv_carrid = 'LH' iv_connid = '123' iv_fldate = '20131024' ).
  CATCH zcx_mocka_in_time_exception.
    BREAK-POINT. "program flow will halt here
  CATCH cx_root.
    BREAK-POINT. "will not be called
ENDTRY.

Useful links

The Art of Unit Testing
Unit Tests in general

mockA released – a new ABAP Mocking Framework

Some good news…

The ABAP Mocking Framework presented last year is now Open Source.
The namespace has been changed to Z* in order to allow every interested developer to participate in the development of this tool.
Feel free to participate in the development process and visit the project page at Github.

Current Features

  • Mocking of ABAP interfaces and non-final classes
  • Conditional returning of RETURNING, EXPORTING and CHANGING parameters for methods based on specific parameter combinations of IMPORTING and CHANGING parameters
  • You can even define different output each time when the method is called multiple times with the same parameter values
  • raiseable exceptions
  • verification of mocked method call count

How to start

  • Visit us on Github
  • Check Out the daily build and import it to your SAP System using SAPLink
  • As there is no documentation yet, the Unit Test ZTEST_CL_MOCKA_MOCKER describes the features of the tool
  • The Demo report ZTEST_CL_MOCKA_FLIGHT_OBSERVER also demonstrates some features described last year

Functional gaps

  • Verification of mocked method calls against certain expected parameter input
  • Register pattern based method signatures for mocked methods


Thanks to leogistics GmbH, the formerly internal project is now open source. leogistics decided to provide it as open source software to strengthen SAP NetWeaver and its ABAP development capabilities and, of course, to allow the community to benefit from this tool and make it even better.

The issue with having many small classes


The previous blog post was about applying design principles to the architecture of a simple parser application, which served as an exercise. It led to quite a few small classes in the end, each with its own responsibility.
Big Picture


As most of the classes deal with solving a certain aspect of the exercise, one class has the responsibility to arrange the work of the other classes to solve what needs to be solved. This class implements the interface ZIF_OCR_PARSING_ALGORITHM. An instance of that class is given to every consumer which needs to parse some sample input.
Every low-level class that this class deals with to “orchestrate” the program flow is registered as a dependency. Dependencies are declared in the constructor of this coarse-grained class. This makes it impossible to create the class without satisfying its dependencies.
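
A sketch of such a constructor declaration, using the parameter names that appear in the instantiation code further below (the exact signature is illustrative):

```abap
CLASS zcl_ocr_parsing_algorithm DEFINITION.
  PUBLIC SECTION.
    INTERFACES zif_ocr_parsing_algorithm.
    " all dependencies must be supplied at creation time
    METHODS constructor
      IMPORTING
        io_character_parser     TYPE REF TO zif_ocr_character_parser
        io_characters_seperator TYPE REF TO zif_ocr_characters_seperator
        io_file_access          TYPE REF TO zif_ocr_file_access
        io_lines_seperator      TYPE REF TO zif_ocr_lines_seperator.
ENDCLASS.
```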


Providing dependencies during object creation by an external caller is called “dependency injection” (DI), as opposed to having classes that create their dependencies on their own. As the creation of the low-level instances is not in the control of the class that orchestrates the parser flow, this paradigm is also called “inversion of control” (IoC).
However, this approach brings another challenge, which has been described very clearly by Oliver in a reply to the previous blog post:
“Injection in the constructor – even to an interface – looks like quite tight coupling to me as it leaves the consumer of the coarse-grained class (the algorithm, in your case) to instantiate the composites.”


Of course the consumer of the composite object model should not create it on its own. One possibility is to utilize the factory pattern: instead of composing the main object and all of its dependent objects itself, the consumer delegates this task to a factory method, which could be a static method call in its most simple realization.
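
In its simplest form, the factory call could look like this (the factory class name is made up for illustration; the method name CREATE_NEW_PARSER is the one discussed below):

```abap
DATA lo_ocr TYPE REF TO zif_ocr_parsing_algorithm.
" the factory hides the composition of all low-level objects
lo_ocr = zcl_ocr_parser_factory=>create_new_parser( ).
```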

Within CREATE_NEW_PARSER( ), the same magic takes place as before: in the first step, all low-level objects are created; in the second step, the main parser object is created while the dependencies declared by its CONSTRUCTOR are satisfied. This approach is called “poor man’s dependency injection”.

*alternative 1: poor man's DI
DATA lo_ocr TYPE REF TO zif_ocr_parsing_algorithm.
DATA lo_char_sep TYPE REF TO zif_ocr_characters_seperator.
DATA lo_char_parser TYPE REF TO zif_ocr_character_parser.
DATA lo_file_access TYPE REF TO zif_ocr_file_access.
DATA lo_lines_seperator TYPE REF TO zif_ocr_lines_seperator.
CREATE OBJECT lo_char_sep TYPE zcl_ocr_characters_seperator.
CREATE OBJECT lo_char_parser TYPE zcl_ocr_character_parser.
CREATE OBJECT lo_file_access TYPE zcl_ocr_gui_file_access.
CREATE OBJECT lo_lines_seperator TYPE zcl_ocr_lines_seperator.

CREATE OBJECT lo_ocr TYPE zcl_ocr_parsing_algorithm
  EXPORTING
    io_character_parser     = lo_char_parser
    io_characters_seperator = lo_char_sep
    io_file_access          = lo_file_access
    io_lines_seperator      = lo_lines_seperator.

IoC Container

However, the task of manually creating dependent objects in order to hand them over to a coarse-grained object when it is created can be automated. Think of the facts that are known:

  • all low level interfaces like ZIF_OCR_CHARACTER_PARSER have one class that implements each of them
  • instances of these classes could be created dynamically, as their constructors currently require no input parameter
  • the coarse-grained interface ZIF_OCR_PARSING_ALGORITHM also has a specific class that implements it
  • however, its constructor requires some instances of the low level interfaces
  • as these required parameters are instances of the low level interfaces, these dependencies could be satisfied automatically

As a brief summary, this is exactly what an IoC container (also referred to as a DI container) does.
The registration of the interfaces and classes is done programmatically in the following example. However, it may be moved to customizing in order to free the code of hard references to explicit class names.

*alternative 2: IoC Container
DATA lo_ioc_container TYPE REF TO /leos/if_ioc_container.
lo_ioc_container = /leos/cl_ioc_container=>create_empty_instance( ).
lo_ioc_container->add_implementer( iv_contract = 'zif_ocr_characters_seperator' iv_implementer = 'zcl_ocr_characters_seperator' ).
lo_ioc_container->add_implementer( iv_contract = 'zif_ocr_character_parser' iv_implementer = 'zcl_ocr_character_parser' ).
lo_ioc_container->add_implementer( iv_contract = 'zif_ocr_file_access' iv_implementer = 'zcl_ocr_gui_file_access' ).
lo_ioc_container->add_implementer( iv_contract = 'zif_ocr_lines_seperator' iv_implementer = 'zcl_ocr_lines_seperator' ).
lo_ioc_container->add_implementer( iv_contract = 'zif_ocr_parsing_algorithm' iv_implementer = 'zcl_ocr_parsing_algorithm' ).

No matter how the classes have been registered, whether in code or via customizing, the caller now only needs to call the container and request an instance from it at startup:

DATA lo_ocr TYPE REF TO zif_ocr_parsing_algorithm.
*create the container either by reference to customizing or by the registration coding shown above...
lo_ocr ?= lo_ioc_container->get_implementer( iv_contract = 'zif_ocr_parsing_algorithm' ).