Tony Marston's Blog About software development, PHP and OOP

Evolution of the RADICORE framework

Posted on 1st June 2022 by Tony Marston

Amended on 4th February 2023

1st version in COBOL
From library to framework
Dealing with RISC
Dealing with the Y2K enhancement
2nd version in UNIFACE
3rd version in PHP
Building a Prototype
Building a complete framework
Design Decisions which I'm glad I made
Practices which I do not follow
How using OOP increased my productivity
From personal project to open source
Building a customisable ERP package
Levels of customisation
Maintaining the unmaintainable
Amendment History


I did not pull the design of my RADICORE framework out of thin air when I started programming in PHP; it was just another iteration of something which I first designed and developed in COBOL in the 1980s and then redeveloped in UNIFACE in the 1990s. I switched to PHP in 2002 when I realised that the future lay in web applications and that UNIFACE was not man enough for the job. PHP was the first language I used which had Object-Oriented capabilities, and despite the lack of formal training in the "rules" of OOP I managed to teach myself enough to create a framework which increased my productivity to such an extent that I judged my efforts to be a great success. In the following sections I trace my path from junior programmer to the author of a framework that has been used to develop a web-based ERP application that is now used by multi-national corporations on several continents.

1st version in COBOL

When I joined my first development team as a junior COBOL programmer we did not use any framework or code libraries, so every program was written completely from scratch. As I wrote more and more programs I noticed that there was more and more code that was being duplicated. The only way I found to deal with this when writing a new program was to copy the source code of an existing program which was similar, then change all those parts which were different. It was not until I became a senior programmer in a software house that I had the opportunity to start putting this duplicated code into a central library so that I could define it just once and then call it as many times as I liked. Once I had started using this library it had a snowball effect in that I found more and more pieces of code which I could convert into a library subroutine. This is now documented in Library of Standard Utilities. I took advantage of an addition to the language by writing Library of Standard COBOL Macros which allowed a single line of code to be expanded into multiple lines during the compilation process. Later on my personal programming standards were adopted as the company's formal COBOL Programming Standards.

By using standard code from a central library each programmer became more productive, as they had less code to write, and it eliminated the possibility of making some common mistakes. One of the common mistakes that was eliminated was failing to keep the definitions of certain data buffers, such as those for forms files and database tables, in line with their physical counterparts. This was taken care of by the COPYGEN utility, which took the external definitions and generated text files which could then be added to a copy library so that the buffer definitions could be included into the program at compile time. Incorporating changes into the software therefore became much easier - change the forms file or database, run the COPYGEN utility, rebuild the copy library from the generated text files, then recompile all programs to include the latest copy library entries.

One of the first changes I made to what my predecessors had called "best practice" was to change the way in which program errors were reported, to make the process "even better". Some junior programmers were too lazy to do anything after an error was detected, so they just executed a STOP RUN or EXIT PROGRAM statement. The problem with this was that it gave absolutely no indication of what the problem was or where it had occurred. The next step was to display an error number before aborting, but this required access to the source code to find out where that error number was coded. The problem with both of these methods was that any files which were open - and this included the database, the forms file and any KSAM files - and which were not explicitly closed in the code would remain open. This posed a problem if a program failed during a database update which included a database lock, as the database remained both open AND locked, which required a database administrator to log on and reset it. The way that one of my predecessors solved this problem was to insist that whenever an error was detected in a subprogram, instead of aborting right then and there it cascaded back up the stack to return control to the starting program (where the files were initially opened) so that they could be properly closed. This error procedure was also supposed to include some diagnostic information to make the debugging process easier, but it had one serious flaw. While the MAIN program could open the database before calling any subprograms, each subprogram had the data buffers for each table that it accessed defined within its own WORKING-STORAGE section, and when that subprogram performed an exit its WORKING-STORAGE area was lost.
This was a problem because if an error occurred in a subprogram while accessing a database table the database system inserted some diagnostic information into that buffer, but when the subprogram returned control to the place from which it had been called that information was lost, making the error report incomplete and virtually useless. This to me was unsatisfactory, so I came up with a better solution which involved the following steps:

This error report showed what had gone wrong and where it had gone wrong using all the information that was available in the communication areas. As it had access to the details for all open files it could close them before terminating. The database communication area included any current lock descriptors, so any locks could be released before the database was closed. Because of the extra details now included in all error reports this single utility helped reduce the time needed to identify and fix bugs.
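As an illustration only (the original was a COBOL utility, and the class and method names below are invented for this sketch), the shape of the solution can be expressed like this: a single abort routine with access to the details of every open file, which reports full diagnostics, releases any locks, and closes everything before terminating.

```php
<?php
// Illustrative sketch of a central abort routine. It holds the details of
// every open resource and any current lock descriptors, so when a fatal
// error is detected anywhere it can produce a complete report, release
// the locks, and close all the files before terminating.

class ResourceRegistry
{
    private array $open  = [];   // name => callable which closes that resource
    private array $locks = [];   // current lock descriptors

    public function register(string $name, callable $close): void {
        $this->open[$name] = $close;
    }

    public function addLock(string $descriptor): void {
        $this->locks[] = $descriptor;
    }

    // Called from anywhere in the application when a fatal error is detected.
    public function abort(string $message, array $diagnostics = []): array {
        $report = [
            'message'     => $message,
            'diagnostics' => $diagnostics,        // captured before any buffer is lost
            'released'    => $this->locks,        // locks released before closing
            'closed'      => array_keys($this->open),
        ];
        $this->locks = [];
        foreach ($this->open as $close) {
            $close();                             // close every open file/database
        }
        $this->open = [];
        return $report;                           // would normally be printed, then exit
    }
}
```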

Up until 1985 the standard method of building a database application which contained a number of different user transactions (procedures or use cases in the UML world) was to have a logon screen followed by a series of hard-coded menu screens each of which contained a list of options from which the user could select one to be activated. Each option could be either another menu or a user transaction. Due to the relatively small number of users and user transactions there was a hard-coded list of users and their current passwords, plus a hard-coded Access Control List (ACL) which identified which subset of user transactions could be accessed by each individual user. This arrangement had several problems:

From library to framework

This arrangement was thrown into disarray in 1985 when, during the design of a new bespoke application, the client's project manager specified a more sophisticated approach:

I spent a few hours one Sunday in designing a solution. I started building it on the following Monday, and by Friday it was up and running. My solution had the following attributes:

A later enhancement to the Link Editor, which took the relocatable binaries produced by the compilation process and merged them into a single executable, overcame the limit on the size of the memory required by all the subprograms by allowing the single program file to be split into a number of partitions. This improvement was made possible by faster processor speeds and bigger, cheaper memory. When control was passed from one partition to another at runtime the memory used for the first partition was written to a swap file on disk so that it could be overwritten by the second partition. When control was passed back to the first partition then the second partition's memory was swapped out so that the memory for the first partition could be swapped in. The trick was to group the subprograms into the correct partitions so as to reduce the amount of memory swapping.

In the first version it was necessary for each new project to combine the relocatable binaries of this new framework with the relocatable binaries of their application subprograms in order to create a single executable program file. It was also necessary to append the project's VPLUS form definitions to the same disk file used by the framework. When I tried to separate the two forms files I hit a problem as the VPLUS software was not designed to have more than one file open in the same program at the same time. The solution was to design a mechanism using two program files, one for the MENU program and a second called MENUSON for the application, with data being passed between them using an Extra Data Segment (XDS) which is a portion of shared memory.

This framework sat on top of a library of subroutines and utility programs which I had developed earlier. This included a COPYGEN program which produced copylib members for both the IMAGE/TurboIMAGE database and the VIEW/VPLUS forms, which helped eliminate the common coding error of altering the structure of a database table or a VIEW/VPLUS forms file and then failing to update the corresponding data buffer. Calling the relevant intrinsics (system functions) for these two pieces of software was made easy by the creation of a set of subroutines for accessing VPLUS forms plus a set of macros (pre-compiler directives) for accessing the IMAGE database. All these documents are available on my COBOL page.

After that particular client project had ended, my manager, who was just as impressed with my efforts as the client, decided to make this new piece of software the company standard for all future projects as it instantly increased everyone's productivity by removing the need to write a significant amount of code from scratch. This piece of software is documented in the following:

Dealing with RISC

Here I am referring to the move to Reduced Instruction Set Computing (RISC) which was implemented by Hewlett-Packard in 1986 with its PA-RISC architecture. Interestingly they allowed a single machine to compile code which ran under the Complex Instruction Set Computing (CISC) architecture in what was known as "compatibility mode", or to be compiled to run under the RISC architecture using "native mode". This required the use of a different COBOL compiler and a different object linking mechanism as well as changes to some function calls. As a software house we had to service clients who had not yet upgraded their hardware to PA-RISC, but we did not want to maintain two versions of our software. This is where my use of libraries of standard code came in useful - I was able to create two versions of the library, one for CISC and another for RISC, each containing the function calls which were correct for its architecture. I then created two jobstreams to compile the application, one for CISC and another for RISC, which took the same source code and ran the relevant compiler, library and linker to produce a program file for the desired architecture. This hid all the differences from the developers, who did not have to change their source code, but gave the client the right program for their machine.

More details can be found in The 80-20 rule of Simplicity vs Complexity.

Dealing with the Y2K enhancement

While everybody else regarded this issue as a "bug" we developers saw it as an "enhancement" to a method that had worked well for several decades but which needed to be changed because of hardware considerations.

The origin of this issue was the fact that in the early days of computing hardware was incredibly expensive while programmers were relatively cheap. When I started my computing career in the 1970s I worked on UNIVAC mainframe computers which cost in excess of £1 million each, which meant that we had to use as few bytes as possible to store each piece of data. Dates were therefore usually stored in DDMMYY format, taking up 6 bytes, where the century was always assumed to be "19". It was also assumed that the entire system would become obsolete and be rewritten before the century changed to "20".

In the 1980s while working with HP3000 minicomputers we followed the same convention, but as storing values in DDMMYY format made it tricky to perform date comparisons I made the decision, as team leader, to change the storage format to YYMMDD. The IMAGE database did not have an SQL interface, so instead of being able to sort records by date when they were selected we had to ensure that they were sorted by date when they were inserted. This required defining the date field as a sort field in the database schema.

Instead of storing YYMMDD dates using 6 bytes I thought it would be a good idea, as dates were always numbers, to store them as 4-byte integers, thus saving 2 bytes per date. That may not sound much, but saving 2 bytes per record on a very large table where each megabyte of storage cost a month's salary was a significant saving. This is where I hit a problem - the database would not accept a signed integer as a sort field as the location of the sign bit would make negative numbers appear larger than positive numbers. This problem quickly disappeared when a colleague pointed out that instead of using the datatype "I" for a signed integer I could switch to "J" for an unsigned integer. The maximum value of this field also allowed dates to be stored using 8 digits in CCYYMMDD format instead of the 6 digits in YYMMDD format. As I had already supplied my developers with a series of Date Conversion macros it was then easy for me to change the code within each macro to include the following:

IF YY > 50
  CC = 19
ELSE
  CC = 20
END-IF

This worked on the premise that if the YY portion of the date was > 50 then the CC portion was 19, but as soon as it flipped from 99 to 00 then the CC portion became 20.
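The same rule is a one-liner in any modern language; here is a minimal PHP sketch (the function name is mine, invented for illustration):

```php
<?php
// Expand a 6-digit YYMMDD date (held as an integer) into an 8-digit
// CCYYMMDD integer using the windowing rule: YY > 50 means 19xx, else 20xx.
function expandDate(int $yymmdd): int
{
    $yy = intdiv($yymmdd, 10000);    // extract the YY portion
    $cc = ($yy > 50) ? 19 : 20;      // the century window
    return $cc * 1000000 + $yymmdd;  // prepend the century digits
}
```

So 991231 becomes 19991231, while 000101 (held as the integer 101) becomes 20000101.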

This meant that all my software was Y2K compliant after 1986. The only "fix" that users of my software had to install later was when the VPLUS software supplied by Hewlett Packard, which handled the screen definitions, was eventually updated to display 8-digit dates instead of 6-digit dates.

2nd version in UNIFACE

In the 1990s my employer switched to UNIFACE, a proprietary component-based and model-driven language which was based on the Three Schema Architecture with the following parts:

UNIFACE was the first language which allowed us to access a relational database using Structured Query Language (SQL). The advantage of UNIFACE was that we did not have to write any SQL queries as they were automatically constructed and executed by the Database Driver. The disadvantage of UNIFACE was that these queries were as simple as possible and could only access one table at a time. This meant that writing more complex queries, such as those using JOINS, was impossible unless you created an SQL View which could then be defined in the Application Model and treated as an ordinary table.

In UNIFACE you first defined your database structure in the Application Model, then generated the SQL scripts to create those tables in your chosen DBMS. You then used the Graphical Form Painter to create form/report components which identified which entities and fields you wished to access. When using the GFP the whole screen is your canvas onto which you paint rectangles called frames. You then associate each frame with an object in the Application Model starting with an entity. Inside each entity frame you can either paint a field from that entity or another entity. If you construct a hierarchy of entities within entities this will cause UNIFACE, when retrieving data, to start with the OUTER entity then, for each occurrence of that entity, use the relationship details as defined in the Application Model to retrieve associated data from the INNER entity. After painting all the necessary entity and field frames the developer can then insert proc code into any of the entity or field triggers in order to add business logic.

After I had learned the fundamentals of this new language I rebuilt my development framework. I first rebuilt the MENU database, then rebuilt the components which maintained its tables. After this I made adjustments and additions to incorporate the new features that the language offered. This is all documented in my User Guide.

I started with UNIFACE Version 5 which supported a 2-Tier Architecture with its form components (which combined both the GUI and the business rules) and its built-in database drivers. UNIFACE Version 7 provided support for the 3-Tier Architecture by moving the business rules into separate components called entity services, which then allowed a single entity service to be shared by multiple GUI components. Each entity service was built around a single entity in the Application Model, which meant that each entity service dealt with a single table in the database. It was possible to have code within an entity service which accessed another database table by communicating with that table's entity service. Data was transferred between the GUI component and the entity service using XML streams. That new version of UNIFACE also introduced non-modal forms (which cannot be replicated using HTML) and component templates. There is a separate article on component templates which I built into my UNIFACE Framework.

Whilst my early projects with UNIFACE were all client/server, in 1999 I joined a team which was developing a web-based application using recent additions to the language. Unfortunately this was a total disaster, as their design was centred around all the latest buzzwords, which unfortunately seemed to exclude "efficiency" and "practicality". It was so inefficient that after 6 months of prototyping it took 6 developers a total of 2 weeks to produce the first list screen and a selection screen. Over time they managed to reduce this to 1 developer for 2 weeks, but as I was used to building components in hours instead of weeks I was not impressed. Neither was the client: shortly afterwards the entire project was cancelled, as they could see that it would overrun both the budget and the timescales by a HUGE margin. I wrote about this failure in UNIFACE and the N-Tier Architecture. After switching to PHP and building a framework which was designed to be practical instead of buzzword-compliant I reduced the time taken to construct tasks from 2 weeks for 2 tasks to 5 minutes for 6 tasks.

I was very unimpressed with the way that UNIFACE produced web pages, as the HTML forms were still compiled and therefore static. When UNIFACE changed from 2-Tier to 3-Tier it used XML streams to transfer data between the Presentation and Business layers, and the more I investigated this new technology the more impressed I became. I even learned about using XSL stylesheets to transform XML documents, although UNIFACE's XSL capability was limited to transforming one XML document into another XML document with a different format. When I learned that XSL stylesheets could actually be used to transform XML into HTML I did some experiments on my home PC and became even more impressed. I could not understand why the authors of UNIFACE chose to build web pages using a clunky mechanism when they had access to XML and XSL, which is why I wrote Using XSL and XML to generate dynamic web pages from UNIFACE.
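The principle can be demonstrated in a few lines of PHP using its xsl extension (this is a minimal illustrative sketch; the XML document and stylesheet shown are invented, not taken from any real application):

```php
<?php
// Transform a tiny XML document into HTML using an XSL stylesheet.
// The data (XML) and the presentation (XSL) are kept completely separate;
// the transformation combines the two to produce the finished web page.
// Requires PHP's "xsl" extension.
$xml = new DOMDocument();
$xml->loadXML('<person><name>Tony</name></person>');

$xsl = new DOMDocument();
$xsl->loadXML(<<<'XSL'
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <xsl:template match="/person">
    <html><body><p><xsl:value-of select="name"/></p></body></html>
  </xsl:template>
</xsl:stylesheet>
XSL);

$proc = new XSLTProcessor();
$proc->importStylesheet($xsl);
echo $proc->transformToXML($xml);   // emits the finished HTML page
```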

3rd version in PHP

I wrote about this earlier in My career history - Another new language.

I could see that the future lay in web applications, but I could also see that UNIFACE was nowhere near the best language for the job, so I decided to switch to something more effective. I chose to teach myself a new language in my own time on my home PC, so I searched for software which I could download and install for free. My choices quickly boiled down to either Java or PHP. After looking at sample code, which was freely available on the internet, I decided that Java was too ugly and over-complicated whereas PHP was simple and concise, having been specifically designed for writing database applications using dynamic HTML. I learned the language by reading the online manual in combination with some books and online tutorials. I then built a small prototype application as a proof of concept (PoC) with the following objectives in mind:

I discovered later that by creating all my HTML output in a separate component, instead of spitting out small fragments during code execution, I had in fact created an implementation of the Model-View-Controller (MVC) Design Pattern. This was by accident, not by design (no pun intended).

My understanding of OOP told me that the resulting software was 2-Tier by default in that after creating a class with methods you must have a separate component to instantiate that class into an object and then call the methods on that object. It was obvious to me that the class being called existed in the Business/Domain layer while the component which called that class existed in the Presentation layer. This is why each table in my database has its own Model class in the Business/Domain layer and each user transaction has its own Controller object in the Presentation layer. This followed the UNIFACE convention of having a separate entity in the Application Model for each database table and a separate entity service component for each entity.

Building a Prototype

As I knew that I was going to be building enterprise applications containing large numbers of database tables I started by designing a small database with just a few tables in various relationships - one-to-many, many-to-many, and a recursive tree structure - then set about building the code to deal with these tables and relationships. My aim was not just to write code to move data into and out of database tables using HTML documents at the front end, but to take advantage of the OO capabilities of PHP in order to produce as much reusable code as possible. Writing code which was not reusable would, in my view, be wasting a golden opportunity as well as being against one of the aims of OOP.

When I wrote the code to maintain the first table I put all the code into a single Model class without using inheritance or any other fancy OO techniques. As a follower of the KISS principle I wanted to start simple and only add complexity as and when it became necessary. As I already knew that the only operations which can be performed on a database table are Create, Read, Update and Delete (CRUD) I created equivalent methods called insertRecord(), getData(), updateRecord() and deleteRecord(). Instead of creating a separate class property for each column I decided to take a shortcut and pass in the entire contents of the $_POST array as a single $fieldarray argument. This then removed the need for having separate getters and setters for each table column, and allowed me to change the contents of the array without having to change any method signatures. It also allowed me to handle as many rows as I liked.
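A skeleton of that first Model class, sketched from the description above (the method bodies are placeholders, not RADICORE's real code):

```php
<?php
// One Model class per database table, with one method per CRUD operation.
// All data travels in a single $fieldarray argument, so there are no
// per-column getters and setters, and no method signatures to change
// when the table's structure changes.
class Person
{
    private array $rows = [];                 // stand-in for the PERSON table

    public function insertRecord(array $fieldarray): array {
        $this->rows[] = $fieldarray;          // a real version builds an INSERT query
        return $fieldarray;
    }
    public function getData(string $where = ''): array {
        return $this->rows;                   // a real version builds a SELECT query
    }
    public function updateRecord(array $fieldarray): array {
        return $fieldarray;                   // a real version builds an UPDATE query
    }
    public function deleteRecord(array $fieldarray): array {
        return $fieldarray;                   // a real version builds a DELETE query
    }
}

// The entire contents of $_POST can be passed in as-is, one or many rows.
$person = new Person();
$person->insertRecord(['person_id' => 'TM', 'name' => 'Tony']);
```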

I hit my first problem when I converted the contents of the $_POST array into an SQL INSERT query as it did not recognise the submit button as a valid column name. I got around this problem by creating a class property called $fieldlist which I manually populated with a list of valid column names in the class constructor. I then modified the code which built the SQL query to filter out anything in the $_POST array which did not also exist in the $fieldlist array.
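The filtering step can be expressed in a few lines; this is a sketch of the idea only (the function name is invented), not the framework's actual query builder:

```php
<?php
// Remove anything from the user's input (such as the submit button) which
// is not a valid column on this table, before building the INSERT query.
function filterFieldarray(array $fieldarray, array $fieldlist): array
{
    return array_intersect_key($fieldarray, array_flip($fieldlist));
}

$fieldlist = ['person_id', 'first_name', 'last_name'];  // set in the constructor
$post      = ['person_id' => 'TM', 'first_name' => 'Tony', 'submit' => 'OK'];
$filtered  = filterFieldarray($post, $fieldlist);       // 'submit' is dropped
```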

I then realised that I should validate the contents of the $_POST array before passing it to the query builder. It is better to detect that the data is invalid and throw it back to the user with a suitable error message before sending it to the database, as a failure when executing the SQL query is a non-recoverable error. In order to do this I needed to know the data type of each field so that I could check that its value was valid for that type, so I changed the one-dimensional $fieldlist array into a two-dimensional $fieldspec array where the value for each field was an array of specifications. As soon as I had written the code to validate the data for the first table I saw that I could move it to a central validation object and expand it to deal with every possible data type so that it could perform its task on every database table. Although I originally created the $fieldspec array by hand for each table class I eventually realised that this procedure could be automated, which is what drove me to create my Data Dictionary with its import and export functions.
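A minimal sketch of such a central validation object driven by a $fieldspec array (the specification keys shown here - 'type', 'size', 'required' - are illustrative, not the framework's actual vocabulary):

```php
<?php
// One validation object can serve every table because each table class
// supplies its own $fieldspec array describing its columns.
class Validation
{
    /** Returns an array of error messages; empty means the data is valid. */
    public function validate(array $fieldarray, array $fieldspec): array
    {
        $errors = [];
        foreach ($fieldspec as $field => $spec) {
            $value = $fieldarray[$field] ?? null;
            if (!empty($spec['required']) && ($value === null || $value === '')) {
                $errors[$field] = 'This field is required';
                continue;
            }
            if ($value !== null && $spec['type'] === 'numeric' && !is_numeric($value)) {
                $errors[$field] = 'This field must be numeric';
            }
            if ($value !== null && isset($spec['size']) && strlen((string)$value) > $spec['size']) {
                $errors[$field] = 'This field is too long';
            }
        }
        return $errors;
    }
}
```

Because errors are detected here, before the SQL query is built, the user gets a helpful message instead of a fatal database error.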

I decided to build a single reusable View object to construct every web page as the procedure would always be the same:

Originally I had a separate XSL stylesheet for each web page as it contained a hard-coded list of column names plus the code to build the relevant HTML control. I later refactored this to put the code to build each control into its own XSL template, a form of reusable subroutine. This reduced the code in each XSL stylesheet to basically nothing more than a list of column names and template calls. With some more refactoring I found that I could specify each column's control as an attribute in the XML document after obtaining it from the $fieldspec array. This meant that I could switch the control between a radio group and a dropdown list in the code which built the XML document instead of having it fixed inside the XSL stylesheet. With yet more refactoring I found that I could also move the list of column names to be displayed from the XSL stylesheet to the XML document by loading the information from a screen structure script, so that it could be copied into the XML document and then processed by another XSL template within the stylesheet. So instead of having a separate XSL stylesheet for each table I managed to produce a small set of reusable XSL stylesheets which could be used with any table.

As I had already discovered in my COBOL framework the benefits of having a separate task for each of the CRUD operations instead of having one large task which could switch from one mode of operation to another, I decided to do exactly the same thing in my PHP application. This meant that when creating the tasks to maintain the contents of a table I would build a family of forms as shown in Figure 1:

Figure 1 - A typical Family of Forms


Note: each of the boxes in the above diagram is a clickable link.

Each of these tasks had its own dedicated Controller which called the relevant methods on the Model, after which the Model was given to the View so that the data could be extracted and transformed into HTML. Each of these Controllers was originally hardwired to a particular database table and XSL stylesheet.

After finishing the code for the first table I then created the code for the second table. I did this by copying the code and then changing all the table references, but this still left a large amount of code which was duplicated. In order to deal with this I created an abstract class which each table class then inherited from. I then moved all the duplicated code from each table class into the abstract class until each table class contained nothing but a constructor. For the Page Controllers I quickly discovered, after a little experimentation, that I could replace each instance of a hard-coded table name and XSL stylesheet name with variable names which could be passed down from another script. This then enabled me to create a small component script for each user transaction so that each Controller script could be reused with any Model in the application.

When I realised that there were some occasions when I needed to insert custom code into the processing flow I recalled a technique which I had read about years before which involved the creation of additional "pre" and "post" processing methods around particular standard methods. This was easy to implement as I was already using standard methods which were inherited from an abstract class, and as each of these methods performed a series of separate steps, where each step was a submethod, it was easy to insert new method calls into the processing sequence. This was made incredibly easy by the fact that each table's data was passed around in a single $fieldarray variable instead of a separate hard-coded variable for each column. To make it obvious that these methods were for custom code I gave them all a "_cm_" prefix to identify them as customisable methods. I defined them as empty concrete methods in the abstract class so that I did not have to define them in any subclass unless I actually wanted to provide an implementation. A colleague (now an ex-colleague) later tried to make me change all these methods into abstract methods just to satisfy some notion of "code purity", but luckily I had the sense to ignore him. I later discovered that this technique of using "pre" and "post" methods was in fact an implementation of the Template Method Pattern.
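The mechanism can be boiled down to a short sketch (simplified for illustration; the class and hook names below merely follow the "_cm_" convention described above, they are not the framework's actual code):

```php
<?php
// The abstract class supplies the invariant steps; each "_cm_" method is an
// empty concrete hook which a subclass overrides only when it needs to.
abstract class AbstractTable
{
    public function insertRecord(array $fieldarray): array
    {
        $fieldarray = $this->_cm_pre_insertRecord($fieldarray);  // optional custom code
        // ... validate the data and execute the INSERT query here ...
        $fieldarray = $this->_cm_post_insertRecord($fieldarray); // optional custom code
        return $fieldarray;
    }

    // Empty concrete methods, NOT abstract: subclasses need not define them.
    protected function _cm_pre_insertRecord(array $fieldarray): array  { return $fieldarray; }
    protected function _cm_post_insertRecord(array $fieldarray): array { return $fieldarray; }
}

class Order extends AbstractTable
{
    // Only this hook is overridden; everything else is inherited unchanged.
    protected function _cm_pre_insertRecord(array $fieldarray): array
    {
        $fieldarray['order_date'] ??= date('Y-m-d');   // supply a default value
        return $fieldarray;
    }
}
```

Making the hooks empty concrete methods rather than abstract methods is the whole point: a subclass pays nothing for the hooks it does not use.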

Now that I had produced a set of reusable Controllers and reusable XSL stylesheets I could see a way of combining them by defining a catalog of Transaction Patterns similar to the component templates which I had produced in UNIFACE. As time went by and some of the transactions which I built had different, more complex structures, I found that I could deal with this by creating more XSL stylesheets and/or more Controllers. This has led to an enormous amount of reusable code, so that my main ERP application has over 400 database tables and 4,000 user transactions which are served by just 12 XSL stylesheets and 40 Controllers.

This prototype application, which I published in November 2003, had only a small number of database tables with a selection of different relationships, but the code that I produced showed how easy it was to maintain the contents of these tables using HTML forms. Although it had no logon screen, no dynamic menus and no access control, it did include code to test the pagination and scrolling mechanism, and the mechanism of passing control from one script to another and then back again.

Building a complete framework

My next step was to take what I had learned with the prototype and rebuild my old framework in this new language. This was done in several steps:

Originally I created the table class files, table structure files, component scripts and screen structure scripts by hand as they were so small and simple, but after doing this for a while on a small number of tables and with the prospect of many more tables to follow I realised that the entire procedure could be speeded up by being automated. Where UNIFACE had an internal database known as an Application Model to record the structure of all the application databases I created my own version which I called a Data Dictionary. However, I changed the way that it worked:

While the objectives may have been the same, the way in which those objectives were implemented was totally different, with my PHP implementation being much faster. While it took some effort and ingenuity to build the PHP implementation, I considered this effort to be an investment as it reduced the time taken to generate table classes and the tasks needed to maintain their contents. This is why I was able to create my first ERP package containing six databases in just six months - that's one month per database.

Design Decisions which I'm glad I made

When I began to produce my PHP framework I did not base it on any ideas from other people; I simply built upon what I had produced earlier in COBOL and UNIFACE, then modified it according to the additional capabilities offered by the PHP language, namely that of programming with objects. Fortunately for me I did not go on any formal OOP training courses; instead I read the PHP manual and ran through some sample code which I found in various online tutorials and books which I purchased. I learned the mechanics of creating classes and how to share code using inheritance. I struggled initially with the concept of polymorphism, as the descriptions were vague and examples were virtually non-existent, but I got there in the end. I say "fortunately" as I later discovered that what was being taught as the "proper" way to implement the principles of OOP was far from being the "best" way. Programming is an art, not a science, so it requires artistic talent and not the slavish following of rules which can be read in books. There is no such thing as "one true design methodology" or "one true programming style", so instead of following the suggestions of others I decided to build my new framework based on my own experience, instincts and intuition. These decisions are identified below:

  1. Using XSL to generate web pages

    The first XSL stylesheet that I created worked specifically for a single web page, but as I built more and more web pages and more and more XSL stylesheets I could see more and more places where I could replace repeating code with a call to a library routine. Fortunately XSL offers the following facilities:

    You can see this in action in The XSL stylesheet for a DETAIL form. You should notice that the table name is hard-coded, as well as the name of every column which is to be displayed on the screen. You may also notice in this early example that I am using the ability to add data to the stylesheet by using parameters in the XSL transformation process. I later changed that to place ALL data into the XML file, which then made it easier to use my XSL debugger.

    After a period of hard-coding a separate XSL stylesheet for each web page I thought to myself that there must be a better way. I noticed that the only difference between one web page and another was the table name and the list of column names, so I wondered if it could be possible to provide this information inside the XML document where it could then be processed during the XSL transformation. With a bit of experimentation I discovered that it could, so instead of having to define the table and column names inside the XSL stylesheet I now define it outside using the following mechanism:

    This then meant that instead of having to create a separate XSL stylesheet for each web page I could create a small set of reusable XSL stylesheets which provide a common structure with just the differences being described in a screen structure file. I currently have 12 XSL stylesheets which I have used to create over 4,000 web pages in my main ERP application. This means that I do not have any PHP code in my software which spits out any HTML.
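    To make this concrete, here is a minimal sketch of the kind of information a screen structure file could hold. The key names, file names and field names below are invented for illustration only and are not copied from the actual RADICORE file format:

    ```php
    <?php
    // Hypothetical screen structure file; all names are illustrative.

    // which generic XSL stylesheet to use for this web page
    $structure['xsl_file'] = 'std.detail.xsl';

    // which database table supplies the data
    $structure['tables']['main'] = 'person';

    // which columns appear on the screen, and with which labels
    $structure['fields'][] = array('person_id'  => 'Person ID');
    $structure['fields'][] = array('first_name' => 'First Name');
    $structure['fields'][] = array('last_name'  => 'Last Name');
    ```

    The idea is that only these differences need to be written for each web page; the common structure lives in the small set of reusable stylesheets.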

    I also decided to load the contents of this file into memory at the start of each script instead of right at the end, thus giving me the opportunity of modifying the structure before it is processed.

    There are two distinct advantages of using XSL transformations to create your HTML pages:

    When I eventually added the capability of producing PDF output I found myself adopting the same approach by using a report structure file to identify which bits of application data should go where on the page.

  2. Using the 3-Tier Architecture

    I found that implementing the 3-Tier Architecture using PHP and objects was surprisingly easy as programming with objects is automatically 2-tier to begin with. This is because after creating a class for a business/domain component with properties and methods you must also have a separate component which instantiates that class into an object and then calls whatever methods are required. The business/domain object is what I now refer to as a Model in my infrastructure while the component which instantiates it and calls its methods is what I refer to as a Controller.

    In my prototype implementation I had methods within each table class which accessed the database directly, but when MySQL version 4.1 was released I needed a mechanism to switch between using either the original "mysql_" functions or the improved "mysqli_" functions. All I had to do was to create a separate database class for each different set of functions then modify each table class so that the method which accessed the database then passed control to a separate DBMS object instead. This was easy to do as each table class inherited those methods from an abstract table class which meant that all the changes were confined to that single abstract class. This made it very easy later on to add support for additional DBMS engines, starting with PostgreSQL, then Oracle and later SQL Server.
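    The delegation described above can be sketched as follows. The class and method names here are invented for illustration and the drivers are stubbed out; the real framework's classes differ:

    ```php
    <?php
    // Sketch: the abstract table class builds the SQL, then hands it to a
    // separate DBMS object chosen at runtime. All names are illustrative.

    interface DML {
        public function select(string $sql): array;
    }

    class dml_mysqli implements DML {
        public function select(string $sql): array {
            // the real class would call mysqli_query() here; stubbed out
            return array(array('driver' => 'mysqli', 'sql' => $sql));
        }
    }

    class dml_pgsql implements DML {
        public function select(string $sql): array {
            // the real class would call pg_query() here; stubbed out
            return array(array('driver' => 'pgsql', 'sql' => $sql));
        }
    }

    abstract class Default_Table_Sketch {
        protected $dbms = 'mysqli';   // would normally come from a config file
        protected $tablename;

        public function getData(string $where): array {
            $sql = "SELECT * FROM {$this->tablename} WHERE $where";
            $class = 'dml_' . $this->dbms;   // choose the driver class
            $dbobject = new $class;          // instantiate it ...
            return $dbobject->select($sql);  // ... and delegate
        }
    }

    class person extends Default_Table_Sketch {
        protected $tablename = 'person';
    }

    $rows = (new person)->getData("person_id = 1");
    ```

    Because every concrete table class inherits getData() from the abstract class, switching drivers or adding a new DBMS engine touches only the abstract class and the driver classes, not the hundreds of table classes.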

    With the creation of a separate component which used XML and XSL to create all HTML pages I had effectively split my Presentation layer into 2 separate pieces - a Controller and a View - which you should recognise as being parts of the Model-View-Controller design pattern.

  3. What objects should I encapsulate into classes?

    The starting point of OOP is the creation of classes which act as the containers (or capsules) for an entity's properties (data) and methods (operations). You need to create classes so that you can instantiate them into objects, then you can call an object's methods. This leads to the question "How do I identify something for which I should create a class?" This is supposed to be the result of a process called Abstraction which can result in two types of class:

    By putting common methods in an abstract class which is then inherited by multiple concrete classes you then have access to polymorphism. You can then take advantage of polymorphism by using dependency injection.

    To make the situation even more confusing an experienced developer will tell you that there are basically only two types of object:

    Some languages include a 3rd option known as a VALUE OBJECT, but I ignore them as PHP supports only primitive data types. This seems logical to me as neither SQL nor HTML deal with value objects.

    As far as I am concerned entities belong only in the Business/Domain layer while all the other layers should consist of nothing but services. The components in the RADICORE framework fall into the following categories:

    It should also be noted that:

    Object Oriented Programming requires that you first create classes with methods (operations) and properties (data) so you can instantiate them into objects, after which you can call an object's methods to manipulate its properties. The act of creating classes is known as Encapsulation which can be defined as:

    The act of placing data and the operations that perform on that data in the same class.
    To me this means that ALL the data for an object and ALL the operations that can manipulate that data should be placed in a single class. This means the same class. If the lowest form of object in a database is a table then it makes sense, to me at least, to create a separate class for each table. This is also confirmed by the fact that the standard CRUD operations are performed on individual tables, not on individual columns or collections of tables. A "table" is a collection of "columns" which identify the data that is stored for a particular type of entity, and each row in a table represents a different instance of that entity. If my principle of "one class for each database table" is wrong then what are the alternatives? I can only think of the following:

    It was also obvious to me as an experienced developer, but perhaps not so obvious to a clueless newbie, that "all the operations that perform on that data" meant "all the operations that perform on the raw data" (eg: business rules) and not "operations which transform the raw data into another format". Being already familiar with the 3-Tier Architecture I was aware that the code which deals with moving data in and out of the database belongs in the Data Access layer while the code which deals with moving data to and from the user interface belongs in the Presentation layer. All the code which processes the business rules for each entity belongs in the Business layer and is (or should be) totally unconcerned with and unaware of what happens to the data in the other layers. The code which transforms data to and from an SQL query does not belong in the Business layer. Neither does the code which transforms data from HTML input or into HTML/CSV/PDF/Image output.

    That is why I started my framework by creating a separate class for each database table, and why I am still doing so 20 years later.

    I have subsequently read several articles by people who seem to think that creating a separate class for each database table is totally wrong as database tables do not represent complete objects in the real world, just parts of objects. They say that the rules of OOP require that you create objects which model the real world, which means creating classes that are responsible for handling as many database tables as it takes to represent each real-world entity. These people have got it backwards. Just because you can write software which models the real world does not mean that you should. When you write software which communicates with objects outside of itself it makes sense, to me at least, to communicate with those objects directly instead of indirectly through an intermediary. When you are writing a database application you are writing software which communicates with objects in a database, not objects in the real world, and these database objects are called tables. You do not manipulate any real-world objects, either directly or indirectly through a database table, you simply manipulate the data that you hold on those objects inside tables in your database. Anyone who cannot grasp this simple concept is making a fundamental mistake, and if the foundation of your software is built on a misunderstanding then it won't be long before the cracks begin to show in your application and the entire edifice starts crumbling in front of your eyes.

  4. How do I use inheritance?

    Inheritance is an OO technique for sharing code between classes. You can define a piece of code once in a superclass and inherit it into as many subclasses as you like. That code then "appears in" or "is made available to" the subclass when it is instantiated into an object just as if it was coded directly into the subclass. Note that it is written once and shared many times, not written many times.

    I did not bother trying to create any superclasses until I found some pieces of code which were duplicated and therefore ripe for being shared. In order to create the family of forms for my first database table I created a table class which supported the basic CRUD operations, then created the page controllers which dealt with each of those tasks (use cases, user transactions or units of work). I then wrote the code until every component in this family did what it was supposed to do.

    The fun started when I created another family of forms for the next database table. I duplicated both the page controllers and the table class, then modified the second set of scripts to change all table references from table#1 to table#2. I then created a superclass to hold the shared set of methods and properties, and began moving what was duplicated from the subclasses to the superclass. When I was finished there was nothing left in the subclasses except for the constructor which looked like the following:

    require_once '';
    class #tablename# extends Default_Table
    {
        // ****************************************************************************
        // class constructor
        // ****************************************************************************
        function __construct ()
        {
            $this->dbname    = '#dbname#';
            $this->tablename = '#tablename#';
            $this->fieldspec = array(....);
        } // __construct
    // ****************************************************************************
    } // end class
    // ****************************************************************************

    This to me is an example of the process of abstraction which is described in The meaning of "abstraction"?

    The act of performing an abstraction means that you separate the abstract from the concrete, the general from the specific. You need to look for patterns of similar characteristics in different objects. You cannot look at a single object in isolation and perform this process, you must look at groups of objects and identify all those characteristics which they have in common and separate out the differences. Everything which is similar can then be classed as abstract as it is non-specific and can be applied to all those objects, while everything which is different is unique to a particular concrete object. In computer software these similarities can be contained in an abstract class while the differences are limited to a concrete subclass. In my framework the abstract table class contains the common characteristics that can be applied to any table subclass while each concrete table class specifies the unique details for a specific database table. This separation between the general and the specific, the similar and the unique, is implemented using the Template Method Pattern where all invariant methods are defined in the abstract class and all variable "hook" methods are defined in each subclass.

    Some developers seem to think that the creation of a single class is the result of performing an abstraction just because you can have multiple instances of the same blueprint. Identifying an object for which the business needs to have its data stored in the database is not a special process which requires special rules, it is as simple as saying "we need to store data on Products, Customers, Orders, etc", creating a database table for each of those objects, then creating a class for each table. The only tricky part is examining the mass of data which you want to store for each of those objects and applying the rules of Data Normalisation which may require the splitting of that data across several related tables. Once you have created a table the structure is fixed, but each row (instance or occurrence) on that table will have a unique set of values which adhere to that structure.

    Note that although I sometimes create a subclass of a concrete class this is never to create a class for a different table, it is only to provide a different implementation in some "hook" methods. For example in the DICT subsystem I have the following class files:

  5. How do I use polymorphism?

    My early research into Polymorphism was initially unproductive as I found the descriptions to be less than informative. Here is one such description:

    Polymorphism is the ability to send a message to an object without knowing what its type (class) is.

    This to me is rubbish for two reasons:

    Here is another description which I found to be of no use whatsoever:

    Polymorphism is the ability of a message to be displayed in more than one form.

    WTF!! OOP is NOT messaging software, it is NOT about sending messages, and it is certainly NOT about displaying messages. I have seen many other descriptions, but I find them to be just as confusing and less than informative. The most useful description which I eventually found was this:

    Same interface, different implementation. This means that different classes may contain the same method signature, but the result which is returned by calling that method on a different object will be different as the code behind that method (the implementation) is different in each object.

    That immediately told me that my use of an abstract table class which supported the standard CRUD methods which were then shared by every concrete table class was a shining example of polymorphism as every method in the abstract superclass automatically appears in every concrete subclass. Take the following code which appears in several Page Controllers as an example:

    require "classes/$";
    $dbobject = new $table_id;
    $fieldarray = $dbobject->getData($where);

    The getData() method will produce an SQL query which defaults to the following:

    SELECT * FROM $this->tablename WHERE $where;

    The value inside $this->tablename is set within the constructor of each subclass, so what is returned in $fieldarray will be different for each subclass. This clearly shows that calling the same method on different objects will produce different results. In case you have still not grasped the benefit that this provides, it means that the code which calls the getData() method instantly becomes reusable. I can use it hundreds of times with a different value for $table_id and it will produce a different result each time. Because of this each of my 40 page controllers can be used with any of my 400 Model classes. This provides me with 16,000 (40 x 400) opportunities for polymorphism.
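    A minimal, runnable sketch of this behaviour is shown below. The class names are illustrative, and getData() simply returns the SQL it would execute so that the differing results are visible:

    ```php
    <?php
    // Same method, different result: $this->tablename differs per class.

    abstract class Default_Table_Poly {
        protected $tablename;

        public function getData(string $where): string {
            // the real method would execute this query; here we return the SQL
            return "SELECT * FROM {$this->tablename} WHERE $where";
        }
    }

    class customer extends Default_Table_Poly { protected $tablename = 'customer'; }
    class product  extends Default_Table_Poly { protected $tablename = 'product'; }

    // the same "controller" code works unchanged with any of the classes
    $sql = array();
    foreach (array('customer', 'product') as $table_id) {
        $dbobject = new $table_id;
        $sql[$table_id] = $dbobject->getData('1=1');
    }
    ```

    The loop body never mentions a specific table, yet each iteration produces a different query - which is exactly what makes the calling code reusable.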

  6. How do I use Dependency Injection?

    Once you have created a number of objects which share a set of common methods you have enabled polymorphism, but how can you take advantage of what this has to offer? The answer is Dependency Injection. The first question is "What is a dependency?". If ModuleA calls a method on ModuleB then ModuleA requires access to ModuleB in order to complete its processing. In other words ModuleA is dependent on ModuleB. ModuleB is not dependent on ModuleA but it is a dependency of ModuleA.

    As an example suppose we have modules M1, M2, M3 and M4 which all share the methods insertRecord(), getData(), updateRecord() and deleteRecord(). In order to call these methods we could have a separate version of the calling module C for each of the objects M1, M2, M3 and M4, such as:

    module C1:
    require 'classes/';
    $object = new m1;
    $result = $object->insertRecord($_POST);
    module C2:
    require 'classes/';
    $object = new m2;
    $result = $object->insertRecord($_POST);
    module C3:
    require 'classes/';
    $object = new m3;
    $result = $object->insertRecord($_POST);
    module C4:
    require 'classes/';
    $object = new m4;
    $result = $object->insertRecord($_POST);

    This means that for each of the M objects you will need a separate version of the C object. That's a lot of duplication, especially if you have 400 versions of the M object. You can make huge savings by having just one version of the C object as follows:

    require 'classes/$';
    $object = new $module_id;
    $result = $object->insertRecord($_POST);

    This works by using whatever object identity is contained within the variable $module_id. This can be set using code such as the following:

    $module_id = 'm1';  // or 'm2' or 'm3' or 'm4' or 'm999'
    require '';

    This instantly makes object C (the Controller) reusable with any version of object M (the Model). In my ERP application I have 400 Models and 40 Controllers, so that means I can use the same Controller 400 times instead of having 400 versions of it. Does that meet the definition of "reusable code"?

  7. What properties should be put in each class?

    As I was used to passing complete rows of data from one component to another I decided against the idea of defining each database column as a separate property in each class, and instead used a single property called $fieldarray. I thus avoided the need for a collection of getters and setters for each column. This single property could also contain as many or as few columns as I liked, and as many or as few rows as I liked. When I saw the first example of using getters and setters I thought to myself "What a stupid idea! Why should I waste time unpacking the $_POST array into its component parts and then inserting them one column at a time when I can pass in the entire array in one fell swoop?"

    I did not like the idea of having a separate class property for each table column as I could immediately see the disadvantage of having separate bits of code to deal with each individual column. I could also see that it would restrict each object to holding data for just one row, and I knew enough about databases to realise that a database query can return any number of rows, even no rows at all. My experiments with PHP showed that when the various pieces of data are sent from an HTML form on the client to a PHP script on the server they are presented as elements within the $_POST variable which is an associative array. I also noticed that when reading data from the database it appears as another array (in fact an indexed array of associative arrays) with a separate index number for each row. I then asked myself a simple question: if the Presentation layer deals with multiple rows and columns of data in a single array, and the Data Access layer deals with multiple rows and columns of data in a single array, do I need code in the Business layer to deconstruct and reconstruct this array, or can I remove the need for extra code and access the contents of the array directly? Some people may call me an idiot because of my programming style, but wasting time by writing code that I don't need to write seems more idiotic to me.
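    The difference can be sketched in a few lines. The class and method names below are invented for illustration; the point is that the whole $_POST array goes in as one argument instead of being unpacked through per-column setters:

    ```php
    <?php
    // Sketch of the single-property approach; all names are illustrative.

    class fieldarray_sketch {
        protected $fieldarray = array();

        public function insertRecord(array $fieldarray): array {
            // the entire row arrives in one argument; no getters/setters needed
            $this->fieldarray = $fieldarray;
            // validation and the INSERT would happen here
            return $this->fieldarray;
        }
    }

    // simulated $_POST from an HTML form
    $post = array('first_name' => 'Fred', 'last_name' => 'Bloggs');

    $object = new fieldarray_sketch;
    $row = $object->insertRecord($post);
    ```

    Because the argument is an ordinary associative array it can just as easily hold several rows, or only the subset of columns that the form actually submitted.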

  8. What methods should I put in each class?

    After deciding that each database table should have its own class the next step was to decide what methods to put in each class. As the only operations that can be performed on a database table, regardless of what data it contains, are Create, Read, Update and Delete (CRUD) I decided to support these four methods in each table class using a standard set of method names - insertRecord(), getData(), updateRecord() and deleteRecord(). The idea of using unique names within each class, such as createCustomer(), createProduct() and createOrder() never occurred to me. Unlike procedural functions which must have unique names within the entire application, with OOP it is possible for the same method name to be duplicated in any number of different classes. This is why I chose $customer->insertRecord(), $product->insertRecord() and $order->insertRecord().

    I noticed during my initial development that there was a lot of boilerplate code involved in each of these methods, so rather than having to duplicate it in each table class I decided to put both the methods and the boilerplate code in an abstract table class so that it could be inherited and therefore shared by each concrete table class. I also decided to include the $fieldarray variable as an input and output argument on each method.

    Other programmers choose to have separate public methods called load(), validate() and store(). This is not a good idea as it allows for more data to be inserted after the validate() has been performed, which could lead to errors during the store(). In my framework I do not treat these as separate operations as they must always be executed together and in a particular sequence. In other words they form a group operation in which they are separate steps within that operation. If you look at either insertRecord() or updateRecord() the load() is performed by passing all the data in as an input argument while the validate() and store() are performed internally. Note that the store() method is only called if the validate() method does not detect any errors. For fans of design patterns this is an example of the Template Method Pattern where the abstract class contains all the invariant methods and allows variable/customisable methods to be defined within individual subclasses.
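    This grouping of load(), validate() and store() into one fixed sequence can be sketched as a Template Method. The names below are illustrative, not the real framework API:

    ```php
    <?php
    // Sketch: insertRecord() fixes the load -> validate -> store sequence
    // in the abstract class; a subclass can only plug in extra validation
    // via a hook method. All names are illustrative.

    abstract class Default_Table_TM {
        public $errors = array();
        protected $fieldarray = array();

        // invariant method: the steps and their order cannot be bypassed
        public function insertRecord(array $fieldarray): array {
            $this->fieldarray = $fieldarray;                   // load
            $this->errors = $this->_cm_validate($fieldarray);  // validate (hook)
            if (empty($this->errors)) {
                $this->store();                                // store only if valid
            }
            return $this->fieldarray;
        }

        // hook method: does nothing unless overridden in a subclass
        protected function _cm_validate(array $fieldarray): array {
            return array();
        }

        protected function store(): void {
            // the INSERT statement would be built and executed here
        }
    }

    class person_tm extends Default_Table_TM {
        protected function _cm_validate(array $fieldarray): array {
            $errors = array();
            if (empty($fieldarray['last_name'])) {
                $errors['last_name'] = 'Last Name is required';
            }
            return $errors;
        }
    }

    $object = new person_tm;
    $object->insertRecord(array('first_name' => 'Fred', 'last_name' => ''));
    $failed = !empty($object->errors);
    ```

    No caller can store unvalidated data because store() is never reachable except through insertRecord(), and only after validation has passed.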

    This technique of using common method names in different objects is described in Robert C. Martin's article The Dependency Inversion Principle where his "Copy" program uses multiple device objects each of which supports the same read() and write() methods, but with different implementations for each device object. This is also an example of polymorphism in action. Without polymorphism you cannot have dependency injection.

  9. How do I validate data before it gets written to the database?

    One thing I learned early on in my programming days was to never trust data provided by the user as it could be full of errors, by which I mean values that could not be inserted into the database because they did not match the column's data type. It is better to check each value in the software before it is sent to the database so that you can inform the user when it is wrong and give him the opportunity to correct it instead of having the entire program come to a halt because of a failure with the query.

    I already knew that within the database schema each table's structure contained a list of field names and their data types, so what I needed was a method of validating each field's value against its data type. I was already passing the data around in a single associative array called $fieldarray which contained an array of field names with their values, so it struck me that it would be useful to have a second array of field names and their data specifications, which would then make it possible to write a standard routine to iterate through these two arrays comparing each field's value with its specifications. I started off by writing this list of field specifications by hand, but this became so boring and repetitious I decided to automate it. Just as I had done in my COBOL days with my COPYGEN utility I wrote a program which read each table's structure from the database schema and produced a file of field specifications which could be read into the table's object at runtime. Instead of creating this file directly from the database schema I decided to import it into an intermediate database of my own design called a Data Dictionary from which it could be exported to a disk file. I chose to do it this way as I knew that I would want to include additional details in this structure file that are not available in the database schema. I have functions within my Data Dictionary which enable me to add, and therefore extract, as much additional information as I like.
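    The iteration over those two arrays can be sketched as follows. The specification keys used here ('type', 'size', 'required') are invented for illustration and are not the framework's actual field specifications:

    ```php
    <?php
    // Sketch of primary validation: compare each field's value against its
    // specification from a generated structure file. All names illustrative.

    function validateFields(array $fieldarray, array $fieldspec): array {
        $errors = array();
        foreach ($fieldspec as $field => $spec) {
            $value = isset($fieldarray[$field]) ? $fieldarray[$field] : null;
            if (!empty($spec['required']) && ($value === null || $value === '')) {
                $errors[$field] = 'This field is required';
                continue;
            }
            if ($value === null || $value === '') {
                continue;   // optional field left empty: nothing to check
            }
            // ctype_digit() is enough for this sketch (no negative numbers)
            if ($spec['type'] === 'integer' && !ctype_digit((string)$value)) {
                $errors[$field] = 'Must be a whole number';
            }
            if (isset($spec['size']) && strlen((string)$value) > $spec['size']) {
                $errors[$field] = 'Value is too long';
            }
        }
        return $errors;
    }

    // a fragment of the kind of data a structure file could export
    $fieldspec = array(
        'person_id'  => array('type' => 'integer', 'required' => true),
        'first_name' => array('type' => 'string',  'size' => 20),
    );

    $errors = validateFields(array('person_id' => 'abc'), $fieldspec);
    ```

    Because the routine is driven entirely by the two arrays, it never needs to be rewritten for a new table - only the generated $fieldspec changes.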

    This means that the framework now takes care of both creating the table's structure file and validating the user's data against that file without the developer having to write a single line of code. This is what I refer to as primary validation. For the uninitiated this is also an example of declarative programming where I am declaring the rules that need to be followed without actually performing them. This is done later in the framework's validation object.

  10. Where do I put the business rules?

    When I became aware of the 3-Tier Architecture and saw it in action with the UNIFACE language I instantly saw the benefits of splitting the logic of an application into several distinct layers:

    Apart from the fact that the description of the 3-Tier Architecture specifically states that all business rules should reside in the Business layer, the fact that all the objects in the other layers are services and not entities should drive you to the obvious conclusion that business rules do not belong in a service object. Services are (at least in the RADICORE framework they are) pre-built and reusable, which means that they do not contain any information regarding application entities, which includes their business rules.

    When a developer comes to build an application using the RADICORE framework he only need concern himself with building Model classes in the Business/Domain layer as all the other components have been pre-built and come supplied in the framework. All business logic, which includes table structures, validation rules, business rules and task specific behaviour, are confined to and should not exist anywhere else but in the Business layer. All the other layers are (or should be) comprised of service objects which should only contain the logic which performs that service.

    If it is necessary to add additional processing rules to any table class then this can be done using any of the available "hook" methods. These only exist because I chose to create an abstract class which then enabled me to implement the Template Method Pattern.

    I have subsequently been made aware that other programmers have totally different ideas on what pieces of logic should go where. There still are debates on whether we should have Fat Models and Skinny Controllers, or Fat Controllers and Skinny Models (with the "fat" identifying where the business rules should exist). I have heard it said that data validation should not be performed within an object as it is wrong to insert data into an object that has not been pre-validated. I have heard it said that each business rule should go into its own class. As far as I am concerned all these different theories, regardless of how clever their arguments appear to be, are violating the basic principles of programming in general and OO programming in particular.

    Putting business rules in the Business/Domain layer is also correct according to Martin Fowler who, in his article AnemicDomainModel, says the following:

    It's also worth emphasizing that putting behavior into the domain objects should not contradict the solid approach of using layering to separate domain logic from such things as persistence and presentation responsibilities. The logic that should be in a domain object is domain logic - validations, calculations, business rules - whatever you like to call it.

  11. How do I insert non-standard or custom code?

    While the framework can take care of all standard processing there will always be times when you will want to perform some additional processing or data validation that cannot be performed automatically. The standard processing flow is handled by the methods in the abstract table class, so what is needed is a mechanism where you can say "when you get to this point in the processing flow I want you to execute this code". This is where my use of an abstract table class provided a simple and elegant solution. My experiments with inheritance had already proved to me that when you inherit from one class (the superclass) into another (the subclass) the resulting object will contain the methods from both classes. The method in the superclass will be executed unless you override it in the subclass. This means that at certain points in the processing flow I can call a method which is defined in the superclass but which does nothing, but if I want to I can copy that method into my subclass and insert whatever code is necessary. This then replaces at runtime a method in the superclass which does nothing with a method in the subclass which does something. To make it easy to identify such methods I give them a "_cm_" prefix which stands for customisable method. Some of them also include "pre_" or "post_" in the prefix to identify that they are executed either before or after the standard method of that name.

    Here is an example of an empty method in the abstract class:

    function _cm_whatever ($fieldarray)
    // perform custom processing at .....
    {
        // customisable code goes here
        return $fieldarray;
    } // _cm_whatever

    Here is some sample code which can be inserted into the subclass to compare the value in one field with that in another field:

        if ($fieldarray['start_date'] > $fieldarray['end_date']) {
            // 'Start Date cannot be later than End Date'
            $this->errors['start_date'] = getLanguageText('e0001');
            // 'End Date cannot be earlier than Start Date'
            $this->errors['end_date']   = getLanguageText('e0002');
        } // if

    Note here that errors are indicated by inserting an entry into the $this->errors array and NOT by throwing an exception. Data validation errors can be corrected by the user whereas true exceptions indicate a fault in the code which can only be corrected by changing the code. Another reason is that if you throw an exception it can only report a single error whereas an array can contain as many errors as you encounter.

    It was not until several years later that I discovered that what I had done was to provide an implementation of the Template Method Pattern which, according to the Gang of Four, is one of the most important patterns for a framework.

  12. How do I call the numerous tasks (use cases) within the application?

    In my early COBOL days it was common practice to have a separate program which handled all the aspects of a particular area of business. This resulted in a small number of large programs, each of which handled multiple responsibilities. As discussed in my COBOL experience this method began to generate problems, and after some thought I realised that the simplest and best solution would be to change from having a small number of large programs which handled multiple responsibilities to have a large number of small programs which handled a single responsibility each. This solved all the known issues without creating new ones, so it became a philosophy which I carried forward when I switched from COBOL to UNIFACE and then to PHP.

    I have seen comments from other developers who consider my family of forms to be a single use case which therefore should be covered by a single program component. I disagree most strongly. By treating each of those six operating modes (List, Search, Insert, Update, Delete and Enquire) as a separate component (unit of work, use case or task) I end up with the following advantages:

    1. Each component has its own entry in the TASK table in the MENU database.
    2. Each TASK has its own PHP script in the file system which can be activated by its own URL in the browser.
    3. Each task can then be added to the relevant MENU or NAVIGATION-BUTTON table in the MENU database.
    4. Any number of ROLES can be created so that TASKS can be made accessible to those ROLES via the ROLE-TASK table.
    5. USERS of the application can be assigned to any number of ROLES via the USER-ROLE table which then allows a user to access specified tasks.
    6. A USER's access to a task is via either a MENU button or a NAVIGATION button, but when the framework is defining which of those buttons can appear in the current screen it can remove the buttons for those tasks which the user is not allowed to access.
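    The button filtering in that last point can be sketched as follows. This is only an illustration of the idea, with invented task identifiers; the framework's real code reads the user's permissions from the ROLE-TASK table in the MENU database.

    ```php
    // Sketch: remove the menu/navigation buttons for tasks which the current
    // user's role(s) are not permitted to access. The task_id values are
    // hypothetical examples.
    function filterButtonsByRole(array $buttons, array $permitted_tasks)
    {
        $allowed = array();
        foreach ($buttons as $task_id) {
            if (in_array($task_id, $permitted_tasks)) {
                $allowed[] = $task_id;   // the user may run this task
            }                            // otherwise the button is suppressed
        }
        return $allowed;
    }

    // A role which grants only the List and Enquire tasks for PERSON:
    $visible = filterButtonsByRole(
        array('person(list)', 'person(add)', 'person(enq)'),
        array('person(list)', 'person(enq)')
    );
    // only the 'person(list)' and 'person(enq)' buttons will appear on the screen
    ```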
  13. Do I have a separate Controller for each Model?

    A lot of the code samples which I read while experimenting with PHP had a separate Controller for each Model which could only be used with that Model. This was because it had hard-coded references to a particular Model, hard-coded references to properties within that Model (either via getters and setters or argument names on method calls), and hard-coded references to methods which were unique to that Model. This meant having a bespoke Controller for each Model, but I wanted to have something that was more reusable. I had already worked out that in a database application each task, regardless of the data which it manipulates and the complexity of that manipulation, is responsible for performing one or more operations on one or more database tables. In my COBOL days I had already noticed that after writing a program which "did something" with one database table it was often a subsequent requirement to write another program which did exactly the same thing but with a different table. This requirement could only be satisfied by copying the original program then changing all the table references, but this still resulted in a lot of similar code.

    When coding my first PHP Controller it had the name of the class which was instantiated into an object hard-coded within its bowels, so I looked to see if there was a way to remove this hard-coded reference. It only took me five minutes to discover that there was, so instead of having code like this:

    require "classes/product.class.inc";
    $dbobject = new product;

    I could replace it with code like this:

    require "classes/$table_id.class.inc";
    $dbobject = new $table_id;
    All I then had to do was assign the value "product" to the variable $table_id before activating the Controller. This is performed in what I call a component script which looks like the following:

    $table_id = "person";                      // identify the Model
    $screen   = 'person.detail.screen.inc';    // identify the View
    require 'std.add1.inc';                    // activate the Controller

    You should be able to see at this point the advantages of (a) having a separate class for each database table, (b) using common method names in each table class and (c) passing data in and out in a single array instead of individual properties. For example, I have an ADD1 Controller which adds a single record to a database table, but this Controller contains neither the name of the database table nor the names of any columns in that table. The code which it does contain looks like the following:

    require "classes/$table_id.class.inc";
    $dbobject = new $table_id;
    $fieldarray = $dbobject->insertRecord($_POST);
    if (!empty($dbobject->errors)) {
        // validation failed - redisplay the screen with error messages
    } else {
        // success - proceed to the next task
    } // if

    The insertRecord() method performs several steps in its processing cycle, among which are primary and secondary validation as well as pre-insert and post-insert processing.

    This means that instead of having a separate Controller to handle all the use cases for a particular Model (table class), which would make the Controller unreusable with other Models, I have a separate Controller which handles a single use case for an unspecified Model, which means that each Controller can be used with any Model. It also means that any Model can be used with any Controller, thus making them both reusable.

  14. How many Models can a Controller access?

    It was not long after I started to publish articles on my framework that I was told that my idea of creating Controllers which accessed more than one Model was totally wrong. What that critic failed to understand was that just because he had only seen sample code where a Controller accessed only one Model did not mean that a Controller could only ever access a single Model. Such a restriction has never existed, and those who suggest it can never come up with a valid reason to justify its existence other than "that is the way I was taught". There are numerous situations where what you want to display on the screen comes from more than one database table, such as displaying a sales order where details from the ORDER_HEADER are displayed at the top with rows from the ORDER_LINE table displayed underneath, so treating each of those areas as separate zones which require separate accesses of the database was common practice even as far back as my COBOL days in the 1980s. With UNIFACE it was much the same - you painted an entity frame on the top of the screen to display a single row from the ORDER_HEADER entity below which you created an entity frame for the ORDER_LINE entity which showed multiple rows from the database.

    Because this practice of allowing a screen to be broken down into several zones, each of which dealt with rows from a different database table, had been standard practice for the 20 years before I switched to an OO language, I saw absolutely no reason why I should switch to an alternative practice when no such practice had ever been documented or justified. Just because my critic had not seen it done did not mean that it should not be done.

  15. How many scripts do I have for each task?

    Some of the early code samples which I saw showed a task which performed an insert or update operation using two separate scripts: the first performed a GET operation to build and display the screen, while the second, activated by pressing the SUBMIT button, performed a POST operation to handle both the data validation and the database update. I took an immediate dislike to this idea. I much prefer to have all the aspects of each task, both the GET and the POST, handled in a single script.

    The only exception to this idea is when a field on the screen requires to be selected from a list and the contents of this list is either too large or too complex for a dropdown list. In this case I use a separate task which I first encountered in my UNIFACE days called a Popup.
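    The single-script idea can be sketched as follows. The function and class names here are invented for illustration and are not the framework's own code:

    ```php
    // A stand-in for a Model class, used only to make the sketch runnable.
    class StubTable
    {
        public $errors = array();
        public function insertRecord(array $fieldarray)
        {
            if (empty($fieldarray['name'])) {
                $this->errors['name'] = 'Name is required';
            }
            return $fieldarray;   // validated (or rejected) data
        }
    }

    // One task handles both phases: GET builds and displays the screen,
    // POST validates the user's input and applies the update.
    function handleRequest($method, array $post, $dbobject)
    {
        if ($method === 'POST') {
            // SUBMIT was pressed - validate and update in the same script
            $dbobject->insertRecord($post);
            if (!empty($dbobject->errors)) {
                return 'redisplay screen with error messages';
            }
            return 'proceed to next task';
        }
        // a GET simply builds and displays the screen
        return 'display screen';
    }
    ```

    A single entry point such as handleRequest($_SERVER['REQUEST_METHOD'], $_POST, $dbobject) then covers the whole task.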

  16. How many different documents can a single task produce?

    In my early COBOL days it was common practice to write a single large program which could be switched from one mode to another (from LIST to ADD, for example) which sometimes required a different screen. This practice became obsolete when I decided to switch from a small number of large programs to a large number of small programs each of which handled a single mode with a single screen. Using my PHP framework each task can produce no more than one output document. This document is usually HTML, but could be CSV, PDF or even some other format. In some cases there is nothing output at all as the script is required to do no more than perform some sort of update and then return control to the task from which it was activated. I do not have any tasks where the user can choose what output format he wants while running the task as each task has a fixed format of output. Just as there is one task to output an HTML document in a LIST screen which shows multiple rows going across the page there is another task which outputs a DETAIL screen which shows a single row going down the page. There is also another task which produces CSV output and yet another which produces PDF output.

  17. How do I jump from one task to another?

    In some of the early PHP samples which I examined I saw that the way to jump from something like a LIST form, which showed summary details for multiple database rows which were displayed horizontally across the page, to an ENQUIRE/UPDATE/DELETE form for a selected row which showed the full details was via a hyperlink on each row. I did not like this idea for several reasons:

    To get around these limitations I decided to switch to the POST method which involved the following changes:

    While this code took a bit of time and effort to build I knew that it would be a good investment as it would provide standard functionality that could be used with every new task that I wrote. This has resulted in library functions called scriptNext() and scriptPrevious() which are used extensively in the framework.
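    The mechanism behind these two functions can be pictured as a simple stack of script names held in the session. The sketch below is an assumption for illustration only; the real scriptNext() and scriptPrevious() functions do considerably more work.

    ```php
    // Forward navigation pushes the current script onto a stack so that
    // backward navigation can pop it off again.
    function scriptNextSketch(array &$stack, $current, $next)
    {
        $stack[] = $current;       // remember where we came from
        return $next;              // the script to transfer control to
    }

    function scriptPreviousSketch(array &$stack)
    {
        if (empty($stack)) {
            return 'logon.php';    // assumed default when there is no history
        }
        return array_pop($stack);  // return control to the previous script
    }

    // Example: jump from a LIST task to an UPDATE task, then back again.
    $stack = array();              // would normally live in $_SESSION
    $now  = scriptNextSketch($stack, 'person_list.php', 'person_upd.php');
    $back = scriptPreviousSketch($stack);   // back to 'person_list.php'
    ```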

  18. What directory structure do I use?

    When I first developed my system of menu and security components I put all the files in a directory called MENU which I placed in the web server's DocumentRoot. Under this I created a series of subdirectories, one for each file type, to contain the various files or scripts required by that system. When I started to write applications which ran under my menu program I decided to put the files into a separate directory so as not to mix them up. I also decided that each application would have its own dedicated database instead of having a single database to contain everything. This now means that I regard the system as a whole as being a collection of interconnected subsystems where each subsystem can share the components of any other subsystem without the need for duplication. The directory structure for every subsystem resembles the following:

    • default
      • classes
      • reports
        • en
        • language2
        • language3
      • screens
        • en
        • language2
        • language3
      • sql
        • logs
        • mssql
        • mysql
        • oracle
        • postgresql
        • sqlsrv
      • text
        • en
        • language2
        • language3
      • xsl

    This means that every subsystem has its own database and all its files in its own subdirectory so that different sets of developers can work on different subsystems at the same time without getting in each other's way. It also makes it very easy to copy an entire subsystem into a single zip file so that you can install that subsystem onto another server. I have seen other frameworks which do not understand the concept of subsystems which means that all the various scripts are intermingled and jumbled up together. I'm glad I do not have to work with such primitive frameworks.

  19. Do I produce UML diagrams for each task?

    The longer I worked with a team of developers who insisted on drawing UML diagrams for each and every use case, the more exasperated I became, as it took longer for them to draw the diagrams than it took me to write the code which implemented those diagrams. These diagrams became more complicated than they needed to be and contained a lot of duplication, so as an avid follower of the KISS and DRY principles I wanted something simpler and better. I know that some developers struggle with words alone and sometimes need a pretty picture to clear the fog from their minds, so I looked for the simplest diagram possible which covered as many use cases as possible. I had already determined that every use case, regardless of its complexity, can be boiled down to performing one or more operations on one or more database tables, and that the only operations which can be performed on a database table are Create, Read, Update and Delete (CRUD), so it seemed obvious to me that all I needed to do was create a single set of UML diagrams which covered these four operations. These diagrams can be found in UML diagrams for the Radicore Development Infrastructure. Note that these diagrams clearly show the following:

  20. Building a Data Dictionary

    This again was a huge investment in time and effort which few other developers would make, but I wanted to automate the process by which I extracted each table's structure information out of the database schema and made it available to the PHP code. Doing it manually was tedious, boring and prone to errors, and as I knew that I would be constantly adding new tables to my application database I knew that in the long run the investment would pay dividends. What I basically did was to take an Extract-Load process and extend it into an Extract-Transform-Load process by creating a simple database called a Data Dictionary in the middle between the two ends. I then used the RADICORE framework to build the maintenance screens.

    The advantage of using an intermediate data store instead of just copying the contents of the database's INFORMATION_SCHEMA verbatim is that I can add whatever extra information I like to provide additional functionality. At first it was simple things like identifying which HTML control should be used for each column, and for controls like dropdown lists and radio groups which require a list of options the name of the variable which contains those options.

    The framework gives the application developer the ability, via the _cm_pre_getData() method, to extend the SQL query beyond the simple SELECT * FROM $this->tablename WHERE .... When dealing with a table which was the child in a parent-child relationship I often found myself having to write extra code to include one or more fields in the SELECT list, as in the following example:

    SELECT $this->tablename.*, parent.column1, parent.column2
    FROM $this->tablename
    LEFT JOIN parent ON (parent.primary_key=$this->tablename.foreign_key)
    WHERE ...

    The Data Dictionary already contained the basic information regarding each relationship, so I added the parent_field and calc_field columns which then enabled the framework, when constructing the SQL query, to automatically add the parent column(s) to the SELECT list and insert a LEFT JOIN. Yet another example of a little bit of investment in time and effort up front which pays for itself in the long term.
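    The idea can be sketched as follows. The array layout here is invented for this illustration; the real framework reads these details from the Data Dictionary's relationship data.

    ```php
    // Sketch: use the relationship details (including the parent_field list)
    // to extend the SELECT list and add the LEFT JOIN automatically.
    function buildSelect($tablename, array $parent_relations)
    {
        $select = "$tablename.*";
        $from   = $tablename;
        foreach ($parent_relations as $rel) {
            foreach ($rel['parent_field'] as $column) {
                $select .= ", {$rel['parent']}.$column";   // extra parent column
            }
            $from .= " LEFT JOIN {$rel['parent']}"
                   . " ON ({$rel['parent']}.{$rel['parent_key']}"
                   . "={$tablename}.{$rel['foreign_key']})";
        }
        return "SELECT $select FROM $from";
    }

    $sql = buildSelect('order_line', array(
        array('parent'       => 'product',
              'parent_key'   => 'product_id',
              'foreign_key'  => 'product_id',
              'parent_field' => array('product_name')),
    ));
    // SELECT order_line.*, product.product_name FROM order_line
    //     LEFT JOIN product ON (product.product_id=order_line.product_id)
    ```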

    When I built the function which extracted a table's data from the Data Dictionary and made it available to the PHP script I deliberately chose NOT to write it directly to the $fieldspec variable inside the class file. Instead I wrote it to a separate structure file which is loaded into the object using the standard loadFieldSpec() method within the class constructor. This is because the class file may have been amended to include code inside any of the "hook" methods, and I don't want to lose any of those amendments. This also means that at any time after creating the class file for a table I can change the structure of that table and make those changes available to the PHP code with nothing more than two button clicks:

    In this way I can also keep my software structure synchronised with the database structure which in turn means that I do not have to waste any time with that abomination called an Object Relational Mapper (ORM).
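    The split between generated metadata and hand-written code can be sketched like this. The file names, array layout and field details below are invented for illustration; the real structure files contain far more detail.

    ```php
    // --- person.dict.inc (regenerated whenever the table changes) -----------
    $structure = array(
        'tablename' => 'person',
        'fieldspec' => array(
            'person_id' => array('type' => 'string', 'size' => 8, 'required' => true),
            'surname'   => array('type' => 'string', 'size' => 40),
        ),
    );

    // --- person.class.inc (created once; only the hook methods are edited) --
    class Person
    {
        public $tablename;
        public $fieldspec;

        public function __construct(array $structure)
        {
            // load the generated metadata without touching any custom code
            $this->loadFieldSpec($structure);
        }

        public function loadFieldSpec(array $structure)
        {
            $this->tablename = $structure['tablename'];
            $this->fieldspec = $structure['fieldspec'];
        }
    }

    $person = new Person($structure);
    // $person->fieldspec now mirrors the current database structure
    ```

    Because the metadata lives outside the class file, regenerating it after a schema change can never overwrite any customised hook methods.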

  21. Creating a library of Transaction Patterns

    Once you start down the road of building software to automate manual tasks you sometimes find that with all the steps that you have already taken you can take just one more step and add yet another level of automation. And so it was with the RADICORE framework. I had already automated the construction of my Model components via my Data Dictionary, I had already built a reusable set of View, Controller and Data Access Objects, but there were still some manual steps which had become boring and tedious:

    Because I consider a task (user transaction or use case) to be nothing more than performing a set of operations on a database table, where each table is defined within the Data Dictionary and each set of operations is defined within a pre-written and reusable controller script, it turned out to be an easy procedure to automate. I added a new task to my Data Dictionary so that you can select a table, select a pattern, fill in a few fields, then press a button which will generate the scripts and update the MENU database all in one go.

    A full description of all these patterns is provided in Transaction Patterns for Web Applications.

Because the RADICORE framework contains so many pre-written and reusable components, and because I have automated as many of the tedious and boring manual procedures as I can, I have produced a framework which really does provide Rapid Application Development (RAD). If you don't believe me then consider the following - after building a brand new table in my database I can create a standard family of forms to maintain and view the contents of that table in just 5 minutes without having to write a line of code - no PHP, no HTML, no SQL.

Practices which I do not follow

It was not until several years after I had got my framework up and running that some so-called "experts" in the field of OOP informed me that everything I was doing was wrong, and because of that my work was totally useless. When they said "wrong" what they actually meant was "different from what they had been taught" which is not the same thing. The fact that they were taught one way to do things does not mean that it was the ONLY way, the one TRUE way, and that any other way is automatically wrong. When I examined some of these principles and practices more closely I discovered that a large number of them were based on experiences with the Smalltalk language which was built for educational use by a bunch of academics in the 1970s. I have never seen any evidence that this language has ever been used to build database applications for the enterprise, so a lot of the code samples and programming techniques which I have seen are totally irrelevant. When you also consider the large number of different OO languages which have been created in the last 45+ years, each of which was created by people who had a different interpretation of how code should be written, you should realise that not all of these principles are relevant or even practical in all of these languages.

I chose PHP as my new development language as it was designed specifically for building applications with HTML forms at the front-end and a relational database at the back-end. I liked the simple, easy to learn syntax, and my experiments proved that it could do all that I needed it to do. The decisions that I made when constructing my framework were based on my decades of prior experience, mixed with common sense and intuition, which made me follow practices which had proved to be sound.

Below is a list of "best practices" which I refuse to follow simply because, in my humble opinion, they are not actually "best" at all.

  1. I don't model the real world.

    I do nothing but write database applications for businesses, which are also known as enterprise applications, and this type of software does not interact with objects in the real world, it interacts with objects in a database, and these objects are called tables. The sole purpose of these applications is to put data into and get data out of a database, which is why they were originally called Data Processing Systems. It does not matter that each object in the real world has a totally unique set of properties and operations; when data about those objects is stored in a database it is reduced to a set of tables and columns upon which the only operations that can be performed are Create, Read, Update and Delete (CRUD). It was also obvious to me, after decades of experience, that every user transaction follows the same basic pattern - it performs one or more CRUD operations on one or more tables and involves the movement of data from the GUI to the database, and then from the database to the GUI. While this basic pattern can be implemented using common code, provision must also be made for the addition of custom code within specific points of the processing cycle to deal with extra business rules.

  2. I don't use a separate methodology to design my software.

    I know from years of experience that the most important part of a database application is the database design, after which you can then structure your software around that design. Get the database structure right first, then write the software to follow that structure. If your database design is wrong then it will make it more difficult to write the software, or, as Eric S. Raymond put it in his book "The Cathedral and the Bazaar":

    Smart data structures and dumb code works a lot better than the other way around.
    The idea of using different and incompatible design methodologies for the database and the software strikes me as being questionable. The idea of deliberately creating two parts of the application which are incompatible, then getting round this problem by introducing another piece of software known as an Object Relational Mapper (ORM) strikes me as being incomprehensible. As a devout follower of the KISS Principle I would never dream of doing it that way, not in a million years.

    My framework is built around a combination of the 3-Tier Architecture and the Model-View-Controller (MVC) Design Pattern which means that all application code, all business rules, are confined to the Business/Domain layer, or the Model in MVC. The components in the remaining Presentation and Data Access layers are completely application-agnostic in that they do not contain any business rules or any other knowledge of the application, which means that I have been able to implement them as pre-built and reusable services which need no further design. Every object in the Business/Domain layer is responsible for one object in the database, so because each object "IS-A" database table I created an abstract superclass to hold common behaviour and characteristics from which I can create many concrete subclasses which only need hold the behaviour and characteristics which are specific to one table. Among this information is the table's structure which is extracted directly from the database schema, which means that I can keep my software structure completely synchronised with my database structure.

  3. I don't create deep class hierarchies.

    In OO theory class hierarchies are the result of identifying "IS-A" relationships between different objects, such as "a CAR is-a VEHICLE", "a BEAGLE is-a DOG" and "a CUSTOMER is-a PERSON". This causes some developers to create separate classes for each of those types where the type to the left of "is-a" inherits from the type on the right. This is not how such relationships are expressed in a database, so it is not how I deal with it in my software. Each of these relationships has to be analysed more closely to identify the exact details. Please refer to Using "IS-A" to identify class hierarchies for more details on this topic.

  4. I don't use object interfaces.

    By this I mean the use of the keywords interface and implements.

    PHP4 did not contain support for interfaces, so I did not know that such things existed. I later read that some developers claimed they were an "important" element in OOP, but after investigating them I concluded that they were actually "irrelevant" as they provided zero benefit from the effort involved in changing the code to use them. When I tried to find out where the idea of interfaces originated I was surprised to discover that they were created decades ago to deal with a problem in statically typed languages which could not provide polymorphism without inheritance. PHP is dynamically typed and does not have this problem, so the use of object interfaces is actually redundant. This topic is discussed further in Object Interfaces.

    Not only are interfaces redundant as their reason for being no longer exists, they have actually been superseded by abstract classes which provide genuine benefits:

    This topic is discussed further in The difference between an interface and an abstract class.

  5. I don't design classes to deal with associations.

    Objects in the real world, as well as in a database, may either be stand-alone, or they have associations with other objects which then form part of larger compound/composite objects. In OO theory this is known as a "HAS A" relationship where you identify that the compound object contains (or is comprised of) a number of associated objects. There are several flavours of association:

    Please refer to Using "HAS-A" to identify composite objects for more details.

  6. I don't use object composition

    Shortly after I released my framework as open source I received a complaint from someone asking "Why are you using inheritance instead of object composition?" My first reaction was "What is object composition and why is it better than inheritance?" Eventually I found an article on the Composite Reuse Principle (CRP) but it did not explain the problem with inheritance, nor did it explain why composition was better. Those two facts alone made me conclude that the whole idea was not worth the toilet paper on which it was printed, so I ignored it. Please refer to Use inheritance instead of object composition for more details on this topic.

  7. I don't need to design any Model classes.

    Each table in the database has its own Model class in the Business/Domain layer, and I don't need to spend time working out what properties and methods should go in each class as every one follows exactly the same pattern:

    I quickly realised when coding the class for my second database table that there was much in common with the code I had written for the first database table, and I immediately recognised that having the same code duplicated in every other table class would be undesirable as it violates the DRY principle. Question: How do you solve this problem of code duplication in OOP? Answer: Inheritance. I built an abstract class which could then be inherited by every table class, and moved as much code as I could from each table class to the abstract class. At the end of this exercise I had removed every method out of each table class until there was nothing left but the constructor. This meant that the abstract class had code which dealt with an unspecified table with an unspecified structure while it was the table class which identified a specific database table and its structure, thus turning the abstract into the concrete.

    When it came to inserting custom code within each table class I followed the examples I had encountered in UNIFACE and a brief exploration into Visual Basic. In both of these languages you could insert into your object a function with a particular name and the contents of that function would automatically be executed at a certain point in the processing cycle. This told me that the runtimes for both those languages had code which looked for functions with those names, and either executed them or did nothing. How do you duplicate this functionality using OOP? Define special methods in the abstract class which are devoid of any code, then allow the developer to override each of those methods in the subclass. Easy Peasy Lemon Squeezy. It wasn't until several years later that I discovered I had actually implemented the Template Method Pattern.

  8. I don't create a separate method for each use case.

    I was never trained to use Domain Driven Design (DDD) to design the objects in my Business/Domain layer which is precisely why I do not repeat the mistakes that it advocates. I started to read it to find out if I was missing something important, but I got as far as the statement "create a separate method for each use case" when the alarm bells started ringing in my ears and a huge red flag started waving in front of my eyes. If I were to do such a foolish thing I would be closing the door to one of the most useful parts of OOP, that of polymorphism. As an example let's assume that I have objects called PRODUCT, CUSTOMER and ORDER and I want to create a new record for each of them. Under the rules of DDD I would have to do the following:

    require 'classes/customer.class.inc';
    $dbobject = new customer;
    $result = $dbobject->insertCustomer($_POST);

    require 'classes/product.class.inc';
    $dbobject = new product;
    $result = $dbobject->insertProduct($_POST);

    require 'classes/order.class.inc';
    $dbobject = new order;
    $result = $dbobject->insertOrder($_POST);

    You should notice that both the class name and the method name are hard-coded, which means that each of those 3 blocks of code would have to be in a separate controller. Instead I do the following:

    $table_id = 'customer';
    require "classes/$table_id.class.inc";
    $dbobject = new $table_id;
    $result = $dbobject->insertRecord($_POST);

    $table_id = 'product';
    require "classes/$table_id.class.inc";
    $dbobject = new $table_id;
    $result = $dbobject->insertRecord($_POST);

    $table_id = 'order';
    require "classes/$table_id.class.inc";
    $dbobject = new $table_id;
    $result = $dbobject->insertRecord($_POST);

    In this arrangement it is only the first of the 4 lines in each of these blocks that would have to be hard-coded. In my framework this is done in a separate component script. This script will then activate the same controller script which calls the insertRecord() method on whatever object it is given. If you look you should see that the last 3 lines of code in each of those blocks are identical, which means that you can define them in a single object which you can reuse as many times as you like.

    If you are familiar with the MVC design pattern you should know that the purpose of the Controller can be described as follows:

    A controller is the means by which the user interacts with the application. A controller accepts input from the user and instructs the model and view to perform actions based on that input. In effect, the controller is responsible for mapping end-user action to application response.

    As a simple example a user may request a task which implements the use case to "create a customer" while the controller translates this into "call the insertRecord() method on the customer object". By changing the hard-coded name of the object to a variable which is injected at runtime I now have a controller which can call the insertRecord() method on any object in my application.

    If instead of using shared method names I used unique names I would be removing any opportunities for polymorphism, which would mean no dependency injection, which would therefore mean less opportunity for having reusable objects like my controller scripts. OOP is supposed to increase reusability, so using a method which decreases reusability seems like anti-OOP to me.

    My approach is the result of my having built hundreds of user transactions in dozens of different applications in several different languages and spotting one common factor - regardless of the overall effect of a user transaction it is always based on the same foundation - it performs one or more of the CRUD operations on one or more database tables and only incidentally executes specific business rules. Instead of having a separate method for each use case (aka unit of work, user transaction or task) I do the following:

  9. I don't create a separate class property for each column.

    While learning PHP I discovered the $_GET and $_POST variables which made data sent from the client's browser available to the PHP script on the server. I also discovered that when reading data from the database the result was delivered as an indexed array of associative arrays. I was quite impressed with PHP arrays as they are far more flexible and powerful than what was available in any of my previous languages, so imagine my surprise when all the sample code which I saw had a separate class property for each column, then a separate getter and setter for each of those columns. I asked myself a simple question:

    If the data coming into an object from the Presentation layer is given as an array, and the data coming in from the Data Access layer is given as an array, is there a good reason to split the array into its component parts for its passage through the Business layer?

    With a little bit of experimentation I discovered that it was very easy within a class to deal with all that column data in an array, so I saw absolutely no advantage in having a separate property for each column. There is no effective difference between the following lines of code:
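    A minimal sketch of the comparison (the class and column names are illustrative):

```php
<?php
// Holding a column's value in a shared array versus a named property.
class Person {
    private $fieldarray = array();   // one property for ALL columns
    private $surname;                // one property for ONE column

    public function setViaArray($value)    { $this->fieldarray['surname'] = $value; }
    public function setViaProperty($value) { $this->surname = $value; }

    public function getViaArray()    { return $this->fieldarray['surname']; }
    public function getViaProperty() { return $this->surname; }
}

$person = new Person;
$person->setViaArray('Marston');
$person->setViaProperty('Marston');

// Both mechanisms store and retrieve exactly the same value.
var_dump($person->getViaArray() === $person->getViaProperty()); // bool(true)
```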


    Not only would there be no advantage, I quickly identified a series of disadvantages which would make the writing of all that extra code a complete waste of time:

    I do not need to provide answers to these questions as my practice of using a single $fieldarray property to hold all data for that table does not cause any problems. Not only that, it also provides for loose coupling which is one of the characteristics of good software design. The concept of Coupling describes how modules interact with one another. Tight coupling is considered to be bad as it forces a ripple effect where changes in one module cause corresponding changes in other modules. As an example, take the following ways in which data can be inserted into and extracted from an object:

    1. As separate arguments on a method call, as in:
      $result = $object->method($column1, $column2, $column3, ...);
    2. As separate properties within the class, each with its own setter and getter, as in:
      class foobar {
          var $column1;
          var $column2;
          var $column3;
          function setColumn1 ($column1) {
              $this->column1 = $column1;
          }
          function getColumn1 () {
              return $this->column1;
          }
          function setColumn2 ($column2) {...}
          function getColumn2 () {...}
          function setColumn3 ($column3) {...}
          function getColumn3 () {...}
      }
      $object = new foobar;
      $column1 = $object->getColumn1();
    3. As a single array, as in:
      $object = new $table_id;
      $fieldarray = $object->insertRecord($_POST);
      $fieldarray = $object->getData($where);
      $fieldarray = $object->getFieldArray();

    Now ask yourself this question: If I were to add or remove a column from a database table, how much effort would be required to make the software deal with that change? If you look at option 1 above you will see that I would have to change the method signature, which would also require changing every place where that method is called. Option 2 above would require even more work as each column has its own pair of getters and setters. Option 3 requires no work at all as any changes to the contents of the array do not require any changes to the method signature. Options 1 and 2 have a ripple effect while option 3 does not.

    What happens if the array contains invalid data? That is automatically taken care of by the framework when it calls the validation object. If I ever change the structure of a table all I have to do is reimport the revised structure into my Data Dictionary then run the export process to recreate the table structure file.

    By using an array I can also tell the difference between a column being present with a NULL value and a column not being present at all.
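    For example (the column names are illustrative), PHP's array_key_exists() function makes this distinction directly:

```php
<?php
// A hypothetical row where 'middle_name' is present but NULL,
// while 'nickname' was not selected at all.
$fieldarray = array(
    'first_name'  => 'John',
    'middle_name' => null,
);

// array_key_exists() distinguishes "present with NULL" from "absent",
// something isset() cannot do as it returns FALSE for a NULL value.
var_dump(array_key_exists('middle_name', $fieldarray)); // bool(true)
var_dump(isset($fieldarray['middle_name']));            // bool(false)
var_dump(array_key_exists('nickname', $fieldarray));    // bool(false)
```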

    I can also deal with any number of columns which are returned from a SELECT query, even columns from other tables, as I can change the contents of the array at will without affecting any method signatures or class properties. There is no ripple effect.

    It should also be noted that all the methods in the abstract class, both variant and invariant, pass the $fieldarray variable around as both an input and an output argument. In this way each method knows precisely what data it has to work with, and no key in the array needs to be defined as a separate class property.

  10. I do not create separate Controllers for each Model.

    Some junior developers are taught that the six components in my family of forms constitute a single use case. That is what I was taught in my COBOL days. However, as I worked on more and more applications where the use cases got bigger, more complex and more numerous, I realised that the task of writing and maintaining the code was becoming more and more difficult. In order to make the programs simpler I had to make them smaller, and in order to do this I came to the conclusion that each member in that forms family should be treated as a separate use case in its own right and not part of a bigger use case. I knew that it would result in a larger number of programs, but I considered that it would be worth it in the long run - and so it came to pass. Some of my colleagues said that it would result in the same code being duplicated in many programs, but they obviously did not know how to create reusable modules.

    Having a separate module as a controller for each of those use cases was indeed a step in the right direction. Not only do I have a separate Controller for each member of that forms family, each of those Controllers can be used with any Model in the application. I do not have to have a separate version of a Controller for each Model as the Controllers have been specifically built to operate on any Model in the entire application.

    Splitting a compound use case into individual tasks also made it much easier to implement Role Based Access Control as all the logic for checking a user's access to a task was moved out of the task itself and into the framework. As a task could only be activated by pressing its button, either on the menu bar or the navigation bar, it became easy to hide the buttons to those tasks to which the user did not have permission to access.

  11. I do not use Design Patterns

    When I started working with PHP I did not follow any design patterns for the simple reason that I did not know that they existed. I kept hearing about them so I bought the GoF book just to see what all the fuss was about. I was not impressed. Instead of describing implementations that could be reused it simply described designs which you had to implement yourself. Most noticeable by its absence was my favourite pattern, the 3-Tier Architecture. Instead there was a collection of patterns which dealt with situations which I had never encountered in my experience of writing enterprise applications. It appeared to me that these patterns had been written for compiled languages driving bit-mapped displays, not for enterprise applications with HTML at the front end and an SQL database at the back end. As I could not find anything of interest to me I put the book on a shelf where it lay, unread and gathering dust, for years.

    While some people seemed to think that design patterns were the best thing since sliced bread I began to notice that others held an opposite opinion, as shown in Design Patterns - a personal perspective. The GoF book itself actually contains the following caveat:

    Design patterns should not be applied indiscriminately. Often they achieve flexibility and variability by introducing additional levels of indirection, and that can complicate a design and/or cost you some performance. A design pattern should only be applied when the flexibility it affords is actually needed.

    In the article How to use Design Patterns there is this quote from Erich Gamma:

    Do not start immediately throwing patterns into a design, but use them as you go and understand more of the problem. Because of this I really like to use patterns after the fact, refactoring to patterns.

    One comment I saw in a news group just after patterns started to become more popular was someone claiming that in a particular program they tried to use all 23 GoF patterns. They said they had failed, because they were only able to use 20. They hoped the client would call them back so that maybe they could squeeze in the other 3.

    Trying to use all the patterns is a bad thing, because you will end up with synthetic designs - speculative designs that have flexibility that no one needs. These days software is too complex. We can't afford to speculate what else it should do. We need to really focus on what it needs. That's why I like refactoring to patterns. People should learn that when they have a particular kind of problem or code smell, as people call it these days, they can go to their patterns toolbox to find a solution.

    This sentiment is echoed in the article Design Patterns: Mogwai or Gremlins? by Dustin Marx:

    The best use of design patterns occurs when a developer applies them naturally based on experience when need is observed rather than forcing their use.
  12. I do not use a Front Controller.

    When I began coding with PHP I followed the technique which I had seen in all the code samples I found and created a separate script for each web page, and then put the location of this script into my browser's address bar. When I was told by a colleague that I should be using the Front Controller pattern I asked "Why?" His response was: Because all the big boys use it, so if you want to become a big boy like them then you must use it too. I thought his answer was total garbage, which is why he is now an ex-colleague. I asked myself the question "If running a web page in PHP is so easy then why would someone make it more complicated than it need be by inventing such a ridiculous procedure?" Then I remembered how COBOL programs worked. While a compiled program may contain a large number of subprograms in a single file it is not possible to execute a particular subprogram - you must RUN/EXECUTE the program file and then instruct it to CALL the relevant subprogram. This is done by passing an argument on the run command such as action=foobar, then having a piece of code called a router which calls the subroutine which is responsible for that action. It seemed that a lot of programmers who had started to use PHP had previously used a compiled language where a front controller was a necessity and assumed, quite wrongly, that it was the only way, the proper way, that it should be done. What idiots!

    PHP is not a compiled language, so it does not need a front controller and a router. I can break down a large application into a huge number of small components, with a separate script for each component, and all I have to do is insert the script name into the URL of the browser's address bar. Navigation from one script to another can then be performed with the standard PHP header() function, which issues a 'Location' redirect. This simple technique is supported by Rasmus Lerdorf who, in his article The no-framework PHP MVC framework, said the following:

    Just make sure you avoid the temptation of creating a single monolithic controller. A web application by its very nature is a series of small discrete requests. If you send all of your requests through a single controller on a single machine you have just defeated this very important architecture. Discreteness gives you scalability and modularity. You can break large problems up into a series of very small and modular solutions and you can deploy these across as many servers as you like.
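    A sketch of this technique, with illustrative script and helper names that are my own invention rather than framework code:

```php
<?php
// Each task is a separate script, so "navigation" is nothing more than
// an HTTP redirect from one discrete script to the next.

// Build the target URL for the next task's script.
function buildTaskUrl($script, array $params = array()) {
    return $params ? $script . '?' . http_build_query($params) : $script;
}

// A task script simply issues a redirect when it wants to activate
// another task - no central router or front controller is involved.
function activateTask($script, array $params = array()) {
    header('Location: ' . buildTaskUrl($script, $params));
    exit;
}

// e.g. activateTask('person_enquiry.php', array('person_id' => 123));
```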
  13. I do not use an Object Relational Mapper (ORM)

    When I started coming across articles which promoted the use of Object Relational Mappers my immediate reaction was WTF! The idea that a developer would use one methodology to design the software, another methodology to design the database and then, on finding that the two were incompatible, create an ORM as the "solution" to the problem struck me as the worst invention since the square peg for a round hole. In my days as a COBOL programmer I was exposed to several different methodologies for designing the software, but I quickly learned that the best way to design a database was to follow the rules of Data Normalisation. Building software with an incompatible structure proved to be problematical, but these problems were eliminated after I went on a Jackson Structured Programming course which taught the following lesson:

    Start with the data structures of the files that a program must read as input and produce as output, and then produce a program design based on those data structures, so that the program control structure handles those data structures in a natural and intuitive way.

    The benefits of handling the two structures in a natural and intuitive way became immediately apparent to me, so the idea of ignoring this valuable lesson was not something that I was willing to entertain. That is why when I redeveloped my framework in an OO-capable language I started with the database design and then wrote my software to follow this design with the aim of avoiding any collisions or incompatibilities.

    Imagine my surprise when later I was told by my critics that my approach was wrong simply because I was not following what was being taught in Object-Oriented Design (OOD), Domain Driven Design (DDD) and Object-Oriented Programming (OOP). As I read those documents I saw nothing but problems being generated instead of solutions, so I chose to ignore them completely and stick with what worked. My rationale for following such an unorthodox and heretical approach can be summed up in that ancient but invaluable saying Prevention is better than cure. Instead of using two different methodologies which create a problem in the shape of incompatible structures, then using an ORM as a "cure" for that problem, it would be better not to create that problem in the first place. This means NOT using two different methodologies and NOT producing two structures that are incompatible. I keep my software structure synchronised with my database structure using the following steps:

    1. Design the database following the rules of Data Normalisation.
    2. Import each table's details into my Data Dictionary directly from the database schema.
    3. Run the export process to produce a table structure file and a table class for the Business layer.
    4. Whenever a table's structure changes, re-run the import and export to bring the software back into line.

    I have absolute confidence in the rules of Data Normalisation, so that will always be my starting point. When I looked at the code which I had already produced using basic but sound implementations of Encapsulation, Inheritance and Polymorphism and saw how much damage would be done by butchering it to carry out what was being taught in OOD, DDD and OOP I decided to take the pragmatic approach and stick with what worked in practice instead of what worked in theory, especially when I saw nothing but faults in that theory. More of my thoughts on this subject can be found in Object-Relational Mappers are EVIL!

  14. I do not use Value Objects

    One of the critics of my framework complained that it wasn't 100% object oriented. When I asked for an explanation he said In the world of OOP everything is an object, so if you have something which is not an object then it's not 100% object oriented. He pointed out that "proper" OO languages had support for value objects, so if I was using a language which did not support value objects then my work could never be 100% object oriented and therefore unacceptable to OO purists. I choose to ignore such an argument as the idea that everything is an object was never part of the original definition of what makes a language object oriented; it was one of those later additions by idiots who excel at taking a simple idea and making it more complicated than it need be.

    The PHP language does not support value objects, the proof being that they are not mentioned anywhere in the manual. This has not stopped several developers from creating their own libraries of value objects, but I have no intention of using any of them. Even if they became part of the official PHP language I would still not use them. Why? Because they are an artificial construct which do not model anything which exists in the real world where every value is a scalar or a primitive. Converting such values into objects within the software would require a great deal of effort for absolutely no benefit for the simple reason that those value objects do not exist in the outside world.

    As an example I shall take a blog post I came across recently which stated that currency values should be defined as value objects so that the value and its currency code could be kept together, meaning that the currency code could not be accidentally changed without a corresponding change in the value. While this sounds like a good idea in theory it falls flat in practice. Why? Because value objects do not exist in either the GUI or the database. In an HTML form you cannot insert a value object which has two values, you have a separate field for each value. You do not enter "125.00 USD" into a single field in the GUI, you enter "125.00" and "USD" into separate fields. You do not store "125.00 USD" in a single column in the database, you store "125.00" and "USD" in separate columns. The notion of converting these two separate values into an object while they exist in the Business layer, then converting them back into separate values before they are passed to the Presentation and Data Access layers, would be all cost and no benefit, so would automatically fail a cost-benefit analysis. I don't know about you, but in my world the result "zero benefit" equates to "not a snowball's chance in hell".
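    To illustrate (the column names are hypothetical), the two values simply travel through the Business layer as two entries in the field array, matching the two form fields and the two database columns:

```php
<?php
// A money amount passes through all three layers as two separate
// scalars, exactly as it appears in the HTML form and in the database.
$fieldarray = array(
    'unit_price'    => '125.00',
    'currency_code' => 'USD',
);

// No conversion to and from a Money value object is required at the
// layer boundaries - the array is passed through unchanged.
```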

How using OOP increased my productivity

Productivity is defined as:

a ratio between the output volume and the volume of inputs. In other words, it measures how efficiently production inputs, such as labour and capital, are being used in an economy to produce a given level of output.

In the world of software development the usual measurements are time and money, i.e. how long will it take to complete and how much will it cost? After having worked for several decades in software houses where we competed for development contracts against rival companies, I knew that the client would always look more favourably on the one which came up with the cheapest or quickest solution. As the biggest factor in software development is the cost of all those programmers, it is essential to get those programmers producing effective software in the shortest possible time and therefore at the lowest cost. The way to cut down on developer time is to reuse as much code as possible so that there is less code to write and less code to test. I became quite proficient at creating libraries of reusable software, and when I upgraded this to build a fully-fledged framework on one particular project my boss was so impressed that he made it the company standard on all future projects. When the company switched languages from COBOL to UNIFACE I redeveloped that framework to take advantage of the new features offered by that language and reduced development times even more. When I decided to make the switch to the development of web applications using PHP I was convinced that I could reduce my development times even further. Although this was my first incursion into the world of OOP it seemed to be the right decision as it promised so much:

The power of object-oriented systems lies in their promise of code reuse which will increase productivity, reduce costs and improve software quality.
OOP is easier to learn for those new to computer programming than previous approaches, and its approach is often simpler to develop and to maintain, lending itself to more direct analysis, coding, and understanding of complex situations and procedures than other programming methods.

As far as I am concerned any use of an OO language that cannot be shown to provide these benefits is a failure. Having been designing and building database applications for 40 years using a variety of different programming languages I feel well qualified to judge whether one language/paradigm is better than another. By "better" I mean the ability to produce cost-effective software with more features, shorter development times and lower costs. Having built hundreds of components in each language I could easily determine the average development times:

How did I achieve this significant improvement in productivity? Fortunately I did not go on any formal training courses, so I was not taught a collection of phony best practices. Instead I used my previous experience, intuition, common sense and my ability to read the PHP manual to work out for myself how to write the code to get the job done, then move as much code as possible into reusable modules. I already knew from previous experience that developing database applications involved two basic types of code:

This leads to two methods of developing your application:

The RADICORE framework makes use of the 2nd method. Of the four classes of object that together form a task (use case, user transaction or unit of work) all the Controllers, Views and Data Access Objects are pre-built and come supplied with the framework. This just leaves the Model components which exist in the Business/Domain layer. These can be generated for you from within the Data Dictionary after importing table details directly from the database schema. Using the same Data Dictionary you can then build basic tasks based on any of the Transaction Patterns. These tasks will have all you need to insert, read, update and delete records in the database table, which then leaves the developer with only one task - insert the business rules into the relevant "hook" methods which have been built into the abstract table class and which can be overridden in every concrete table class. In this way the application developer need spend minimum time dealing with the low-value background code and maximum time on the high-value business rules.
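The hook mechanism can be sketched as follows; the class, method and rule shown here are simplified illustrations rather than the framework's actual code:

```php
<?php
// The invariant steps live in the abstract class while each concrete
// table class overrides only the hooks it needs (Template Method pattern).
abstract class AbstractTable {
    // Invariant method called by a Controller (greatly simplified).
    public function insertRecord(array $fieldarray) {
        $fieldarray = $this->preInsertHook($fieldarray);  // hook
        // ... validation and the actual INSERT would happen here ...
        return $this->postInsertHook($fieldarray);        // hook
    }
    // Hooks do nothing by default.
    protected function preInsertHook(array $fieldarray)  { return $fieldarray; }
    protected function postInsertHook(array $fieldarray) { return $fieldarray; }
}

// A concrete table class supplies only its own business rules.
class CustomerTable extends AbstractTable {
    protected function preInsertHook(array $fieldarray) {
        if (empty($fieldarray['status'])) {
            $fieldarray['status'] = 'NEW';  // an illustrative business rule
        }
        return $fieldarray;
    }
}

$customer = new CustomerTable;
$result   = $customer->insertRecord(array('name' => 'Acme'));
```

The developer never touches insertRecord() itself; all the custom behaviour goes into the hooks.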

I have been criticised by many developers for not following their ideas on what constitutes "best practices", but I consider their rules to be anything but the best, so I ignore them. I am a pragmatist, not a dogmatist, which means that I judge whether my methods are successful or not based on the results which they achieve. A dogmatist, on the other hand, will insist on blindly following a set of rules, or a particular interpretation of those rules, and automatically assume that their results will be acceptable. This to me is a false assumption and leads to the creation of nothing more than hordes of Cargo Cult Programmers. The aim of the game is not writing code which is acceptable to other programmers, it is writing code which is acceptable to the paying customer. If I can achieve significantly higher levels of productivity by breaking someone's precious rules then how can they possibly claim that their rules are better than mine? Any methodology which fulfills the promises made for OOP can be regarded as excellent while everything else can be regarded as excrement, poop, faeces, dung or crap. When implemented properly OOP is supposed to increase code reuse and decrease code maintenance, but I have yet to see any implementation which produces anywhere near the same levels of reusability as the RADICORE framework. If their results are inferior to mine, by what measurement can they claim that their methods are superior to mine?

If you think that my claims of increased productivity are false and that you can do better with your framework and your methodologies then I suggest you prove it by taking this challenge. If you cannot achieve in 5 minutes what I can, then you need to go back to the drawing board and re-evaluate your entire methodology.

From personal project to open source

Also in May 2004 I published A Role-Based Access Control (RBAC) system for PHP which described the access control mechanism which I had built into my framework. This provoked a response in 2005 when I received a query from the owner of Agreeable Notion who was interested in the functionality which I had described. He had built a website for a client which included a number of administrative screens which were for use only by members of staff, but he had not included a mechanism whereby access to tasks could be limited in any way. He had also looked at my Sample Application and was suitably impressed. Rather than trying to duplicate my ideas he asked if he could use my software as a starting point, which is why in January 2006 I released my framework as open source under the brand name of RADICORE.

Unfortunately he spent so much time in asking me questions on how he could get the framework to do what he wanted that he decided in the end to employ me as a subcontractor to write his software for him. He would build the front-end website while I would build the back-end administrative application. I started by writing a bespoke application for a distillery company which I delivered quite quickly, which impressed both him and the client. Afterwards we had a discussion in which he said that he could see the possibility of more of his clients wanting such administrative software, but instead of developing a separate bespoke application for each, which would be both time consuming and costly, he wondered if I could design a general-purpose package which would be flexible enough that it could be used by many organisations without requiring a massive amount of customisation. Thus was born the idea behind TRANSIX, which was a collaboration between my company RADICORE and Agreeable Notion.

I knew from past experience that the foundation of any good database application is the database itself, and that you must start with a properly normalised database and then build your software around this structure. This knowledge came courtesy of a course in Jackson Structured Programming which I took in 1980. I had recently read a copy of Len Silverston's Data Model Resource Book, and I could instantly see the power and flexibility of his designs, so I decided to incorporate them into the TRANSIX application. I started by building the databases for the Party, Product, Order, Inventory, Shipment and Invoice subsystems, then built the software to maintain those databases. The framework allowed me to quickly develop the basic functionality of moving data between the user interface and the database so that I could spend more time writing the complex business rules and less time on the standard boilerplate code. I started building this application in 2007, and the first prototype was ready in just 6 man-months. If you do the maths you will see that this meant that I took an average of only one month each to develop those subsystems. It took a further 6 months to integrate this into a working website for an online jewellery company as I had to migrate all the existing data from its original database into the new database, then rewrite the code in the front-end website to access the new database instead of the old one. This went live in May 2008.

As well as developing application subsystems with the framework I also added several subsystems which became part of the framework. These were:

Building a customisable ERP package

While the RADICORE framework is open source and can be downloaded and used by anyone, the TRANSIX application which I developed was always proprietary and designed as a software package for which users could only purchase licences. Anyone who has ever developed a software package will tell you that although it can be designed to provide standard functionality that should be common to many organisations, there will always be those organisations who have non-standard requirements that can only be satisfied with custom code. What I did not want to do was insert any of this custom code into the same place as the core package code, so I designed a mechanism whereby any custom code could be kept in a separate custom-processing directory which is further subdivided by a separate directory for each project code. Each customer has his own project code so that his customisations can be kept separate from anyone else's customisations as well as being kept separate from the core package code. Because the abstract table class, which is inherited by every concrete table class, has an instance of the Template Method Pattern for every method called by a Controller on a Model, it was easy to insert some code in front of every call to a variant method to ask the question "Does this project have any custom code for this method?" and if the answer is "yes" then it will call that custom variant method instead of the standard variant method. In the case of screen structure files or report structure files each standard file in the standard directory can be replaced with an alternative version in a custom processing directory.
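The lookup can be sketched like this (the directory layout and function name are illustrative assumptions, not the actual framework code):

```php
<?php
// Before a variant method runs, ask whether this project has a custom
// version of it and, if so, use that instead of the standard version.
function getVariantScript($project_code, $method) {
    $custom = "custom-processing/{$project_code}/{$method}.inc";
    // Fall back to the core package code when no custom version exists.
    return file_exists($custom) ? $custom : "std/{$method}.inc";
}

// With no custom file on disk the standard script is selected.
echo getVariantScript('ACME', 'preInsertHook');
```

The core package code is never touched, so each customer's customisations survive an upgrade of the package.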

My collaboration with Agreeable Notion and the TRANSIX application ceased in 2014 as they could not find enough clients. Their business model involved finding someone who wanted a new front-end eCommerce site and offering TRANSIX as the supporting back-end application. At about that time I had begun a conversation with a director of Geoprise Technologies, a USA-based software company with offices in the Far East. They had already used my open source framework to build one of their own applications, and when I mentioned that I had already built an entire ERP application called TRANSIX they expressed an interest as they operated in the same business area. One of their directors flew into London so that I could give him a demonstration of what I had produced, and he was impressed enough to suggest that we form a partnership so that his company could sell the application under the brand name GM-X. This was quickly agreed, and in a short space of time we had our first client, a large aerospace company.

Since that time I have made quite a few improvements to the framework as well as adding new subsystems to the ERP application. This is now a multi-module application where each client only needs to purchase a licence for those modules which they actually want to use. As it is a web application which runs on a web server, which could either be local or in the cloud, there is no fee per user, just a single fee per server regardless of the number of users. This multi-module application now consists of the following modules/subsystems:

This ERP package also has the following features as standard which are vital to software which is used by multi-national corporations:

Levels of customisation

Anybody who has ever built a software application as a package, which is akin to "off the shelf" rather than "bespoke", does so in the hope of selling copies of that package to multiple customers, each at a lower price than a single customer would pay for a bespoke solution, yet still making a profit at the end of the day. When customers are looking for a software application they would rather pay a lower price for a package than an enormous price for a bespoke solution. While a software package is designed to follow common practices which should be familiar to most organisations, there will always be those potential customers who have their own way of doing things and discover that the package is not quite a 100% fit, in which case there are two choices - either the organisation changes its practices to fit the package, or the package is customised to fit the organisation. If customisations are required then how easily can they be developed, and at what cost? Fortunately the RADICORE framework has been built in such a way that customisations to the GM-X package can be implemented relatively quickly and cheaply. This has been achieved in the following ways:

Because RADICORE was designed and developed to be a Rapid Application Development framework (hence the RAD in RADICORE) it means that adding new subsystems into the standard package follows exactly the same procedure as adding a bespoke subsystem to deal with a client's non-standard requirements:

Maintaining the unmaintainable

I have been told by my critics that because I am not following their ideas on what constitutes "best practices" my work must surely be bad, and if it's bad then it must surely be unmaintainable. As is usual their theories fall short when it comes to practice. As well as being the author of the framework I am also the author of the ERP application that was built using this framework, and sometimes a new requirement comes along which would best be served by enhancing the framework instead of adding to the application code. Among the changes I have made to the framework are:

Another recent change was made to aid the customisation abilities of the GM-X package in the form of User Defined Fields (UDF). For some time it was felt that some customers might want to record more pieces of data than the core tables allowed, so over a period of several years I added extra tables called XXX_EXTRA_NAMES and XXX_EXTRA_VALUES (where 'XXX' identifies the original core table). Each of these has its own set of maintenance tasks.

While this arrangement worked, it did mean that the extra values were displayed on a separate screen rather than alongside the standard values. It was also not possible to perform a search using any of these extra values. One of my business partners, who uses this software himself, said that it would be nice if the extra values could be automatically mixed in with the standard values so that the user did not have to keep jumping to and from other screens. After he promised to buy me a beer I decided to look into the possibility and work my magic. After two weeks this is what I achieved:

By building all this functionality into the framework, if at any time in the future I add a pair of XXX_EXTRA_NAMES and XXX_EXTRA_VALUES tables to any of the core tables in the application, their contents will automatically be handled by the framework without any additional coding in any of the application's table classes.
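The merging described above can be sketched as follows. This is a minimal illustration, not the actual GM-X code, and the array shapes for the name and value rows are assumptions made purely for the example:

```php
<?php
// A sketch of how rows from a hypothetical XXX_EXTRA_NAMES/XXX_EXTRA_VALUES
// pair might be merged into the single $fieldarray which holds all the
// columns of one core table record, so that extra values appear alongside
// the standard values.

function mergeExtraValues(array $fieldarray, array $extraNames, array $extraValues): array
{
    // $extraNames:  extra_id => field name, e.g. [1 => 'colour', 2 => 'weight']
    // $extraValues: rows of ['extra_id' => ..., 'extra_value' => ...] for one core record
    foreach ($extraValues as $row) {
        $name = $extraNames[$row['extra_id']] ?? null;
        if ($name !== null && !array_key_exists($name, $fieldarray)) {
            // never overwrite a genuine core column with an extra value
            $fieldarray[$name] = $row['extra_value'];
        }
    }
    return $fieldarray;
}
```

Because all data travels in a single array, the framework can perform this merge once in a shared superclass and every core table with an extra-values pair inherits the behaviour for free.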


Different developers have different ideas on the true meaning of Object Oriented Programming, but the only description which I use is as follows:

Object Oriented Programming is programming which is oriented around objects, thus taking advantage of Encapsulation, Inheritance and Polymorphism to increase code reuse and decrease code maintenance.

The design decisions which I made while building my framework, though described as heretical by my critics, have enabled me to be significantly more productive than I was with any of my previous languages.

  1. I implemented Encapsulation by having a separate class for each table in the database.
  2. I implemented Inheritance by creating an abstract table class to hold all the properties and methods which can be shared by any concrete table class.
  3. I implemented Polymorphism by having each concrete table class share the same set of method signatures to support the standard CRUD functions which are the only operations which can be performed on a database table regardless of what data it holds.
  4. I achieved high cohesion by basing my entire framework on the 3-Tier Architecture, which incidentally implements the Single Responsibility Principle (SRP).
  5. I achieved loose coupling by having application data passed around in a single $fieldarray property instead of a separate property for each column.
  6. By using an abstract class I could implement the Template Method Pattern, which is a powerful design pattern for any framework.
  7. By enabling polymorphism I could use dependency injection to inject Model names into my Controllers, thus making it possible to use any Model with any Controller.
  8. By using XSL stylesheets to create all HTML output I was able to build a single View object to extract the data from any Model(s), convert it to XML then transform that XML into HTML.
  9. By splitting my Presentation layer into two separate components, the Controller and the View, I found myself implementing the Model-View-Controller (MVC) design pattern.
  10. I was later able to refactor my XSL stylesheets so that instead of a separate one for each web page I now have just 12 reusable XSL stylesheets from which I can produce thousands of different pages.
  11. I was able to make each table class aware of the structure of its associated table by building a table structure file that could be loaded into every table class file.
  12. By having the table's structure known to its class file, and by passing all application data around in a single $fieldarray variable, I was able to build into the framework a standard validation object which automatically checks all user input for errors which would cause the SQL INSERT or UPDATE query to fail.
  13. I then built a Data Dictionary so that I could generate the table class file and table structure file by pressing a button instead of doing it manually.
  14. By having sharable Controllers and a sharable View object which uses sharable XSL stylesheets I was able to build a library of Transaction Patterns.
  15. I could then modify my Data Dictionary to generate the component scripts and screen structure scripts by pressing a button instead of doing it manually.
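Several of the points above (one class per table, an abstract superclass, shared CRUD signatures, the Template Method pattern and injecting Model names into Controllers) can be illustrated with a short sketch. The class and method names below are invented for the example and are not RADICORE's actual API:

```php
<?php
// A sketch of an abstract table class which fixes the processing sequence
// (Template Method pattern) while concrete subclasses supply only what
// differs for their own table.

abstract class GenericTable
{
    protected string $tableName = '';

    // Template method: the invariant steps for an INSERT, in a fixed order.
    public function insertRecord(array $fieldarray): array
    {
        $fieldarray = $this->preInsert($fieldarray);   // customisable "hook"
        $errors     = $this->validateInsert($fieldarray);
        if (empty($errors)) {
            $this->dmlInsert($fieldarray);             // generic SQL generation
            $fieldarray = $this->postInsert($fieldarray);
        }
        return $fieldarray;
    }

    // Hook methods: do nothing by default, overridden only where needed.
    protected function preInsert(array $fieldarray): array  { return $fieldarray; }
    protected function postInsert(array $fieldarray): array { return $fieldarray; }

    protected function validateInsert(array $fieldarray): array { return []; }
    protected function dmlInsert(array $fieldarray): void { /* build and run SQL */ }
}

// A concrete table class need only identify its table and override hooks.
class Product extends GenericTable
{
    protected string $tableName = 'product';

    protected function preInsert(array $fieldarray): array
    {
        $fieldarray['created_date'] = date('Y-m-d'); // table-specific business rule
        return $fieldarray;
    }
}

// Polymorphism plus dependency injection: a Controller is given a Model
// name and calls the shared method signature without knowing the class.
function insertController(string $modelName, array $input): array
{
    $model = new $modelName;
    return $model->insertRecord($input);
}
```

Because every table class inherits the same `insertRecord()` signature, the same Controller can be reused with any Model in the application, which is where the large reuse numbers come from.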

As you can see, I did not build all this sophistication into the framework in one go; I started small and simple, and each decision that I made opened the door to more opportunities. OOP is supposed to provide more reusability and less maintenance, and my humble efforts, which have not been corrupted by the teachings of so-called OO "experts", have produced the following set of reusable components which are instantly available to any application which I care to build:
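Points 11 and 12 in the list above describe validating user input against a table structure file before any SQL is generated. A minimal sketch of that idea, with spec keys such as `required`, `size` and `type` invented for the example, might look like this:

```php
<?php
// A sketch of a generic validation routine which compares the contents of
// $fieldarray against a per-table structure array, so that errors are
// caught before the INSERT or UPDATE query is even constructed.

function validateFieldarray(array $fieldarray, array $fieldspec): array
{
    $errors = [];
    foreach ($fieldspec as $field => $spec) {
        $value = $fieldarray[$field] ?? null;
        if (!empty($spec['required']) && ($value === null || $value === '')) {
            $errors[$field] = 'This field is required';
            continue;
        }
        if ($value !== null && isset($spec['size']) && strlen((string)$value) > $spec['size']) {
            $errors[$field] = 'Value exceeds maximum size of ' . $spec['size'];
        }
        if ($value !== null && ($spec['type'] ?? '') === 'numeric' && !is_numeric($value)) {
            $errors[$field] = 'Value must be numeric';
        }
    }
    return $errors; // an empty array means the query can proceed
}
```

Because the structure array is generated from the database schema, this single routine can validate input for every table in the application without any hand-written checks.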

This set of reusable components has been used to create a large ERP application which contains over 400 database tables and 4,000 tasks. That is a huge amount of reuse from such a relatively small number of components.

Here endeth the lesson. Don't applaud, just throw money.


The following articles describe aspects of my framework:

The following articles express my heretical views on the topic of OOP:

These are reasons why I consider some ideas to be complete rubbish:

Here are my views on changes to the PHP language and Backwards Compatibility:

The following are responses to criticisms of my methods:

Here are some miscellaneous articles:

Amendment History

04 Feb 2023 Added Dealing with RISC
Added Dealing with the Y2K enhancement
05 Jan 2023 Added I do not use Design Patterns
Added I do not use a Front Controller
Added I do not use an Object Relational Mapper
Added I do not use Value Objects
01 Nov 2022 Added Design Decisions which I'm glad I made
Added Practices which I do not follow
Added From personal project to open source
Added Building a customisable ERP package
Added Maintaining the unmaintainable