You could write at least ten articles about company productivity and still not cover the topic in full.
In the last chapter we talked about productivity and ways to measure and improve it using metrics typical of the shop floor, about which plenty of information already exists (we don’t need to invent anything). Today we will focus instead on the technical office, the department where it is difficult to measure performance in terms of “pieces produced”.
Obviously, this is not unexplored territory either, since there are entire treatises on it, but nobody can claim to have the right solution for everyone. I don’t want to deal with the theory here; for that I refer you to ChatGPT. Instead, I’ll tell you what we are doing at Alexide.
Before we continue, one clarification is needed. What you find here is the method we arrived at by putting into practice an approach of “successive approximations”, since the techniques proposed by the aforementioned theory proved unsuccessful for us. It is certainly not the definitive method either, because we are constantly refining it; if I wrote this article again in six months, I would probably propose a revised version.
The first question to ask is whether it is really possible to measure the productivity of a technical office that writes source code, designs new products, makes drawings, and relies on creativity.
A wise friend told me a few months ago that, basically, the only way to measure productivity is to look at economic performance. It is an almost unbeatable argument, even if a small objection emerges: it is a value that often arrives very late and can easily be influenced by other factors (for example, by the commercial department, which I don’t want to talk about today, because you have to tackle one tough customer at a time 😉).
As you probably know, we don’t design components or draw with CAD software; we essentially produce lines of source code. However, before you conclude that we have nothing in common with a technical design office, it is important to know that we have two main activities. The first is the development of the core of our product (SolidRules), which resembles a technical department that designs standard products. The second concerns projects for customers, which is more like a technical office that works to order or designs special products, and it involves the same difficulties (for example, trying to bring everything back to the standard, checking whether we have already done something similar, etc.).
To measure our productivity, we decided to adopt the OEE (Overall Equipment Effectiveness) index, which is the product of three factors: availability, productivity, and quality.
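To make the definition concrete, here is a minimal sketch of the OEE product in Python; the function name and the example values are purely illustrative, not taken from our actual system.

```python
def oee(availability: float, productivity: float, quality: float) -> float:
    """Overall Equipment Effectiveness: the product of its three factors,
    each expressed as a fraction between 0 and 1."""
    return availability * productivity * quality

# Made-up values just to show the mechanics:
print(round(oee(0.95, 0.90, 0.99), 3))  # → 0.846
```

Note that because the index is a product, a weakness in any single factor drags the whole OEE down.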
However, we found that calculating the three indices in an office like ours is more complex than it looks, and that before we could do so, we needed some essential data. We therefore introduced three fields, ES, AC, and ETC, but before discussing them, let me open another parenthesis. Our technical department works with several basic elements, including Tickets, Tasks, Interventions, Events, and Projects (which also include Sprints). In this text, I will simplify and talk only about Tickets, the element we use to record activities.
On the Ticket we have therefore introduced the three fields:
ES - Estimated Time = the time estimated to complete the task
AC - Actual Time = the time actually recorded
ETC - Estimated Time To Complete = the time we think it will still take to complete the task (a value independent of the other two)
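The three fields above can be sketched as a simple data structure. This is only an illustration of the concept; the real SolidRules entities (Tickets, Tasks, Interventions, Events, Projects) are of course richer than this.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    """Hypothetical, simplified Ticket carrying the three time fields."""
    es: float   # ES  - Estimated Time: hours estimated to complete the task
    ac: float   # AC  - Actual Time: hours actually recorded
    etc: float  # ETC - Estimated Time To Complete: hours we think are
                #       still needed (independent of the other two values)

# Example: a task estimated at 8 h, with 5 h logged and 4 h still expected.
t = Ticket(es=8.0, ac=5.0, etc=4.0)
```

Keeping ETC independent of ES and AC is the interesting design choice: it captures the current forecast rather than being derived as a simple subtraction.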
To derive the three OEE factors, we used the three fields described above, suitably combined.
According to AI, availability is the percentage of time that the system works properly. It is calculated as the ratio between the time actually used and the total time available.
In our case, the available hours are those provided by the company time clock (although we use the SolidRules Logbook entity to simulate it). The actual hours are those recorded by the user (on tickets or in events), which populate the AC field.
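As a rough sketch of the ratio just described (recorded hours over time-clock hours), under the simplifying assumption that both are plain hour totals:

```python
def availability(recorded_hours: float, clock_hours: float) -> float:
    """Hours actually recorded by the user (AC) divided by the hours
    provided by the time clock. Guard against an empty period."""
    if clock_hours == 0:
        return 0.0
    return recorded_hours / clock_hours

# Example: 150 h recorded on tickets/events out of 168 h clocked.
ratio = availability(150.0, 168.0)
```

This is only the basic shape of the calculation; the real system draws the two totals from tickets, events, and the Logbook entity.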
The AI tells us that productivity indicates how much the system produces relative to its maximum potential, calculated as the ratio between actual production and the theoretical maximum the system could achieve.
Obviously, we can’t measure it as actual lines of code versus planned lines of code, because that would reward those who “write more”, while synthesis often wins (you’re probably thinking the same thing while reading this long and boring post).
Therefore, we measure productivity as estimated hours (ES) divided by hours recorded (AC). In reality the formula is more complex because, for example, we consider that hours recorded during certain activities, such as meetings, are not as productive as those recorded on tickets. If I’m not mistaken, a certain Musk claims that meetings are useless (it must be said that he doesn’t get everything right...).
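The base ratio described above can be sketched as follows; note that this deliberately omits the meeting-hour weighting mentioned in the text, whose details are not public.

```python
def productivity(es: float, ac: float) -> float:
    """Estimated hours (ES) divided by hours actually recorded (AC).
    Values above 1.0 mean the work took less time than estimated.
    The real formula also weighs meeting hours differently, which this
    simplified sketch ignores."""
    if ac == 0:
        return 0.0
    return es / ac

# Example: a task estimated at 8 h that actually took 10 h.
p = productivity(8.0, 10.0)  # → 0.8
```

A side effect worth noting: this ratio also rewards accurate estimating, since an inflated ES would artificially push productivity above 1.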
According to AI, quality is the percentage of the final product that meets the required standards. It is calculated as the ratio of the number of compliant products to the total number of manufactured products.
However, we are still reflecting on this parameter, as ideally quality should derive from the actual goodness of the work done. But how do you evaluate goodness objectively (well-written code can do more damage than hail if put in the wrong place)?
Initially we thought of linking it to the number of support tickets opened by end customers, but that data arrives too late.
Currently, we have decided to reward those who record their AC hours, so the quality parameter is obtained as the ratio between the hours recorded (AC) and the hours logged by the time clock (here too there is more to it, but we can’t tell you everything today).
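The quality ratio just described can be sketched the same way as the other two factors; again, this ignores the further adjustments the text alludes to.

```python
def quality(ac_recorded: float, clock_hours: float) -> float:
    """Hours recorded on tickets (AC) divided by hours logged by the
    time clock. A simplified sketch: the real parameter includes
    adjustments that are not described in the article."""
    if clock_hours == 0:
        return 0.0
    return ac_recorded / clock_hours

# Example: 140 h recorded on tickets out of 168 h clocked.
q = quality(140.0, 168.0)
```

In this simplified form the factor effectively rewards the discipline of logging time, which is exactly the behavior the article says it was introduced to encourage.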
Since you have read this whole treatise on “Alexidian delusions”, you deserve to know our company’s OEE which, as the image shows, is 85%.
However, we would like to point out that this is not entirely true (a euphemism), as in reality it is much, much (the repetition is not a mistake) lower.
But there is one true figure: a 28.3% increase in productivity compared to previous results, and that is what matters most.
If you have reached this point, you may also be convinced that, with some acrobatics, the productivity of a technical office can indeed be measured.
So is measuring productivity enough to make the company work? Of course not: it is an indicator that must be combined with all the others.
I’ll conclude with a pearl of philosophy that will further reduce the credibility of the person writing this post.
"If measuring productivity isn’t enough, why do it? Because while driving blindfolded almost certainly leads to an accident, driving with eyes wide open and with the help of driver assistance systems does not guarantee the absence of collisions, but the probability of having one is certainly less than 100%."