In Search of Inefficiency
How To Secure Jobs By Being Inefficient—
A Guide for Managers in Large High-Tech Companies
Similarities with some large companies and computer projects in Germany are not intended.
Last change: 2011-10-03. Copyright © 2004-2016 Hans-Georg Michna.
Inefficiency can be something wonderful. If you have a job in a big company, it can be very desirable. Don't let yourself be misguided into believing that efficiency is always a good aim—there are many circumstances in which the opposite is true.
Think about it. Would you rather be the head of a five-man team that finishes its job in a year, leaving you with nothing to do? Or would you rather head a bunch of teams comprising some 30 software developers that stay on the task for many years to come? The decision should be easy.
This article attempts to teach the tools of the trade—experience gained in a few big companies, which shall remain unnamed, in Germany, a country predestined like no other to achieve the highest levels of inefficiency. Most of what I learned there can certainly be applied in all other countries on this planet, perhaps in the entire universe. This lesson is something you can, for once, learn from us Germans.
What's so special about Germany? The German culture is uniquely inclined towards regulation, and not just some simple rules, but detailed prescription. Even when on their own, people constantly ask themselves, may I do that? Is it allowed? Are there, could there, should there be rules? And some are constantly asking themselves, can I make a rule that all others then have to follow?
It is widely considered normal, accepted, and expected that people follow the rules to the letter, with deep devotion. And so one perfect (though typically German) way to create inefficiency is overregulation, which, as the German example consistently shows, can work even on entire countries.
Why strive for inefficiency in the first place? This question is easy to answer. If you can do a one-person job with 10 people, 9 more have a job and get paid. This may or may not always be exactly what the board of directors has in mind, but with a lot of management levels between them and the office worker, the view of things can change considerably on the way from one level to the next.
A major method to make things change between management levels is to obscure information and build Potemkin villages and sand castles. How to do this is described in detail below, particularly for projects that use computers.
Of course the quest for inefficiency can never be openly admitted. Therefore one of the biggest problems in this undertaking is to pursue it while appearing to do just the opposite. This article will show various ways to achieve this.
A related problem is stopping people from naively trying to create efficiency without telling them exactly why they are being stopped. But there are many ways to hinder such efforts and many possible smokescreen pretexts, like security, so this is not too big a worry.
Remember that you can stop or delay people, for example by giving them other tasks, but you can also render their inappropriate achievements useless later. Since most inventions rely on several preconditions, removing just one of them may already be enough to strip any unwanted efficiency from a new invention.
Also keep in mind that new employees are usually inefficient. Make sure they stay that way for as long as possible. Keep them in the dark. Give them stupid tasks, where they cannot learn. If you find that they befriend somebody knowledgeable, move them to another place.
This article was written over some time when I had insight, through friends or my own experience, into a few big companies. It will be extended as soon as I gain new knowledge.
Actually my biggest worry is that readers believe I'm exaggerating, while I'm in fact understating and describing only parts of the truth. Only somebody who has been in it will be able to fully appreciate the descriptions. Almost everything I write has actually been observed in one large company or another.
Just believe me when I say that I'm not a very creative person. I can only describe things I and a few other contributors have actually seen and experienced. Before that, I probably wouldn't even have believed that they actually exist. But while working in a large company, one only needs to observe carefully, and one will learn some new method to achieve inefficiency every day.
A wonderful way to achieve new levels of inefficiency is to reorganize, i.e. to shift people around. In the upper levels of management you can always find a plausible-sounding reason why the current organizational structure is not appropriate for the new aims and needs to be changed. Moreover:
Continuous restructuring gives the illusion of progress.
If, for any reason, a deep structural change is inopportune at the moment, for example, because the last restructuring is still effective and bearing more fruit, then you can shuffle managers around, which is always possible and can be done more often. Eugene Gershnik provided the following instruction on project manager rotation:
Make sure you switch project managers often and randomly. Anybody should be good for any job. "Oh, you have seen this system once, 25 years ago? Good, you are just the person to lead a large project on it." This prevents managers from becoming overly familiar with what they manage and ensures that the decisions they make will not make any sense. You have reached the ideal state of affairs when nobody can tell who is managing what without consulting an org chart weekly.
If, for any reason, you can't even do that, at least reseat your actual workers, the programmers, as often as possible. If you have any talent, it will be easy for you to find a good-sounding explanation for the necessity of the move.
The more you can cram in one room, the better, because any talking or telephoning will stop all others from doing any serious work.
If the company has more than one location or, even better, if a large project involves more than one company, this enables you to let people travel and spend time in different locations.
Make sure that people are travelling often and spend as much time as possible in different locations. Let them attend meetings, ideally regular, unimportant ones.
An additional bonus is that they will waste plenty of time obtaining site passes, finding out where to park their cars, where to get something to eat, and particularly how to connect their laptop computers to the foreign network. If the network is suitably set up, that alone will ensure that days pass before the travellers are able to do anything useful.
Of course, once the quest for inefficiency bears fruit, you need to put more people to work. There are essentially two ways to do that. One is splitting existing teams and giving the new teams special assignments.
The other is to create entirely new teams. If all obvious tasks are already assigned to existing teams, create generic team names like Operational Project Planning Team.
Any organization needs rules. But instead of having a small and clear rule set, try to have a complex system with lots of special rules.
Never create any complete rules overview. Instead make up new rules all the time and let other people do the same. You've reached the ideal state of affairs when all people receive more than one new rule every week by email and there are so many special rules that people regularly break them and get reprimanded. This will make sure that people become very nervous about breaking a rule and think for a long time before doing anything out of the ordinary.
If you cannot think of more rules yourself, keep adopting the latest management fad. No matter how stupid it is and how stupid it sounds, subject your employees to sprints and daily scrums and heap the attendant literature on them, containing more rules, checklists, planning guides, and the like.
Most people think of a line manager as a person controlling a small number of workers, but that's too simple for our purposes. You can make almost everybody a manager, if only you can think of enough titles. Project manager comes immediately to mind, so have lots of projects and, correspondingly, lots of project managers. But you don't have to stop there. You can have security managers, incident managers, change managers, problem managers, roll-out managers, and more.
Make sure their responsibilities are not too narrowly defined. Ideally they should be widely overlapping. This will make sure almost nothing can get done easily.
The foundation for inefficiency is laid in the design phase of a technical project, so be careful not to let things get out of hand early on.
Make sure that the system is not designed cleverly by one or a few intelligent, experienced people, particularly not by people conversant with the particular demands of the project on hand.
Instead have the system designed by a committee. You've reached the ideal state when committee members come from more than one company and have fundamentally different interests.
A very good example is a system with real-time requirements. This should be designed by older people who ideally still have experience with punched cards and mainframe computers, but not with real-time systems. Their background should be in batch processing. They should never have seen, even less touched, a real-time computer system.
A cron job is a task on a Unix or Linux computer that is automatically started at a particular time, which is entered by an administrator in a table named crontab.
The Unix cron system is rigidly linked to the clock. It is insensitive to conditions and demands. A cron job is started at its predetermined time, no matter what.
Then let everybody believe that cron jobs are the normal and only way to process time-critical data. Prevent the introduction of real-time operating systems and particularly of people conversant with them. Make sure the data flow is as awkward and slow as possible. Declare an hour indivisible. If there are people who would need data within minutes, make sure they don't even get it within hours (or days). Keep shifting data from one computer to another, but don't allow it to be moved more often than a few times a day.
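To make this concrete, here is what a crontab entry looks like (the path and time are invented for illustration). The job below fires at 02:00 every night, whether or not any new data has arrived, and whether or not anyone is waiting for it:

```
# min  hour  day  month  weekday  command
0      2     *    *      *        /opt/batch/transfer_data.sh
```

An event-driven or real-time design would start the transfer the moment the data arrives; the crontab format cannot express that at all, which is exactly why it serves our purpose.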
Other excellent methods are the following.
Of course there are also endless possibilities to misdesign parts of a system. To give just one example, if you design a big system with an international aspect (for example, if it may be sold abroad), then avoid setting rules for character sets. Let the programmers choose the character set they happen to know, and avoid universal character sets like Unicode. You've reached the ideal state of affairs when it is impossible to query a database and have both the query and the results on screen with all international characters like umlauts displayed properly.
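For readers who want to admire the effect, here is a minimal Java sketch (the string and charset pair are my example, not from any real project) of what happens when one programmer stores text in ISO-8859-1 and another reads it back as UTF-8:

```java
import java.nio.charset.StandardCharsets;

class CharsetChaos {
    public static void main(String[] args) {
        // One programmer writes German text using the charset he happens to know...
        byte[] stored = "Grüße".getBytes(StandardCharsets.ISO_8859_1);

        // ...another reads it back assuming UTF-8: the umlauts dissolve
        // into Unicode replacement characters (U+FFFD).
        String garbled = new String(stored, StandardCharsets.UTF_8);
        String correct = new String(stored, StandardCharsets.ISO_8859_1);

        System.out.println(correct);  // Grüße
        System.out.println(garbled);  // replacement characters where the umlauts were
    }
}
```

With a universal character set mandated everywhere this bug is impossible, which is precisely why you must never mandate one.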
Another example is the opportunity that opens up when key codes have to be designed. Introduce the idea that every key should consist of a chain of classifying codes, leading to very long keys that nobody can remember, as opposed to short, identifying keys. If anybody contradicts you, explain that there is an urgent need to read all kinds of information directly from the key.
Generally avoid simple, lightweight solutions. Always strive for heavyweights and maximum complexity.
Learn about these magnificent successors of the erstwhile spaghetti code and utilize them for your purpose.
Lasagna code, in short, is code that is split into as many layers as possible. Use frameworks, invent process managers, process step managers, data access layers, and add more abstraction layers to your liking. This has the effect that the programs will become big and slow and that people will find it more difficult to understand and change them.
Ravioli are little pasta pouches, filled with very little ground meat or sometimes almost nothing. The idea that can be taken from this Italian delicacy is to adapt object-oriented programming to your purposes in a very simple, convincing way. The smaller you make the objects, i.e. the less meat there is in each of them, the more objects you can have.
Use this very simple idea to make the program code unassailable. A comment at the top of one such object, if you can't avoid documentation altogether, could read roughly like, "Stage 1 of preparing to frobnicate the foo part of the foobar" (gratefully borrowed from a slashdot.org article by dkf).
A sure sign that you've mastered the art of Italian ravioli cooking is that your programmers begin to use a debugger to find out which code is actually executed and in which sequence.
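As a sketch of the ravioli ideal (all class names are my invention), here the one line of actual work, adding 1 to a number, is buried at the end of a chain of nearly meatless objects:

```java
// Stage 1 of preparing to frobnicate the foo part of the foobar.
class FrobnicationRequestFactory {
    FrobnicationRequest createRequest(int foo) {
        return new FrobnicationRequest(foo);
    }
}

// Stage 2: a pouch that merely carries the number onward.
class FrobnicationRequest {
    private final int foo;
    FrobnicationRequest(int foo) { this.foo = foo; }
    FrobnicationExecutor prepareExecutor() { return new FrobnicationExecutor(foo); }
}

// Stage 3: the only object containing any meat at all.
class FrobnicationExecutor {
    private final int foo;
    FrobnicationExecutor(int foo) { this.foo = foo; }
    int execute() { return foo + 1; }  // the entire business logic
}

class Ravioli {
    public static void main(String[] args) {
        int result = new FrobnicationRequestFactory()
                .createRequest(41)
                .prepareExecutor()
                .execute();
        System.out.println(result);  // 42
    }
}
```

Three classes, ideally in three files, for one addition. Your programmers will indeed need the debugger to find it.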
Here the fundamental idea is similar to others mentioned above. Avoid the small, nimble programming languages that easily lend themselves to slim-layered programs, directly wrapping a database. Particularly avoid modern script languages like Ruby or even the older PHP. Go for an old, object-oriented heavyweight where your programmers can spend their and their computers' time endlessly on reshaping data into objects, trees of objects, save these objects (no, "persist" them—you will have to learn the newer language too, to confuse the old hands and show them that they are worthless), of course using one of those frameworks for "persistence", reload them (I'm almost tempted to invent the new word "unpersist"—remember, you read it here first), pluck them apart again and stuff them back into the database. Never mind how nonsensical all these operations are. As long as they keep your programmers and your servers busy, they nicely fulfill the purpose.
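The round trip described above can be sketched in a few lines of Java (the class names and the map pretending to be a database are my invention; a real project would use a full "persistence" framework for this, which is even better):

```java
import java.util.HashMap;
import java.util.Map;

// The plain value is first inflated into an "entity" object.
class CustomerEntity {
    private final String name;
    CustomerEntity(String name) { this.name = name; }
    String getName() { return name; }
}

// A hand-rolled "persistence" layer: a map standing in for the database.
class PersistenceManager {
    private final Map<Long, CustomerEntity> store = new HashMap<>();
    void persist(long id, CustomerEntity e) { store.put(id, e); }
    CustomerEntity unpersist(long id) { return store.get(id); }  // you read it here first
}

class RoundTrip {
    public static void main(String[] args) {
        PersistenceManager pm = new PersistenceManager();
        pm.persist(1L, new CustomerEntity("ACME GmbH"));  // wrap, then "persist"
        String result = pm.unpersist(1L).getName();       // "unpersist", then unwrap
        System.out.println(result);  // ACME GmbH, exactly what we started with
    }
}
```

Nothing is gained over the plain string, but programmers and servers stay nicely busy.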
Usually you also want to avoid Microsoft. They do not necessarily always provide the fastest and most efficient solution, and sometimes they can even be helpful in our quest for inefficiency by delivering a tool that works right into your hands. But at least their IDEs (Integrated Development Environments, example: Visual Studio) and their programming languages (like C# or F#) are usually too efficient for our purposes. If you inadvertently hired people who master languages like C# or F#, you have already made a mistake anyway. Hire the lowest-cost Java programmers instead.
Fortunately it is very easy to put Microsoft aside. Just mumble something about proprietary, insecure, platform-dependent, and buggy Windows, and check whether everybody is already nodding in agreement. If so, brush the topic off the table and never mention Microsoft again. If later somebody comes up with some Microsoft-related idea like, "How about basing our new server on SharePoint?" then quickly tell them that a fundamental decision against using proprietary Microsoft products is already in place. If you can't avoid it at all, ask some Linux aficionado to build a complex server, one for which SharePoint is not well suited, on top of a SharePoint server. Then never ask about that project again.
It should already be obvious. As the main programming language always choose Java. For this old language you will easily find programmers (nowadays called software developers) with a solid semi-knowledge that will raise the size and the run time of your project very nicely. Java is like a honeypot for incompetent programmers, but you still have to make sure in the recruitment procedure that you avoid simplifiers and pick complicators.
To help with this, you could try to require a very old version of Java. Find a crucial piece of software that runs only on that old version and use it as your argument. If and when the pressure rises to go to a newer version, at least find reasons to avoid skipping to the latest. You will surely find an important piece of software that still requires at least a somewhat older version of Java. Leaving even one major Java version unused should cause you incessant mental pain.
One advantage of Java is the availability of an immense range of frameworks. Demand their use. As a baseline begin with J2EE (Java 2 Enterprise Edition) and require as many as possible of Enterprise JavaBeans, JavaServer Faces, Spring, Hibernate, WebWork, Struts, JBoss Seam and the like. A few of these, cleverly combined, may actually raise efficiency, so counteract that by prescribing more frameworks and old versions, and mix in a strange design that fulfills your purposes.
An even bigger jump in inefficiency can, however, be achieved, when you can make your people write their own framework. Even if you can't avoid putting experienced programmers on this task, you can achieve high levels of inefficiency by selecting those with the least expertise in the actual problem. For example, if you have an insurance project on hand, make sure that none of the framework writers has any knowledge or experience in insurance. If you don't have enough of such people, give the task to an external company, preferably one whose owner is a buddy of yours, so you can at least gain some nice invitations and other benefits on the side.
In any case isolate the framework team totally from the other employees and let them do their own thing. Be assured that the resulting framework will add years of well-paid employment for you and your entire team.
There is not much you can do here, as you may not be able to avoid the most widespread IDEs and source code repositories, which typically don't add much to the inefficiency.
You can still do something though. There are usually some older systems that carry a larger burden of overhead and structural complexity to fulfill your purpose. The general rule is to shun small, nimble systems and instead look for heavyweights with as many ties to other systems as possible.
Also, try to nudge different groups into using different IDEs and different source code repositories. If you can't find any better reason, tell them they should test the different systems to find out which is best, but make sure that each group tests only one of them.
This will invariably lead to each group becoming attached to "their" system, arguing for keeping it, but not the others. Leave it at that. This alone will yield a fair amount of inefficiency, quite enough for the IDE and source code administration area.
You've mastered these tools of the trade when your people use at least three different IDEs and at least three different source code repositories.
Test programmers before you hire them. Give them a problem that could be solved with a line of Java like:
boolean x = a < 0;
If any of your candidates actually writes that line, dismiss him. Look for people who come up with a more substantial solution like this (declaration of ComparatorFactory omitted for brevity):
ComparatorFactory comparatorFactory = new ComparatorFactory();
Comparator lessThan = comparatorFactory.getComparator(Comparator.LESS_THAN);
boolean x = lessThan.doCompare(a, 0);
Etc. You get the idea. Prefer solutions where these program lines are spread over different files and modules.
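For completeness, the omitted ComparatorFactory might look roughly like the following. The original declaration is not given here, so this is only a guess at what such a candidate would write; note that this homegrown Comparator has nothing to do with java.util.Comparator:

```java
// A homegrown Comparator, not to be confused with java.util.Comparator.
abstract class Comparator {
    static final int LESS_THAN = 0;
    static final int GREATER_THAN = 1;
    abstract boolean doCompare(int a, int b);
}

class ComparatorFactory {
    Comparator getComparator(int kind) {
        switch (kind) {
            case Comparator.LESS_THAN:
                return new Comparator() {
                    boolean doCompare(int a, int b) { return a < b; }
                };
            case Comparator.GREATER_THAN:
                return new Comparator() {
                    boolean doCompare(int a, int b) { return a > b; }
                };
            default:
                throw new IllegalArgumentException("unknown comparator: " + kind);
        }
    }
}

class Hiring {
    public static void main(String[] args) {
        int a = -5;
        ComparatorFactory comparatorFactory = new ComparatorFactory();
        Comparator lessThan = comparatorFactory.getComparator(Comparator.LESS_THAN);
        boolean x = lessThan.doCompare(a, 0);
        System.out.println(x);  // true, same result as the one-liner, at many times the cost
    }
}
```

The result is identical to `boolean x = a < 0;`, which is exactly the point.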
Keep the hardware at minimums. Use the smallest, weakest computers that just barely do the job. Pretend that you want to save money that way.
The main resources to keep low are processing power, memory, disk space, electric power, and even IP addresses.
It cannot be stressed enough that keeping such resources low will do wonders for total inefficiency. Never mind that the hardware resources are actually cheap. Sometimes it helps to select computers where they are at least somewhat more expensive.
A long, bureaucratic process to upgrade the hardware is very helpful. Since money is involved, every hardware purchase has to wend its way through budget commissions and even bookkeeping. So even if a hardware upgrade does eventually happen, people will shun the extra work for a while, and then comes the long wait until the new hardware finally arrives. Ideally it should just compensate for past growth in demand and should just barely suffice when it is finally put to work.
Disk space has a special attraction over most other resources—it can be separated into many "virtual" disk spaces. Use that possibility to the fullest. Try to have as many separate disk space limitations as possible. Use many separate partitions or mounted volumes, each with its own quota. Keep disk space quotas so tight that people keep overrunning them and constantly have to do housekeeping, delete and move files. You know you're on the right path when people begin to copy files from servers to their desktop computers to make room on the servers.
Electric power is a far underrated commodity. If you can manage to keep it scarce, you'll be surprised how much time can be added to an otherwise simple process like adding a power socket. Meeting after meeting will concern itself with the issue.
Of course it will eventually be resolved, but perhaps you can manage to introduce only a temporary solution or a very moderate increase in available electric power, so the problem will soon crop up again.
Assigning very small ranges of IP addresses is an interesting way to achieve inefficiency. It works very well in conjunction with assigning fixed IP addresses to computers, because when the administrator runs out of IP addresses, he will eventually have to recycle and reassign them to different computers, which can work wonders on processes that rely on certain fixed IP addresses.
You have reached a nice situation when workers on business trips to other locations have to apply for an IP address days in advance, to be able to connect their laptops to the company network at the destination. (No, I'm not making this up. I don't have that kind of creativity.)
This is a difficult topic, because there are no simple rules and applications. But there are many ways of obscuring information and making it difficult to find, use, or remember. I will give a few examples.
One of the general methods in this field is to digress from rules and conventions and established ways. Another is not to use systems that are there to make things easier. Or use several different systems for similar purposes, rather than one.
Another simple example: if you use abbreviations, add extra letters. For example, if you devise a system that you wanted to call DSRB, a name that is still a bit too short to be inefficient, call it eDSRB or iDSRB.
You can even try to obscure your identity. For example, if you have to send a problematic email, don't send it yourself. Instead walk up to a subordinate and dictate it to him.
You can't have enough of them. Consider that a two-hour meeting with a dozen people consumes three man-days of work time. If a particular kind of meeting doesn't fulfill your expectations in this respect, schedule it more often. Twice a day is taking it too far, because it makes your intention too obvious, but you can usually get away with daily meetings, so consider that the ideal.
If you find it difficult to justify such frequent meetings, pick out a suitable management fad that prescribes them (almost all do it), so your justification is simply that your new management method requires them.
Meetings always pose a slight danger that things are uncovered or made clear. Fortunately this risk is small, because such things are in few people's interest. Nonetheless you can help things a bit.
For example, in discussions, know your language. You must sound as if you perfectly understand and absolutely master the complex systems you describe. Use abbreviations and slightly deviating expressions, so the uninitiated can't be exactly sure what you mean, and people won't dare to say anything for fear of revealing their lack of understanding. For example, instead of "backup and restore" say, "backup and recovery".
Background information: IP (Internet Protocol) addresses and host names
The earliest computers connected through the Internet Protocol (IP) used IP addresses to identify themselves. An IP address looks like this: 18.104.22.168
Since they are not easy to remember and use, computers were soon given host names, which look like this: www.microsoft.com
Since the actual addressing is done by IP address, the Domain Name System (DNS) was established to automatically translate host names into IP addresses. Normally no computer user should ever have to touch an IP address.
In addition to DNS servers, a hosts file can be used on each computer, overriding any DNS entries.
Even the assignment of IP addresses to computers can and normally should be done automatically through DHCP servers (Dynamic Host Configuration Protocol). This makes it easy, for example, to move a computer from one workplace to another where different IP addresses apply.
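In Java, the translation described above is a single call; the runtime consults the hosts file and DNS and hands back the address, so no human ever needs to remember it. "localhost" is used here only because it resolves without any network:

```java
import java.net.InetAddress;

class Resolve {
    public static void main(String[] args) throws Exception {
        // Translate a host name into an IP address via hosts file / DNS.
        InetAddress addr = InetAddress.getByName("localhost");
        System.out.println(addr.getHostAddress());  // typically 127.0.0.1
    }
}
```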
The main advantage of this principle is that numbers are more difficult to remember than names. Increase the effect by avoiding ranges of numbers that have some meaning. For example, if new computers are bought, just give them consecutive numbers as computer names, regardless of which department they go to.
In fact, you can use letters as well, but they have to be meaningless instead of forming words. A little additional trick is to use large numbers of leading zeros, because these are hard for people to count and remember. An excellent computer name could be, for example, "wwy00000714".
A technical application of this principle is to use IP (Internet Protocol) addresses, rather than host names. This has far-reaching consequences and is an excellent tool to delay and hinder a wide range of organizational processes. One very simple advantage is that IP addresses are not easy to remember. An even bigger advantage is that many processes will come to a halt and will need lots of little modifications when an IP address changes.
You've reached the ideal state when IP addresses are frequently hardcoded in script programs.
A related method that doesn't go quite so far, but still works wonders, is not to use DNS, or to use several DNS servers that are not synchronized and contain divergent information. In addition use hosts files on each computer. A combination of hosts files and unreplicated DNS servers will keep the system administrators busy fixing the many computers that cannot access this server or that. As soon as someone begins using a different DNS server, he will find himself unable to access some other servers and may need a fix in the form of a hosts file entry or the direct use of an IP address. The hosts files, in turn, will make things difficult when server addresses or DNS entries change. This is a system administrator's paradise.
There are various ways to do things differently to confuse people. For example, instead of giving a software version the number 5.1 and the subsequent version the number 5.2, use the number 01/05. The next version may get the number 02/05. This will at least confuse newcomers for some time and also people from other departments.
Web servers are standardized to use port 80. Every web browser knows that and automatically assumes and uses that port. However, it is possible to use other ports. Do that. Instead of intranet.company.com use intranet.company.com:1374 or some such.
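A quick Java check (intranet.company.com and port 1374 are the made-up example from above) shows what the browser otherwise handles silently:

```java
import java.net.URI;

class OddPort {
    public static void main(String[] args) {
        URI standard = URI.create("http://intranet.company.com/");
        URI obscure  = URI.create("http://intranet.company.com:1374/");

        // -1 means "no port given": the browser silently assumes 80 for http.
        System.out.println(standard.getPort());  // -1
        // With a nonstandard port, every user must remember the number himself.
        System.out.println(obscure.getPort());   // 1374
    }
}
```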
Whenever some list is created, make sure it is never complete and contains at least a few exceptions. You want to make sure that nobody can ever be sure he's completed a job. People should constantly be searching for missing information and exceptions.
In addition keep forwarding such lists to people by email, asking them to review them and send you additions. A typical phrase is, "This list is probably not yet 100% complete. Please recheck all items and send me your corrections and additions."
Background information: Terminal server
A terminal server is a computer that takes the work away from your desktop computer and instead runs the programs of several users simultaneously, using your desktop computer as a dumb terminal that does nothing more than send your keystrokes and mouse movements and receive back your screen content.
Obviously this is a slow way to work, as the terminal server does not usually have as much power as all the users' computers combined. Moreover, the screen graphics have to be sent back to your computer, which is usually slower than having your computer's graphics adapter render them locally.
Terminal servers are a way to save money and to be able to keep using old, slow computers or cheap terminals. Occasionally they are used with special rugged shop floor terminals or indeed for security reasons.
Slowing everybody down is a most direct way to reduce efficiency. Since doing it directly and deliberately is too obvious, you always need an excuse, but many such excuses can be found.
Small steps in the right direction are revolving doors that don't revolve until you do something like swipe a card through a slot or, if that's inappropriate, a door where people have to step onto a sensor before it will move. No matter how senseless this may be, some security rationalization can always be found to justify such measures.
But this is peanuts compared to the levels of inefficiency you can achieve in a computer network. Obviously, the network should be of low capacity and slow. If somebody has, without much forethought, installed fast network cables and switches, you have to slow down the traffic by other means. Fortunately this is easy, as a chain is only as strong as its weakest link. Severely limiting the resources for servers is a good way.
Another interesting way is the use of terminal servers. Block direct access to everything and allow people to access the servers they have to work on only through a terminal server.
If the people are Unix/Linux experts and the target server runs Unix or Linux, give people Windows machines and route the access to the server through a Citrix terminal server or a Microsoft remote desktop server. Again security is the standard excuse.
If that is still not slow enough, set the system up such that the users have to go through two cascaded terminal servers. Since this is a truly strange setup, let me try to explain it in detail.
The user should first have to log on to the first terminal server. Once he has reached that environment, he should have to go from inside that terminal server window into the next, different terminal server and log on there again (if possible with a different username and password to confuse people even more). The end result is a terminal server window inside another terminal server window. Only inside that second, inner window should the user be able to open a window to the server he actually has to work on.
This has the additional nicety that special key combinations will not work. Terminal servers usually offer alternatives for key combinations like Ctrl + Alt + Del or Ctrl + Shift + Esc and many others. However, if you work through two cascaded terminal servers, the alternative only passes through the first one, is converted to the real key combination there, and then does not pass through the second terminal server; instead it acts on the intermediate terminal server itself, which is useless.
This simple method of cascading terminal servers is guaranteed to increase the number of people required to do a certain task, because the delays and the much higher frequency of network failures will regularly take their toll.
Warning: Don't try three cascaded terminal servers, because that could bring your work to a virtual standstill, and people might begin to revolt. Two is the holy grail if your network is otherwise far too fast.
Another way to slow things down is to impose long delays on any changes. For example, design the internal network such that moving a desktop computer from one network socket to another can mean up to 7 days without Internet access. (You don't believe me again? I'm only reporting what has been observed in at least one large German networking company.) And of course, as already mentioned, the Internet connection, once it is accessible, should be painfully slow.
Of course, a reasonable degree of security is necessary, so the first step of justification is always easy. The more interesting point is to increase security demands further, much further, and use it in various ways to achieve your ultimate aim—inefficiency.
Security is a wonderful excuse to increase the time for work processes, easily doubling or tripling the time needed to perform a certain task, time that is needed to ask various people to grant access rights or to hunt and beg for a password. But everybody will agree that security is very important and must not be neglected, so you're usually on the safe side when you try to tighten security. If people protest, you can invoke catastrophic scenarios to justify tightening security further.
You've reached the ideal state when people regularly receive work orders which they cannot fulfill because they lack the access rights.
One good way to keep many people busy is to divide them into groups and subgroups with narrowly segregated responsibilities. A task that could be done by two or three people will need a dozen when they are divided into three or four small groups, each being responsible only for a certain subsection of the workload.
The friction between these groups will consume further work time while members negotiate which group should handle a problem that touches the responsibilities of more than one subgroup.
You've reached a good state of affairs when a considerable fraction of total work time is needed to redirect problems from group to group.
When people are not fully busy, computers come to the rescue by providing a wonderful means to slow down the work flow—access rights. You don't actually have to do much beyond giving each department an ample share of computer administrators and instructing them to keep security tight. These administrators will then generally deny access to everything and allow specific access only after they've been convinced that a person very obviously needs access.
This works wonderfully well between different departments, because people will have a hard time obtaining access rights across department boundaries. Normally, requests from other departments will automatically be turned down because of security concerns, and it will take lots of higher-level intervention, meetings, setting up guidelines, designing application forms, etc., until such access is granted.
These application forms are, by themselves, a wonderful means to keep people at bay. They should be large, require approvals and signatures, and contain a very large number of specific questions or checkboxes for specific types of access. Since people will then not know exactly what to select, there are three different chances for success.
You've reached a pretty good state of affairs when somebody who is responsible for finding an error on a computer has to ask other people in writing to obtain information, like log files, because he himself has no access rights to the computer.
Remote control of computers is your enemy, because it would allow a support person to look at the computer thoroughly and analyze errors or read files. Try to prevent that wherever possible. Use the usual security excuse. An ideal state has been reached, for example, when people have to fix errors on remote workstations, but have so few access rights to those computers that they cannot even see the errors.
Passwords are just one element of security, but they are a crucial one, and they offer their own wonderful ways to slow people down.
During the operation of a more complex system, if obtaining access rights officially is sufficiently difficult, people will try to make do without going through the tedious bureaucratic process to gain official access to data they need. That will also take time, slow them down considerably, and will create the desired inefficiency. People will have to go begging for passwords, which they will occasionally get, but which can also be unpredictably refused. Passwords can also be changed, so users suddenly lose the ability to use the passwords they borrowed.
The worst that can happen to you is that somebody invents a centralized password system where everybody has his own, single password. That would create efficiency, rather than inefficiency. So try to avoid that, and if it has happened, make sure that passwords are regularly changed, once a month at the very least, and try to keep as many computer systems as possible outside that system, so people still have to administer a bunch of passwords, rather than just one.
Prevent semi-centralized password repositories from growing in departments or groups. Force everybody to keep his own passwords. Don't advise people on how to do this, but always stress the importance of security, so people, fearful of being accused of a security breach, have to invent systems that are awkward to use. From time to time you can reprimand people for their lax security, for example when they write passwords down on paper and don't keep that paper under lock at all times, which, of course, they hardly can.
See to it that different systems enforce different and stringent rules of password composition, so people cannot use the same password on different systems and have to remember as many different passwords as possible. It is a good idea to set systems up such that they demand various subsets of characters, like upper and lower case letters, numbers, and special characters. But do not allow all special characters. Ideally the same password should be possible on one system, but impossible on the next. Introduce or use various length restrictions. (You think I'm exaggerating again? You are so mistaken!) In any case, people should need as many different passwords as possible and should have to change them as often as possible.
A good trick is to immediately and automatically block access when a user's password has expired, which it should do often, like once a month, and without warning. Then the rules should be that resetting the password has to be done by a different department through some kind of bureaucratic procedure, at least involving an entry in a trouble ticket database.
You've reached the ideal state if the following statements are true.
Establish a strict Internet access security regime and demand that it be enforced by means of at least one proxy server. Demand from your server administrators that the proxy server be equipped with all kinds of security software, including a virus scanner.
Make sure it is a caching proxy and demand a very long caching period. Your justification for this should be that your line to the Internet is expensive, and therefore you have to avoid that the same file is downloaded twice.
This will also increase the chances that the proxy's cache is filled with obsolete or broken downloads, which will cause all kinds of confusion when ordinary things no longer work.
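The staleness effect just described can be simulated with a toy caching proxy. The URLs, the TTL value, and the fetch function are all invented for illustration; this is a sketch of the behavior, not of any real proxy software:

```python
import time

CACHE_TTL = 30 * 24 * 3600  # one month -- far too long for changing content

class CachingProxy:
    """Toy caching proxy: fetches via a supplied function, caches aggressively."""

    def __init__(self, fetch):
        self.fetch = fetch      # the real downloader would go here
        self.cache = {}         # url -> (timestamp, body)

    def get(self, url, now=None):
        now = time.time() if now is None else now
        if url in self.cache:
            stored_at, body = self.cache[url]
            if now - stored_at < CACHE_TTL:
                return body     # possibly obsolete or broken, served anyway
        body = self.fetch(url)
        self.cache[url] = (now, body)
        return body

# The origin server fixes its file, but the proxy keeps serving the old copy:
versions = iter(["v1 (broken download)", "v2 (fixed)"])
proxy = CachingProxy(lambda url: next(versions))
print(proxy.get("http://example.com/file", now=0))          # v1 (broken download)
print(proxy.get("http://example.com/file", now=7 * 86400))  # still v1, a week later
```

With a month-long TTL, every user downstream of the proxy receives the broken first download for weeks, which is precisely the desired confusion.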
Demand that the proxy employ URL blacklisting, choose as many and as strict blacklists as possible, and set the security level to the highest, so people will be unable to open many web sites. One test site for you to try is Google Documents. If it is blocked, your admins have done their job. If anybody asks why, you should reply, without blinking, that it is, of course, for security reasons.
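The mechanism of overzealous blacklisting can be sketched in a few lines. The blacklist patterns below are invented for illustration; the Google Documents case above is exactly the kind of legitimate site such over-broad patterns catch:

```python
import re

# Invented, deliberately over-broad blacklist patterns.
BLACKLIST = [
    r"docs?\.",   # blocks "docs.google.com" along with anything doc-like
    r"share",     # "file sharing" paranoia blocks every URL containing 'share'
    r"webmail",   # webmail ban
]

def blocked(url):
    """Return True if any blacklist pattern matches -- no whitelist, no appeal."""
    return any(re.search(pattern, url) for pattern in BLACKLIST)

print(blocked("https://docs.google.com/"))    # True -- Google Documents is blocked
print(blocked("https://example.com/manual"))  # False, for now
```

Note that there is no whitelist and no exception process in this sketch; adding one would only create efficiency.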
Also make sure that each Intranet user has to log on to the proxy server each time he opens his browser and wants to access the Internet.
Backup can be a wonderful source of inefficiency if it is handled the right way.
Use, and over-use, automatic monitoring. This needs some attention, because judicious monitoring could possibly create efficiency, rather than destroy it. Monitoring will only work in the right direction if it is over-used. Therefore:
Log everything. Create huge log files. Let different processes write into the same log file.
This will nicely work together with scarce disk space, when quickly growing log files fill up the available disk space, automatically creating monitoring events, trouble tickets, and failures along the way.
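The kind of garbling that results when several processes append to one log file without coordination can be simulated in memory. The writer names and messages are invented; the sketch only illustrates why per-line atomicity matters:

```python
# Simulate two processes whose write() calls interleave in one shared log.
# Real OS appends are atomic only per write() call; log lines emitted as
# several separate writes get mixed with other writers' fragments.
shared_log = []

def write_in_chunks(name, message):
    """Emit a log line as several separate write() calls, as naive code does."""
    return [f"[{name}] ", message, "\n"]

# The scheduler may interleave the chunks of concurrent writers:
chunks_a = write_in_chunks("backup", "job started")
chunks_b = write_in_chunks("monitor", "disk 97% full")
for a, b in zip(chunks_a, chunks_b):
    shared_log.append(a)
    shared_log.append(b)

print("".join(shared_log))
# The merged output mixes both writers' fragments into unreadable lines.
```

Anyone later trying to diagnose a failure from such a log will spend a satisfying amount of time untangling which fragment belonged to which process.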
Everybody agrees that documentation is something good, important, and necessary. There is some truth in this, but the usually well-kept secret is that the usability of documentation is not obvious and varies widely. It is relatively easy to create loads of impressive, beautiful-looking documents that are nevertheless totally useless to the people who actually need good documentation.
The best you can do is to stress the amount of documentation, rather than its usability. Demand that everything be documented, ideally all activities.
Prevent a central, computer-based, searchable documentation repository. You've reached the ideal state when the directory trees of various computers are littered with word processing and other documents that represent isolated pieces of documentation without any systematic structure.
Whether centralized or not, have a large collection of various documents with titles that don't make their scope quite clear. Label them:
and keep adding more different kinds of documents. If you run out of ideas, you can still take some numbers from other documents and label new ones with them, like xc3845.doc.
A project I once came in contact with had thus amassed more than 200,000 files in over 20,000 folders, comprising over 50 GB of data, most of it in various word processing and spreadsheet formats.
Frequently pick out some of these documents and email them to various people with the request for review, correction, adaptation, and enhancement.
A special word is in order for programmers. This type of worker abhors documentation anyway, because they understand quite well that a lack of usable documentation means they will be needed for a long time to explain or change the programs they wrote. Therefore you won't find it difficult to convince programmers to do the right thing. Often you don't have to do anything at all—they will never write genuinely useful documentation on their own without being told.
Never mention inline documentation (explanations interspersed with program code), because it is an attack on inefficiency. If you don't demand it from programmers, they usually won't do it, or they will only insert personal notes that help them understand their own programs a week later. These do no harm, because nobody else understands them.
Instead let programmers write their program, then let some time pass. Something like half a year later demand that they document all their programs extensively in separate documents. This usually guarantees the desired inefficiency.
Planning is fundamentally a good thing, so it is not immediately useful to our purpose. But it can easily be made so.
Your enemy is a central, concise, and consistent plan. Things become much better when you have many different, overlapping plans and schedules.
One ideal tool for this is Excel, because it gives you the freedom to add ever more columns to a list, supposedly with additional important information. If you can have various different, partial plans for various activities that are mailed in criss-cross fashion, that's pretty good. It means that everybody has to try to reconcile these plans, which should ultimately be impossible if there are enough of them.
Ideas for plans and schedules (careful again, some may turn out to be genuinely useful—in that case avoid them):
If you have all the obvious ones, invent more of the same with generic names like:
Abbreviate their names (HK, EO, LO, RP, AL, CL, TS) to amplify the impression that their reason for being is self-evident and their meaning obvious. Of course this also stuns newcomers, which is a welcome side effect.
Be inventive. For example, if you already have a planning sheet, add a preliminary planning sheet.
Again mail them to as many people as possible and demand that they are reviewed, checked, expanded, completed, and mailed back to somebody else to reconcile them. A typical email could sound like this:
Please check the attached Excel document (Housekeeping XX YYY.xls) again and add new activities, remove obsolete activities, and adapt the changed modalities to the actual activities.
[An original, anonymized German text sample is: "Könnt Ihr euch bitte nochmals obiges Exceldokument (Housekeeping XX YYY.xls) anschauen und ggf. neue Aktivitäten ergänzen, obsolete Aktivitäten entfernen bzw. die veränderten Modalitäten den aktuellen Aktivitäten anpassen?"]
Never mind that it doesn't make much sense to make plans after the fact.
Now make sure that those plans and lists are not done just once and forgotten. Wherever it is possible, prepend the word daily, so the control list becomes the daily control list. Only if that is entirely unbelievable, relax it to weekly, but have a strict time schedule for each plan.
Then make a person responsible for each list and regularly check the timely appearance of these lists. Never mind their content, but when one of these plans is not mailed to you in time, immediately send out emails to demand immediate creation and mailing.
Different countries have different laws and regulations, so know these laws of your country and use them well.
I will now cite some German examples, even though they are probably not fully applicable to other countries. Germany has gone farthest in various ways, which now bears sour fruit for the entire economy, but if you discover similar tendencies in the regulations of your country, you can again learn from Germany how to exploit them.
In Germany it is difficult to fire anybody. As long as an employee shows up for work most of the time, firing him is almost impossible, and the attempt can lead to a difficult legal conflict. The courts usually favor the employee over the employer.
This enables, and often even forces, you to keep incompetent workers. Your only remaining task is to make the best use of these. Promote them to line managers. An incompetent manager can work wonders in demotivating over-eager employees. Ideal are assertive people who combine the joy of wielding power with utter incompetence.
This is a German specialty, so it may be difficult to exploit in other countries. But, who knows, perhaps something similar may yet take hold in some other countries or in the occasional big company, so be prepared.
In Germany, workers' participation (German: betriebliche Mitbestimmung, something like company workers' co-rule, but difficult to translate) is done via a company council (German: Betriebsrat), where workers and particularly trade unions are strongly represented.
This system was established shortly after the last world war under the auspices of the allied forces, perhaps in an attempt to hobble the German industry forever. In the long run it has been effective beyond all expectations and is now so deeply entrenched in Germany (though nowhere else), that it is impossible to eradicate.
Through the councils and trade unions, workers have a say in hiring and firing, in working conditions, but also in wage determination and particularly in work time rules. For example, it is not easy in Germany to make somebody work outside normal work hours.
A particularly interesting aspect is that the councils prevent most means to measure worker productivity. A good example is a trouble ticket system (a computer database where problems and their actual solutions are entered) that records, for technical documentation and reference purposes, the beginning and end of each work phase.
With such a system in place, it would be easy enough to measure how much time people actually spend working on the trouble tickets, so this is explicitly prohibited by the councils. The rule is that it is not permissible to track the detailed use of work time for individual persons. Impersonal evaluations of an entire group are likewise prohibited, because they would make it possible to measure the efficiency of the group leader.
When I once checked whether it was still possible to clandestinely do such calculations on the archive of such a trouble ticket system, I found that the archive contained the end time of each assignment, but the start time had been removed, so it was impossible to determine the actual work times.
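The effect of stripping the start times can be illustrated with a toy ticket archive. The ticket data and field names are invented; the sketch only shows that work time is a difference of two timestamps, so removing one of them makes the calculation impossible:

```python
from datetime import datetime

# Toy trouble-ticket archive; the full records carry begin and end times.
tickets = [
    {"id": 1, "begin": datetime(2004, 3, 1, 9, 0),  "end": datetime(2004, 3, 1, 11, 30)},
    {"id": 2, "begin": datetime(2004, 3, 1, 13, 0), "end": datetime(2004, 3, 1, 13, 45)},
]

def total_work_hours(archive):
    """Sum of work phases -- only computable when begin times are present."""
    total = sum(((t["end"] - t["begin"]).total_seconds() for t in archive), 0.0)
    return total / 3600

print(total_work_hours(tickets))  # 3.25 hours with begin times intact

# The sanitized archive, as found in practice, keeps only the end times:
sanitized = [{"id": t["id"], "end": t["end"]} for t in tickets]
try:
    total_work_hours(sanitized)
except KeyError:
    print("work time cannot be reconstructed")  # begin times removed
```

One missing field per record, and the entire productivity calculation collapses, which is exactly the point.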
I'm still working on these. I plan to add more chapters, detailing the following points.
This article may be further refined and expanded. If you have comments, ideas, and particularly reports from big companies, please send me an email. If I receive text that can be included here, I will edit and insert it along with mentioning your name, but not your email address. If you want changes, an abbreviated name, or your name not mentioned at all, just let me know.