Minimalist Computing

The central idea of our approach can be summarized as minimalist computing. This entire web site is dedicated to this idea in some form. The minimalist principle, when applied to LSDFR computer systems, yields the following guidelines:

  • Minimize Hardware Costs

    Deploy the system across many smaller, cheaper computers rather than a single large computer. For example, given current hardware costs, this guideline would favour deploying on many rack-mounted Intel boxes at around £2k each, rather than on much more expensive SPARC-based computers at about £20k apiece.

  • Minimize Software Costs

    Use good, reputable, free, public domain or open source software wherever possible, and pay for a third-party support contract to help your development teams and MIS department. For example, using Linux/MySQL and paying for support, rather than Solaris/Oracle, can save thousands, even hundreds of thousands, on license fees. Only purchase commercial software when a suitable free or low-cost alternative cannot be found.

  • Minimize Vendor Lock-in

    Sourcing all computing services from a single supplier is an expensive and dangerous strategy. Expensive, because the vendor has you, as the saying goes, 'over a barrel': quite simply, the vendor will enforce extortionate license fees and maintenance costs. Dangerous, because if the vendor goes 'the way of all things', the vendor-specific implementation of your system will be difficult and expensive to maintain. Minimize vendor lock-in by insisting on technologies that are ratified by public standards bodies such as ANSI or OMG, typified by languages such as C++ and SQL, or on open source software such as Linux or Apache.

  • Minimize Training Costs

    Introduce critical new technologies into the development team by using contractors as team mentors where necessary. The mentor's role is to start each team member off in the right direction, help with designs that best exploit the new technology, oversee and review the code produced by the team, and correct any errors. The Pair Programming pattern is applicable here. The team write all (or most of) the code, thus avoiding the contractor becoming 'indispensable'.

  • Minimize Project Failures

    Allow sufficient time to properly develop and test the proposed system. Deadlines that are too tight will simply result in a failed project. Be realistic: time to market is not that important; it is better to get an end result that works and is viable in the long term. Remember, the fastest way to develop high-quality software is to develop it slowly. If time really is an issue, slip functionality, not quality.

  • Minimize Risk

    One way to minimize risk is to implement a performance prototype early in a project. Code up a simulation of the proposed system, with dummy objects of about the same size as those in the real system; estimate the size of data sets, the number of records and the arrival rates; basically, model all the important performance features of the system. Testing and timing the main use-cases in this way reduces the risk of poor performance and gives the development team much needed information early in the project. Also, write small exploratory programs for the 'difficult bits': the most complex parts of the system can often be investigated quite quickly with small throw-away programs, which yield better solutions and more realistic time estimates, helping to reduce overall project risk early in the project life-cycle.
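
    As an illustration, a minimal sketch of such a performance prototype is shown below; the record size, data set size and the 'full scan' use-case are all assumptions chosen purely for illustration.

        #include <chrono>
        #include <cstddef>
        #include <iostream>
        #include <vector>

        // Hypothetical dummy record, padded to roughly the size expected
        // in the real system (assumed here to be about 256 bytes).
        struct DummyRecord {
            char payload[256];
        };

        int main() {
            const std::size_t kRecords = 1000000;      // assumed data set size
            std::vector<DummyRecord> table(kRecords);  // simulate the resident data

            // Time a simulated use-case: a full scan touching every record.
            auto start = std::chrono::steady_clock::now();
            std::size_t checksum = 0;
            for (const auto& rec : table)
                checksum += static_cast<unsigned char>(rec.payload[0]);
            auto stop = std::chrono::steady_clock::now();

            std::cout << "Scanned " << kRecords << " records in "
                      << std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()
                      << " ms (checksum " << checksum << ")\n";
        }

    Timings from a toy like this are only indicative, but they are available in the first week of the project rather than the last.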

  • Minimize Analysis Time

    At Logos Software we recommend a use-case driven approach to requirements gathering. OO methodologies have incorporated use-cases since the conception of OO, and this is because they work. Each aspect of the system is recorded once and is easy to see in context and to cross-reference; the functional description of the system is verifiably complete; use-cases are expressed in plain English, so everyone involved with the system can understand them and verify that the requirements are correct from their own perspective; and as the requirements change, the use-cases are easy to navigate and to edit consistently, thus tracking the new features accurately.

  • Minimize Design Time

    Identify sub-systems and components. Clarify each component's 'contract' and the inter-component dependencies. Design interfaces for each component that fulfil its contract. Don't design at a whiteboard in teams: undertake design individually, or perhaps in pairs, and review the design at the whiteboard. Take the ideas and suggestions away, make the necessary changes and re-review. Continue until there are no further changes, or consensus is reached. This phase can also include exploratory programs to investigate alternative approaches. Do not design down to the last detail; rather, think of the design as a rough sketch of the proposed system.
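
    As a sketch of what a component 'contract' might look like in code, the abstract interface below uses purely illustrative names (OrderStore, InMemoryOrderStore); callers depend only on the interface, so each implementation can be designed, built and reviewed independently.

        #include <map>
        #include <string>

        // Illustrative component contract: callers depend only on this
        // abstract interface, never on a concrete implementation.
        class OrderStore {
        public:
            virtual ~OrderStore() = default;
            virtual bool put(int orderId, const std::string& order) = 0;
            virtual std::string get(int orderId) const = 0;
        };

        // One possible implementation; it can be swapped for another
        // (e.g. a database-backed store) without touching the callers.
        class InMemoryOrderStore : public OrderStore {
        public:
            bool put(int orderId, const std::string& order) override {
                orders_[orderId] = order;
                return true;
            }
            std::string get(int orderId) const override {
                auto it = orders_.find(orderId);
                return it == orders_.end() ? std::string() : it->second;
            }
        private:
            std::map<int, std::string> orders_;
        };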

  • Minimize Component Coupling

    De-coupling is an oft-quoted maxim from the OO community and is still critical in today's systems. Highly de-coupled systems have several advantages in areas such as deployment configuration flexibility, system testing and reliability, build times, and ease of maintenance. This is discussed further in this white paper [ToDo].
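
    One common de-coupling technique in C++ is sketched below: the 'pimpl' (pointer-to-implementation) idiom hides a component's implementation details behind a small header, so changes to those details do not ripple into client builds. The class names here are hypothetical, and header and source are shown together for brevity.

        // report_engine.h - the only header clients need to include.
        #include <memory>

        class ReportEngine {
        public:
            ReportEngine();
            ~ReportEngine();
            void run();
        private:
            struct Impl;                  // defined only in the .cpp file
            std::unique_ptr<Impl> impl_;  // clients never see the details
        };

        // report_engine.cpp - implementation details live here, so changing
        // them does not force clients of the header to recompile.
        struct ReportEngine::Impl {
            void run() { /* ... the real work ... */ }
        };

        ReportEngine::ReportEngine() : impl_(new Impl) {}
        ReportEngine::~ReportEngine() = default;
        void ReportEngine::run() { impl_->run(); }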

  • Minimize Use of 3rd Party Libraries

    This is another aspect of de-coupling, and one that is often overlooked. 'Software churn' is expensive, because every time a component in a system changes there is a probability that side-effects will propagate elsewhere in the system, usually requiring remedial development work of some form. You have no control over the churn of 3rd party libraries or components; therefore the more your project relies on them, the less control you have over how, and more importantly when, changes need to be integrated into your own project. In general, ensure that each functional component that is not developed in-house is sourced from a single, carefully chosen 3rd party, and choose these 3rd-party components with a strong preference for mature Open Source software, such as Linux and MySQL.
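
    One practical way to contain this churn is to hide each 3rd-party library behind a thin in-house wrapper, so that when the library changes, or is replaced, only the wrapper needs to change. The sketch below is hypothetical: Logger is an assumed in-house interface, and the vendor call shown in the comment is imaginary.

        #include <iostream>
        #include <string>

        // In-house logging interface: the rest of the system codes to this,
        // never to the 3rd-party library directly.
        class Logger {
        public:
            virtual ~Logger() = default;
            virtual void info(const std::string& msg) = 0;
        };

        // Thin adapter over the chosen 3rd-party logger. If that library
        // churns, or is swapped out, only this one class is touched.
        class ThirdPartyLogger : public Logger {
        public:
            void info(const std::string& msg) override {
                // e.g. vendor::log(vendor::INFO, msg.c_str());   // imaginary vendor call
                std::cout << "[info] " << msg << '\n';            // stand-in for the real call
            }
        };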

  • Minimize Implementation Time

    Use a source code control system, such as Subversion, to automatically manage different versions/views of your source files. Never check in code that does not compile, link and run correctly. It is good practice to Purify (or otherwise memory-check) and sub-system test all code before it is checked in, and to check in any test harnesses as well. Try to avoid code 'merges', as these can be time consuming and increase risk. Have an automatic build using either Make or Ant. Have developers use IDEs, as these significantly increase productivity (yes, Emacs counts but vi does not!). Release subsets of useful code at regular (two-week) intervals and pass them to the testing team.

  • Minimize Machine Cycles

    Use careful program design, scalable process architectures and languages, such as C++, which allow the programmer complete control of the computer where necessary. Consequently, we advocate deployment environments that efficiently support many different languages so developers are free to utilize the best technology to solve any problems they may encounter.

  • Minimize Memory Usage

    The per-object memory overhead of Java is significantly greater than that of C++, and this becomes important when there are millions of little objects. Garbage collection, particularly on single-CPU boxes, can become a problem as the number of objects increases. Also, the cost of running constructors is significant, and should be mitigated by using pool allocation techniques where appropriate. Memory issues can become acute if a system needs to run many hundreds or thousands of processes, all competing for memory, with each process being swapped out to disc, reducing total throughput and statistically increasing latency. LSDFR systems need to run lean and mean!
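
    As a sketch of the pool allocation idea (simplified, and with illustrative names), the class below allocates all of its objects once, up front, and recycles them through a free list, so no per-object heap allocation or re-construction is paid on the hot path.

        #include <cstddef>
        #include <vector>

        // Minimal fixed-size object pool: objects are constructed once and
        // recycled, avoiding per-object heap allocations and constructor cost.
        template <typename T>
        class Pool {
        public:
            explicit Pool(std::size_t capacity) : storage_(capacity) {
                free_.reserve(capacity);
                for (std::size_t i = 0; i < capacity; ++i)
                    free_.push_back(&storage_[i]);
            }

            T* acquire() {                       // returns nullptr when exhausted
                if (free_.empty()) return nullptr;
                T* obj = free_.back();
                free_.pop_back();
                return obj;
            }

            void release(T* obj) { free_.push_back(obj); }

        private:
            std::vector<T>  storage_;   // all objects live here, allocated once
            std::vector<T*> free_;      // currently unused objects
        };

        // Hypothetical usage:
        //   Pool<Order> pool(100000);
        //   Order* o = pool.acquire();  /* ... use o ... */  pool.release(o);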

  • Minimize RTTI (Run-Time Type Identification)

    Systems designed such that callers must identify the type of objects at run-time before they know what to do with them suffer in several ways: they are more prone to run-time exceptions, therefore they are less reliable; they are harder to test, because testers have to contrive to subvert the type system; they are more complex and harder to maintain, because the type of every object must be tested at the call site; they typically run slower, because of the testing overhead and exception handling. Compile-time type safety should almost never be compromised.
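
    The contrast can be seen in a small sketch (the Shape and Circle classes are purely illustrative): the first function forces the caller to identify the type at run time and silently mishandles any shape it does not know about, while the second leaves the work to virtual dispatch and needs no changes when new shapes are added.

        class Shape {
        public:
            virtual ~Shape() = default;
            virtual double area() const = 0;    // behaviour lives with the type
        };

        class Circle : public Shape {
        public:
            explicit Circle(double r) : r_(r) {}
            double area() const override { return 3.14159265 * r_ * r_; }
        private:
            double r_;
        };

        // Discouraged: the caller must test the type before acting on it.
        double areaWithRtti(const Shape& s) {
            if (const Circle* c = dynamic_cast<const Circle*>(&s))
                return c->area();
            return 0.0;                         // silently wrong for any new Shape
        }

        // Preferred: no run-time type test; compile-time type safety is kept.
        double areaWithDispatch(const Shape& s) { return s.area(); }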

  • Minimize Complexity

    Unless it is known, or can easily be shown, that a simple, direct approach to an implementation issue will fail or perform unacceptably, it is usually best to code the simple, direct approach and test it. For example, rather than code up a multi-threaded version of service X, it is probably best to code it as a single-threaded service and see how it runs. This idea is an example of the 'code-it-twice' pattern and a simplification of one of the central tenets of 'extreme programming': 'you ain't gonna need it'. Often the simple, direct approach needs a little indirection: the best solution requires appropriate generalisation of the problem to be solved, implementation of the general solution, and then implementation of the specific problem at hand in terms of that general solution.
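
    As a purely illustrative example of the 'simple, direct approach', the first cut of service X might be nothing more than a single-threaded loop like the one below; only if measurement shows it cannot keep up is a multi-threaded version worth the extra complexity.

        #include <iostream>
        #include <queue>
        #include <string>

        // Simplest possible cut of a request-handling service: one thread,
        // one loop, no locking. Measure it before reaching for threads.
        int main() {
            std::queue<std::string> requests;   // stand-in for the real input source
            requests.push("request-1");
            requests.push("request-2");

            while (!requests.empty()) {
                const std::string req = requests.front();
                requests.pop();
                // ... handle the request directly ...
                std::cout << "handled " << req << '\n';
            }
        }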

  • Minimize Unnecessary Symmetry Breaking

    Up to a certain point, the more symmetry a system exhibits - the more alike its parts are - the easier the system is to understand. Of course, applied in extremis, this statement is false. The aim here is to minimize arbitrary differences in the system, such as having multiple String classes with no useful differences compiled into the same executable. The advent of Grid Computing minimizes symmetry breaking from the MIS department's perspective, but typically by using very expensive hardware. So, to minimize hardware costs, expect to break symmetry and deploy certain business-critical systems on dedicated computing resources to provide guaranteed QoS.
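
    In code, avoiding this kind of arbitrary difference can be as simple as agreeing on a single project-wide string type, as in the hypothetical sketch below, rather than letting several near-identical String classes grow up in different corners of the same executable.

        #include <string>

        // One project-wide string type; the alias name is purely illustrative.
        namespace proj {
            using String = std::string;
        }

        // All components then speak the same type at their interfaces.
        proj::String greet(const proj::String& name) { return "hello " + name; }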

  • Minimize Maintenance Costs

    Use the original developers to maintain the code where possible. It is much easier for someone to change their own code than to read and understand other people's code. Code ownership is good in this respect. Again, if the original developers are engaged on another project, the mentoring model works here also; the maintenance engineers are thoroughly mentored by one or more members of the original development team. Using publicly standardized technologies, such as C++, Linux, and SQL for the system means that it will be easier and cheaper to recruit maintenance team members in the future.

These ideas are only guidelines, and judgement is required as to when, or whether, any of them apply to your particular business or project. All these points are expanded further in this whitepaper [ToDo].