Software Development Life Cycle Part 5 – Evaluation

Evaluation

Evaluation is the last step of the cycle, but remember that the SDLC is meant to be repeated over and over; the process should not end until the project has run its course, and it becomes easier to start again when new technology can genuinely improve the project. This is also why documentation is important throughout all the stages of the SDLC: it helps to ensure that future developments to the system are done in line with the current system.

It can be hard to see the real benefit of doing an in-depth evaluation of all the progress made during the other stages. Once you have stepped over the threshold of the evaluation stage and begin the cycle again, you will truly be able to see its importance: you can see what problems you encountered during the last cycle and be better prepared for the next one. The problems may not just be down to the development of the system; time management during the project may have been poor, and by analysing what you did you should be able to see what techniques you could use to make the time management of the next project more of a success.

Other issues could include problems during the workshops, issues with stakeholders, illness, or unforeseen incidents. Being able to evaluate these is really going to help other people coming in who may be taking over the project: they will be able to see issues that kept arising during the stages and then be able to tackle those problems. It is easy to see why carrying out a good evaluation is important, but actually doing one is not easy, and most people find it difficult.

But really you are here to evaluate the system you have created (although students may also be asked to evaluate personal performance, and you could ask some of these questions about yourself). Here is a list of criteria that can help you to understand some of the things you should be thinking about when evaluating the system you have built:

  • Speed
  • Accuracy
  • Quality Of Output
  • Reliability
  • Cost Of Operation
  • Attractiveness
  • Ease Of Use
  • Robustness
  • Capacity
  • Compatibility With Other Systems
  • Functionality (Does it have all the functionality it needs)
  • Security
  • Flexibility (Can it be upgraded)
  • Size

Judge your system on efficiency and effectiveness. When we talk about efficiency we mean whether the system was good value for the effort you put into it. Generally you could be thinking about time: how long it took to load, and how well it processed operations against a time scale. Did your program complete a given request in a reasonable amount of time? For example, you would reasonably expect a calculation on a calculator to take very little time at all. The same is true for your system: could it complete a task in a reasonable amount of time? Remember to think about connection speed and factor it into your findings. You could also look at the economic value of the system, such as how much it costs to run (electricity, hardware and so on), and compare it in terms of labour costs: is running the new system going to mean that fewer people are needed to operate it, or does it perform actions more quickly than the older system that was previously in place?
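To make the timing part of efficiency concrete, here is a minimal Python sketch that measures how long one request takes and checks it against a time budget. The `process_request` function and the 0.5-second budget are hypothetical stand-ins; substitute whatever operation and requirement your own system has.

```python
import time

def process_request(n):
    # Hypothetical stand-in for a task your system performs,
    # e.g. a calculation, a search, or a database query.
    return sum(i * i for i in range(n))

# Time how long a single request takes using a high-resolution clock.
start = time.perf_counter()
result = process_request(100_000)
elapsed = time.perf_counter() - start

print(f"Completed in {elapsed:.4f} seconds")

# Compare against a budget set out in your requirements (0.5 s is an
# assumed example, not a general rule).
within_budget = elapsed < 0.5
```

A figure like `elapsed` is exactly the kind of objective fact you can put straight into your evaluation write-up.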

Effectiveness means how well the system meets its original requirements. Say you made a calculator: you would judge it on the accuracy of its calculations. The questions you ask yourself have really already been set out by the scope and requirements, which you created in the analysis and pre-planning stages (these haven't been covered much here, but do look at the other blog posts). How easy this section is could really depend on how well structured your analysis and design stages were; they do really help you here.
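The calculator example can be checked directly against its requirements. Here is a small sketch, assuming a hypothetical `add` function as the system under test and a hand-picked set of requirement cases:

```python
def add(a, b):
    # Hypothetical system under test: the calculator's addition.
    return a + b

# Each case comes from the original requirements: inputs -> expected output.
test_cases = [((2, 3), 5), ((-1, 1), 0), ((0.1, 0.2), 0.3)]

passed = 0
for (a, b), expected in test_cases:
    # Allow a tiny tolerance for floating-point arithmetic.
    if abs(add(a, b) - expected) < 1e-9:
        passed += 1

accuracy = passed / len(test_cases)
print(f"Accuracy: {accuracy:.0%}")  # prints "Accuracy: 100%"
```

Reporting "passed 3 of 3 requirement cases" is far stronger evidence of effectiveness than simply saying "the calculator works".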

Now that you have an idea of the questions you need to ask about your project, it is time to evaluate your project/system. There are two main methods of evaluating your system:

  1. Objective – this means collecting facts and figures about the system, e.g. the system performed X calculations in Y amount of time.
  2. Subjective – this means collecting opinions, such as a beta tester saying it ran well under certain circumstances. Don't use hyperbole here, as it could affect your grade; state what the conditions were and then compare them to other conditions or a similar system.
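The objective example above (X calculations in Y seconds) can be gathered with a short throughput script. This is a sketch, with `calculate` standing in for whatever operation your system actually performs:

```python
import time

def calculate(x):
    # Hypothetical operation performed by the system under evaluation.
    return x * x + 1

# Objective measurement: count how many calculations complete
# within a fixed time window.
duration = 0.1  # seconds; pick a window long enough to be representative
count = 0
deadline = time.perf_counter() + duration
while time.perf_counter() < deadline:
    calculate(count)
    count += 1

print(f"Performed {count} calculations in {duration} seconds")
```

Run it under the same conditions each time (same machine, same load) so the figures are comparable between cycles of the SDLC.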

Structure your evaluation as you would an experiment; an example structure is below:

  1. Aim – Evaluate the system (and personal performance)
  2. What – Criteria set out early in the development process
  3. How (Method) – Collecting facts and opinion about the system
  4. Result – What you discovered about the system
  5. Conclusion – Did the system do what it set out to do?

The last section, which is what I would say your tutor is really looking for, is based on personal performance. A few questions you could ask yourself (this is not an exhaustive list):

  1. How well did you carry out the task?
    • Time management
    • Quality of work
    • Focus
    • Obstructions
  2. What could you have done differently to improve the work?
  3. What have you learnt from the process? This really needs to be a long list of skills that you would take from the experience, not just technical ones.
  4. How could you improve your performance next time around?

Thanks for reading this; I know it was long, but I felt that if I had had this sort of information to guide me in my studies, I would have had a clear path set out for me when I started.

Nathan Lunn

Part 1 – Introduction & Analysis
Part 2 – Design
Part 3 – Implementation
Part 4 – Testing
