Embedded System Testing

System testing is the process of comparing the system to its original objectives, e.g. the requirements set by the customer. The test cases are developed not from detailed software specifications, but from the user documentation.

It is advisable that system testing be performed by an independent testing team to ensure unbiased testing, since the development team's psychological ties to the product can subconsciously stand in the way of rigorous system testing. After all, the goal of system testing is to try to make the system fail, not just to make it pass.

One special characteristic of embedded software development is that the actual environment in which the software runs is usually developed in parallel with the software. This causes problems for testing because comprehensive testing cannot be performed without the hardware environment, which only becomes available late in the project lifecycle. This is resolved by a hardware testing platform on which the tests can be carried out in a simulated environment before the actual hardware environment is available.

Mixed Signal Systems
Mixed signal systems are systems that contain analogue signals in addition to binary signals. Inputs and outputs do not have exact values; they have tolerances that define flexible boundaries for accepted test output. Evaluating results against these boundaries often involves some subjectivity, where the tester must decide whether the test output is accepted or rejected.

A control system interacts with the environment through a continuous feedback loop. The system output affects the environment, and the effects are fed back to the system, which adjusts its control accordingly. The behaviour of the system therefore cannot be described independently, since the system is tightly interlinked with events in the environment.

The small size of the software development team and the lack of an independent software testing team mean that responsibility for software testing unavoidably falls on the developers.

Software testing of low-level embedded systems is a surprisingly young art of engineering and is still in a rather immature state. Certain difficulties make testing embedded software more challenging than conventional software testing. The most critical problem is the tight dependency on the hardware environment, which is developed concurrently with the software and is often required for reliable software testing.

The next least costly point at which to catch an issue is during integration testing. This is the first opportunity to catch issues that go beyond mere coding errors, such as inaccuracies and omissions in the specification or architectural requirements. N.B.: never omit integration testing from the process!
Only after basic unit testing and integration testing should the system be tested on a test stand. At this point, the embedded software will be at a much higher quality level than it otherwise would have been. Make no mistake, issues will still be encountered, but far fewer of them, and the risk of costly re-designs that break schedule and budget is much lower.
Inadequate software testing methods increase the risk of software issues slipping through the tests and eventually manifesting themselves during series manufacturing or, in the worst case, being found by the end customer, where the issues may be safety-critical.
The goal in designing a testing platform is that it can be used from the initial low-level unit testing through to testing the final complete code. In any project it is important to start software testing with the first software prototype in an emulated hardware environment. This approach shifts the emphasis toward discovering and resolving issues at a very early stage of the product lifecycle, improving both the overall quality of the product and the issue resolution lead time.

Here are a few basic principles of software testing that greatly affect the outcome and the efficiency of the testing process. The most important principles are:

  • A test case must contain a definition of the expected output and results.
  • Each test result shall be thoroughly reviewed and analysed.
  • Test cases must be written for invalid and unexpected input conditions, as well as for input conditions that are valid and expected. This reduces the chance of both false negatives and false positives.
  • Examine a program to check both that it does what it should do and that it does not do what it should not do.
  • Test cases should be stored and be repeatable.
  • Plan the testing effort on the assumption that issues will be found.
  • The probability of further issues existing in a section of code is proportional to the number of issues already found in it.

The above principles are intuitive guidelines for test planning and help to build an understanding of the parameters of the testing process and its objectives.