Have you ever wondered what goes into testing new software applications? We develop a lot of custom software at OPUS. When a new tool is out of development, we work as a team to try to break the system! This means we’re looking for software bugs that our developers can then correct, long before they reach our end users. First we pretend that we’re the end user: We click around and rummage through the windows and fields. Then we really get going and basically try to trash the system! We’re testing because we want to find any little error or glitch.
Once the most obvious bugs are fixed, we create test cases. Test cases are true-to-life scenarios of how the software would be used. By trying to accomplish real tasks, we can see if the software achieves the expected outcome. For instance, does clicking Submit actually perform the submit function? When test cases don’t generate expected outcomes, we log the issue with error codes and send those logs back to our developers so that they can make adjustments.
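To make that concrete, here is a minimal sketch of what a test case like the Submit example might look like in Python. The `submit_form` function and its field names are purely illustrative stand-ins, not OPUS's actual code:

```python
# Toy stand-in for the application's submit action.
# A real test would call the actual software instead.
def submit_form(fields):
    if not fields.get("name"):
        return "error: name is required"
    return "submitted"

def test_submit_returns_expected_outcome():
    # True-to-life scenario: fill in the form, click Submit,
    # and check that the expected outcome occurs.
    assert submit_form({"name": "Ada"}) == "submitted"

def test_submit_reports_missing_field():
    # If the outcome differed from expectations here,
    # we would log the issue for the developers.
    assert submit_form({}) == "error: name is required"

test_submit_returns_expected_outcome()
test_submit_reports_missing_field()
print("all test cases passed")
```

In practice a test runner collects and executes checks like these automatically, but the core idea is the same: each test case states the task and the outcome we expect.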
We also try to think through possible issues by asking questions like, What happens when the software experiences a user-generated error? A user-generated error could occur if a user tries to enter the same record twice or forgets to put data into a required field. It’s important to make sure that the software will handle such errors properly and generate alerts for corrective action. These alerts help people use the software effectively. At OPUS, we want our custom software to be both useful and bug-free. Testing is a necessary step in achieving that goal, and it is one of my favorite parts of my job!
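The two user-generated errors mentioned above, duplicate records and empty required fields, can be sketched in a few lines. The `Database` class and its alert messages are hypothetical examples, not the behavior of any specific OPUS product:

```python
# Hypothetical sketch: the software catches user-generated errors
# and responds with an alert prompting corrective action.
class Database:
    def __init__(self):
        self.records = set()

    def add_record(self, record_id, data):
        if not data:
            # Required field left empty: alert instead of saving bad data.
            return "Alert: required field is empty - please enter data."
        if record_id in self.records:
            # Same record entered twice: alert instead of duplicating it.
            return "Alert: record already exists - duplicate not saved."
        self.records.add(record_id)
        return "Record saved."

db = Database()
print(db.add_record(1, "first entry"))  # normal case: record is saved
print(db.add_record(1, "first entry"))  # duplicate entry triggers an alert
print(db.add_record(2, ""))             # empty required field triggers an alert
```

During testing we would deliberately trigger both error paths to confirm that the alerts appear and that no bad data slips through.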