BBST Foundations takeaways

Having finally completed the BBST Foundations course, it is time to share my key takeaways as well as to compare how my terminology has changed since my last blog post.

Aside from a refresher on testing-related matters, my key takeaways had less to do with hands-on testing and more with how to approach learning and testing in general:

  1. I learned that “Reading stuff because it is interesting” and “preparing for an essay-style exam without reference materials” produce very different styles of learning. The BBST courses are known to require 12 to 15 hours of work every week. While this didn’t seem like a lot to me at first – this was the time I spent “reading stuff because it is interesting” anyway – I quickly realized that a deep dive into the topics required much more focused work and dedication than I had anticipated. To compensate, I had to slow down my other activities and completely stop writing any new blog posts.
  2. If I did not agree with a definition, it was difficult to work with it at first. However, having multiple contradicting definitions for the same thing – and being able to switch between them when appropriate – is a great help when discussing or modelling the problem space. I now think that being mentally fluent with various contradicting models and definitions is a key skill for testers.
  3. Learning to explain, to argue and to give feedback are key tester skills as well. Most of the assignments had us participating in groups, writing short essay-style explanations of our understanding of the course materials and then peer reviewing each other’s work.

Looking at my previous post, “Sharing my Testing Terminology, before BBST”, I realize that the course was not so much about terminology (with the exception of lesson 1) as about how to approach different problems in testing. Because of that, the words I commonly use did not really change that much.

I now have a slightly different idea of why the distinction between black box and white box testing can be interesting in some contexts. I added the term “implementation level testing” to my vocabulary, to contrast “system level testing”. I learned to appreciate a lot of the concepts that did not even make it into my old testing terminology. Maybe it is time to build a new one from scratch.

As a closing note, I can highly recommend the BBST Foundations course to anyone who wants to deepen their understanding of the basic problems of software testing – I think the basics are important, no matter how long someone has been testing for.

Sharing my Testing Terminology, before BBST

One of the things that every sorcerer will tell you is if you have the name of a spirit, you have power over it.

Gerald Jay Sussman quote, 6.001 Structure and Interpretation of Computer Programs – Lecture 1B: Procedures and Processes; Substitution Model, July 1986 (link)


Since I started with the AST BBST Foundations course a few weeks ago, I thought it would be interesting to publish my current terminology for thinking about testing. Hopefully, this will be followed up by a “What I learned from BBST” post four weeks from now.

I do not claim that any of these represent some kind of “agreement” between testers, especially not an international one – these are just the words that I use for shaping my own mental model of what testing means to me. Most of the terms originate from conversations in the Context-Driven Community, and some belong to the RST namespace according to James Bach and Michael Bolton.

Namespaces – a namespace is a vocabulary of terms that is confined to a project or a group. Michael Bolton has a great blog post on the topic. Besides the RST namespace, I like to think of the BBST course terminology and possibly the ISTQB glossary as namespaces. Additionally, most companies as a whole, as well as any project in isolation, probably have their own explicit or implicit namespaces.

The cool thing about namespaces is that we can create new ones whenever we want to, as long as we remind ourselves that these words might lose their meaning (or mean something completely different) in a different context.

One of the tricks to using namespaces is to make conscious switches when you change contexts. For example, a “Test Strategy” might have a very specific meaning in one company and a whole different meaning in another – and “out in the open” it is not that well defined at all.

Approaches – which can be thought of as “styles”

  • Scripted Testing
  • Exploratory Testing (a term with interesting history in its own right)
  • Tool-heavy style, Analytical style, Well-documented style, etc.

Methods – sometimes called “x-based” techniques

  • Black box – which is kind of the same as ‘specification based techniques’ for some people
  • White box – which is kind of the same as ‘structure based techniques’ for some people
  • Experience based – sometimes included here, but that just seems terribly wrong to me.

I’m currently trying to figure out whether I find this distinction useful or not. Interestingly, these terms have been completely removed from the RST class materials.

General test techniques (According to HTSM) – A test technique is a heuristic for creating tests.

  • Function, Domain, Stress, Flow, Claims, User, Risk, Scenario, Automatic

Test design techniques – Interestingly, these seem to be almost exclusive to the “factory school”

  • Static
  • Dynamic

Test levels – For me, a test level is the layer at which our testing is performed. It is not really about coverage – we need to test at all levels – but about testability. It might be easiest to test certain elements of the product at the unit level.

  • Unit
  • Integration
  • System – also known as end-to-end
  • Acceptance – Sometimes included as a test level, for reasons unknown. For me, the key difference between system and acceptance testing appears to be the performer: acceptance testing should be done by the customer, system testing by the developing organization. As such, it is not really a test level for me. Acceptance testing focuses more on whether the application really performs as required, not as requested.

Test types – This category might as well be called “unsorted”. Wikipedia lists, among others: Installation testing, Compatibility testing, Smoke and sanity testing, Regression testing, Acceptance testing (again!), Alpha/Beta testing, Security testing and A/B testing.

Some of these seem to be about coverage level, some are non-functional requirements, yet others seem to imply who does the testing.

(Testing) Paradigms – An organizing worldview; a model. CDT is a paradigm. So is Agile. Interestingly, Cem Kaner has stopped using this term, as he notes in a debate video with Rex Black (paradigms are mentioned at 24:45).

(Testing) Methodologies – Specific ways of working within a paradigm. Scrum, RST, TDD and BDD are methodologies.

Verification and validation – I’ve never used these terms in a professional capacity, and I’ve heard several people claim the same thing. Still, they appear in all the classic textbooks, and I keep encountering them in testing philosophy. In the debate video linked earlier, Rex Black noted that Testing vs. Checking (in the RST namespace) is analogous to verification and validation for him.

How SQL can enhance your testing

There are plenty of free online resources available for learning how to interact with databases using SQL, teaching students all about the effective use of statements like “select”, “update”, “join” and “group by”. The best part is that once you start learning, there is not all that much complexity to it – all of these materials can be processed in less than a day.

However, while these “general purpose” courses cover all the basic operations, they fail to mention why we as testers should take the time to learn SQL in the first place – and how our needs differ from those of our developers. This is what the rest of this article is about – to describe how taking control of the database has helped improve my day-to-day testing and why I think it can enhance everyday testing work in various contexts.

Modelling and thinking about your application: As testers, we rely on written and mental models to come up with test ideas and to think about our software in various dimensions. Most importantly for me, approaching the product under test from the database perspective allows you to model the application in a new way, supplementing any existing thought patterns.

Going over a database model is a great start, but I think that having hands-on experience gives me a better understanding of how specific features are implemented, as well as how the application as a whole is put together. This can be a tremendous asset to finding interesting bugs in upcoming features – even outside the scope of SQL. In addition, you might be able to anticipate possible failure scenarios of new functionality based on how they will interact with the database. By letting my mind wander around database interactions, I have found showstopper bugs while drinking coffee on my balcony.

Fun fact: modern development frameworks create a lot of boilerplate database structures and interactions automatically, so developers can sometimes become quite distant from the actual internals of the database. As a tester, you might get into a position where nobody knows the ins and outs of the database as well as you do!

Visibility: After performing an action via the front end or API, you may need to check whether something was actually stored or updated in the database (and whether everything else was left intact). You can also monitor internal states of the application that are otherwise hidden.
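As a minimal sketch of what such a visibility check could look like – here using Python's built-in sqlite3 module against an in-memory database, with an invented accounts table standing in for whatever your application actually uses – the idea is simply to query for the record that the front end should have created:

```python
import sqlite3

# Hypothetical schema: real table and column names depend on your application.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, email TEXT, status TEXT)")

# Simulate the write the application would have performed via the front end:
conn.execute("INSERT INTO accounts (email, status) VALUES ('tester@example.com', 'active')")
conn.commit()

# The visibility check: did the expected record actually land in the database?
row = conn.execute(
    "SELECT status FROM accounts WHERE email = ?", ("tester@example.com",)
).fetchone()
print(row[0])  # expect 'active' if the write succeeded
```

In practice you would point the connection at the application's real database, but the pattern of "perform action, then query for its footprint" stays the same.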

Controllability: With Insert and Update rights, you can perform a specific change on the application’s data that might otherwise be hard to trigger, such as time-related functionality. As an example, when testing recurring billing functionality, you could edit relevant date fields to test business flows that would normally have you waiting for weeks. Additionally, you can perform massive changes with a single query – such as replacing all emails, phone numbers and passwords with new test data. As an example, I have tested various systems that demanded unique and valid email addresses and phone numbers as part of account creation. I could create accounts from the front end – using my actual email and phone number – then replace them with dummy data using ‘update’ queries.
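Both tricks can be sketched with sqlite3 – the accounts table and its columns are hypothetical, but the queries mirror what I describe above: pulling a billing date into the past, and anonymizing contact data in bulk with a single statement:

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, email TEXT, next_billing_date TEXT)"
)
conn.execute(
    "INSERT INTO accounts (email, next_billing_date) VALUES (?, ?)",
    ("my.real.address@example.com", (date.today() + timedelta(days=30)).isoformat()),
)

# Pull the billing date back to yesterday, so the recurring-billing
# job fires on its next run instead of a month from now.
yesterday = (date.today() - timedelta(days=1)).isoformat()
conn.execute("UPDATE accounts SET next_billing_date = ?", (yesterday,))

# Replace every email with unique dummy data in a single query,
# using the account id to keep addresses unique.
conn.execute("UPDATE accounts SET email = 'test+' || id || '@example.com'")
conn.commit()

print(conn.execute("SELECT email, next_billing_date FROM accounts").fetchone())
```

The `'test+' || id || '@example.com'` concatenation is the part that makes a single query scale to thousands of accounts while keeping each address unique.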

New challenges: Some features might exist entirely at the database layer, thus being really difficult to test in an end-to-end scenario. When there are no testers around who have the required skills to investigate those features, developers might have to test them instead. Even worse, such “untestable” features might be pushed directly to production. As an example, I have tested message queues that were only visible at the database layer, but had very specific requirements for the priority in which messages of various types were to be sent. Using queries that counted the number of messages in the queue grouped by the message type, I was able to observe in real time how the prioritization behaved.
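A sketch of that kind of monitoring query – the message_queue table and its contents are made up for illustration, but the grouped count is exactly the shape of query I describe:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message_queue (id INTEGER PRIMARY KEY, msg_type TEXT)")

# Seed the queue with a mix of message types, as the application would.
conn.executemany(
    "INSERT INTO message_queue (msg_type) VALUES (?)",
    [("billing",)] * 3 + [("welcome",)] * 5,
)
conn.commit()

# Count queued messages grouped by type; re-running this while the
# queue drains shows which types are being prioritized.
counts = dict(
    conn.execute("SELECT msg_type, COUNT(*) FROM message_queue GROUP BY msg_type")
)
print(counts)
```

Re-issuing the same grouped count in a loop (or from a database console) while the queue is being processed is what turned this into a real-time view of the prioritization behaviour.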

Get to know your data: Depending on your context, access to an (obfuscated) copy of production data could allow you to gauge the impact a bug would have on the customers. As an example, I once encountered a serious bug that was deemed extremely difficult to fix. However, it only affected specific types of legacy accounts under specific circumstances. Querying the database revealed that only an extremely small number of actual users were using this type of account, changing the priority of the bug significantly.

In addition, database queries help you find realistic data to test with. Testing with self-generated test data is great to show that software ‘can work’, but for adequate coverage, it usually makes sense to make the test environment as close to production as possible – and a big part of that is having realistic, messy test data. As an example, I once encountered a strange ‘internal server error’ bug that could only be replicated on some production accounts, but never with test data. The root cause turned out to be the specific way in which some accounts were created in previous versions of the product.

Building trust: Last but not least, database access is a ‘key to the castle’, as described in Ioana Serban’s CAST2015 talk (database access is specifically discussed at the 35 minute mark). Working in an outsourced testing company, I am used to ‘asking for keys to the castle’ whenever a new project starts. Getting an increased level of access and showing how much this helps the testing effort – without breaking anything – can be used to increase mutual trust and respect within the project team.

Generate test data: If you need a thousand accounts for performance testing, it might be OK to use database queries for that – insofar as you can be sure that accounts created directly in the database do not differ from normal accounts in any performance-significant way.

However, I urge you to be careful with query-generated accounts for functional testing – there might be minute details – such as encoding or input field length limitations – that cause your test account (or any test data, really) to act differently from an account that was created by a real user.

As an example, I once tested an application that required measurements for every hour of every day for a full year – that is 8760 data points every time I wanted to test something. Using insert statements from a spreadsheet, I could add the exact test data I wanted in seconds.
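The same bulk-insert idea can be sketched with sqlite3's executemany – the measurements table and its columns are invented for illustration, but the arithmetic matches the example above: 365 days times 24 hours gives 8760 rows:

```python
import sqlite3
from itertools import product

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (day INTEGER, hour INTEGER, value REAL)")

# One reading for every hour of every day of a 365-day year.
rows = [(day, hour, 0.0) for day, hour in product(range(365), range(24))]
conn.executemany("INSERT INTO measurements (day, hour, value) VALUES (?, ?, ?)", rows)
conn.commit()

total = conn.execute("SELECT COUNT(*) FROM measurements").fetchone()[0]
print(total)  # 8760 data points, inserted in one call
```

In my case the insert statements were generated from a spreadsheet rather than a script, but the effect is the same: a full year of test data in seconds instead of thousands of manual entries.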

As a closing note, my recommendation for the fastest way to learn SQL is to try out statements on whatever database-driven application you are testing at the moment. Working with familiar data makes learning much faster, since you will already know what most of the tables are used for. Ideally, I would get in touch with developers who interact with the database as well – they can be a great help in making queries return exactly the data that you want to see. In time, this will work the other way around too – you may be asked to review or test database statements affecting production data.

This is the end of my list for now. You are welcome to post a comment if you disagree with something or want to add your own ideas!

A resource on technical testing stuff

After being a tester for half a decade, I have found that learning and discussing the ideas and practices that interest me can only do so much for my continued professional growth. While being able to understand concepts, practices or tools is a good starting point, it takes constant deliberate practice to actually be able to explain those things to others in a coherent manner.

So after years of neglecting to write about the things that I am doing, I decided to try again. This time, I have a good amount of peer pressure to help me start and then keep on going. Special thanks to Carol Brands who published her first blog post before I did!

Since it took me a while to catch up with Carol, I have had time to contemplate what I want to write about and share with other testers in the world. My mission statement is as follows:

This website will become a resource for technical testing that follows the Context-Driven Testing paradigm.

To elaborate:

Context-Driven Testing simply means that I agree with the seven basic principles of the Context-Driven School and will attempt to follow these principles in future posts.

The term technical testing is not as easy: The CDT paradigm demands an answer to the question “technical to whom?”. I confess that I cannot describe “technical testing” with any certainty, but I doubt there is any widespread agreement on the matter to begin with. However, I claim that there are skills that might be considered technical and from which most testers can benefit. I will attempt to write about those skills here.

The goal is to write about skills, concepts, practices and tools, how testers can potentially benefit from them and in which contexts they might have a high impact or instead provide little to no value at all.

Or simply: putting technical testing stuff into context.