In a cyber security capture the flag competition, the most important element is arguably the challenges. Poorly designed challenges can leave players disappointed or frustrated, and in the worst cases can totally fail to teach any new skills. Creating effective challenges is a careful balance of difficulty and educational value, while still ensuring they are enjoyable to tackle.
In order to create fun and educational challenges, we as challenge authors need to first understand the mindset of a player. Without considering how players will approach a challenge, it can be incredibly hard to create one that is rewarding and fun while also remaining educational.
It might go without saying, but the most effective way to understand the mindset of a player is to become a player. Taking part in lots of different CTF events not only helps with understanding the processes players go through, but also helps develop a gut feeling for when a challenge is good, or when it’s going to be annoying and frustrating.
With that out of the way, let’s take a dive into the processes players typically go through when solving a challenge. We can break them down into a few discrete phases, though it is worth noting that not every phase is applicable to every challenge, and sometimes phases can occur out of order or simultaneously.
Initially starting a challenge
When starting a new challenge, players are first going to have a very brief look at it and ask a few questions, such as:
- What type of challenge is this?
- What parts of this challenge can I immediately access?
- Is there an obvious end goal?
- Are there any easy exploits I can see right now?
- Do I want to attempt this challenge now, later, or not at all?
That final question is one of the most important, because it determines what players are going to spend their time on; there shouldn’t be challenges that turn players away without an attempt being made.
Players are going to need to answer all of these questions for every challenge, consciously or not, so as challenge authors our job in this phase is to make them as painless to answer as possible. Time players spend looking at a challenge to try and answer these questions is wasted time they could be spending playing the challenge.
One popular helper used by challenge authors during this phase is explicit “tagging” of challenges. For example, a web-based challenge based around an SQL injection might have “Web” and “SQL” indicated as part of the challenge brief. This has the added benefit that players can use these tags in reverse and can, for example, view a list of all “Web” challenges if those are what they wish to play. It is prudent, however, not to add more tags than necessary: not only can it be overwhelming for players, but it could even give away too much information about the solution!
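The tagging idea can be sketched with a small data structure; the challenge names and tags below are hypothetical, not from any real event:

```python
# A minimal sketch of challenge tagging; names and tags are hypothetical.
challenges = [
    {"name": "Admin Panel", "tags": ["Web", "SQL"]},
    {"name": "Broken Token", "tags": ["Web", "Crypto"]},
    {"name": "Hidden Bits", "tags": ["Steganography"]},
]

def by_tag(tag):
    """Return the names of all challenges carrying the given tag."""
    return [c["name"] for c in challenges if tag in c["tags"]]
```

With this, a player browsing the “Web” category would see “Admin Panel” and “Broken Token”; keeping the tag list short keeps this view useful without spoiling solutions.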
If there are tools or resources players are expected to know about, or have access to, it is similarly important to indicate these in the challenge briefing, as otherwise players can be left totally lost. Likewise, adding intentional red herrings and false starts at this phase rarely improves a challenge; it just causes the challenge to be perceived as less fun and more “guessy”.
If this phase is taking too long, it could be that your challenge is giving too much to players for them to do. For example, if a challenge expects users to attack a login form, the problem space could be drastically reduced by having only a homepage and a login page, and having all other links take the user to a “not implemented” page or similar. The degree to which this should be performed varies based on the specific event and the learning objective of the challenge.
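The login-form example above boils down to stubbing out every route that isn’t part of the challenge. A minimal sketch (the routes here are hypothetical):

```python
# Sketch of reducing the problem space: only the pages relevant to the
# challenge are implemented; every other link leads to a stub page.
# Route names are hypothetical.
IMPLEMENTED = {
    "/": "homepage",
    "/login": "login form",  # the intended attack surface
}

def handle(path):
    """Dispatch a request path, stubbing out everything irrelevant."""
    return IMPLEMENTED.get(path, "not implemented")
```

Players who click through to `/profile` or `/settings` immediately see they are off the intended path, instead of spending an hour probing dead ends.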
Setting up local copies of challenges
Depending on the type of challenge, it is during this period that players will also attempt to create a local version of the challenge. Not all challenges require this step, but for challenges that do, this process should ideally be streamlined. Rather than just providing players with source code, it can be beneficial to also provide basic configuration, or even an already-working solution such as providing downloadable containers.
Sometimes getting a challenge to run at scale for an event involves an unusual or complex configuration within the infrastructure. While this is an expected part of running an event, it shouldn’t be expected for players to need to reproduce these complicated setups. The setups provided to players should be the minimum required to get the challenge started, rather than the exact setup used in production.
Part of investigating a challenge is comparing what you can see with what you would expect to see. If a challenge runs using an unconventional method, this could mislead players into suspecting that there is something related to how the challenge is run that makes it vulnerable. If that is the case, great, players are on the right track! If, and this is more likely, it is not the case, players are now being led down a potentially quite time-consuming path that will lead nowhere. It can be useful to justify any unconventional configurations with a short comment, or even just a note that “the unusual configuration of this service is not part of the challenge”.
Beginning to investigate exploits
Once players have a general idea of what exactly they are looking at, it is time to begin searching for useful ways to exploit the challenge to achieve the objective. How this process is performed will depend largely on the exact challenge. If players have been provided with source code, it can be expected that this process largely revolves around source analysis. On the other hand, for a binary exploitation challenge this phase may involve disassembling the binary, forming an understanding of how the binary functions, then looking for places where unintended behaviour might be present.
Not all challenges in a CTF must revolve around exploitation. For example, cryptography, forensics and steganography challenges generally involve a player extracting useful information from seemingly useless information. In these cases, this phase typically revolves around exploration of the particular data provided, and research related to it.
A cryptography challenge often involves finding a flaw in how a piece of data is being encrypted, signed, decrypted or validated, which typically requires knowledge about that specific crypto scheme. Forensics and steganography challenges can involve identification of particular “tell-tale” signs that give away the true nature of the data presented. This is where it can be essential that in the prior phase players were informed regarding the expected tools—attempting to solve a challenge designed around the Volatility tool using a disk analysis tool rarely produces any useful information.
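One concrete example of a “tell-tale” sign is a file’s magic bytes, which reveal the true nature of a blob regardless of its extension. The signatures below are the real, well-known ones; the helper function itself is just an illustration:

```python
# Identify a blob by its magic bytes -- a common first step in forensics
# and stego challenges, where a file's extension can't be trusted.
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"\xff\xd8\xff": "JPEG image",
    b"PK\x03\x04": "ZIP archive (also DOCX, APK, JAR...)",
    b"\x7fELF": "ELF executable",
}

def identify(data: bytes) -> str:
    """Best-effort identification of data by leading magic bytes."""
    for magic, name in SIGNATURES.items():
        if data.startswith(magic):
            return name
    return "unknown"
```

A challenge file named `notes.txt` that `identify` reports as a ZIP archive is a strong hint that there is more inside it than plain text.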
Sometimes a challenge, rather than having a single vulnerability, contains a chain of vulnerabilities that all need to be exploited in a particular order.
If a challenge is vague, or has a large scope, players can get lost during this phase. It is particularly important to play-test challenges with multiple people, as this can help identify where players might get lost or stuck, and remediations can be added either by making the exploit more obvious, providing more information in the brief, or as a last resort adding hints to the challenge.
Performing the exploit
For players, this is often the most rewarding part of solving a challenge because it’s where the actual solving occurs. By this point, players should be confident in their knowledge of how the challenge functions and importantly where and what the vulnerabilities are. This does not necessarily mean exploiting the vulnerabilities should be easy, however; in some cases, the majority of the time spent on a challenge can be in this phase.
As well as knowing about the vulnerabilities, players should also know what they are meant to be attempting to achieve. It can be easy to spend an hour attempting to pull off an exploit, only for it to succeed but to not receive a flag. The end goal is often best conveyed very simply in the briefing. For example, “login as an admin user to get the flag.” It can be tempting as a challenge author to hide the flag in an obscure location, but unless this is the central focus of the challenge, it rarely adds the intended complexity to the challenge, and instead just makes it more frustrating for players who have found the vulnerability but cannot find the flag.
Some vulnerabilities can crash, break, or otherwise render a challenge unsolvable. In these situations it is sometimes possible to alter the vulnerability to be “safer” (to use the word loosely), but more often than not this is an unavoidable consequence of the exploit. Players may also use tools that can inadvertently cause excessive load on a challenge, potentially taking it offline. One such tool commonly used in CTF events is sqlmap, which aggressively bombards a server with queries, many of which can lock up unsuspecting services for significant amounts of time.
As challenge authors, we need to be acutely aware of this, and have monitoring in place to notice challenges that are experiencing issues during an event. Ideally this monitoring should be automated. If an event is targeting a global audience, and it is expected that players will be attempting challenges while members of the event team are sleeping, it is important that multiple members of the team have access to this monitoring, and that at least one member with access is awake at any given moment.
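Automated monitoring can start out very simply. A minimal sketch, assuming each challenge exposes an HTTP endpoint (the challenge names and URLs below are hypothetical):

```python
# Minimal sketch of automated challenge monitoring. Assumes each
# challenge answers plain HTTP; names and URLs are hypothetical.
import urllib.request

CHALLENGES = {
    "Admin Panel": "https://admin-panel.example-ctf.com/",
    "Broken Token": "https://token.example-ctf.com/",
}

def check(url, timeout=5):
    """Return True if the challenge responds with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, DNS failure, timeout, ...
        return False

def unhealthy():
    """Names of challenges that should be flagged to the event team."""
    return [name for name, url in CHALLENGES.items() if not check(url)]
```

Run on a schedule and wired into whatever alerting the team already watches, even a check this crude catches a downed challenge long before the first confused player report.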
Similarly, and especially if many players are expected, each challenge should be running multiple instances. Players should be allocated an instance but should ideally have the option to switch to a different one. This ensures that challenges are not overwhelmed by the sheer number of players, and that if an instance has an issue, players can immediately switch to a different, hopefully working, instance.
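One simple way to realise this allocate-then-switch scheme is round-robin assignment; a sketch, with hypothetical instance addresses:

```python
# Sketch of round-robin instance allocation with a player-driven
# "switch" escape hatch; instance addresses are hypothetical.
import itertools

INSTANCES = ["10.0.0.1:8000", "10.0.0.2:8000", "10.0.0.3:8000"]
_cycle = itertools.cycle(range(len(INSTANCES)))
assignments = {}  # player -> instance index

def allocate(player):
    """Assign the player an instance round-robin (idempotent)."""
    if player not in assignments:
        assignments[player] = next(_cycle)
    return INSTANCES[assignments[player]]

def switch(player):
    """Move a player to the next instance, e.g. if theirs is down."""
    assignments[player] = (assignments.get(player, -1) + 1) % len(INSTANCES)
    return INSTANCES[assignments[player]]
```

Round-robin spreads load evenly, and `switch` means a single misbehaving instance costs an affected player seconds rather than the rest of the event.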
Sometimes, a challenge is totally broken. While testing should catch these, broken challenges slipping through the cracks happens. When a player reports a challenge as broken, the very first thing to do is ensure it actually is broken. There are two things that will need to be checked. First, a walkthrough should be used to confirm that it is possible to get the flag out of the challenge. Second, that flag should be tested against the scoring platform; it’s very possible that the scoreboard is checking for the wrong flag!
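Those two checks can themselves be scripted. A sketch, where `solve()` stands in for a hypothetical scripted walkthrough and the flag values are made up:

```python
# Sketch of the two checks for a "broken" report: the walkthrough still
# yields a well-formed flag, and that flag matches what the scoring
# platform expects. solve() and all flag values are hypothetical.
import re

PLATFORM_FLAG = "flag{an_example_flag}"   # what the scoreboard checks for
FLAG_FORMAT = re.compile(r"flag\{[^}]+\}")

def solve():
    """Hypothetical scripted walkthrough; returns the flag it recovers."""
    return "flag{an_example_flag}"

def verify_report():
    recovered = solve()
    assert recovered and FLAG_FORMAT.fullmatch(recovered), \
        "walkthrough no longer produces a well-formed flag"
    assert recovered == PLATFORM_FLAG, \
        "scoreboard is checking for the wrong flag"
    return "challenge working"
```

Keeping a scripted walkthrough for every challenge pays for itself here: the same script that validated the challenge before the event settles “is it actually broken?” in seconds during it.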
If the challenge is in fact working, players should simply be informed as such. Pointing the player to where they may have gone wrong could be considered providing unfair hints, so should be avoided. On the other hand, if the challenge is actually broken, there are a few questions that need to be considered:
- How can we communicate to players that the challenge is broken?
- Have any players already managed to solve this challenge?
- If so, at what point did this challenge stop working?
- Was this challenge maliciously broken by another player?
- Is it possible to fix this challenge quickly?
Hopefully that first question was considered well in advance of the event, and there are clear communication channels already established. Disabling the challenge and informing players it is broken is an important first step, as otherwise players may continue to waste time attempting a broken challenge.
If some players have already solved this challenge, it is essential to fix it as fast as possible, as otherwise other players are at an unfair disadvantage. Removing the challenge entirely would put players that have already solved this challenge at a disadvantage as their time and effort would now be wasted.
If the challenge has no solves, you are in a slightly more fortunate position. If you have the luxury of spare time, it can be worth assessing the issue with the challenge and then properly remediating it. The fixed challenge can then be re-released. In more time pressured situations, it can be better to instead remove the challenge entirely and move on. Whatever the decision is, it is essential that it is clearly communicated to players.
Nathan Taylor is one of the talented team behind RACTF, the Really Awesome CTF (and now accepting sign-ups for 2021).