Four hours of hacking in Tacoma, WA

Foreword

Through four hours of competition across nine categories of technical challenges, from cryptography to network security, students from across Washington demonstrated their knowledge, perseverance, and both soft and hard skills by battling a competition designed to test their limits.



Several months of work by multiple individuals led to this point – and finally my own, in setting up the challenges and judging/scoring the results.


Background – 2019

2019 Cyber Security competition demo

I had put on a CTF for High School students across Washington in 2019, as part of SkillsUSA’s state-level competition. This was the first year for the competition, and the last year I was involved – COVID threw a wrench in things!

As my first time running a CTF, it doubled as a frantic personal hackathon and raised a number of questions:

Questions & Answers

  • Which CTF platform? – CTFd is a good platform.
  • What types of challenges? – I can have any challenge I can program for.
  • How are challenges scored? – Ansible, in a pinch, can retrieve scorable data.
  • How do I minimize student hardware requirements? – Apache Guacamole obviates the need for PuTTY or VPN/RDP.
  • Where am I gonna host this? – Amazon works, but $$$.
(Incidentally, I reused and somewhat expanded the same environment for another CTF a few months later, for NewTech Academy.)

Takeaways

After conducting these competitions, I identified several avenues for improvement:

  • More challenge types addressing a broader set of skills.
  • More automation for competition setup.
  • Include some physical challenges.
  • Plan out scoring better.
  • Find a hosting platform that can be used long-term without costing a fortune.

Planning for 2022

The technical work for 2022’s competition began several months before I was asked to be involved: Dan Wordell, Information Security Officer at the City of Spokane, had managed to acquire a hypervisor platform (Antsle), which found a permanent home in Eastern Washington University’s datacenter with the backing of Stu Steiner, a well-known Computer Science lecturer at EWU.

Antsle vs Amazon

Brain of the CTF, living at EWU’s Catalyst campus

This Antsle device, a small, low-power server designed to manage containers and virtual machines as an alternative to using Amazon AWS, provided a way for me to develop a technical challenge platform without the ongoing OpEx of Amazon.

After some initial technical challenges with the server itself in September 2021, and then with the networking at EWU through February 2022, I was finally able to use the device as it was intended. But at this point, it was still a blank canvas.

Automation

With potentially more teams and more challenges, automation took priority – the goal being to reuse this infrastructure in the future for more frequent competitions within Spokane Public Schools and across the state of Washington.


Terraform was an initial idea – I had the vague notion of Terraform as being like ansible but for provisioning – something that could automagically create infrastructure.

But after several hours of initial work, I realized that it really couldn’t do much with Antsle – the provider supports very little of the API.


In the end, as in 2019, my primary automation tool was ansible. Unlike in 2019, I used ansible to a much greater degree and for just about everything conceivable – perhaps it’s more accurate to say I abused ansible, given the level of nested loops and delegate_to involved.

This code will be available on GitLab for those interested in helping to contribute, though at the moment it is a pile of frantic spaghetti and needs a serious refactor!


“Ansible is an open-source software provisioning, configuration management, and application-deployment tool enabling infrastructure as code” – Wikipedia



Building the CTF

Warning: Beyond this point lies a lot of technical jargon.
Skip if uninterested.

With a platform already determined (Antsle, CTFd, Guacamole, ansible, etc.), I began work in earnest about halfway through February. My strategy was to avoid any form of manual configuration and to rely on ansible as much as possible, even when that meant investing hours into writing custom roles for my specific use-case.

The network topology would involve Cloudflare and HAProxy in front of everything, with CTFd/ansible on one Antsle container, Guacamole on another container, a VM hosting multiple Linux challenges via LXD, and multiple VMs providing a Windows instance for each team (all challenge instances being accessible via Guacamole).




Ansible Setup

Using SSH for the Linux containers and WinRM for the Windows VMs (I set up WinRM on one Windows VM, then cloned it within Antsle to save time), I got Ansible running on the CTFd machine; a sketch of the connection setup follows the list below. This was to serve two purposes:

  1. Provision and coordinate configuration changes across all the infrastructure
  2. Facilitate scoring of interactive technical challenges
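As a rough sketch of that split (the group names, hosts, and WinRM transport here are assumptions, not the actual redacted values), the inventory just needs per-group connection settings:

# Sketch: one inventory group per connection type. SSH is Ansible's
# default for the Linux containers; the Windows group overrides it
# with WinRM connection variables.
linux_challenges:
  hosts:
    scantarget:
      ansible_host: X.X.X.X

windows_vms:
  hosts:
    team1-windows:
      ansible_host: X.X.X.X
  vars:
    ansible_connection: winrm
    ansible_winrm_transport: ntlm  # assumption; Kerberos/CredSSP are also common
    ansible_port: 5986             # WinRM over HTTPS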

Provisioning with Ansible

I had an evolving YAML spec that defined challenge machines and team memberships. Since I added Windows support toward the end, there is some redundancy and inconsistency in how the YAML is defined. The process ended up largely automated, though CTFd configuration remained mostly manual beyond the initial installation.

# Global settings for the competition environment
challenge_fqdn: "linux.cyberpatriot.ewu.edu"
win_challenge_fqdn: "windows.cyberpatriot.ewu.edu"
managed_dns_group: "{{ groups.main }}"
guacamole_api: https://rdp.wa-cyberhub.org/guacamole
guacamole_admin: ----
guacamole_pass: -----

# Shared challenge containers, provisioned once for all teams
common_machines:
  - name: scantarget
    os: ubuntu20.04
    net:
      int: eth0
      address: X.X.X.X
      mask: 255.255.255.0
      maskbits: 24
      gateway: X.X.X.X
      dns: X.X.X.X
    roles:
      - challenge-hiddenweb

# Windows VM definitions, instantiated per team (see teams below)
windows_machines:
  worker:
    roles:
      - foo

# Per-team Linux container templates
user_machines:
  scan:
    os: ubuntu20.04
    net:
      int: eth0
      mask: 255.255.255.0
      maskbits: 24
      gateway: X.X.X.X
      dns: X.X.X.X
    roles:
      - challenge-forensic1

# Team definitions: members, Windows VMs, and assigned challenge machines
teams:
  - name: TeamName
    org: Some High School
    windows:
      - name: worker
        fqdn: team1.windows.cyberpatriot.ewu.edu
        os: windows
        ip: X.X.X.X
    users:
      - name: user1
        password: -----
        contact:
          name: User One
          email: -----
      - name: user2
        password: -----
        contact:
          name: User Two
          email: -----
    machines:
      - name: scan
        address: X.X.X.X

With the above YAML spec, my playbook and roles then took care of provisioning all the challenge containers, configuring authentication across containers and Windows VMs, applying ansible challenge roles, configuring Guacamole connections/users, configuring HAProxy frontends/backends, requesting certificates from Let’s Encrypt, and maintaining host file changes across all of the machines – modifying the Ansible inventory both on disk and at runtime.
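That last point, keeping the running inventory in sync, is one of the few parts that fits in a few lines. A minimal sketch, assuming the common_machines list from the spec above and a hypothetical group name:

# Sketch: register freshly provisioned containers with the in-memory
# inventory mid-run, so later plays can target them immediately.
- name: Add each new container to the running inventory
  ansible.builtin.add_host:
    name: "{{ item.name }}"
    groups: linux_challenges
    ansible_host: "{{ item.net.address }}"
  loop: "{{ common_machines }}"

(The on-disk inventory was maintained separately.)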


Ansible’s “--check --diff” came in handy a lot

In theory, I could add new Linux challenges very easily:

  1. Create roles in ansible for the challenge (deploy backdoor, etc),
  2. Define the team machine template under user_machines and assign ansible challenge roles,
  3. Assign that machine template to each team with an IP address,
  4. Run the playbook,
  5. ???
  6. Ansible does the rest:
    1. New containers are provisioned and bootstrapped,
    2. Host file entry for container is added,
    3. Ansible inventory updated with the new container in the right groups,
    4. Team member accounts are created on their team’s containers,
    5. Guacamole connection is added for each new container, for each applicable team member (see the API sketch after this list),
    6. Guacamole user profile is created (if needed) and the new connections are attached to the respective user profile,
    7. Some other stuff.
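Guacamole has no official Ansible modules, so those Guacamole steps were driven through its REST API. A minimal sketch of the flow with the uri module, using the guacamole_* variables from the spec above – the “postgresql” data source and the connection payload details are assumptions about a typical Guacamole setup, not the exact calls my roles make:

# Sketch: authenticate against the Guacamole REST API, then create a
# connection. Guacamole issues tokens via POST /api/tokens with
# form-encoded credentials.
- name: Authenticate to the Guacamole API
  ansible.builtin.uri:
    url: "{{ guacamole_api }}/api/tokens"
    method: POST
    body_format: form-urlencoded
    body:
      username: "{{ guacamole_admin }}"
      password: "{{ guacamole_pass }}"
  register: guac_auth

- name: Create an RDP connection for a team's Windows VM
  ansible.builtin.uri:
    url: "{{ guacamole_api }}/api/session/data/postgresql/connections?token={{ guac_auth.json.authToken }}"
    method: POST
    body_format: json
    body:
      parentIdentifier: ROOT
      name: team1-windows
      protocol: rdp
      parameters:
        hostname: team1.windows.cyberpatriot.ewu.edu
        port: "3389"
      attributes: {}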

A big challenge here involved a lot of hacky use of “lxc exec” to pseudo-delegate bootstrapping commands to the containers, which booted without DHCP because they were bridged directly to the host network. There is also a very unintuitive way you have to combine loops with delegation in ansible; a sketch of both follows.
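For illustration only – the host name and bootstrap command are placeholders, and the real roles do considerably more – subelements flattens the team-by-machine nesting, while delegate_to runs each lxc exec on the LXD host rather than on the (not-yet-reachable) container:

# Sketch: run a bootstrap command inside every team container.
# subelements('machines') yields (team, machine) pairs, so item.0 is
# the team and item.1 the machine from the YAML spec above.
- name: Set a static address inside each team container via lxc exec
  ansible.builtin.command: >-
    lxc exec {{ item.1.name }} --
    ip addr add {{ item.1.address }}/24 dev eth0
  delegate_to: lxdhost  # hypothetical inventory name for the LXD VM
  loop: "{{ teams | subelements('machines') }}"
  loop_control:
    label: "{{ item.0.name }}/{{ item.1.name }}"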


Scoring with Ansible

Testing the get_gpresult playbook

In 2019, I had also used ansible to score Windows challenges:

  1. Ansible playbook generates/runs a scheduled task on a given system:
    1. Scheduled task runs gpresult and outputs xml,
    2. Ansible returns xml via stdout.
  2. Custom CTFd challenge plugin:
    1. Calls ansible-playbook,
    2. Parses xml output,
    3. Returns JSON of key->value pairs,
    4. Compares it to JSON flag.

This time, rather than setting up two machines per team (one domain controller and one domain member), I used single machines and relied on secedit. This had its drawbacks but was simpler in the end (a playbook sketch follows the list below):

  1. Ansible playbook creates and runs a scheduled task on a given system:
    1. The task exports the local security policy via secedit to an .ini file,
    2. The secedit INI is returned via stdout.
  2. Custom CTFd plugin runs the playbook:
    1. Uses environment variables to:
      1. Make Ansible return JSON-formatted output rather than a stream of cowsays,
      2. Limit execution to a single host defined in a CLI argument.
    2. Flexibly parses the INI output to a Python configuration object,
    3. Flexibly parses the flag (e.g., JSON) to a comparable Python configuration object,
    4. Compares the state and flag configuration objects.
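The playbook side of this reduces to a few tasks. A minimal sketch, run directly over WinRM for brevity where the real version wraps the export in a scheduled task (target is a hypothetical variable; the JSON-formatted output comes from the standard ANSIBLE_STDOUT_CALLBACK=json callback):

# Sketch: export the local security policy and surface it on stdout
# so the CTFd plugin can parse it out of the JSON-formatted run.
- hosts: "{{ target }}"
  gather_facts: false
  tasks:
    - name: Export the local security policy with secedit
      ansible.windows.win_command: secedit /export /cfg C:\Windows\Temp\secpol.ini

    - name: Read the exported INI back over stdout
      ansible.windows.win_command: cmd /c type C:\Windows\Temp\secpol.ini
      register: secpol

    - name: Include the policy in the play output
      ansible.builtin.debug:
        var: secpol.stdout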

CTFd Setup

While CTFd installation was automated through ansible, within CTFd I had to do a number of tasks:

  • Set up users and their passwords
  • Configure teams
  • Create custom plugins for challenges
  • Create challenges/flags

The custom challenge plugins were expanded compared to 2019, and this time included custom flags as well:


  • Crypto Challenge
    • Creating/using different forms of cryptography.
  • Crypto Flag
    • Scoring RSA encryption/decryption, verifying against previously submitted public keys.
  • Secpol Challenge
    • Score correct security settings on Windows.
  • SSL Challenge
    • Test TLS server parameters.
An almost elegant way to set up challenges

I have very little experience with Flask or SQLAlchemy, so the learning curve was pretty steep. I also kept running into issues with the way things were cached, constantly needing to rename classes and folders whenever an iteration required changes to DB schemas or static assets. Clearly I didn’t do my homework.


Designing the Challenges

Reducing the focus on forensic questions took priority. These types of questions are often over-represented in CTFs, since they are the simplest to implement (find the flag!), but they also skew the skillsets being tested.

Laptops would be required for each student, and as I was able to get assurance from the advisors that students would at least have Wireshark and PuTTY, my options weren’t limited purely to what was accessible via Guacamole.

My approach was to consult the SkillsUSA guidelines for the competition and base the challenges on them. Since time was limited, I tried to sample as many areas as possible. For example, here are several types of challenges that I implemented:

CSC 1 – Professional Activities

Dice for random technical interview question
“Contestants will provide verbal instructions or explanations to an evaluator for the task presented at the Professional Activities Station.”

Cyber Security Standards, SkillsUSA 2022

For this first category, no infrastructure was necessary at all!

I decided on a Technical Interview portion, which consisted of:

  1. Each contestant introduces themselves as if they were applying to an entry-level cyber security job.
  2. One contestant rolls two dice – yellow one for even/odd and white for a number.
    1. A random technical question is selected from the front (even on the yellow die) or back (odd on the yellow die) of the question sheet, out of six options (white die).
  3. The team is given a minute to coordinate a response.
  4. Both team members give a collaborative answer to the question.

They were then evaluated on the following criteria:

  • Presentation (10) – Dress, posture/eye contact, professionalism.
  • Understandability (10) – How well they explain themselves and whether they can discuss the concepts comfortably.
  • Accuracy (20) – Technical accuracy of their answers, depth of knowledge.

In the future, I’ll add an option to re-roll for a different question.

CSC 2 – Endpoint Security

Contestants will display knowledge of industry standard processes and procedures for hardening an endpoint or stand-alone computing device.

Cyber Security Standards, SkillsUSA 2022

Using the secedit challenge plugin, I was able to create a series of challenges where contestants interpreted a business problem and then implemented a solution by modifying the local security policy.


Endpoint security flag

I realized, with too little time to spare, that I should have added ways to compare other than “=”. Why fail a password complexity question when they set the minimum length to 9 instead of 8?

CSC 3,4,6 – Switch/Router/Boundary Security

“Given a scenario, establish telnet or ssh administrative access to the switch… Create a routing scheme to route traffic from one designated network to another… Given a scenario, use a hardware firewall to create and configure perimeter security that provides a boundary between two network zones that have differing security levels.”

Cyber Security Standards, SkillsUSA 2022

Logistically speaking, this was one of the more difficult items. Initially I had a grand plan of some kind of multi-segment Docker network for each team, filled with VyOS containers, but eventually I settled on a technically simpler plan: physical Cisco ASA 5500s and a Raspberry Pi.

My problem here was threefold:

  1. I only had two spare ASAs – but there were seven registered teams.
  2. I didn’t have any managed switches or regular routers.
  3. I needed to fit all of this in the trunk of my already-cluttered Toyota Corolla.

I addressed the second point by crafting the challenge in such a way that some knowledge of switching, routing, and firewalling was necessary to complete it – these devices fill multiple roles anyway.

The first point was addressed by setting a time limit: if both ASAs were already in use, the earliest team would get booted an hour after their start, or 20 minutes from that moment, whichever was later.

Gathering together a handful of serial adapters, power strips, power cables, spare laptops (a 2009 Macbook running Xubuntu!), and the Cisco ASAs, I just managed to fit everything in my trunk alongside my suitcase, laptop/binder bag, and emergency supplies.


Tight, dusty fit

With a handful of serial adapters and a few spare laptops in case of driver issues on the school-issued laptops, four teams tried their luck at applying knowledge of networking across multiple OSI layers to get a flag located on the Raspberry Pi.

CSC 5 – Server Hardening

This task contains activities related to hardening servers against attack.

Cyber Security Standards, SkillsUSA 2022

To my mind, ‘server hardening’ addresses either hardening the operating system environment or hardening applications. With that in mind, I proposed two challenges: one involved setting up a Domain Controller with a security-related GPO, which would improve the security of every operating system on the network, and the other involved the correct TLS configuration of a webserver using an internal Certificate Authority.


20 minutes of PHP and bootstrap later

The Domain Controller challenge was, due to a shortage of time, scored manually. The Certificate Authority challenge required the following (a sketch of the signing step follows the list):

  1. Set up a certificate authority cert/key and config in openssl.
  2. Create a basic web app in PHP which could sign certificate signing requests and return the resulting certificate.
  3. Create a custom CTFd plugin to test TLS validity of the website.
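My version handled signing through the openssl config and PHP app above; purely as an illustrative sketch, the signing step could also be expressed in Ansible with the community.crypto collection (all paths here are hypothetical):

# Sketch: sign a submitted CSR with the internal CA via the "ownca"
# provider. The real challenge used openssl + a PHP web app instead.
- name: Sign a team's certificate signing request with the internal CA
  community.crypto.x509_certificate:
    path: /etc/internal-ca/issued/team1.crt
    csr_path: /etc/internal-ca/requests/team1.csr
    provider: ownca
    ownca_path: /etc/internal-ca/ca.crt
    ownca_privatekey_path: /etc/internal-ca/ca.key
    ownca_not_after: +365d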

In the end, rather than securing the internal CA against attack, I reused “intentionally” vulnerable code for a separate pentesting challenge.

CSC 8 – Network Forensics

This task contains activities related to network forensic activities associated with Incident Response Actions. Contestants will use appropriate measures to collect information from a variety of sources to identify, analyze and report cyber events that occur

Cyber Security Standards, SkillsUSA 2022

Since I work in network security myself, this has always been one of my favorite areas. I tried to restrain myself, but a number of forensics-focused challenges remained:


  • Wireshark challenges – I captured traffic on my own machine while using several plaintext protocols, and crafted scenario questions based on those protocols
  • Linux challenges – Reverse shell detection and analysis
  • Windows challenges – Bot detection and joining the C&C IRC to find the botmaster name
  • Digital Forensics – Analysis of artifacts from a breach (deobfuscation and interpretation of backdoor shells) as well as web log analysis

Aside from the initial setup, these were pretty easy to score since it simply involved putting the flag values into CTFd.

CSC 9 – Pentesting

This task contains activities related to the process of penetration testing…Hack a specified file (flag) in a remote network.

Cyber Security Standards, SkillsUSA 2022

Always one of the more fun contests – since it’s something that would otherwise potentially be illegal! Two types of pentest challenges were available: one involving port scanning and one involving abusing input validation.

The nmap challenge sounds simple, though it requires several skills: knowing how to install applications in Linux, knowing how to use and interpret nmap, knowing which command-line tools can request a webpage, and then reading the source code of the website closely enough to find the flag. A sketch of how such a challenge can be validated from the admin side appears below.
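As a hedged sketch only – the IP, port, and flag format are placeholders, not the real challenge values – an admin-side validation playbook could confirm the challenge is solvable end-to-end:

# Sketch: verify the hidden web service is discoverable and that the
# flag is present in the page source.
- hosts: localhost
  gather_facts: false
  vars:
    target_ip: X.X.X.X   # placeholder
    hidden_port: 8080    # placeholder
  tasks:
    - name: Confirm the hidden port shows up in a full nmap scan
      ansible.builtin.command: nmap -p- -oG - {{ target_ip }}
      register: scan
      failed_when: (hidden_port | string) not in scan.stdout

    - name: Fetch the page and check for the flag in the source
      ansible.builtin.uri:
        url: "http://{{ target_ip }}:{{ hidden_port }}/"
        return_content: true
      register: page
      failed_when: "'FLAG{' not in page.content"  # placeholder flag format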


Admin-side validation of the challenge

Omissions

The astute readers among you may have noticed that I skipped CSC 7 – the wireless challenge. I did not have the resources for a wireless challenge this time, though I tossed around a few ideas with varying levels of feasibility:

  • Locate a hidden Access Point in the building using a wardriving app?
    • Not everyone has a laptop with capable hardware; smartphone apps might be a workaround.
  • Bypass MAC filtering on an otherwise open AP?
    • Again, depends on laptop/wifi card hardware.
  • Configure a secure access point with a hidden SSID and connect to it?
    • Requires some additional hardware in my trunk, difficult to automate scoring.

Additionally, I had to consolidate all of the networking sections into one challenge, which is less than ideal.


Judging and Scoring a CTF

Beyond the technical challenges involved in provisioning this infrastructure and setting up challenges, a surprising amount of time ended up going into paperwork: a scoring rubric, handouts, lists of questions, etc.

Scoring Rubric

So much printing

I had initially developed scoring guidelines without any official instructions – grading resumes and technical interviews on several criteria, written tests (Ethical Hacker), and CTF/technical challenges, all weighted to a percentage. Individual challenges within the CTF had largely arbitrary scores across 11+ categories, set by difficulty (such that some categories were worth much more than others).

At the last minute, I received the official scoring guide, and with about an hour to spare I worked on re-valuing and re-categorizing all my technical challenges such that I had nine categories of technical challenges each worth 100 points.

After the competition, I realized that the total points had to add up to 1000, but with the technical interview and two written tests I had 1200. So I scaled everything by roughly 83% (1000/1200) across the board and gave a few extra percentage points to the technical interview to get a nice round 1000.

Running and Judging the CTF

One eye-opening thing, also last minute, was that two students missed the competition, so their respective teams had to be consolidated into one. This should have been a perfect use-case for ansible: I might have been able to address the problem very quickly with a simple change to the YAML file.



However, I ended up doing it manually – I didn’t trust the automation, at least not with only a few minutes to spare. In retrospect, it would have been a good idea to destroy and rebuild the whole automated infrastructure several times during development.

Beyond that, there were several minor technical problems (e.g., a typo in a flag), and one of the spare laptops I brought ended up getting used. As in 2019, in retrospect I should have given out more free hints.


Game Day

Of course I had to add a Caesar-cipher challenge

Preparation began at about six in the morning, with several trips down the escalator (and a flight of stairs) to bring the ASAs, a switch, laptops, boxes of cords, my own laptop, a binder of handouts, and several power strips into the competition area.

After setting everything up, I had to scramble to re-value and re-categorize the challenges and alter some systems for the team changes mentioned earlier – but soon enough the competition was well under way.


Most teams accumulated a large number of points in a pretty linear fashion for the first 45 minutes or so, before leveling off

Teams earned points quickly at first, so I succeeded in adding entry-level challenges. More intermediate challenges (additional challenges within Endpoint Security and other more accessible categories) should be added in the future to keep all teams engaged rather than frustrated by a single challenge for two hours.

At the half-way mark, I gave a reminder about the Cisco challenge – four teams scrambled to take a whack at it, one with 15 minutes on the clock.


Lonely, unused 2009 Macbook and DB9 – if I had just one more ASA, there could have been three teams at once

None of the teams successfully completed the Cisco challenge, but I ended up giving some points for the attempt (at least they got logged in and started configuring it, having never touched PuTTY or a serial cable in their lives).

Final Scores

At the end of the competition, one team was firmly in the lead while the next two teams were separated by a single point!

I wouldn’t know the final scores until the next day, since I didn’t proctor the written tests, but the awards ceremony revealed that the final scores were in the same order as the CTF scores.


Future Work

While I believe 2022 was tremendously more successful than 2019, I have a list of items to improve upon, in something of an ascending rank of difficulty:

  1. Refactor scoring rubric to reflect official guidelines (not an hour before competition.)
  2. Improve technical interview portion.
    1. Add more technical interview questions (20-sided die?)
    2. Allow limited re-rolls.
  3. Add more intermediate challenges.
    1. More server and endpoint hardening.
  4. Add some type of wifi challenges.
  5. Refactor all the ansible code so it’s less of a jumble.
  6. More networking challenges – maybe the VyOS docker lab idea will see daylight?
    1. Packet Tracer license?
    2. GNS3 emulation of Cisco gear sounds nice, but is legally sketchy.
  7. Make a more formal system for managing CTFd and Ansible together.
    1. Ansible Tower?
    2. CTFd plugin?
