NCL · Network Analysis · Password Cracking · Steganography

NCL Fall 2025
Connecting the Dots

Eduardo competed in the National Cyber League Fall 2025 individual tournament against 4,214 teams, placing 194th. The field was larger, the challenges harder, and his practice time shorter than the previous season. Despite all of that, he completed more challenges and found more flags. The reason: he had stopped treating categories as isolated skills and started chaining techniques together.

Event National Cyber League · Fall 2025
Placement 194th / 4,214 Teams
Format Individual · 9 Categories · Multi-Difficulty
Expanded Into Network Analysis · Log Analysis · Password Cracking

Skills Applied

What This Tournament Touched

Wireshark
Hashcat
Steghide
CyberChef
dcode.fr
Nmap
Google Dorking
strings
dig / whois
Linux CLI
Custom Wordlist Creation
Layered Decoding
Network Traffic Analysis
Steganography
OSINT

Year Over Year

What Changed Since Spring 2024

The same nine categories. The same structure. A completely different level of engagement.

Tooling

Real Tools This Time

In Spring 2024, Eduardo was using online hash databases and basic search queries. By Fall 2025, he was running Hashcat with wordlists, using Wireshark filters, extracting hidden data with steghide and strings, and querying DNS records with dig and whois. The tooling gap closed significantly between seasons.

Categories

Broader Coverage

Spring 2024 was largely OSINT and Cryptography. Fall 2025 added meaningful progress in Network Traffic Analysis, Log Analysis, and Password Cracking. Categories that were walls before became workable with the right tools and enough practice between seasons.

Difficulty

Harder Baseline

Challenges labeled "Easy" in Fall 2025 were noticeably harder than the same tier in Spring 2024. The field was also nearly ten times larger. Placing 194th out of 4,214 teams in a harder tournament while covering more categories is a result Eduardo is proud of.

Mindset

Chaining, Not Isolating

The biggest shift was not a specific tool. It was learning to treat each category's output as potential input for another. That mental model, looking for layers rather than a single direct answer, is what unlocked flags that would have been abandoned in the previous season.

Methodology

How He Approached It

01

Steganography: Extracting What Is Not Visible

Image challenges asked for data hidden inside files. Eduardo used the strings command to pull readable text out of binary image files, and steghide to extract data concealed within the image itself. Metadata analysis rounded out the toolkit. These challenges demonstrated that files carry more information than is visible on the surface, a mindset that transfers directly to forensics and malware analysis.
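The core idea behind strings can be sketched with coreutils alone. The sample file and the embedded marker below are fabricated for illustration; the pipeline approximates what `strings -n 8` does, which is to keep only runs of eight or more printable characters from a binary file:

```shell
# Fabricated sample: PNG-like header bytes with a readable marker embedded
# (octal escapes keep printf portable).
printf '\211PNG\015\012\032\012NCL-FLAG{hidden_in_plain_sight}\000\001\002' > sample.bin

# Approximate `strings -n 8`: replace non-printable bytes with newlines,
# then keep only runs of 8 or more printable characters.
found=$(tr -c '[:print:]' '\n' < sample.bin | grep -E '^.{8,}$')
echo "$found"   # prints NCL-FLAG{hidden_in_plain_sight}
```

Short fragments like the "PNG" header survive the tr step but fall below the length threshold, which is exactly why strings takes a minimum-length argument.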

02

Password Cracking: Hashcat and a Custom Wordlist

Eduardo ran Hashcat properly this season, identifying hash types and running wordlist attacks with rockyou.txt. The challenge that stood out most required building a custom wordlist from scratch. He used Linux commands to pull raw data, clean it, and normalize it, converting everything to lowercase, stripping whitespace, and removing duplicates. That combination of shell scripting and Hashcat is closer to real-world credential testing than any textbook exercise. Hardware limits still capped cracking speed on harder hashes, but the methodology was right.
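The cleanup stage he describes can be sketched as a short coreutils pipeline. The input data here is a hypothetical sample, and the hashcat mode shown in the comment is illustrative rather than taken from the actual challenge:

```shell
# Hypothetical scraped input: mixed case, stray whitespace, duplicates.
printf '  Autumn2025 \nautumn2025\nLibertyBell\nliberty BELL\n\n' > raw.txt

# Normalize: lowercase everything, strip whitespace within each line,
# drop empty lines, and deduplicate.
tr '[:upper:]' '[:lower:]' < raw.txt \
  | sed 's/[[:space:]]//g' \
  | awk 'NF' \
  | sort -u > wordlist.txt

cat wordlist.txt   # prints: autumn2025, then libertybell

# The cleaned list then feeds a straight wordlist attack; -m 0 (MD5)
# is a placeholder and must match the identified hash type:
# hashcat -a 0 -m 0 hashes.txt wordlist.txt
```

Note that sed operates per line, so it strips interior whitespace without joining lines, while sort -u handles deduplication in one pass.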

03

Network Traffic Analysis: Wireshark Finally Made Sense

Network traffic analysis was a wall in Spring 2024. In Fall 2025 it became one of Eduardo's favorite categories. Analyzing PCAP files in Wireshark allowed him to watch the TCP three-way handshake play out in real packet data, trace DNS queries and responses, and identify traffic anomalies. Seeing theoretical networking concepts rendered as actual captured packets made them permanent knowledge rather than test prep. He used display filters to isolate specific protocols and followed TCP streams to reconstruct sessions.
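The exact filters he ran are not recorded, but the tasks described map onto standard Wireshark display-filter expressions like these (the # lines are annotations, not filter syntax):

```
# Opening SYN of a TCP three-way handshake (SYN set, ACK not set):
tcp.flags.syn == 1 && tcp.flags.ack == 0

# DNS queries and responses only:
dns

# One reconstructed session, equivalent to Follow > TCP Stream
# (the stream index varies per capture):
tcp.stream eq 0
```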

04

OSINT: Sharper This Time

OSINT challenges were more efficient this season because Eduardo's tooling was sharper. Google dork operators narrowed searches that would have taken minutes down to seconds. He used dig and whois to pull DNS records and domain registration data, and dcode.fr to decode formats that CyberChef did not immediately recognize. The approach was the same as Spring 2024. The execution was faster and more deliberate.
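The kinds of queries involved look roughly like the following. The target domain is a placeholder, the dork is a generic example rather than one from the tournament, and the dig and whois lookups require network access:

```
# Exact-phrase and scope operators narrow a search to one site and file type:
site:example.com "password reset" filetype:pdf

# DNS records and domain registration data:
dig +short A example.com
dig +short TXT example.com
whois example.com
```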

05

Consulting the Manual

Eduardo used hashcat --help frequently throughout the tournament, not because he did not know what he was doing, but because knowing how to navigate tool documentation quickly is a real skill. In a timed competition, looking up the right flag or mode takes seconds. Not knowing how to find it costs minutes. Reading tool output and man pages under pressure is something he practiced more deliberately this season.
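In practice that lookup is a one-liner. The commands below are illustrative (they assume a hashcat 6.x install, where mode 1400 is SHA2-256), not a record of the queries he actually ran:

```
# Find the hash-mode number without leaving the terminal:
hashcat --help | grep -i 'sha2-256'

# Confirm the expected hash format for that mode:
hashcat -m 1400 --example-hashes
```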

The Turning Point

Connecting the Dots

There was a specific moment near the end of this tournament that changed how Eduardo thinks about CTF challenges.

01

The Problem

Eduardo was working through a challenge and getting output. But the output did not look like a flag. It was not in the expected format. In the previous season he would have assumed he was on the wrong track entirely and started over. This time, something clicked. The output was not wrong. It was just not done yet.

02

The Pivot

Eduardo took the jumbled output and ran it through CyberChef. He applied a decoding step, and out came the flag. The challenge was not asking him to stop at the first readable result; it was asking him to recognize that one tool's output is another tool's input. That is layered thinking, and it is exactly how real-world investigation works. Malware is obfuscated. Logs are encoded. Evidence rarely presents itself cleanly.
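His actual challenge is not reproducible here, but the pattern can be sketched with coreutils alone: a blob that base64-decodes into something readable yet wrong, until a second transform (ROT13 in this made-up example) is applied. The blob is fabricated in-line for the demo:

```shell
# Fabricate a two-layer blob for the demo: ROT13, then base64.
blob=$(printf 'NCL-FLAG{layers}' | tr 'A-Za-z' 'N-ZA-Mn-za-m' | base64)

# Step 1: base64 decode. The result is readable, but it is not a flag yet.
step1=$(printf '%s' "$blob" | base64 -d)

# Step 2: treat that output as input to the next transform. ROT13 reveals the flag.
flag=$(printf '%s' "$step1" | tr 'A-Za-z' 'N-ZA-Mn-za-m')
echo "$flag"   # prints NCL-FLAG{layers}
```

The intermediate value ("APY-SYNT{ynlref}") is exactly the kind of readable-but-wrong output that used to look like a dead end.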

03

The Shift

After that challenge, Eduardo's approach changed for the rest of the tournament. Instead of looking for an obvious flag, he started asking: what does this output become if I pass it through something else? That question unlocked several more flags before the tournament closed. It is now a default part of how he approaches any challenge where the first result does not look right.

Lessons Learned

Key Takeaways

Progress is measurable. More categories, more flags, harder challenges, larger field. The gap between Spring 2024 and Fall 2025 is documented and real.
Layered decoding is a fundamental mindset, not an advanced technique. One tool's output is another tool's input. Stop looking for a single clean answer.
Custom wordlists require data hygiene. Normalizing case, stripping duplicates, and cleaning raw input with Linux commands is as important as running Hashcat correctly.
Network traffic captures make theory concrete. Seeing a TCP handshake in Wireshark cements the concept in a way that diagrams cannot.
Tool documentation is not a crutch. Knowing how to navigate --help output and man pages quickly is a real operational skill.
Less practice time, more output. Efficiency compounds. The right fundamentals and the right tools do more than raw hours alone.

Want to talk through the details?

Eduardo can walk through his methodology, tooling, and how this work applies to your open roles.