Product Overview

The complete ATC candidate assessment pipeline.

From a candidate's first login to a hiring panel's final decision — Falcon handles every step with rigorous, reproducible data.

Learning Metrics

Knowledge assessment alongside practical performance. Each candidate session includes a structured learning module with knowledge-check questions covering terminal procedures, separation standards, emergency handling, and coordination protocols.

Simulation Environment

Study-level terminal area radar control, built in Unity and served to the browser via WebGL. No downloads, no plugins — candidates access their simulation session from any modern browser.

The environment models realistic aircraft flight dynamics and procedural clearances. Traffic complexity scales with each successive run, allowing the platform to stress-test candidate performance under increasing workload.

Candidates interact using voice commands through any microphone.

Voice & Communication

Real speech. Real scoring. The voice layer transcribes candidate transmissions with speech-to-text, then applies an LLM-based model to parse them, correct transcription noise, and evaluate:

  • Adherence to standard phraseology
  • Rate of speech (words per minute)
  • Readback error detection — the simulator plants deliberate readback mistakes to test vigilance
  • Transmission redundancy — issuing commands to aircraft already executing that instruction

The model handles non-standard phraseology without penalising candidates for accents or minor linguistic variation — only for procedurally incorrect transmissions.
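
Rate of speech, for instance, can be derived directly from the transcription. A minimal sketch in Python (the function name and timestamp inputs are illustrative, not Falcon's actual API):

```python
# Illustrative sketch: words per minute for one transmission, given the
# transcript text and the transmission's start/end times in seconds.
def words_per_minute(transcript: str, start_s: float, end_s: float) -> float:
    duration_min = (end_s - start_s) / 60.0
    if duration_min <= 0:
        raise ValueError("transmission must have positive duration")
    return len(transcript.split()) / duration_min

# A 10-word clearance delivered in 4 seconds is a brisk 150 WPM.
wpm = words_per_minute(
    "Speedbird one two three descend flight level one zero zero", 0.0, 4.0)
```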

Scoring Engine

Every session is scored automatically by the Separation Integrity Engine (SIE) in real time. The SIE detects >99% of separation violations using predicted intercept vectors — the same principle underlying real STCA systems — and classifies each event by severity:

  • Separation Loss — actual radar or vertical separation infringement
  • Critical — clearance that would directly cause a separation loss
  • Significant — procedural error with safety implications
  • Minor — non-standard practice or communication error
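
As an illustration of the predicted-intercept principle, the sketch below projects two aircraft along their current velocity vectors and checks the closest point of approach (CPA) against a horizontal minimum. All names, units, and thresholds here are hypothetical; the SIE's actual implementation is not shown.

```python
import math

# Illustrative CPA-based conflict check (not the SIE itself).
# Positions in NM, velocities in NM/s; 2D horizontal geometry only.
def predicts_separation_loss(p1, v1, p2, v2,
                             min_sep_nm=3.0, lookahead_s=120.0):
    """True if horizontal separation is predicted to fall below
    min_sep_nm within the lookahead window."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]      # relative position
    wx, wy = v2[0] - v1[0], v2[1] - v1[1]      # relative velocity
    w2 = wx * wx + wy * wy
    if w2 == 0.0:                              # identical velocities:
        t_cpa = 0.0                            # the gap never changes
    else:
        # Time that minimises |r + w*t|, clamped to [0, lookahead].
        t_cpa = max(0.0, min(lookahead_s, -(rx * wx + ry * wy) / w2))
    dx, dy = rx + wx * t_cpa, ry + wy * t_cpa  # separation at CPA
    return math.hypot(dx, dy) < min_sep_nm
```

Real STCA logic adds vertical separation, turn prediction, and nuisance-alert filtering on top of this basic geometry.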

Run scores are weighted by session difficulty — later, more complex runs contribute more to the final simulator score.
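
A difficulty-weighted final score can be sketched as a weighted mean; the weights and scale below are illustrative, not Falcon's actual scheme:

```python
# Illustrative sketch: each run's score counts in proportion to its
# difficulty weight, so later, more complex runs dominate the result.
def weighted_simulator_score(runs):
    """runs: list of (score, difficulty_weight) pairs, scores 0-100."""
    total_weight = sum(w for _, w in runs)
    return sum(score * w for score, w in runs) / total_weight

# Three runs of rising difficulty: the hardest run counts three times
# as much as the first.
score = weighted_simulator_score([(90, 1.0), (80, 2.0), (70, 3.0)])  # ≈ 76.7
```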

Skills Grading

Candidate performance is decomposed into distinct skill subcategories across four domains:

  • Planning
  • Managing Multitasking
  • Communication
  • Perceptual Speed

Each subcategory is graded A through F, aggregated from all simulation runs. Grades are weighted by run importance.
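
One common way to aggregate weighted letter grades, shown here purely as a sketch, is to map letters to points, take the importance-weighted mean, and map back:

```python
# Illustrative grade aggregation (the point scale is an assumption,
# not Falcon's documented scheme).
GRADE_POINTS = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1, "F": 0}
POINT_GRADES = {v: k for k, v in GRADE_POINTS.items()}

def aggregate_grade(graded_runs):
    """graded_runs: list of (letter, importance_weight) pairs for one
    subcategory across all simulation runs."""
    total = sum(w for _, w in graded_runs)
    mean = sum(GRADE_POINTS[g] * w for g, w in graded_runs) / total
    return POINT_GRADES[round(mean)]
```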

Interview Guide

Performance-targeted questions. After each assessment, the platform reviews a candidate's results and selects the most relevant questions from a curated bank — each written to probe a specific skill gap the simulator identified. The right question surfaces automatically.

Each question is paired with a model answer, so panel members can evaluate responses effectively without specialist ATC knowledge.
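
Question selection of this kind can be sketched as ranking the curated bank by the candidate's weakest grades; every name below is hypothetical:

```python
GRADE_ORDER = "ABCDEF"  # A best, F worst

# Illustrative sketch: surface questions that probe the weakest skills.
def select_questions(grades, question_bank, n=3):
    """grades: {skill: letter}; question_bank: list of
    (skill, question, model_answer) tuples. Returns up to n entries,
    weakest skills first."""
    weakest_first = sorted(grades,
                           key=lambda s: GRADE_ORDER.index(grades[s]),
                           reverse=True)
    picked = []
    for skill in weakest_first:
        picked += [q for q in question_bank if q[0] == skill]
    return picked[:n]
```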

Built for ANSPs and training organisations.

Falcon is designed for Air Navigation Service Providers, ATC training academies, and defence organisations that need to screen large candidate pools efficiently and objectively.

The platform is configurable to local airspace rules, separation standards, and phraseology — so assessment criteria match the operational environment candidates will actually work in.

  • Pre-course screening — filter applicants before offering costly simulator-based training seats.
  • Progress benchmarking — track development over the course of a training programme.
  • Remote assessment — screen candidates in any location without requiring travel to a simulator facility.
  • Structured hiring panels — equip interview panels with data-driven questions and evaluation criteria.

See the platform with your own scenarios.

We configure Falcon to match your local airspace and separation standards. Get in touch to discuss a pilot deployment.

desk@falconatc.com