KCLarity at SemEval-2026 Task 6: Encoder and Zero-Shot Approaches to Political Evasion Detection

2026-03-06 · Computation and Language

AI summary

The authors describe their team's work on a task that involves classifying unclear or evasive language in political speech. They tried two main approaches: one that predicts how clear the language is directly, and another that first predicts whether the speech is evasive and then derives the clarity label from that. They also tested different training methods and used large language models in a zero-shot way (without extra training). Both approaches performed similarly, with RoBERTa-large doing best on public data, while GPT-5.2 handled unseen data better.

ambiguity classification, evasion techniques, political discourse, RoBERTa-large, GPT-5.2, zero-shot learning, encoder models, decoder models, task taxonomy
Authors
Archie Sage, Salvatore Greco
Abstract
This paper describes the KCLarity team's participation in CLARITY, a shared task at SemEval 2026 on classifying ambiguity and evasion techniques in political discourse. We investigate two modelling formulations: (i) directly predicting the clarity label, and (ii) predicting the evasion label and deriving clarity through the task taxonomy hierarchy. We further explore several auxiliary training variants and evaluate decoder-only models in a zero-shot setting under the evasion-first formulation. Overall, the two formulations yield comparable performance. Among encoder-based models, RoBERTa-large achieves the strongest results on the public test set, while zero-shot GPT-5.2 generalises better on the hidden evaluation set.
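Formulation (ii) derives the clarity label from the predicted evasion label via the task's label hierarchy. A minimal sketch of such a derivation is below; the label names and the mapping are illustrative assumptions, not the official CLARITY taxonomy:

```python
# Sketch of the evasion-first formulation: predict a fine-grained
# evasion label, then derive the coarser clarity label through a
# label hierarchy. All label names here are hypothetical placeholders.
EVASION_TO_CLARITY = {
    "direct_reply": "clear",
    "partial_reply": "ambiguous",
    "deflection": "evasive",
    "non_reply": "evasive",
}

def derive_clarity(evasion_label: str) -> str:
    """Map a predicted evasion label to its parent clarity label."""
    return EVASION_TO_CLARITY[evasion_label]
```

Under this setup, a single evasion classifier implicitly yields both predictions, since each evasion label has exactly one parent clarity label in the hierarchy.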