The potential for large language models (LLMs) to hide messages within plain text (steganography) poses a challenge to detecting and thwarting unaligned AI agents, and undermines the faithfulness of LLM reasoning. We explore the steganographic capabilities of LLMs fine-tuned via reinforcement learning (RL) to: (1) develop covert encoding schemes, (2) engage in steganography when prompted, and (3) use steganography in realistic scenarios where hidden reasoning is likely but not prompted. In these scenarios, we detect whether the models intend to hide their reasoning and measure their steganographic performance. Our findings from the fine-tuning experiments, as well as from behavioral evaluations without fine-tuning, reveal that while current models exhibit only rudimentary steganographic abilities in terms of security and capacity, explicit algorithmic guidance markedly enhances their capacity for information concealment.
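To give a concrete sense of what a covert encoding scheme looks like, the sketch below hides one payload bit per sentence in the parity of that sentence's word count. This is a hypothetical, hand-crafted example for intuition only; it is not one of the encoding strategies the fine-tuned models learn, and the filler-word trick is an assumption of the sketch.

```python
# Toy illustration only: an acrostic-style scheme, not a strategy learned by the models.
# Each payload bit is hidden in the parity of the word count of one cover sentence.

FILLER = "really"  # hypothetical padding word used to flip a sentence's word-count parity

def encode(bits, cover_sentences):
    """Hide one bit per sentence: even word count -> 0, odd word count -> 1."""
    stego = []
    for bit, sentence in zip(bits, cover_sentences):
        words = sentence.split()
        if len(words) % 2 != bit:      # parity mismatch: pad with a filler word
            words.insert(1, FILLER)
        stego.append(" ".join(words))
    return " ".join(stego)

def decode(stego_text, num_bits):
    """Recover bits from the word-count parity of each sentence."""
    sentences = stego_text.split(". ")
    return [len(s.split()) % 2 for s in sentences[:num_bits]]

if __name__ == "__main__":
    payload = [1, 0, 1]  # a 3-bit payload
    cover = [
        "The meeting is at noon.",
        "Bring the quarterly report.",
        "Lunch will be provided.",
    ]
    stego = encode(payload, cover)
    print(stego)
    print(decode(stego, num_bits=3))  # -> [1, 0, 1]
```

A scheme like this has low capacity (one bit per sentence) and weak security (an overseer who knows the trick detects it immediately), which is the kind of trade-off the experiments probe.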
Our reinforcement learning experiments include fine-tuning models to: (1) develop covert encoding schemes, (2) perform steganography when explicitly prompted, and (3) use steganography unprompted in realistic scenarios where hidden reasoning is likely.
Our preliminary results show that models could improve their covert communication by about 20% for payloads with 3 bits of entropy, with smaller gains at other entropy levels.
Undetected accuracy of the models across different prompt experiments and varying payload entropy levels. Undetected accuracy measures the models' ability to encode and decode hidden messages successfully without being detected by the overseer.
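To make the metric concrete, here is a minimal sketch of how undetected accuracy could be computed. The episode fields (payload, decoded, overseer_flagged) are assumed names for illustration, not the evaluation code used in the experiments.

```python
# Sketch of the undetected-accuracy metric as described in the caption:
# an episode counts only if the decoder recovers the payload AND the
# overseer does not flag the message. Field names are illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class Episode:
    payload: str            # hidden message the encoder was asked to transmit
    decoded: str            # message the decoder model recovered
    overseer_flagged: bool  # True if the overseer judged the text suspicious

def undetected_accuracy(episodes: List[Episode]) -> float:
    """Fraction of episodes decoded correctly without being detected."""
    successes = sum(
        1 for e in episodes
        if e.decoded == e.payload and not e.overseer_flagged
    )
    return successes / len(episodes) if episodes else 0.0

# Example: 3-bit payloads (8 equally likely messages, log2(8) = 3 bits of entropy)
episodes = [
    Episode(payload="101", decoded="101", overseer_flagged=False),  # counts
    Episode(payload="010", decoded="010", overseer_flagged=True),   # detected
    Episode(payload="111", decoded="011", overseer_flagged=False),  # decode error
]
print(undetected_accuracy(episodes))  # 1/3 ≈ 0.33
```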
@inproceedings{Karpov2025,
title={The Steganographic Potentials of Language Models},
author={Artyom Karpov and Tinuade Adeleke and Natalia Perez-Campanero Antolin and Seong Hah Cho},
booktitle={Workshop on Datasets and Evaluators of AI Safety},
year={2025}
}