Third-party resources (e.g., samples, backbones, and pre-trained models)
are usually involved in the training of deep neural networks (DNNs), which
brings backdoor attacks as a new training-phase threat. In general, backdoor
attackers intend to implant hidden backdoors into DNNs, so that the attacked DNNs
behave normally on benign samples, whereas their predictions are maliciously
changed to a pre-defined target label once the hidden backdoors are activated by
attacker-specified trigger patterns. To facilitate the research and development
of more secure training schemes and defenses, we design an open-source Python
toolbox that implements representative and advanced backdoor attacks and
defenses under a unified and flexible framework. Our toolbox has four important
and promising characteristics, including consistency, simplicity, flexibility,
and co-development. It allows researchers and developers to easily implement
and compare different methods on benchmark datasets or their own local datasets. This Python
toolbox, namely \texttt{BackdoorBox}, is available at
\url{https://github.com/THUYimingLi/BackdoorBox}.
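The threat model described above can be made concrete with a minimal sketch of BadNets-style data poisoning: a small trigger patch is stamped onto a fraction of the training images, and their labels are switched to the attacker-chosen target. This is an illustrative sketch only; the function name `poison_dataset` and all parameters are hypothetical and do not reflect the \texttt{BackdoorBox} API.

```python
import random

def poison_dataset(images, labels, target_label, trigger_value=255,
                   patch_size=2, poison_rate=0.1, seed=0):
    """Sketch of BadNets-style poisoning (hypothetical helper, not the
    BackdoorBox API). Stamps a trigger patch onto a fraction of the images
    and relabels them with the attacker-specified target label."""
    rng = random.Random(seed)
    n_poison = int(poison_rate * len(images))
    poisoned_idx = rng.sample(range(len(images)), n_poison)
    # Deep-copy so the benign dataset is left untouched.
    images = [[row[:] for row in img] for img in images]
    labels = labels[:]
    for i in poisoned_idx:
        # Trigger pattern: a bright square in the bottom-right corner.
        for r in range(-patch_size, 0):
            for c in range(-patch_size, 0):
                images[i][r][c] = trigger_value
        labels[i] = target_label  # maliciously change the label
    return images, labels, poisoned_idx
```

A model trained on such a mixture behaves normally on benign samples but predicts the target label whenever the trigger is present, which is exactly the backdoor behavior the abstract describes.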