TCD_IWSLT25_ModelCompression_en-zh_Bin0_constrained_primary

We fine-tuned Qwen2-Audio-7B-Instruct in two stages: (a) full fine-tuning on the ACL 60/60 dataset, and (b) QLoRA fine-tuning on the ACL 60/60 dataset augmented with data distilled from the fully fine-tuned model (knowledge distillation). This process achieved 40% compression in both model parameters and storage size.
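The 40% compression figure can be expressed as a simple fractional reduction. The sketch below is illustrative only; the parameter counts are assumptions for a 7B-parameter base model, not the submission's exact numbers.

```python
def compression(original: float, compressed: float) -> float:
    """Fractional reduction from original to compressed size
    (0.40 means the compressed model is 40% smaller)."""
    return 1.0 - compressed / original

# Hypothetical example: a 7B-parameter model reduced to ~4.2B
# effective parameters, matching the reported 40% compression.
print(round(compression(7.0e9, 4.2e9), 2))
```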

Language pair: English-Chinese
From\To   de      zh
en        —       0.806