Fine-tuned smaller-scale language models on domain-specific datasets to improve output consistency and style.
Adapted diffusion model backbones to new datasets, improving style transfer and domain control.
Built prototype models to explore new interaction modes and creative applications beyond standard fine-tuning.
Evaluated hybrid transformer architectures for more efficient inference and finer-grained output control.