Explored preprocessing workflows for preparing text datasets, comparing tokenization strategies to make model training more stable and efficient.
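A minimal sketch of how such a tokenizer comparison might look, using Hugging Face `AutoTokenizer`; the checkpoints ("bert-base-uncased", "gpt2") and sample corpus are illustrative assumptions, not the ones from the original experiments:

```python
from transformers import AutoTokenizer

# Hypothetical sample corpus standing in for the real dataset.
corpus = [
    "Preprocessing text before training often matters as much as the model.",
    "Sub-word tokenizers handle rare words by splitting them into pieces.",
]

# Contrast two sub-word schemes: WordPiece (BERT) vs. byte-level BPE (GPT-2).
for name in ["bert-base-uncased", "gpt2"]:
    tokenizer = AutoTokenizer.from_pretrained(name)
    for text in corpus:
        tokens = tokenizer.tokenize(text)
        # Sequence length drives padding waste and training throughput,
        # so shorter, more consistent token counts tend to train smoother.
        print(f"{name}: {len(tokens)} tokens -> {tokens[:8]}")
```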
Linked models across modalities, experimenting with multi-stage reasoning flows and unconventional input–output loops.
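One way such a cross-modal, multi-stage flow could be wired up, sketched with `transformers` pipelines; the checkpoints ("Salesforce/blip-image-captioning-base", "gpt2"), the prompt wording, and the loop count are assumptions for illustration:

```python
from transformers import pipeline

# Stage 1: vision-to-text. Stage 2..n: a text model elaborates on its own
# output, which is fed back in as the next prompt (the "input-output loop").
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
generator = pipeline("text-generation", model="gpt2")

def reason_over_image(image_path: str, rounds: int = 2) -> str:
    caption = captioner(image_path)[0]["generated_text"]
    prompt = f"Scene: {caption}\nWhat might happen next?"
    for _ in range(rounds):
        # Each pass conditions the model on its previous generation.
        prompt = generator(prompt, max_new_tokens=40)[0]["generated_text"]
    return prompt

# Usage (path is hypothetical): print(reason_over_image("example.jpg"))
```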
Trialed methods such as parameter-efficient fine-tuning, quantization, and dataset reduction to improve training speed and cut resource use.
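A sketch of combining two of these methods, LoRA via the `peft` library on top of an 8-bit quantized base model loaded through `bitsandbytes`; the base checkpoint and LoRA hyperparameters are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model with 8-bit weights to shrink the memory footprint.
model = AutoModelForCausalLM.from_pretrained(
    "gpt2",  # hypothetical base model; swap in the actual target checkpoint
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)

lora_config = LoraConfig(
    r=8,                         # rank of the low-rank adapter matrices
    lora_alpha=16,               # scaling factor for the adapter update
    lora_dropout=0.05,
    target_modules=["c_attn"],   # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)

# Only the small adapter matrices train; the quantized base stays frozen.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```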
Generated artificial datasets to augment scarce domains, stress-testing models under controlled conditions.
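A hedged sketch of one simple approach, template-based synthetic data for a low-resource intent-classification task; the templates, slot values, labels, and noise rate are all made up for illustration, with character dropout standing in for the controlled stress conditions:

```python
import random

random.seed(0)  # fixed seed keeps the stress test reproducible

# Hypothetical templates and slot fillers for two intents.
TEMPLATES = {
    "book_flight": ["book a flight to {city}", "I need to fly to {city} on {day}"],
    "cancel_order": ["cancel my order {order_id}", "please cancel {order_id}"],
}
SLOTS = {
    "city": ["Oslo", "Nairobi", "Lima"],
    "day": ["Monday", "Friday"],
    "order_id": ["#1042", "#778"],
}

def corrupt(text: str, p: float = 0.1) -> str:
    # Controlled noise: randomly drop characters to stress-test robustness.
    return "".join(ch for ch in text if random.random() > p)

def generate(n: int) -> list[tuple[str, str]]:
    samples = []
    for _ in range(n):
        label, templates = random.choice(list(TEMPLATES.items()))
        fills = {k: random.choice(v) for k, v in SLOTS.items()}
        samples.append((corrupt(random.choice(templates).format(**fills)), label))
    return samples

for text, label in generate(5):
    print(label, "->", text)
```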