Ting Hua

I’m a research scientist at Samsung Research America. At Samsung, I work on topics including model compression and continual learning, with applications in natural language understanding (NLU) and automatic speech recognition (ASR). Before joining Samsung, my research interests were probabilistic graphical models (e.g., LDA) and deep generative models (e.g., VAE), with applications to social media data and low-resource language data. I received my Ph.D. from Virginia Tech in 2017.
LLM Compression
Developed dimension-independent structural pruning methods for large language models (NeurIPS 2024). Created adaptive rank selection techniques for low-rank approximation (NAACL 2024). Designed numerical optimization approaches for weighted low-rank estimation (EMNLP 2022). Pioneered weighted factorization methods for language model compression (ICLR 2022). Designed automatic mixed-precision quantization search methods for BERT (IJCAI 2021).
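Much of this line of work starts from replacing a large weight matrix with a low-rank factorization. The sketch below shows only the plain truncated-SVD version of that idea; the shapes and rank are illustrative assumptions, and the published methods add weighting and adaptive rank selection on top of this baseline.

```python
# Minimal sketch (not the papers' methods): truncated-SVD factorization of a
# weight matrix W into two thin factors A @ B. Shapes and rank are assumptions.
import torch

def low_rank_factorize(W: torch.Tensor, r: int):
    """Return A (out x r) and B (r x in) such that A @ B approximates W."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * S[:r]        # absorb the top-r singular values into A
    B = Vh[:r, :]
    return A, B

W = torch.randn(768, 3072)      # stand-in for one transformer projection matrix
A, B = low_rank_factorize(W, r=64)
print(f"params: {W.numel()} -> {A.numel() + B.numel()}")
print("max reconstruction error:", (W - A @ B).abs().max().item())
```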
Efficient Architectures
Created tiny transformers with shared dictionaries (ICLR 2022). Developed lightweight multi-modal detectors (CVPR 2022).
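The dictionary-sharing idea is that many layers reconstruct their weights from one shared set of basis vectors plus small per-layer coefficients, so the dictionary is stored once. Below is a minimal sketch of that general idea with made-up names and sizes; it is not the published architecture.

```python
# Minimal sketch of dictionary-based weight sharing: several linear layers
# rebuild their weights from one shared dictionary plus per-layer coefficients.
import torch
import torch.nn as nn

class DictionaryLinear(nn.Module):
    """Linear layer whose weight is reconstructed from a shared dictionary."""
    def __init__(self, dictionary: nn.Parameter, out_features: int):
        super().__init__()
        self.dictionary = dictionary          # (n_atoms, in_features), shared across layers
        self.coeff = nn.Parameter(0.02 * torch.randn(out_features, dictionary.shape[0]))

    def forward(self, x):
        W = self.coeff @ self.dictionary      # reconstruct the (out, in) weight on the fly
        return x @ W.T

shared_dict = nn.Parameter(0.02 * torch.randn(64, 512))   # one dictionary reused by every layer
layers = nn.ModuleList(DictionaryLinear(shared_dict, 512) for _ in range(4))
x = torch.randn(2, 512)
for layer in layers:
    x = torch.relu(layer(x))
print(x.shape)                                # torch.Size([2, 512])
```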
Continual Learning
Developed continual customization methods for text-to-image diffusion models with C-LoRA (TMLR 2024). Created hyperparameter-free continual learning approaches for domain classification in natural language understanding (NAACL 2021).
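C-LoRA builds on low-rank adapters (LoRA), which add a small trainable low-rank update to a frozen pretrained weight. A minimal sketch of that building block is below; the rank, scaling, and initialization are illustrative assumptions, not the paper's continual-learning mechanism.

```python
# Minimal LoRA-style adapter sketch: frozen base weight plus a trainable
# low-rank update. Rank, alpha, and init are illustrative assumptions.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)                    # pretrained weight stays frozen
        self.A = nn.Parameter(0.02 * torch.randn(r, base.in_features))
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: adapter starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
print(layer(torch.randn(2, 768)).shape)                           # torch.Size([2, 768])
```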
Security & Privacy
Designed black-box trojan prompt attacks on large language models to understand and improve model security (NeurIPS 2023).
Social Event Detection
Developed automatic event detection systems for social media data. Created methods that surface breaking events from social media before they appear in traditional news sources. Designed spatio-temporal event detection algorithms for real-time social media analysis.
Social Media Analysis
Unified societal pattern recognition tasks within probabilistic modeling frameworks. Created social influence detection methods. Developed topic modeling approaches for social data.