Umut Özyurt
I focus on fine-tuning diffusion models for personalized, high-fidelity image and video generation, aiming to build controllable generative models that deliver both quality and diversity.

Education & Leadership
Academic background and key involvements
Middle East Technical University (METU / ODTÜ)
B.Sc. in Computer Science (Senior Year)
09/2020 - 06/2026 (Expected) | Ankara, Turkey
CGPA: 3.88 / 4.00
Honors & Leadership
- High Honor Student: Recognized for 7 consecutive semesters of academic excellence.
- METU Development Foundation Scholarship: Awarded for ranking in the Top 1000 among over 2.5 million applicants.
- Technical Lead, METU Artificial Intelligence Society: Led the society's technical initiatives and projects.
Relevant Coursework (all completed with 4.0/4.0)
Selected Publications
Research at the intersection of generative models and computer vision
Meta-LoRA: Meta-Learning LoRA Components for Domain-Aware ID Personalization
A novel approach using meta-learning for Low-Rank Adaptation (LoRA) components in diffusion models, enhancing identity preservation in text-to-image generation.
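The Low-Rank Adaptation idea underlying this work can be sketched in a few lines: a frozen pretrained weight is augmented with a trainable low-rank update, and zero-initializing one factor means training starts exactly from the base model. This is a minimal NumPy sketch of generic LoRA, not the paper's meta-learning method; all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight of one linear layer (illustrative sizes).
d_out, d_in, r = 8, 8, 2              # r << d: the low-rank bottleneck
W = rng.standard_normal((d_out, d_in))

# LoRA factors: B starts at zero, so the adapted model initially
# matches the pretrained one and only A, B are trained.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))
alpha = 4.0                           # common LoRA scaling hyperparameter

def lora_forward(x, W, A, B, alpha, r):
    """y = W x + (alpha / r) * B A x, with W frozen."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x, W, A, B, alpha, r), W @ x)  # zero-init B: no change yet
```

Because only `A` and `B` (roughly `r * (d_in + d_out)` parameters) are updated, a separate lightweight adapter can be trained per identity, which is what makes LoRA attractive for personalization.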
GRACE: Generating Socially Appropriate Robot Actions Leveraging LLMs and Human Explanations
A framework generating contextually appropriate robot behaviors by combining large language models with human social explanations for improved human-robot interaction.
Enhanced Thermal Human Detection with Fast Filtering for UAV Images
An approach optimizing thermal human detection on UAV platforms using efficient filtering techniques for real-time performance on edge devices.
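One cheap way to get real-time throughput on an edge device is to gate the expensive detector behind an intensity prefilter: humans read as warm blobs in thermal imagery, so frames with too few hot pixels can be skipped outright. The sketch below illustrates that general idea only; the thresholds and the specific filtering used in the paper are not reproduced here.

```python
import numpy as np

def hot_region_prefilter(frame, intensity_thresh=200, min_frac=0.0005):
    """Decide whether a thermal frame is worth running the full detector on.

    Counts pixels above a 'human-warm' intensity and compares their
    fraction to a minimum. Both thresholds are illustrative values,
    not the published ones.
    """
    hot = frame > intensity_thresh
    return hot.mean() >= min_frac

rng = np.random.default_rng(1)
cold = rng.integers(0, 120, size=(240, 320))   # empty, cool scene
warm = cold.copy()
warm[100:120, 150:160] = 230                   # warm blob roughly person-sized

assert not hot_region_prefilter(cold)          # detector skipped
assert hot_region_prefilter(warm)              # detector runs
```

The filter is O(pixels) with no learned parameters, so on a Jetson-class device it costs a negligible fraction of a detector forward pass while discarding most empty frames.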
Peer Review & Academic Service
Experience & Skills
Combining academic research with practical implementation
Research Experience
METU ImageLab
Generative Computer Vision Researcher (Remote)
09/2024 - Present
Advisor: Assoc. Prof. R. Gökberk Cinbiş.
Researching state-of-the-art diffusion model fine-tuning techniques for generative computer vision, focusing on personalized image generation and aiming for high-impact publications.
University of Cambridge (AFAR Lab)
Computer Vision Engineer/Researcher
07/2024 - 09/2024
Advisor: Prof. Hatice Güneş.
Contributed to research on uncertainty prediction across all project phases: experimental design, implementation, analysis, and manuscript preparation (second author on an ICRA 2025 submission).
METU Intelligent Systems Lab
Candidate Computer Vision Engineer/Researcher
07/2023 - 07/2024
Advisor: Assoc. Prof. Seyda Ertekin.
Developed and evaluated thermal human detection methods for UAV imagery, focusing on real-time processing on edge hardware (NVIDIA Jetson). First author on the resulting IISEC 2023 publication.
Professional Experience
Syntonym
Generative Computer Vision Researcher (Remote)
09/2024 - Present
Researching diffusion models for high-fidelity face anonymization, integrating ControlNet for fine-tuning, and exploring text-to-image personalization (SD1.5, SDXL, FLUX).
Infodif
Computer Vision Engineer/Researcher
01/2024 - 07/2024
Developed and optimized a face recognition pipeline for the Turkish National Police using multi-attribute recognition and custom deep learning architectures.
AsisGuard
Candidate Computer Vision Engineer/Researcher
03/2023 - 12/2023
Led computer vision projects from inception, implementing solutions including thermal imaging analysis optimized for edge devices (NVIDIA Jetson, custom AI accelerators). Guided interns on integration tasks.
Technical Expertise
Research Skills
Frameworks & Libraries
Programming & Tools
Generative AI & Personalization
Creating controllable and identity-preserving visual generation
My current research focuses on fine-tuning diffusion models for personalized, high-fidelity image and video generation. I aim to build controllable, adaptable generative models that deliver both quality and diversity. Drawing on experience in deep learning, face recognition, object detection, tracking, and thermal vision, I strive to push the boundaries of generative computer vision.
Personalized Generation
Developing methods for diffusion model fine-tuning to accurately capture and maintain identity characteristics while allowing stylistic variation.
Controllable Generation
Creating systems that allow precise control over generated outputs through intuitive interfaces, semantic guidance, and techniques like ControlNet.
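The key trick that makes ControlNet-style conditioning safe to bolt onto a pretrained model is the zero-initialized output projection ("zero conv"): at initialization the control branch contributes exactly nothing, so training can only move away from the pretrained behavior gradually. A toy NumPy sketch of that mechanism, with stand-in functions in place of real UNet blocks:

```python
import numpy as np

rng = np.random.default_rng(2)

def base_block(h):
    """Stand-in for a frozen pretrained network block."""
    return np.tanh(h)

# Trainable control branch: a copy of the block plus a
# zero-initialized projection back into the main path.
zero_proj = np.zeros((4, 4))

def controlled_block(h, control):
    """Inject the control signal through the zero-initialized projection."""
    return base_block(h) + zero_proj @ base_block(h + control)

h = rng.standard_normal(4)
c = rng.standard_normal(4)

# At initialization the control branch has no effect on the output.
assert np.allclose(controlled_block(h, c), base_block(h))
```

As `zero_proj` is trained away from zero, the branch learns how much of the control signal (e.g. an edge map or pose skeleton) to inject, without ever having destabilized the frozen base model.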
Video Generation
Extending image generation capabilities to video, focusing on challenges of temporal consistency and coherence across frames.
Advanced Style Transfer Implementation
Re-implemented the model and training pipeline of the CVPR 2023 paper "Master: Meta Style Transformer for Controllable Zero-Shot and Few-Shot Artistic Style Transfer," resolving ambiguities left unspecified in the paper. Recognized as the most complex and successful project of the term among graduate submissions.
Get In Touch
Open to research collaborations in diffusion models and generative AI
Location
Ankara, Turkey