Hey! I'm Wayner Barrios 👋

AI Research Scientist | PhD @ Dartmouth

I build and evaluate multimodal LLMs that actually understand video and vision. My research focuses on step-verified reasoning, cross-modal fusion, and long-video understanding—plus designing benchmarks and datasets that push these models forward.

👨‍💻 What I work on:

  • Multimodal LLMs (vision-language) & computer vision
  • Video understanding, moment retrieval & evaluation frameworks
  • Large-scale AI datasets (specs → tooling → QA → release)
  • Production ML systems with distributed training/inference

🛠️ Tech stack: PyTorch • JAX • TensorFlow • CUDA • Kubernetes • Docker • AWS/GCP • SQL/NoSQL • C++ • Python

🌎 Background: Colombian computer scientist now based in the US. I've shipped large-scale geo-spatial systems, worked with teams across LATAM and the US on data analytics and ML pipelines, and contributed to Python open-source projects.

📫 Connect: Website • Twitter • LinkedIn • Wiqonn

📌 Pinned:

  • guidance-based-video-grounding — [ICCV 2023] The official PyTorch implementation of the paper "Localizing Moments in Long Video Via Multimodal Guidance" (Python)