Vivek Singh
I architect and build autonomous AI systems -- from multi-agent orchestration to production-grade RAG pipelines -- powered by LLMs, LangChain, and AWS infrastructure. Turning complex AI research into scalable, real-world solutions.
The Through Line
From Procedural to Agentic
Every concept from 3 years of FX pipeline engineering maps directly to modern AI systems. The vocabulary changed. The thinking did not.
"A Houdini network and a LangGraph are the same data structure, a directed graph of composable nodes. I switched from simulating fire to orchestrating reasoning."
Houdini SOP Network
Node-based DAG for geometry processing
LangGraph StateGraph
Node-based DAG for agent orchestration
Insight: Both are directed graphs of composable, stateless processing nodes — same data structure, different domain.
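The shared skeleton is easy to show. Here is a minimal plain-Python sketch of a directed graph of stateless processing nodes; it is an illustrative toy, not the Houdini or LangGraph API, and every name in it is made up for the example:

```python
from typing import Callable, Dict

# A node is a stateless function: state in, state out.
Node = Callable[[dict], dict]

class Graph:
    """Toy directed graph of composable processing nodes."""

    def __init__(self) -> None:
        self.nodes: Dict[str, Node] = {}
        self.edges: Dict[str, str] = {}

    def add_node(self, name: str, fn: Node) -> None:
        self.nodes[name] = fn

    def add_edge(self, src: str, dst: str) -> None:
        self.edges[src] = dst

    def run(self, start: str, state: dict) -> dict:
        current = start
        while current is not None:
            state = self.nodes[current](state)
            current = self.edges.get(current)
        return state

g = Graph()
g.add_node("load", lambda s: {**s, "points": [1, 2, 3]})
g.add_node("scale", lambda s: {**s, "points": [p * 2 for p in s["points"]]})
g.add_edge("load", "scale")
result = g.run("load", {})
print(result["points"])  # [2, 4, 6]
```

Swap "points" for "messages" and the same structure orchestrates agents instead of geometry.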
VEX Kernels
Data-parallel compute over point clouds
LLM Tool Functions
Discrete compute units called by agents
Insight: Atomic, typed, side-effect-free functions. Write once, reuse anywhere in the graph.
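That discipline looks the same in either domain. A hypothetical tool function, sketched the way a VEX kernel treats one point at a time: typed in, typed out, no side effects:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolResult:
    """Immutable, typed output -- safe to pass anywhere in the graph."""
    name: str
    value: float

def convert_fahrenheit(celsius: float) -> ToolResult:
    """Atomic compute unit: deterministic output for a given input."""
    return ToolResult("fahrenheit", celsius * 9 / 5 + 32)

# Registered once, reusable from any node that needs it.
TOOLS = {"convert_fahrenheit": convert_fahrenheit}

print(TOOLS["convert_fahrenheit"](100.0).value)  # 212.0
```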
Procedural Simulation
Parameterized, reproducible, non-destructive
RAG Pipeline
Parameterized, reproducible, modular retrieval
Insight: Change one upstream parameter and everything downstream updates deterministically.
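A toy two-stage pipeline makes the point concrete (illustrative only; the stage names are invented for this sketch): tweak one upstream parameter and every downstream stage re-derives itself, no manual rework.

```python
def chunk(text: str, size: int) -> list:
    """Upstream stage: split text into fixed-size chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def index(chunks: list) -> dict:
    """Downstream stage: build a retrieval index over the chunks."""
    return dict(enumerate(chunks))

doc = "procedural pipelines are reproducible"

# Change one upstream parameter; downstream updates deterministically.
small = index(chunk(doc, 10))
large = index(chunk(doc, 20))
print(len(small), len(large))  # 4 2
```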
Unreal Blueprints
Visual state machine for runtime logic
Agent Conditional Edges
Routing logic between graph nodes
Insight: Event-driven state transitions: if condition X, go to node Y. Identical pattern.
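In code, a conditional edge is just a router: inspect the state, return the name of the next node. A minimal sketch (hypothetical state keys and node names, not any framework's API):

```python
def route(state: dict) -> str:
    """Conditional edge: inspect state, pick the next node."""
    if state.get("needs_search"):
        return "search"
    if state.get("error"):
        return "retry"
    return "respond"

# Event-driven transition: if condition X, go to node Y.
print(route({"needs_search": True}))  # search
print(route({"error": "timeout"}))    # retry
print(route({}))                      # respond
```

A Blueprint branch node does exactly this with wires instead of return values.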
Realtime VFX (<16ms)
Hard latency budget per frame
Production LLM API (<500ms)
Hard latency SLAs for UX
Insight: Both demand performance-first thinking: cache aggressively, batch, optimize hot paths.
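The cheapest latency win in both worlds is not recomputing what you already know. A sketch of hot-path caching with the standard library (the `embed` function here is a stand-in for any expensive call such as model inference):

```python
import functools
import time

@functools.lru_cache(maxsize=1024)
def embed(query: str) -> tuple:
    """Stand-in for an expensive call (model inference, API hit)."""
    time.sleep(0.01)  # simulated 10 ms of work
    return tuple(ord(c) for c in query)

t0 = time.perf_counter()
embed("status report")  # cold: pays the full latency
cold = time.perf_counter() - t0

t0 = time.perf_counter()
embed("status report")  # warm: served from the in-process cache
warm = time.perf_counter() - t0

print(warm < cold)  # True
```

Per-frame caches in a render loop and memoized embedding lookups in an LLM service are the same optimization wearing different clothes.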
Asset Pipeline Automation
Python ingesting raw assets to rendered outputs
ML Data Pipeline
Python ingesting raw docs to vector embeddings
Insight: ETL with domain-specific transforms. Same architecture, different data types.
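The extract/transform/load shape is identical whether the payload is a texture or a document. A toy sketch, with a hash standing in for a real embedding model (all function names invented for illustration):

```python
import hashlib

def extract(raw_docs: list) -> list:
    """E: pull raw text, dropping empties (domain-specific cleanup)."""
    return [d.strip() for d in raw_docs if d.strip()]

def transform(doc: str, dims: int = 4) -> list:
    """T: toy 'embedding' from a hash -- stands in for a real model."""
    digest = hashlib.sha256(doc.encode()).digest()
    return [digest[i] for i in range(dims)]

def load(vectors: list) -> dict:
    """L: persist vectors into an id-keyed store."""
    return dict(enumerate(vectors))

store = load([transform(d) for d in extract(["doc one", "", " doc two "])])
print(len(store), len(store[0]))  # 2 4
```

Replace `transform` with a rendering step and `load` with a publish-to-disk step and this is an asset pipeline again.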
This is not a career pivot. I spent 3 years thinking in graphs, writing Python automation, and shipping under hard deadlines. The domain changed. The way I approach problems did not. Same mindset. Different tools. Already proven under pressure.
Expertise
Technical Stack
Deep expertise across the full AI stack -- from model training and prompt engineering to cloud infrastructure and production deployment.
Core AI / ML
Building intelligent systems from the ground up
Frameworks
Production-grade tools and libraries
Languages
Writing clean, efficient code
Cloud & Infra
Deploying at scale with reliability
Data & Storage
Managing data pipelines and persistence
Orchestration
Connecting systems together
Selected Work
Featured Projects
Production AI systems built for scale, reliability, and real-world impact.