
# VRMModel

A batteries-included component that combines all hooks into a single `<VRMModel />`. Access animations, expressions, and state via ref. The component is generic — motion names are inferred from the `motions` prop for full type safety.

```tsx
import { Suspense, useRef } from 'react'
import { Canvas } from '@react-three/fiber'
import { VRMModel, type VRMModelRef } from 'three-vrm-utils/vrm-model'
import { LightingPreset } from 'three-vrm-utils/lighting-preset'

const motions = {
  idle: '/assets/idle.vrma',
  wave: '/assets/wave.vrma',
} as const

type Motion = keyof typeof motions

function App() {
  const ref = useRef<VRMModelRef<Motion>>(null)
  return (
    <Canvas>
      <LightingPreset />
      <Suspense fallback={null}>
        <VRMModel ref={ref} url="/model.vrm" motions={motions} idle="idle" blink breathing />
      </Suspense>
    </Canvas>
  )
}
```

The component is generic: `VRMModel<T>`, where `T` is inferred from the keys of `motions`.

| Prop | Type | Default | Description |
| --- | --- | --- | --- |
| `url` | `string` | required | URL to the VRM model file |
| `motions` | `Record<T, string>` | `{}` | Map of animation names to .vrma file URLs |
| `idle` | `T \| T[]` | `'idle'` | Idle animation name(s) |
| `fadeTime` | `number` | `0.3` | Animation crossfade duration in seconds |
| `blendTime` | `number` | `0.15` | Expression crossfade duration in seconds |
| `blink` | `boolean \| UseVRMBlinkOptions` | `undefined` | Enable auto-blink with optional config |
| `breathing` | `boolean \| UseVRMBreathingOptions` | `undefined` | Enable breathing with optional config |
| `analyserRef` | `RefObject<AnalyserNode \| null>` | `undefined` | Ref to an AnalyserNode for vowel lip-sync |
| `onVowel` | `(vowels: VowelAmplitudes) => void` | `undefined` | Callback with vowel amplitudes each frame |
| `vowelOptions` | `UseAnalyserVowelOptions` | `undefined` | Options for vowel analyser |
| `ref` | `Ref<VRMModelRef<T>>` | `undefined` | Ref to access VRM model methods |

Access via `useRef<VRMModelRef<Motion>>()`:

### `vrm`

The loaded VRM instance, for direct access.

### `animationManager`

| Method | Type | Description |
| --- | --- | --- |
| `send` | `(name: T) => number` | Trigger an animation, returns its duration |
| `getState` | `() => T \| null` | Get the current animation state name |

### `expressionManager`

| Method | Type | Description |
| --- | --- | --- |
| `send` | `(map: ExpressionMap) => void` | Set facial expressions (crossfades from previous) |
| `stop` | `() => void` | Decay all active expressions to neutral over `blendTime` |
```tsx
// Type-safe — only accepts 'idle' | 'wave'
ref.current?.animationManager.send('wave')

const state = ref.current?.animationManager.getState()
// state is 'idle' | 'wave' | null
```

```tsx
// Simple
ref.current?.expressionManager.send({ happy: 1 })

// With hold and decay
ref.current?.expressionManager.send({
  surprised: { value: 1, hold: 1, decay: 0.5 },
})

// Return to neutral (crossfades out)
ref.current?.expressionManager.stop()
```
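Because `animationManager.send` returns the animation's duration, a one-shot motion can be chained back to idle. A minimal sketch with a hypothetical `playThenIdle` helper (not a library export — the injected `send` and `schedule` parameters exist only to keep the helper self-contained):

```typescript
// Hypothetical helper: play a one-shot motion, then fall back to an idle
// motion once its duration elapses. Relies only on the documented contract
// that send() returns the triggered animation's duration.
function playThenIdle<T extends string>(
  send: (name: T) => number,
  motion: T,
  idle: T,
  schedule: (fn: () => void, ms: number) => void = (fn, ms) => { setTimeout(fn, ms) },
): number {
  const duration = send(motion)
  // Queue the return to idle for when the one-shot motion finishes
  schedule(() => send(idle), duration * 1000)
  return duration
}
```

In the component above this would be called as `playThenIdle((n) => ref.current!.animationManager.send(n), 'wave', 'idle')`.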

The `analyserRef` prop accepts any Web Audio `AnalyserNode` — microphone, `<audio>`/`<video>` elements, Web Audio oscillators, etc. Here's an example using a microphone:

```tsx
import { useRef } from 'react'
import { Canvas } from '@react-three/fiber'
import { VRMModel, type VRMModelRef } from 'three-vrm-utils/vrm-model'

function App() {
  const ref = useRef<VRMModelRef>(null)
  const analyserRef = useRef<AnalyserNode | null>(null)

  // Must run from a user gesture so the AudioContext is allowed to start
  const startMic = async () => {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true })
    const ctx = new AudioContext()
    const source = ctx.createMediaStreamSource(stream)
    const analyser = ctx.createAnalyser()
    analyser.fftSize = 256
    source.connect(analyser)
    analyserRef.current = analyser
  }

  return (
    <>
      <button onClick={startMic}>Start mic</button>
      <Canvas>
        <VRMModel
          ref={ref}
          url="/model.vrm"
          analyserRef={analyserRef}
          onVowel={(vowels) => {
            const mgr = ref.current?.vrm.expressionManager
            if (!mgr) return
            mgr.setValue('aa', vowels.aa)
            mgr.setValue('ih', vowels.ih)
            mgr.setValue('ou', vowels.ou)
            mgr.setValue('ee', vowels.ee)
            mgr.setValue('oh', vowels.oh)
            mgr.update()
          }}
        />
      </Canvas>
    </>
  )
}
```
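Per-frame vowel amplitudes can flicker from one animation frame to the next. If that shows up as a jittery mouth, one option is to smooth the values before passing them to `setValue`. A sketch, assuming the amplitudes carry the five vowel keys used above; `smoothVowels` is a hypothetical helper, not a library export:

```typescript
// Assumed shape matching the five VRM vowel expressions used above
type Vowels = { aa: number; ih: number; ou: number; ee: number; oh: number }

// Exponential moving average: alpha = 1 applies the new value instantly,
// smaller values damp frame-to-frame flicker at the cost of some lag.
function smoothVowels(prev: Vowels, next: Vowels, alpha = 0.5): Vowels {
  const out = { ...prev }
  for (const k of Object.keys(next) as (keyof Vowels)[]) {
    out[k] = prev[k] + alpha * (next[k] - prev[k])
  }
  return out
}
```

Inside `onVowel`, you would keep the previous frame's result in a ref and feed `smoothVowels(prevRef.current, vowels)` to the expression manager instead of the raw values.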