The recipe engine is what makes AmberRoom a sound therapy product instead of a music app. It takes an intent (one of eight: sleep, anxiety, focus, grief, energy, meditation, tinnitus, pain) and a length, and produces a SessionRecipe — a typed object specifying every parameter the audio layers need: binaural frequency, noise color, BPM, pacing curve, layer selection.

The engine lives in lib/recipe/ and is genuinely live (unlike some of the audio layers it orchestrates). It's the part of AmberRoom that's most clearly built and most clearly different from any "AI generates ambient music" competitor.

The intent definition

Each intent is a typed record in lib/recipe/intents.ts (a code sketch of the shape follows the list):

  • id — the canonical name (sleep / anxiety / etc.).
  • range + binauralHz — the targeted brainwave band and the specific frequency. Sleep targets 2.5 Hz Delta; anxiety targets 6.0 Hz Theta. These are research-derived, not invented.
  • noise + noiseDb — which noise color to layer in (brown / pink / white / none) and at what level.
  • bpm — target tempo for any rhythmic layer (bowl strikes, isochronic pulses).
  • instruments — the named samples to load from the licensed library when Layer 1 ships.
  • research — the source citation summarized for the recipe inspector.
  • primaryHue — the visual identity color, applied across the whole app's gradient system.
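
A minimal sketch of that record, with field types inferred from the list above. The real intents.ts may differ, and the example values flagged below are assumptions, not documented defaults:

  // Hypothetical shape; names come from the list above, types are inferred.
  type NoiseColor = 'brown' | 'pink' | 'white' | 'none';

  interface IntentDefinition {
    id: 'sleep' | 'anxiety' | 'focus' | 'grief' | 'energy'
      | 'meditation' | 'tinnitus' | 'pain';
    range: 'delta' | 'theta' | 'alpha' | 'beta'; // targeted brainwave band
    binauralHz: number;    // e.g. 2.5 for sleep, 6.0 for anxiety
    noise: NoiseColor;
    noiseDb: number;       // level of the noise floor
    bpm: number;           // tempo for rhythmic layers
    instruments: string[]; // sample names for Layer 1
    research: string;      // citation summary shown in the recipe inspector
    primaryHue: number;    // drives the app-wide gradient system
  }

  // Illustrative entry: only id, range, and binauralHz are documented above;
  // every other value here is an assumption.
  const sleep: IntentDefinition = {
    id: 'sleep',
    range: 'delta',
    binauralHz: 2.5,
    noise: 'brown',
    noiseDb: -18,
    bpm: 40,
    instruments: ['bowl_low'],
    research: 'Delta-band binaural beats and sleep onset (summary)',
    primaryHue: 230,
  };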

The generate function

generateRecipe(intent, length, biometrics?) in lib/recipe/generate.ts takes those three arguments and returns a complete SessionRecipe. It does four things (a condensed sketch follows the list):

  1. Looks up the intent's base parameters.
  2. Optionally adjusts based on biometrics (e.g. HRV below baseline for anxiety → drop binaural from the default 6.0 Hz to 5.5 Hz).
  3. Builds the layer specification — which of Layers 1–4 to include at what mix level.
  4. Builds the pacing curve — an array of {t, v} control points that drive how levels evolve over the session length.
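
The sketch below condenses that flow, reusing the types from the intents sketch. SessionRecipe's exact fields and the helper names buildLayers / buildPacingCurve are assumptions, not the real exports:

  interface PacingPoint { t: number; v: number } // assumed: t in minutes, v in [0, 1]
  type LayerSpec = { layer: 1 | 2 | 3 | 4; level: number };

  interface SessionRecipe {
    intent: IntentDefinition['id'];
    lengthMin: number;
    binauralHz: number;
    noise: NoiseColor;
    noiseDb: number;
    bpm: number;
    layers: LayerSpec[];
    pacing: PacingPoint[];
  }

  declare function buildLayers(intent: IntentDefinition): LayerSpec[];
  declare function buildPacingCurve(lengthMin: number): PacingPoint[];

  function generateRecipe(
    intent: IntentDefinition,
    lengthMin: number,
    biometrics?: { hrv: number; hrvBaseline: number },
  ): SessionRecipe {
    // 1. Start from the intent's research-derived base parameters.
    let binauralHz = intent.binauralHz;

    // 2. Optional biometric adjustment: low HRV on anxiety drops into deeper theta.
    if (biometrics && intent.id === 'anxiety' && biometrics.hrv < biometrics.hrvBaseline) {
      binauralHz = 5.5; // assumption: the exact rule lives in generate.ts
    }

    return {
      intent: intent.id,
      lengthMin,
      binauralHz,
      noise: intent.noise,
      noiseDb: intent.noiseDb,
      bpm: intent.bpm,
      layers: buildLayers(intent),         // 3. which of Layers 1–4, at what mix level
      pacing: buildPacingCurve(lengthMin), // 4. {t, v} control points over the session
    };
  }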

Pacing curve

The pacing curve is the part most listeners notice without realizing it. Every session has four phases: arrival (ramping in), descent (settling deeper), ground (hold at depth), return (gentle resolution). The curve isn't the same for every length (see the sketch after these examples):

  • 5-minute sessions compress hard — arrival lasts 30 seconds, descent peaks at minute 2, return starts at minute 4.
  • 30-minute sessions hold the deep phase from minute 12 to minute 22.
  • 60-minute sessions add a longer tail — return starts at minute 48 and fades to silence so you wake to nothing, not to a cutoff.
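
Read off those three examples, a hypothetical buildPacingCurve could look like the following. The boundary points not stated above (arrival and descent breakpoints for the longer lengths) are assumptions, and the real builder in generate.ts presumably interpolates for other lengths:

  // One possible encoding: t is minutes from session start, v is depth in [0, 1].
  function buildPacingCurve(lengthMin: number): PacingPoint[] {
    if (lengthMin <= 5) {
      // Compressed: 30 s arrival, descent peaks at minute 2, return from minute 4.
      return [
        { t: 0, v: 0 },
        { t: 0.5, v: 0.5 }, // arrival done
        { t: 2, v: 1 },     // deepest point
        { t: 4, v: 1 },     // return begins here
        { t: 5, v: 0 },
      ];
    }
    if (lengthMin <= 30) {
      // Hold the ground phase from minute 12 to minute 22.
      return [
        { t: 0, v: 0 },
        { t: 4, v: 0.6 },   // assumed arrival breakpoint
        { t: 12, v: 1 },
        { t: 22, v: 1 },
        { t: 30, v: 0 },
      ];
    }
    // 60-minute shape: long tail, return from minute 48, fade to silence.
    return [
      { t: 0, v: 0 },
      { t: 8, v: 0.6 },     // assumed arrival breakpoint
      { t: 20, v: 1 },      // assumed start of ground phase
      { t: 48, v: 1 },
      { t: 60, v: 0 },
    ];
  }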

The curve is rendered visually in the Recipe Inspector (the SVG line you see). It will also drive layer gain modulation over time — currently this isn't wired (levels are static once a session starts), but the curve data is there waiting for the orchestrator update.
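
Rendering those control points as the inspector's SVG line takes only a small mapping. This helper is illustrative, not the actual Recipe Inspector code:

  // Map {t, v} points to an SVG polyline string, flipping v because
  // SVG's y axis grows downward.
  function curveToSvgPoints(curve: PacingPoint[], width: number, height: number): string {
    const tMax = curve[curve.length - 1].t;
    return curve
      .map((p) => `${(p.t / tMax) * width},${(1 - p.v) * height}`)
      .join(' ');
  }

  // Usage: <polyline points={curveToSvgPoints(recipe.pacing, 320, 80)} fill="none" stroke="currentColor" />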

Personalization compound

When a user rates a session (the post-session "How did that land?" prompt), the rating gets stored against the recipe parameters: bowls active or not, noise color, binaural Hz, length. After 5+ rated sessions, deriveInsights() in lib/profile/profile.ts surfaces preferences:

  • "Bowls > gongs" if rated higher with bowls active
  • "Brown floor" if brown noise sessions averaged higher than pink
  • "6.0 Hz preferred" if anxiety sessions at 6 Hz averaged higher than 5.5
  • "15 min sweet spot" if 15-min sessions averaged higher than 30-min

These preferences then bias the next recipe — the engine is meant to read them before generating (deriveInsights() already produces them; wiring the read into generation is tracked under What's planned below). After 20 rated sessions the recipe should diverge meaningfully from the research default. Same intent, same research basis, tuned to your feedback. This is the personalization compound the brief calls AmberRoom's strongest moat.
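
A sketch of what that bias step could look like. deriveInsights() is real per the above, but its return shape and the applyInsights helper here are assumptions:

  // Hypothetical insight shape; the real deriveInsights() in lib/profile/profile.ts
  // may encode preferences differently.
  interface Insights {
    preferredNoise?: NoiseColor;  // e.g. 'brown' if brown-floor sessions rate higher
    preferredHz?: number;         // e.g. 6.0 for anxiety
    preferredLengthMin?: number;  // e.g. 15
    bowlsOverGongs?: boolean;
  }

  // Bias a freshly generated recipe toward learned preferences.
  function applyInsights(recipe: SessionRecipe, insights: Insights): SessionRecipe {
    return {
      ...recipe,
      noise: insights.preferredNoise ?? recipe.noise,
      binauralHz: insights.preferredHz ?? recipe.binauralHz,
      // Length and instrument preferences would adjust upstream choices the same way.
    };
  }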

Where it lives in the codebase

lib/recipe/
├── intents.ts      the 8 intent definitions
├── generate.ts     generateRecipe() + pacing builder
├── tuner.ts        position → recipe (Tune alt UX)
├── signals.ts      text → recipe (Tell alt UX)
└── types.ts        SessionRecipe + Layer + Biometrics

The engine is also what makes the Tell (conversational) and Tune (2D pad) alternative UX surfaces work — both produce the same SessionRecipe shape as the primary intent flow, so all three entry points feed the same player.
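
In code terms the convergence might look like this; the two entry-point function names are invented for illustration, based on the file tree above:

  // Three surfaces, one output type, one player. Declared rather than
  // implemented because the real exports of tuner.ts / signals.ts aren't shown here.
  declare function recipeFromPosition(x: number, y: number, lengthMin: number): SessionRecipe;
  declare function recipeFromText(text: string, lengthMin: number): SessionRecipe;
  declare function play(recipe: SessionRecipe): void;

  play(recipeFromPosition(0.3, 0.8, 30));      // Tune: 2D pad coordinates
  play(recipeFromText('racing thoughts', 15)); // Tell: free text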

What's planned

  • Pacing curve drives layer gain over time. Currently the curve is rendered visually but doesn't modulate audio. Wiring it is a small orchestrator update.
  • Wearable-driven adjustments. Oura HRV → binaural Hz; Apple Health sleep score → length recommendation. Connector code lands with V2.
  • Session history → recipe bias. Already structurally supported via deriveInsights(); needs generateRecipe() to actually read those insights when building the next recipe.
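
For the first planned item, the wiring could be as small as scheduling each layer's gain from the curve with standard Web Audio automation. A minimal sketch, assuming one GainNode per layer and t expressed in minutes:

  // Schedule a gain parameter to follow the pacing curve.
  function scheduleGainFromCurve(gain: GainNode, curve: PacingPoint[], startTime: number): void {
    gain.gain.setValueAtTime(curve[0].v, startTime);
    for (const p of curve.slice(1)) {
      gain.gain.linearRampToValueAtTime(p.v, startTime + p.t * 60);
    }
  }

  // Usage, once per layer when a session starts:
  // scheduleGainFromCurve(layerGain, recipe.pacing, audioContext.currentTime);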