A haiku has a fixed shape: three lines of five, seven, and five syllables. That rigidity is exactly where language models slip. Ask Gemini for a haiku in plain prose and about half the time you get something vaguely haiku-flavored, with four lines or a stray rhyme.
The demo’s answer is to stop asking. The output is not prose; it is a structured object with a tuple of exactly three strings. Anything else fails at the schema boundary before it reaches the frontend.
The schema
import { z } from "zod";

export const HaikuOutputSchema = z.object({
  // Exactly three lines, enforced at parse time and in the inferred type.
  lines: z.tuple([z.string(), z.string(), z.string()]),
});
A tuple of three is stricter than z.array(z.string()).length(3). The array variant only enforces the length at runtime, so the inferred type stays string[] and the frontend has to handle the general case. The tuple is fixed-arity at the type level, so the React island gets [string, string, string] and can destructure three lines without a length check. One declaration doubles as the validator and the type.
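To make the frontend benefit concrete, here is a minimal sketch of what the tuple type buys a rendering helper. The function name and haiku text are illustrative, not from the demo:

```typescript
// With a fixed-arity tuple type, the three lines can be destructured
// directly; the compiler guarantees the arity, so no length check is needed.
type HaikuLines = [string, string, string];

function renderHaiku(lines: HaikuLines): string {
  const [first, second, third] = lines; // safe: arity is part of the type
  return `${first}\n${second}\n${third}`;
}

const haiku: HaikuLines = ["an old silent pond", "a frog jumps into the pond", "splash, silence again"];
const rendered = renderHaiku(haiku);
```

Had the type been string[], the destructuring would silently produce undefined entries for short arrays; the tuple rules that out before the code runs.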
What the schema does not do
Syllable counts are not in the schema. A three-line response where every line is one word will parse cleanly. That part is prose in the system prompt, and the model will get it wrong sometimes. The obvious next step is a post-parse validator that counts syllables and re-prompts on a miss, but that is one layer above the schema boundary the demo is showing off.
One honest caveat. The flow in this repo currently returns a mocked response while the quota wiring and frontend are shaken out, with the real ai.generate call stubbed behind a TODO. Swapping the mock for a live Gemini call with output: { schema: HaikuOutputSchema } is a single-function change, and the tuple does what it promises the moment that call is in place.
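For reference, the live call would look something like the sketch below, assuming a configured Genkit ai instance and model plugin; the prompt text and topic variable are illustrative, not from the repo:

```typescript
// Sketch only: assumes `ai` is a configured Genkit instance and
// HaikuOutputSchema is the tuple schema from above.
const { output } = await ai.generate({
  prompt: `Write a haiku about ${topic}.`,
  output: { schema: HaikuOutputSchema },
});
// `output` is typed from the schema: { lines: [string, string, string] },
// so a response that fails validation never reaches the frontend.
```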