Creating an LLM-based content translation pipeline

In the previous post we set up internationalisation with Astro. Next up is using a large language model (LLM) as a content translator. The goal is to publish the site’s content in every human language that LLMs support well, and to investigate how feasible that actually is.

Previously I’d used OpenAI’s models directly via their API, but it’s not completely free and I’m intrigued to explore the process of downloading models to run inference locally.

It turns out that this is insanely easy thanks to the Ollama project. It goes pretty much like this:

$ brew install ollama
$ ollama serve
$ ollama pull llama3

The default llama3 model seemed perfectly sufficient to get started.

Ollama provides a CLI & REST API for interacting with downloaded models, but rather than work with that directly, we can use Vercel’s standard-defining ai library along with its Ollama adapter over in our Astro project:

yarn add ai ollama-ai-provider

Then I create a src-content folder and add a new translateContent.ts file inside it with the following content:

import { createOllama } from "ollama-ai-provider";
import { generateText } from "ai";

const ollama = createOllama({});

const contentResponse = await generateText({
  model: ollama("llama3"),
  prompt: "Please translate the following content: …",
});

// generateText resolves with an object whose `text` property holds the model output
console.log(contentResponse.text);

Over the years I’ve forgotten how to run a simple TS file with ES module syntax in it more times than I can count, and after cycling through esbuild-register and ts-node, I finally resettled on:

$ npx tsx ./translateContent

Just like that we’re programmatically sending and receiving LLM output on the local machine, for free. Perfect.

From here it’s just a bunch of Node scripting to read the English posts from src-content/posts/[slug].mdx, translate each one, and write the result out to src/content/posts/[lang]/[slug].mdx for every supported locale.
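
A minimal sketch of that loop, growing the translateContent.ts snippet above into the full script. The SUPPORTED_LOCALES list and the translate helper are placeholder names of mine, the locale codes are illustrative, and the prompt wording is deliberately bare:

import { readdir, readFile, writeFile, mkdir } from "node:fs/promises";
import path from "node:path";
import { createOllama } from "ollama-ai-provider";
import { generateText } from "ai";

const ollama = createOllama({});

// Illustrative locale list; the real one lives in the Astro i18n config.
const SUPPORTED_LOCALES = ["fr", "de", "es"];

// Wrap the earlier generateText call in a reusable helper.
async function translate(content: string, locale: string): Promise<string> {
  const { text } = await generateText({
    model: ollama("llama3"),
    prompt: `Translate the following content into the language with locale code "${locale}":\n\n${content}`,
  });
  return text;
}

const sourceDir = "src-content/posts";
const files = (await readdir(sourceDir)).filter((file) => file.endsWith(".mdx"));

for (const file of files) {
  const english = await readFile(path.join(sourceDir, file), "utf8");

  for (const locale of SUPPORTED_LOCALES) {
    const outDir = path.join("src/content/posts", locale);
    await mkdir(outDir, { recursive: true });
    await writeFile(path.join(outDir, file), await translate(english, locale));
  }
}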

As this script will likely grow into a CLI for managing translations, I got started with commander to create a few options:

import { Command } from "commander";

const program = new Command();

program
  .argument('[fileNameFilter]', 'Filter files to process by a filename')
  .option('-nc, --noclean', 'Don’t clean the content output directory. This is the default option when a filename is provided.')
  .parse(process.argv);

…

const [filenameArg] = program.args;
const { noclean } = program.opts();
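
Those two values then drive the rest of the script. Continuing the sketch above, and reusing its files, path and SUPPORTED_LOCALES names (the clean-up behaviour here is my assumption of how the flag should work):

import { rm } from "node:fs/promises";

// Only translate files matching the optional filename filter.
const filesToProcess = files.filter(
  (file) => !filenameArg || file.includes(filenameArg),
);

// Wipe the generated locale directories before a full run, unless --noclean
// is passed or a filename filter was given (which implies a partial run).
if (!noclean && !filenameArg) {
  for (const locale of SUPPORTED_LOCALES) {
    await rm(path.join("src/content/posts", locale), { recursive: true, force: true });
  }
}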

How well is llama3 performing? One problem is that we only want the output to contain the translation itself, without extra bits like “Certainly, here is the translation”. OpenAI supports functions that can guarantee a JSON-shaped output, but for Ollama-supported models we apparently need to resort to strongly encouraging the model to “respond with ONLY the exact translation WITHOUT commentary or prelude/notes”. This seems slightly unreliable.
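
As a sketch of what that encouragement can look like, the instruction can be moved into generateText’s system option so it sits apart from the content being translated; whether that is any more reliable than inlining it in the prompt is still an open question:

import { createOllama } from "ollama-ai-provider";
import { generateText } from "ai";

const ollama = createOllama({});

// Hypothetical input; in the real script this is the .mdx file content.
const englishContent = "# Hello\n\nSome post content…";

const { text } = await generateText({
  model: ollama("llama3"),
  // The instruction lives in the system message, away from the content,
  // to discourage the model from wrapping the translation in commentary.
  system:
    "You are a translation engine. Translate the user's content into French. " +
    "Respond with ONLY the exact translation WITHOUT commentary or prelude/notes. " +
    "Preserve all Markdown/MDX syntax and frontmatter.",
  prompt: englishContent,
});

console.log(text);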

One promising option was asking the model to output JSON, since that seems to force it to understand there is only one place to put the translation value. Unfortunately the model struggled with escaping quotation marks from the content within the output JSON. For now we can stick with plain text output, but it would be nice to tighten this up later as we learn more about prompt engineering or more appropriate models.

As for the translations themselves, I’m fluent enough in French to verify the output is somewhat sane, but at this stage I haven’t read the two versions side by side to check whether the more nuanced aspects of the content are being translated correctly. We can look at that in a future post.

In the spirit of MVP - that’s it, E2E! Posts authored in English are automatically translated into a fully internationalised Astro site. It’s just that, for now, the site has very few posts and very little functionality 😅.