Text generation API powered by N-gram language models
The Kreatyw API provides a simple REST endpoint for generating text continuations using N-gram language models. The API uses Markov chains trained on source texts to predict and generate coherent text sequences.
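To make the mechanism concrete, here is a minimal, self-contained sketch of order-`n` Markov-chain text generation in Python. It only illustrates the general technique; the function names and details are assumptions, not the service's actual implementation.

```python
import random
from collections import defaultdict

def train_ngram_model(text, n=4):
    """Map each (n-1)-word context to the words seen to follow it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - n + 1):
        context = tuple(words[i:i + n - 1])
        model[context].append(words[i + n - 1])
    return model

def generate(model, prompt, n=4, length=5):
    """Extend the prompt word by word, sampling each next word from its context."""
    words = prompt.split()
    for _ in range(length):
        context = tuple(words[-(n - 1):])
        candidates = model.get(context)
        if not candidates:  # this context never occurred in the training text
            break
        words.append(random.choice(candidates))
    return " ".join(words)
```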
Base URL: `http://localhost:8000`

`POST /api/predict`: Generate a text continuation based on a given prompt.
Request parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | Yes | The starting text to continue from. |
| `n` | integer | No | N-gram size (2-5). Default: 4. Higher values produce more coherent but less creative text. |
| `temperature` | float | No | Sampling temperature (0.1-2.0). Default: 1.6. Higher values increase randomness. |
| `length` | integer | No | Number of words to generate (1-500). Default: 5. |
Response fields:

| Field | Type | Description |
|---|---|---|
| `prediction` | string | The generated text continuation. |
Example request using curl:

```bash
curl -X POST http://localhost:8000/api/predict \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Once upon a time",
    "n": 4,
    "temperature": 1.2,
    "length": 20
  }'
```
Example response:

```json
{
  "prediction": "in a kingdom far away there lived a brave knight who sought adventure..."
}
```
JavaScript example:

```javascript
// Request a 15-word continuation of the prompt.
const response = await fetch('/api/predict', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    prompt: 'The old wizard',
    n: 3,
    temperature: 0.8,
    length: 15
  })
});

const data = await response.json();
console.log(data.prediction);
```
Python example:

```python
import requests

# Request a 25-word continuation of the prompt.
response = requests.post(
    'http://localhost:8000/api/predict',
    json={
        'prompt': 'In the beginning',
        'n': 4,
        'temperature': 1.0,
        'length': 25
    }
)

result = response.json()
print(result['prediction'])
```
The `n` parameter controls the context window size. Higher values use more context words to predict the next word.
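For example, with `n = 4` the model looks at the last three words of the running text when choosing the next one. The snippet below is purely illustrative:

```python
n = 4
text = "once upon a time there was"
# The last n-1 words form the context used to look up the next word.
context = tuple(text.split()[-(n - 1):])
print(context)  # ('time', 'there', 'was')
```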
The `temperature` parameter controls the randomness of predictions: higher values increase randomness, while lower values make the output more predictable.
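A common way to apply temperature to a count-based model (an assumption about the technique in general, not a description of this service's internals) is to raise each candidate's weight to the power `1 / temperature` before sampling, which flattens the distribution as the temperature rises:

```python
import random
from collections import Counter

def sample_next_word(candidates, temperature=1.0):
    """Sample one word from a list of observed continuations."""
    counts = Counter(candidates)
    words = list(counts)
    # Higher temperature -> weights closer together -> more random choices.
    weights = [count ** (1.0 / temperature) for count in counts.values()]
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word(["knight", "knight", "knight", "dragon"], temperature=0.5))
```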
The API returns standard HTTP status codes to indicate success or failure.
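Clients should therefore check the status code before reading the body. A minimal Python example (the specific error codes the API uses are not enumerated here):

```python
import requests

response = requests.post('http://localhost:8000/api/predict',
                         json={'prompt': 'Once upon a time'})

# raise_for_status() turns any 4xx/5xx response into an exception.
response.raise_for_status()
print(response.json()['prediction'])
```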
Currently, there are no rate limits imposed on the API. For production use, consider implementing appropriate rate limiting.
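If you do need to throttle traffic, one simple option is for clients to space out their own requests. The snippet below is only a sketch of that idea (the one-request-per-second budget is an arbitrary assumption); a production deployment would more likely enforce limits server-side, for example in a reverse proxy or middleware.

```python
import time
import requests

MIN_INTERVAL = 1.0  # assumed budget: at most one request per second
_last_call = 0.0

def predict(prompt, **params):
    """Call /api/predict, sleeping if the previous call was too recent."""
    global _last_call
    wait = MIN_INTERVAL - (time.monotonic() - _last_call)
    if wait > 0:
        time.sleep(wait)
    _last_call = time.monotonic()
    response = requests.post('http://localhost:8000/api/predict',
                             json={'prompt': prompt, **params})
    response.raise_for_status()
    return response.json()['prediction']

print(predict('Once upon a time', length=10))
```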