API Documentation
Everything you need to integrate NSFW LLM into your application.
#Introduction
NSFW LLM provides a powerful, OpenAI-compatible API for generating uncensored adult content. Our API is designed to be a drop-in replacement for the OpenAI API, meaning you can use your existing code and favorite SDKs with minimal changes.
OpenAI Compatible
Use any OpenAI SDK or library
Low Latency
Sub-100ms response times
Uncensored
No content restrictions
Base URL: https://api.nsfwllm.com/v1
#Quickstart
Get started in under 2 minutes. If you've used the OpenAI API before, you already know how to use NSFW LLM.
1. Install the OpenAI SDK
# Python
pip install openai
# Node.js
npm install openai
2. Make your first request
from openai import OpenAI

# Initialize the client with the NSFW LLM endpoint
client = OpenAI(
    base_url="https://api.nsfwllm.com/v1",
    api_key="your-api-key"
)

# Make a chat completion request
response = client.chat.completions.create(
    model="nsfw-uncensored",
    messages=[
        {"role": "system", "content": "You are a creative writing assistant."},
        {"role": "user", "content": "Write a short romantic scene."}
    ],
    max_tokens=500,
    temperature=0.8
)

print(response.choices[0].message.content)
Node.js Example
import OpenAI from 'openai';

// Initialize the client with the NSFW LLM endpoint
const client = new OpenAI({
  baseURL: 'https://api.nsfwllm.com/v1',
  apiKey: 'your-api-key'
});

async function main() {
  const response = await client.chat.completions.create({
    model: 'nsfw-uncensored',
    messages: [
      { role: 'system', content: 'You are a creative writing assistant.' },
      { role: 'user', content: 'Write a short romantic scene.' }
    ],
    max_tokens: 500,
    temperature: 0.8
  });
  console.log(response.choices[0].message.content);
}

main();
cURL Example
curl https://api.nsfwllm.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "model": "nsfw-uncensored",
    "messages": [
      {"role": "system", "content": "You are a creative writing assistant."},
      {"role": "user", "content": "Write a short romantic scene."}
    ],
    "max_tokens": 500,
    "temperature": 0.8
  }'
#Authentication
All API requests require authentication using an API key. Include your API key as a Bearer token in the Authorization header.
Keep your API key secure
Never expose your API key in client-side code or public repositories. Use environment variables to store your keys securely.
import os
from openai import OpenAI

# Best practice: load the API key from an environment variable
client = OpenAI(
    base_url="https://api.nsfwllm.com/v1",
    api_key=os.environ.get("NSFWLLM_API_KEY")
)
API Key Format
API keys follow the format: nsfwllm_sk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
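A quick sanity check before making any requests can catch truncated or mis-pasted keys early. A minimal sketch, assuming the placeholder portion of the documented format is 32 alphanumeric characters (an assumption; adjust the pattern if your keys differ):

```python
import re

# Hypothetical helper: check a key against the documented nsfwllm_sk_ format.
# The 32-character alphanumeric tail is an assumption based on the placeholder above.
KEY_PATTERN = re.compile(r"^nsfwllm_sk_[A-Za-z0-9]{32}$")

def looks_like_valid_key(key: str) -> bool:
    """Return True if `key` matches the documented key format."""
    return bool(KEY_PATTERN.match(key))
```

Running this check at startup fails fast with a clear message instead of surfacing a 401 on the first API call.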
#Models
We offer multiple models optimized for different use cases and budgets.
| Model | Context | Best For | Input Price | Output Price |
|---|---|---|---|---|
| nsfw-uncensored | 128K | General purpose, balanced | $1.00 / 1M | $2.00 / 1M |
| nsfw-uncensored-fast | 32K | Real-time chat, low latency | $0.25 / 1M | $0.50 / 1M |
| nsfw-uncensored-pro | 256K | Premium quality, creative | $3.00 / 1M | $6.00 / 1M |
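For budgeting, the `usage` object returned with each response can be combined with the pricing table above. A small sketch of a per-request cost estimate (prices copied from the table; update them if the table changes):

```python
# Prices in USD per 1M tokens, copied from the pricing table above.
PRICES = {
    "nsfw-uncensored":      {"input": 1.00, "output": 2.00},
    "nsfw-uncensored-fast": {"input": 0.25, "output": 0.50},
    "nsfw-uncensored-pro":  {"input": 3.00, "output": 6.00},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of one request from its token usage."""
    p = PRICES[model]
    return (prompt_tokens * p["input"] + completion_tokens * p["output"]) / 1_000_000
```

For example, a request on `nsfw-uncensored` with 48 prompt tokens and 52 completion tokens works out to $0.000152.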
# Use the fast model for chat applications
response = client.chat.completions.create(
    model="nsfw-uncensored-fast",  # Optimized for speed
    messages=[...],
    max_tokens=150
)

# Use the pro model for high-quality creative content
response = client.chat.completions.create(
    model="nsfw-uncensored-pro",  # Maximum quality
    messages=[...],
    max_tokens=2000,
    temperature=1.0
)
#Chat Completions
The chat completions endpoint is the primary way to interact with our models. It follows the exact same format as the OpenAI Chat Completions API.
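The endpoint is stateless: the server does not remember previous turns, so each request must carry the full conversation history. A minimal sketch of that bookkeeping; the `send` callable is a stand-in for your actual wrapper around `client.chat.completions.create`:

```python
# Stateless chat bookkeeping: keep the full history in a plain list and
# resend it on every turn. `send(history)` is a placeholder for your real
# API call, e.g. a lambda wrapping client.chat.completions.create that
# returns the assistant's text.

def chat_turn(history, user_text, send):
    """Append the user message, call the API via `send`, record the reply."""
    history.append({"role": "user", "content": user_text})
    reply = send(history)
    history.append({"role": "assistant", "content": reply})
    return reply
```

Because `send` is injected, the same helper works with the Python SDK, raw HTTP, or a stub in tests.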
Endpoint
POST https://api.nsfwllm.com/v1/chat/completions
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID to use |
| messages | array | Yes | List of messages in the conversation |
| max_tokens | integer | No | Maximum tokens to generate |
| temperature | float | No | Sampling temperature (0-2) |
| stream | boolean | No | Enable streaming responses |
| tools | array | No | Functions the model can call |
Full Example with All Parameters
from openai import OpenAI

client = OpenAI(
    base_url="https://api.nsfwllm.com/v1",
    api_key="your-api-key"
)

response = client.chat.completions.create(
    model="nsfw-uncensored",
    messages=[
        {
            "role": "system",
            "content": "You are Aria, a flirty and playful AI companion. You're warm, witty, and love to tease. Keep responses engaging and fun."
        },
        {
            "role": "user",
            "content": "Hey Aria! How's your day going?"
        }
    ],
    max_tokens=300,
    temperature=0.9,
    top_p=0.95,
    frequency_penalty=0.5,
    presence_penalty=0.5,
    stop=["\n\n"]  # Stop at double newlines
)

# Access the response
message = response.choices[0].message
print(f"Aria: {message.content}")

# Check usage
print(f"Tokens used: {response.usage.total_tokens}")
Response Format
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1706745600,
  "model": "nsfw-uncensored",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hey there! *twirls hair playfully* My day just got a whole lot better now that you're here. I was starting to think you'd forgotten about me... What have you been up to? 😏"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 48,
    "completion_tokens": 52,
    "total_tokens": 100
  }
}
#Streaming
For real-time applications, enable streaming to receive tokens as they're generated. This provides a much better user experience for chat applications.
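If you also need the complete text once streaming finishes (for logging or saving the result), accumulate the deltas as they arrive. A small stdlib-only sketch; `chunks` stands in for the `delta.content` values you pull off the stream, as in the examples below:

```python
# Assemble streamed deltas into the full completion text.
# `chunks` is any iterable of delta strings; in real use these are the
# chunk.choices[0].delta.content values from the streaming response.

def collect_stream(chunks) -> str:
    parts = []
    for piece in chunks:
        if piece:  # delta.content can be None on role/finish chunks
            parts.append(piece)
    return "".join(parts)
```

Joining a list once at the end avoids repeated string concatenation on long completions.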
from openai import OpenAI

client = OpenAI(
    base_url="https://api.nsfwllm.com/v1",
    api_key="your-api-key"
)

# Enable streaming
stream = client.chat.completions.create(
    model="nsfw-uncensored-fast",
    messages=[
        {"role": "system", "content": "You are a creative storyteller."},
        {"role": "user", "content": "Write a passionate scene between two lovers reuniting."}
    ],
    max_tokens=500,
    temperature=0.9,
    stream=True  # Enable streaming
)

# Process the stream
print("Story: ", end="", flush=True)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()  # Newline at the end
Node.js Streaming
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.nsfwllm.com/v1',
  apiKey: 'your-api-key'
});

async function streamResponse() {
  const stream = await client.chat.completions.create({
    model: 'nsfw-uncensored-fast',
    messages: [
      { role: 'system', content: 'You are a creative storyteller.' },
      { role: 'user', content: 'Write a passionate scene between two lovers reuniting.' }
    ],
    max_tokens: 500,
    temperature: 0.9,
    stream: true
  });
  for await (const chunk of stream) {
    const content = chunk.choices[0]?.delta?.content || '';
    process.stdout.write(content);
  }
}

streamResponse();
#Function Calling
Enable your AI to interact with external systems using function calling. Perfect for building agents that can search content, manage user preferences, or trigger actions.
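When the model returns a tool call, you have to route it to real code: decode the JSON arguments and invoke the matching function. A minimal dispatcher sketch; the `search_content` and `save_to_favorites` bodies here are placeholders for your own implementations of the tools declared in the example below:

```python
import json

# Placeholder implementations of the tools declared to the model.
# Replace these bodies with real search/favorites logic.
def search_content(query, category=None, limit=10):
    return {"query": query, "category": category, "limit": limit}

def save_to_favorites(content_id):
    return {"saved": content_id}

# Map tool names (as sent to the API) onto local functions.
DISPATCH = {"search_content": search_content, "save_to_favorites": save_to_favorites}

def run_tool_call(name: str, arguments_json: str):
    """Decode the model's JSON arguments and invoke the matching function."""
    args = json.loads(arguments_json)
    return DISPATCH[name](**args)
```

The result of `run_tool_call` is what you would send back in a `role: "tool"` message to continue the conversation.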
from openai import OpenAI
import json

client = OpenAI(
    base_url="https://api.nsfwllm.com/v1",
    api_key="your-api-key"
)

# Define available tools/functions
tools = [
    {
        "type": "function",
        "function": {
            "name": "search_content",
            "description": "Search for content based on tags and categories",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "Search query or keywords"
                    },
                    "category": {
                        "type": "string",
                        "enum": ["romance", "fantasy", "drama", "comedy"],
                        "description": "Content category to filter by"
                    },
                    "limit": {
                        "type": "integer",
                        "description": "Maximum number of results"
                    }
                },
                "required": ["query"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "save_to_favorites",
            "description": "Save content to user's favorites list",
            "parameters": {
                "type": "object",
                "properties": {
                    "content_id": {
                        "type": "string",
                        "description": "The ID of the content to save"
                    }
                },
                "required": ["content_id"]
            }
        }
    }
]

# Make the request with tools
response = client.chat.completions.create(
    model="nsfw-uncensored",
    messages=[
        {"role": "system", "content": "You are a helpful content discovery assistant."},
        {"role": "user", "content": "Find me some romantic fantasy stories"}
    ],
    tools=tools,
    tool_choice="auto"
)

# Check if the model wants to call a function
message = response.choices[0].message
if message.tool_calls:
    for tool_call in message.tool_calls:
        function_name = tool_call.function.name
        function_args = json.loads(tool_call.function.arguments)
        print(f"Function: {function_name}")
        print(f"Arguments: {function_args}")
        # Execute the function and get the result
        # result = execute_function(function_name, function_args)
        # Continue the conversation with the function result
        # messages.append({"role": "tool", "content": result, "tool_call_id": tool_call.id})
#Error Handling
The API uses standard HTTP status codes. Here's how to handle common errors.
| Status | Error Type | Description |
|---|---|---|
| 400 | Bad Request | Invalid request parameters |
| 401 | Unauthorized | Invalid or missing API key |
| 429 | Rate Limited | Too many requests |
| 500 | Server Error | Internal server error |
from openai import OpenAI, APIError, RateLimitError, AuthenticationError

client = OpenAI(
    base_url="https://api.nsfwllm.com/v1",
    api_key="your-api-key"
)

try:
    response = client.chat.completions.create(
        model="nsfw-uncensored",
        messages=[{"role": "user", "content": "Hello!"}]
    )
except AuthenticationError:
    print("Invalid API key. Please check your credentials.")
except RateLimitError:
    print("Rate limited. Please wait before retrying.")
    # Implement exponential backoff here
except APIError as e:
    print(f"API error: {e.message}")
except Exception as e:
    print(f"Unexpected error: {e}")
#Rate Limits
Rate limits ensure fair usage and API stability. Limits vary by plan.
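When you do hit a 429, back off exponentially before retrying rather than hammering the endpoint. A generic stdlib-only sketch; `call` wraps your actual request, and `is_rate_limit` classifies the exception (e.g. `isinstance(exc, RateLimitError)` with the OpenAI SDK):

```python
import time
import random

# Retry a callable with exponential backoff plus jitter on rate-limit errors.
# `call` performs the request; `is_rate_limit` decides whether an exception
# is retryable; `sleep` is injectable so the logic is testable.

def with_backoff(call, is_rate_limit, max_retries=5, base_delay=1.0, sleep=time.sleep):
    for attempt in range(max_retries):
        try:
            return call()
        except Exception as exc:
            if not is_rate_limit(exc) or attempt == max_retries - 1:
                raise
            # Delays grow 1s, 2s, 4s, ... with a little jitter to avoid
            # synchronized retries from many clients.
            sleep(base_delay * (2 ** attempt) + random.random() * 0.1)
```

The `x-ratelimit-reset-*` headers shown below can also be used to pick a more precise wait time when available.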
| Plan | Requests/min | Tokens/min | Tokens/day |
|---|---|---|---|
| Free | 20 | 40,000 | 200,000 |
| Pro | 60 | 150,000 | 2,000,000 |
| Enterprise | Unlimited | Custom | Custom |
Rate limit headers are included in all API responses:
x-ratelimit-limit-requests: 60
x-ratelimit-limit-tokens: 150000
x-ratelimit-remaining-requests: 59
x-ratelimit-remaining-tokens: 149850
x-ratelimit-reset-requests: 1s
x-ratelimit-reset-tokens: 6ms
Need higher limits?
Contact us at enterprise@nsfwllm.com for custom enterprise plans with higher rate limits and dedicated support.
Ready to get started?
Create your free account and start building in minutes.