Overview
Django is Python's "batteries-included" web framework. With just a few lines of configuration, you can add ModelRiver's AI capabilities to your Django views, REST APIs, and background tasks.
Quick start
Install dependencies
Bash
pip install django openai djangorestframework python-dotenv

Configuration
Python
# settings.py
import os

MODELRIVER_API_KEY = os.environ.get("MODELRIVER_API_KEY", "")
MODELRIVER_BASE_URL = "https://api.modelriver.com/v1"

AI client service
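The install step above includes python-dotenv, but the settings fragment reads straight from `os.environ`. A minimal sketch of wiring the two together, assuming a `.env` file sits next to `manage.py` (the file location and `BASE_DIR` layout are assumptions, not ModelRiver requirements):

```python
# settings.py (top of file) — sketch only, assuming a .env file at the project root
from pathlib import Path
import os

from dotenv import load_dotenv

BASE_DIR = Path(__file__).resolve().parent.parent
load_dotenv(BASE_DIR / ".env")  # populates os.environ before settings are read

MODELRIVER_API_KEY = os.environ.get("MODELRIVER_API_KEY", "")
MODELRIVER_BASE_URL = "https://api.modelriver.com/v1"
```

With this in place the key never needs to be exported in the shell during development; in production you would typically skip the `.env` file and set the variable in the environment directly.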
Python
# ai/client.py
from openai import OpenAI
from django.conf import settings

def get_client():
    return OpenAI(
        base_url=settings.MODELRIVER_BASE_URL,
        api_key=settings.MODELRIVER_API_KEY,
    )

def chat(workflow: str, messages: list, **kwargs) -> str:
    client = get_client()
    response = client.chat.completions.create(
        model=workflow,
        messages=messages,
        **kwargs,
    )
    return response.choices[0].message.content

Django views
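Network calls to an AI gateway can fail transiently, so it is worth wrapping `chat` in a retry. A minimal sketch of a generic backoff decorator (`with_retries` is our own helper, not part of the OpenAI SDK; the attempt counts and delays are illustrative):

```python
# ai/retry.py — hypothetical helper, not part of the OpenAI SDK
import functools
import time

def with_retries(attempts=3, base_delay=0.5, retry_on=(Exception,)):
    """Retry a function with exponential backoff on the given exception types."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return func(*args, **kwargs)
                except retry_on:
                    if attempt == attempts - 1:
                        raise  # out of attempts: surface the error
                    time.sleep(base_delay * 2 ** attempt)  # 0.5s, 1s, 2s, ...
        return wrapper
    return decorator
```

In the service module you might then expose `safe_chat = with_retries(retry_on=(openai.APIError,))(chat)` and call that from views instead of `chat` directly.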
Python
# views.py
import json

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_POST

from .client import chat

@csrf_exempt
@require_POST
def chat_view(request):
    body = json.loads(request.body)
    message = body.get("message", "")

    response = chat(
        workflow="my-chat-workflow",
        messages=[{"role": "user", "content": message}],
    )

    return JsonResponse({"content": response})

Django REST Framework
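The view above assumes the request body is well-formed JSON; a malformed body would raise an unhandled exception. A sketch of a small validation helper the view could call first (`parse_chat_body` and the 4000-character limit are our own choices, mirroring the serializer limit later in this guide):

```python
# ai/validation.py — hypothetical helper for the view above
import json

MAX_MESSAGE_LENGTH = 4000  # mirrors the serializer limit used below

def parse_chat_body(raw: bytes):
    """Parse a chat request body. Returns (message, error); exactly one is None."""
    try:
        body = json.loads(raw)
    except (json.JSONDecodeError, UnicodeDecodeError):
        return None, "request body must be valid JSON"
    message = body.get("message", "") if isinstance(body, dict) else ""
    if not isinstance(message, str) or not message.strip():
        return None, "'message' must be a non-empty string"
    if len(message) > MAX_MESSAGE_LENGTH:
        return None, f"'message' exceeds {MAX_MESSAGE_LENGTH} characters"
    return message, None
```

In `chat_view` you would call `message, error = parse_chat_body(request.body)` and return `JsonResponse({"error": error}, status=400)` when `error` is set.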
Python
# serializers.py
from rest_framework import serializers

class ChatSerializer(serializers.Serializer):
    message = serializers.CharField(max_length=4000)
    workflow = serializers.CharField(default="my-chat-workflow")

class ChatResponseSerializer(serializers.Serializer):
    content = serializers.CharField()
    tokens = serializers.IntegerField()

# views.py
from rest_framework.views import APIView
from rest_framework.response import Response
from openai import OpenAI
from django.conf import settings

from .serializers import ChatSerializer

class ChatAPIView(APIView):
    def post(self, request):
        serializer = ChatSerializer(data=request.data)
        serializer.is_valid(raise_exception=True)

        client = OpenAI(
            base_url=settings.MODELRIVER_BASE_URL,
            api_key=settings.MODELRIVER_API_KEY,
        )

        response = client.chat.completions.create(
            model=serializer.validated_data["workflow"],
            messages=[{"role": "user", "content": serializer.validated_data["message"]}],
        )

        return Response({
            "content": response.choices[0].message.content,
            "tokens": response.usage.total_tokens,
        })

Celery background tasks
For heavy AI workloads, use Celery so that long-running processing does not block your regular web worker processes:
Python
# tasks.py
from celery import shared_task
from openai import OpenAI
from django.conf import settings

@shared_task
def summarise_document(doc_id: int):
    from myapp.models import Document

    doc = Document.objects.get(id=doc_id)

    client = OpenAI(
        base_url=settings.MODELRIVER_BASE_URL,
        api_key=settings.MODELRIVER_API_KEY,
    )

    response = client.chat.completions.create(
        model="my-summary-workflow",
        messages=[
            {"role": "system", "content": "Summarise the following document concisely."},
            {"role": "user", "content": doc.content},
        ],
    )

    doc.summary = response.choices[0].message.content
    doc.save()
    return doc.summary

# Calling the task from a view
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_POST

@csrf_exempt
@require_POST
def summarise_view(request, doc_id):
    summarise_document.delay(doc_id)
    return JsonResponse({"status": "processing"})

Streaming with Django Channels
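`doc.content` is sent to the workflow in one piece, so a very long document may exceed the workflow's context window. One common approach is to split the text before summarising; a sketch under our own assumptions (the helper name and the character budget are illustrative, and characters are only a rough proxy for tokens):

```python
# ai/chunking.py — hypothetical helper; the limit is illustrative
def chunk_text(text: str, max_chars: int = 8000):
    """Split text into chunks of at most max_chars, preferring
    paragraph boundaries so each chunk stays coherent."""
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        candidate = f"{current}\n\n{para}" if current else para
        if len(candidate) <= max_chars:
            current = candidate
        else:
            if current:
                chunks.append(current)
            # a single oversized paragraph is split hard
            while len(para) > max_chars:
                chunks.append(para[:max_chars])
                para = para[max_chars:]
            current = para
    if current:
        chunks.append(current)
    return chunks
```

In the task you could then summarise each chunk separately and run a final pass that merges the partial summaries, rather than sending the whole document at once.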
Python
# consumers.py
import json

from channels.generic.websocket import AsyncWebsocketConsumer
from openai import AsyncOpenAI
from django.conf import settings

class ChatConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        await self.accept()
        self.messages = []
        self.client = AsyncOpenAI(
            base_url=settings.MODELRIVER_BASE_URL,
            api_key=settings.MODELRIVER_API_KEY,
        )

    async def receive(self, text_data=None):
        data = json.loads(text_data)
        self.messages.append({"role": "user", "content": data["message"]})

        stream = await self.client.chat.completions.create(
            model="my-chat-workflow",
            messages=self.messages,
            stream=True,
        )

        full_response = ""
        async for chunk in stream:
            content = chunk.choices[0].delta.content
            if content:
                full_response += content
                await self.send(json.dumps({"type": "chunk", "content": content}))

        self.messages.append({"role": "assistant", "content": full_response})
        await self.send(json.dumps({"type": "done"}))

Best practices
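Note that `self.messages` grows without bound for the lifetime of the WebSocket connection, so long conversations eventually exceed the workflow's context window. A sketch of a trimming helper the consumer could apply before each request (`trim_history` and its character budget are our own assumptions; characters only approximate tokens):

```python
# ai/history.py — hypothetical helper; the budget is a rough proxy for tokens
def trim_history(messages, max_chars=12000):
    """Drop the oldest turns until total content fits the budget,
    always keeping a leading system message if present."""
    system = messages[:1] if messages and messages[0]["role"] == "system" else []
    rest = messages[len(system):]
    while rest and sum(len(m["content"]) for m in system + rest) > max_chars:
        rest.pop(0)  # drop the oldest non-system turn
    return system + rest
```

In `receive` you would pass `messages=trim_history(self.messages)` to the completion call instead of the raw list.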
- Use Celery for batch work: summarisation, embedding generation, and deep analysis should all run asynchronously in the background.
- Create a reusable AI service module: centralise ModelRiver configuration and calls in a single file or module.
- Add error handling: catch errors raised as openai.APIError and return a clear error response to the client.
- Use Django Channels for streaming: standard Django views cannot efficiently serve Server-Sent Events (SSE) or other streaming responses.
- Monitor request logs: use the Observability dashboard to see the exact cost incurred by each view.
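The error-handling bullet above can be sketched as a small wrapper so views never leak raw exceptions. The helper name, the message text, and the `error_types` parameter are our own illustration; in a real project you would pass `error_types=(openai.APIError,)`:

```python
# ai/errors.py — sketch of the error-handling recommendation above;
# in a real project pass error_types=(openai.APIError,)
def run_ai_call(func, *args, error_types=(Exception,), **kwargs):
    """Run an AI call; return (payload, http_status) instead of raising."""
    try:
        return {"content": func(*args, **kwargs)}, 200
    except error_types:
        return {"error": "AI service temporarily unavailable"}, 503
```

In `chat_view` this becomes `payload, status = run_ai_call(chat, workflow=..., messages=...)` followed by `return JsonResponse(payload, status=status)`.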
Next steps
- FastAPI integration: an async-first alternative Python framework
- API reference: endpoint configuration and parameter documentation
- Webhooks: asynchronous job flows with webhooks