A lightweight, dynamic C++ library for interacting with cloud and local LLMs on ESP32/ESP8266.
NocLLM natively supports streaming responses from Large Language Models (LLMs) via OpenAI-compatible endpoints. It switches dynamically between HTTPS for cloud providers (OpenAI, Groq) and HTTP for local APIs (LM Studio, Ollama), and it uses a zero-dependency, ultra-lightweight custom JSON extraction approach for fast performance and low RAM usage.
| Filename | Release Date | File Size |
|---|---|---|
| NocLLM-1.0.0.zip | 2026-04-11 | 13.56 KiB |