Package details
| Package | llama.cpp |
|---|---|
| Version | 0.0.8508-r0 |
| Description | LLM inference in C/C++ (with Vulkan GPU acceleration) |
| Project | https://github.com/ggml-org/llama.cpp |
| License | MIT |
| Branch | edge |
| Repository | testing |
| Architecture | loongarch64 |
| Size | 14.9 MiB |
| Installed size | 33.6 MiB |
| Origin | llama.cpp |
| Maintainer | Hugo Osvaldo Barrera |
| Build time | 2026-03-25 11:54:57 |
| Commit | 604d1210cff69c058b018a8f7e15676715f5c2dc |
| Merge request | N/A |