## Package details
| Package | llama.cpp-openrc |
|---|---|
| Version | 0.0.8368-r0 |
| Description | LLM inference in C/C++ with Vulkan GPU acceleration (OpenRC init scripts) |
| Project | https://github.com/ggml-org/llama.cpp |
| License | MIT |
| Branch | edge |
| Repository | testing |
| Architecture | loongarch64 |
| Size | 2.0 KiB |
| Installed size | 606 B |
| Origin | llama.cpp |
| Maintainer | Hugo Osvaldo Barrera |
| Build time | 2026-03-20 15:02:47 |
| Commit | beb54cadeb5924e7e27c6d552b592ed45a54e9c5 |
| Merge request | N/A |
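Since the table places this package in the `testing` repository on the `edge` branch, installing it requires that repository to be enabled. The sketch below is a hedged example using standard `apk` and OpenRC commands; the init-script service name is an assumption (it is not stated in the table), so it is looked up before being enabled.

```shell
# Enable Alpine's edge/testing repository (assumption: CDN mirror URL;
# the package metadata above only names the branch and repository).
echo "https://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
apk update

# Install the OpenRC init scripts; the origin package is llama.cpp.
apk add llama.cpp-openrc

# The actual service name is not given in the table, so list the
# installed init scripts to find it before enabling:
ls /etc/init.d/ | grep -i llama

# "llama-server" here is a hypothetical service name for illustration.
rc-update add llama-server default
rc-service llama-server start
```

On a non-edge install, mixing in edge/testing packages can cause dependency conflicts, so pinning or a dedicated edge system is the safer route.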