Update README.md
trholding authored Jul 27, 2023
1 parent 4880e1b commit 7b31221
Showing 1 changed file with 34 additions and 0 deletions.
## Llama 2 everywhere

Standalone and 64-bit binary portable Llama 2 inference in one file of C

A friendly fork of the excellent https://github.com/karpathy/llama2.c

Our goal is to mirror the progress of https://github.com/karpathy/llama2.c and add improvements such as OpenCL / Vulkan GPU inference and a web server, which would not fit upstream due to the minimalism and simplicity constraints there.

## Features

+ Executable that runs on
  + GNU/Systemd
  + BSD
    + FreeBSD
    + OpenBSD
    + NetBSD
  + XNU's Not UNIX (macOS)
  + Bare Metal (not fully functional yet, but almost)
  + Windows

+ Runs on ARM64 (aarch64) and x86_64

+ Standalone
  + Embedded model

These features depend on a specific cosmocc toolchain: https://github.com/jart/cosmopolitan

Building with gcc or clang instead produces normal platform-specific binaries, similar to upstream.
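As a rough sketch, the two build modes look like this (the exact flags and any Makefile targets may differ; this assumes the upstream-style `run.c` source file and a `cosmocc` installation on your PATH):

```shell
# Normal build with gcc: a platform-specific binary, as in upstream llama2.c
gcc -O3 -o run run.c -lm

# Portable build with the Cosmopolitan cosmocc toolchain: a single
# Actually Portable Executable that runs across the systems listed above
cosmocc -O3 -o run.com run.c -lm
```

The portable binary trades a slightly larger file for the ability to copy one executable between operating systems unchanged.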

Read more:
[How to build](https://github.com/trholding/llama2.c/edit/master/README.md#binary-portability-even-more-magic)

Download the prebuilt `run.com` binary from the releases page.
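Once downloaded (or built), the portable binary is invoked like the upstream `run` program. A usage sketch, with `stories15M.bin` standing in for whichever model checkpoint you have:

```shell
chmod +x run.com        # make the downloaded binary executable
./run.com stories15M.bin
```

If your shell refuses to execute it directly, running it once as `sh ./run.com` typically lets the Actually Portable Executable bootstrap itself.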

## llama2.c

<img src="assets/llama_cute.jpg" width="300" height="300">
