M3 Max is a machine learning BEAST, so I took it for a spin with some LLMs running locally.
I also show how to create GGUF quantizations with llama.cpp.
Temperature/fan on your Mac: [ Link ] (affiliate link)
Run Windows on a Mac: [ Link ] (affiliate)
Use COUPON: ZISKIND10
🛒 Gear Links 🛒
* 🍏💥 New MacBook Air M1 Deal: [ Link ]
* 💻🔄 Renewed MacBook Air M1 Deal: [ Link ]
* 🎧⚡ Great 40Gbps TB4 enclosure: [ Link ]
* 🛠️🚀 My NVMe SSD: [ Link ]
* 📦🎮 My gear: [ Link ]
🎥 Related Videos 🎥
* 🌗 RAM torture test on Mac - [ Link ]
* 🛠️ Set up Conda on Mac - [ Link ]
* 👨💻 15" MacBook Air | developer's dream - [ Link ]
* 🤖 INSANE Machine Learning on Neural Engine - [ Link ]
* 💻 M2 MacBook Air and temps - [ Link ]
* 💰 This is what spending more on a MacBook Pro gets you - [ Link ]
* 🛠️ Developer productivity Playlist - [ Link ]
🔗 AI for Coding Playlist: 📚 - [ Link ]
Timestamps
00:00 Intro
00:40 Build from scratch - manual
09:44 Bonus script - automated
11:21 LM Studio - one handed
Repo
[ Link ]
Commands
# Assuming you already have a conda environment set up and dev tools installed (see videos above for instructions)
*Part 1 - manual*
brew install git-lfs
git lfs install
git clone [ Link ]
cd llama.cpp
pip install -r requirements.txt
make
git clone [ Link ] openhermes-7b-v2.5
mv openhermes-7b-v2.5 models/
python3 convert.py ./models/openhermes-7b-v2.5 --outfile ./models/openhermes-7b-v2.5/ggml-model-f16.gguf --outtype f16
./quantize ./models/openhermes-7b-v2.5/ggml-model-f16.gguf ./models/openhermes-7b-v2.5/ggml-model-q8_0.gguf q8_0
./quantize ./models/openhermes-7b-v2.5/ggml-model-f16.gguf ./models/openhermes-7b-v2.5/ggml-model-q4_k.gguf q4_k
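The two quantize calls above trade file size for quality. As a rough sanity check on what to expect, here is a sketch that estimates file sizes from approximate bits-per-weight figures (the bpw numbers are ballpark assumptions, not exact GGUF accounting, which also includes metadata and mixed-precision tensors):

```python
# Rough GGUF file-size estimate from parameter count and bits-per-weight.
# Approximate bpw (assumed): f16 = 16.0, q8_0 ≈ 8.5, q4_k ≈ 4.5.
BPW = {"f16": 16.0, "q8_0": 8.5, "q4_k": 4.5}

def est_size_gb(n_params: float, quant: str) -> float:
    """Estimated model file size in GB for a given quantization."""
    return n_params * BPW[quant] / 8 / 1e9

for q in BPW:
    print(f"7B @ {q}: ~{est_size_gb(7e9, q):.1f} GB")
# f16 ≈ 14.0 GB, q8_0 ≈ 7.4 GB, q4_k ≈ 3.9 GB
```

This is why the q4_k file fits comfortably in memory alongside a large context, while f16 is mostly useful as the conversion intermediate.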
./batched-bench ./models/openhermes-7b-v2.5/ggml-model-f16.gguf 4096 0 99 0 2048 128,512 1,2,3,4
./server -m models/openhermes-7b-v2.5/ggml-model-q4_k.gguf --port 8888 --host 0.0.0.0 --ctx-size 10240 --parallel 4 -ngl 99 -n 512
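Once ./server is running, it exposes an HTTP completion endpoint you can hit from any language. A minimal Python client sketch, assuming the host/port from the command above (`build_payload` is a helper name I made up; `n_predict` caps the number of generated tokens):

```python
import json
from urllib import request

SERVER = "http://localhost:8888/completion"  # matches --port 8888 above

def build_payload(prompt: str, n_predict: int = 128) -> bytes:
    # llama.cpp's server takes a JSON body with "prompt" and "n_predict"
    return json.dumps({"prompt": prompt, "n_predict": n_predict}).encode()

def complete(prompt: str) -> str:
    # POST the prompt and pull the generated text out of the JSON reply
    req = request.Request(SERVER, data=build_payload(prompt),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]

# No server running yet? You can still inspect the request body:
print(build_payload("Hello, llama!").decode())
```

With the server started as above (`--parallel 4`), up to four of these requests can be in flight at once.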
*Part 2 - auto*
bash -c "$(curl -s [ Link ])"
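Piping curl straight into bash executes code you have never looked at. A safer pattern is to download first, read, then run; this is a sketch where the URL is a stand-in, not the real script address from above:

```shell
#!/bin/sh
# Download-inspect-run instead of `bash -c "$(curl -s URL)"`.
URL="https://example.invalid/setup.sh"   # placeholder; substitute the script URL above
OUT="setup.sh"

if curl -fsSL "$URL" -o "$OUT"; then
  cat "$OUT"       # read it before executing anything
  bash "$OUT"
else
  echo "download failed; nothing was executed"
fi
```

`-f` makes curl fail on HTTP errors instead of saving an error page, so a broken link never reaches the `bash` step.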
💻 MacBooks in this video
M3 Max 16" MacBook Pro 64GB/2TB
— — — — — — — — —
❤️ SUBSCRIBE TO MY YOUTUBE CHANNEL 📺
Click here to subscribe: [ Link ]
— — — — — — — — —
Join this channel to get access to perks:
[ Link ]
#m3max #macbook #macbookpro
— — — — — — — — —
📱 ALEX ON X: [ Link ]
Zero to Hero LLMs with M3 Max BEAST
Tags
apple, apple silicon, apple event, m3, m3 pro, m3 max, macbook pro, software developer, 14-inch MacBook Pro, 16-inch MacBook Pro, Mac laptop, Apple silicon, M3 chip, M3 Pro, M3 Max, programmer, software development, programming, developer, developer tests, m3 chip, machine learning, llm, m3max, m3 llm, m3 ml, m3 max ml, ml on m3, machine learning m3, m3 machine learning, m3 ai