How to Run LLaMA Locally on CPU or GPU | Python, LangChain & CTransformers Guide
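
Since the title names the full stack this guide covers, here is a minimal sketch of the core pattern: loading a quantized LLaMA model through LangChain's `CTransformers` wrapper. It assumes the `ctransformers` and `langchain-community` packages are installed, and the model repo name is only an illustrative choice; swap in whichever GGML/GGUF build you actually use.

```python
# Minimal sketch: run a quantized LLaMA model locally via LangChain's
# CTransformers wrapper. Assumes: pip install ctransformers langchain-community
from langchain_community.llms import CTransformers

llm = CTransformers(
    model="TheBloke/Llama-2-7B-Chat-GGML",  # illustrative Hub repo; use your own model
    model_type="llama",                     # tells ctransformers which architecture to load
    config={
        "max_new_tokens": 256,  # cap on generated tokens
        "temperature": 0.7,     # sampling temperature
        # "gpu_layers": 50,     # uncomment to offload layers to the GPU (CUDA build)
    },
)

print(llm.invoke("Explain what a quantized LLaMA model is in one sentence."))
```

By default inference runs entirely on the CPU; setting `gpu_layers` in the config offloads that many transformer layers to the GPU when a CUDA-enabled build of `ctransformers` is installed.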