13. Llama 2#

NOTE: This document is based on https://www.youtube.com/watch?v=Xc5xNRM_hvk and is a sample of how to run Llama 2 on Colab.

Llama 2 is a collection of pretrained and fine-tuned generative text models, ranging from 7 billion to 70 billion parameters, designed for dialogue use cases.

It outperforms open-source chat models on most benchmarks and is on par with popular closed-source models in human evaluations of helpfulness and safety.

Llama 2 13B-chat: https://huggingface.co/meta-llama/Llama-2-13b-chat

The goal of llama.cpp is to run the LLaMA model with 4-bit integer quantization. It is a plain C/C++ implementation optimized for Apple silicon and x86 architectures, with support for several integer quantization methods and BLAS libraries. Originally a web chat example, it now serves as a development playground for features of the ggml library.

GGML, a C library for machine learning, makes it practical to distribute large language models (LLMs): it uses quantization so that LLMs can run efficiently on consumer hardware. GGML files contain binary-encoded data, including the version number, hyperparameters, vocabulary, and weights. The vocabulary comprises the tokens used for language generation, while the weights determine the size of the LLM. Quantization reduces the numeric precision of the weights to cut memory and compute requirements.
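
To build intuition for what quantization does, here is a minimal, hypothetical sketch of symmetric 4-bit quantization of a block of weights in Python. It illustrates the idea only; it is not GGML's actual file format or algorithm.

import numpy as np

def quantize_q4(weights):
    # Map float32 weights to 4-bit integers (-8..7) plus one float scale,
    # mimicking the idea behind GGML's Q4 block formats
    scale = np.abs(weights).max() / 7.0
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_q4(q, scale):
    # Recover approximate float weights from the 4-bit codes
    return q.astype(np.float32) * scale

w = np.random.randn(32).astype(np.float32)
q, s = quantize_q4(w)
print("max error:", np.abs(w - dequantize_q4(q, s)).max())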

13.1. Quantized models from the Hugging Face community#

The Hugging Face community provides quantized models, which let us run the model efficiently and effectively on a T4 GPU. It is important to check reliable sources before using any model.

Several variants are available, but the ones we are interested in are based on the GGML library.

We can see the different variants of Llama-2-13B-GGML at the following link: https://huggingface.co/models?search=llama 2 ggml

In this case we will use the model called Llama-2-13B-chat-GGML, which can be found at the following link: https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML
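
To inspect programmatically which quantized files this repository offers, a small sketch using huggingface_hub (the same library we install in Step 1) could look like this:

from huggingface_hub import HfApi

# List the repository's files and keep only the GGML weight binaries
api = HfApi()
for f in api.list_repo_files("TheBloke/Llama-2-13B-chat-GGML"):
    if f.endswith(".bin"):
        print(f)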

Step 1: Install all the required libraries

# GPU build of llama-cpp-python (compiled with cuBLAS support)
!CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.78 numpy==1.23.4 --force-reinstall --upgrade --no-cache-dir --verbose
!pip install huggingface_hub
!pip install llama-cpp-python==0.1.78
!pip install numpy==1.23.4
Using pip 23.1.2 from /usr/local/lib/python3.10/dist-packages/pip (python 3.10)
Collecting llama-cpp-python==0.1.78
  Downloading llama_cpp_python-0.1.78.tar.gz (1.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 11.9 MB/s eta 0:00:00
  Running command pip subprocess to install build dependencies
  Using pip 23.1.2 from /usr/local/lib/python3.10/dist-packages/pip (python 3.10)
  Collecting setuptools>=42
    Downloading setuptools-68.2.0-py3-none-any.whl (807 kB)
       ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 807.8/807.8 kB 5.9 MB/s eta 0:00:00
  Collecting scikit-build>=0.13
    Downloading scikit_build-0.17.6-py3-none-any.whl (84 kB)
       ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 84.3/84.3 kB 6.7 MB/s eta 0:00:00
  Collecting cmake>=3.18
    Downloading cmake-3.27.4.1-py2.py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (26.1 MB)
       ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 26.1/26.1 MB 47.9 MB/s eta 0:00:00
  Collecting ninja
    Downloading ninja-1.11.1-py2.py3-none-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (145 kB)
       ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 146.0/146.0 kB 13.7 MB/s eta 0:00:00
  Collecting distro (from scikit-build>=0.13)
    Downloading distro-1.8.0-py3-none-any.whl (20 kB)
  Collecting packaging (from scikit-build>=0.13)
    Downloading packaging-23.1-py3-none-any.whl (48 kB)
       ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 48.9/48.9 kB 5.3 MB/s eta 0:00:00
  Collecting tomli (from scikit-build>=0.13)
    Downloading tomli-2.0.1-py3-none-any.whl (12 kB)
  Collecting wheel>=0.32.0 (from scikit-build>=0.13)
    Downloading wheel-0.41.2-py3-none-any.whl (64 kB)
       ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 64.8/64.8 kB 7.7 MB/s eta 0:00:00
  Installing collected packages: ninja, cmake, wheel, tomli, setuptools, packaging, distro, scikit-build
    Creating /tmp/pip-build-env-ifaczk0u/overlay/local/bin
    changing mode of /tmp/pip-build-env-ifaczk0u/overlay/local/bin/ninja to 755
    changing mode of /tmp/pip-build-env-ifaczk0u/overlay/local/bin/cmake to 755
    changing mode of /tmp/pip-build-env-ifaczk0u/overlay/local/bin/cpack to 755
    changing mode of /tmp/pip-build-env-ifaczk0u/overlay/local/bin/ctest to 755
    changing mode of /tmp/pip-build-env-ifaczk0u/overlay/local/bin/wheel to 755
    changing mode of /tmp/pip-build-env-ifaczk0u/overlay/local/bin/distro to 755
  ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
  ipython 7.34.0 requires jedi>=0.16, which is not installed.
  Successfully installed cmake-3.27.4.1 distro-1.8.0 ninja-1.11.1 packaging-23.1 scikit-build-0.17.6 setuptools-68.2.0 tomli-2.0.1 wheel-0.41.2
  Installing build dependencies ... done
  Running command Getting requirements to build wheel
  running egg_info
  writing llama_cpp_python.egg-info/PKG-INFO
  writing dependency_links to llama_cpp_python.egg-info/dependency_links.txt
  writing requirements to llama_cpp_python.egg-info/requires.txt
  writing top-level names to llama_cpp_python.egg-info/top_level.txt
  reading manifest file 'llama_cpp_python.egg-info/SOURCES.txt'
  adding license file 'LICENSE.md'
  writing manifest file 'llama_cpp_python.egg-info/SOURCES.txt'
  Getting requirements to build wheel ... done
  Running command Preparing metadata (pyproject.toml)
  running dist_info
  creating /tmp/pip-modern-metadata-_9plogdy/llama_cpp_python.egg-info
  writing /tmp/pip-modern-metadata-_9plogdy/llama_cpp_python.egg-info/PKG-INFO
  writing dependency_links to /tmp/pip-modern-metadata-_9plogdy/llama_cpp_python.egg-info/dependency_links.txt
  writing requirements to /tmp/pip-modern-metadata-_9plogdy/llama_cpp_python.egg-info/requires.txt
  writing top-level names to /tmp/pip-modern-metadata-_9plogdy/llama_cpp_python.egg-info/top_level.txt
  writing manifest file '/tmp/pip-modern-metadata-_9plogdy/llama_cpp_python.egg-info/SOURCES.txt'
  reading manifest file '/tmp/pip-modern-metadata-_9plogdy/llama_cpp_python.egg-info/SOURCES.txt'
  adding license file 'LICENSE.md'
  writing manifest file '/tmp/pip-modern-metadata-_9plogdy/llama_cpp_python.egg-info/SOURCES.txt'
  creating '/tmp/pip-modern-metadata-_9plogdy/llama_cpp_python-0.1.78.dist-info'
  Preparing metadata (pyproject.toml) ... done
Collecting numpy==1.23.4
  Downloading numpy-1.23.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.1/17.1 MB 109.7 MB/s eta 0:00:00
Collecting typing-extensions>=4.5.0 (from llama-cpp-python==0.1.78)
  Downloading typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Collecting diskcache>=5.6.1 (from llama-cpp-python==0.1.78)
  Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.5/45.5 kB 261.3 MB/s eta 0:00:00
Building wheels for collected packages: llama-cpp-python
  Running command Building wheel for llama-cpp-python (pyproject.toml)


  --------------------------------------------------------------------------------
  -- Trying 'Ninja' generator
  --------------------------------
  ---------------------------
  ----------------------
  -----------------
  ------------
  -------
  --
  CMake Deprecation Warning at CMakeLists.txt:1 (cmake_minimum_required):
    Compatibility with CMake < 3.5 will be removed from a future version of
    CMake.

    Update the VERSION argument <min> value or use a ...<max> suffix to tell
    CMake that the project does not need compatibility with older versions.

  Not searching for unused variables given on the command line.

  -- The C compiler identification is GNU 11.4.0
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: /usr/bin/cc - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- The CXX compiler identification is GNU 11.4.0
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: /usr/bin/c++ - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Configuring done (0.7s)
  -- Generating done (0.0s)
  -- Build files have been written to: /tmp/pip-install-dbo7qis9/llama-cpp-python_3ec82e89c33f424aa6e3429df1afc9fe/_cmake_test_compile/build
  --
  -------
  ------------
  -----------------
  ----------------------
  ---------------------------
  --------------------------------
  -- Trying 'Ninja' generator - success
  --------------------------------------------------------------------------------

  Configuring Project
    Working directory:
      /tmp/pip-install-dbo7qis9/llama-cpp-python_3ec82e89c33f424aa6e3429df1afc9fe/_skbuild/linux-x86_64-3.10/cmake-build
    Command:
      /tmp/pip-build-env-ifaczk0u/overlay/local/lib/python3.10/dist-packages/cmake/data/bin/cmake /tmp/pip-install-dbo7qis9/llama-cpp-python_3ec82e89c33f424aa6e3429df1afc9fe -G Ninja -DCMAKE_MAKE_PROGRAM:FILEPATH=/tmp/pip-build-env-ifaczk0u/overlay/local/lib/python3.10/dist-packages/ninja/data/bin/ninja --no-warn-unused-cli -DCMAKE_INSTALL_PREFIX:PATH=/tmp/pip-install-dbo7qis9/llama-cpp-python_3ec82e89c33f424aa6e3429df1afc9fe/_skbuild/linux-x86_64-3.10/cmake-install -DPYTHON_VERSION_STRING:STRING=3.10.12 -DSKBUILD:INTERNAL=TRUE -DCMAKE_MODULE_PATH:PATH=/tmp/pip-build-env-ifaczk0u/overlay/local/lib/python3.10/dist-packages/skbuild/resources/cmake -DPYTHON_EXECUTABLE:PATH=/usr/bin/python3 -DPYTHON_INCLUDE_DIR:PATH=/usr/include/python3.10 -DPYTHON_LIBRARY:PATH=/usr/lib/x86_64-linux-gnu/libpython3.10.so -DPython_EXECUTABLE:PATH=/usr/bin/python3 -DPython_ROOT_DIR:PATH=/usr -DPython_FIND_REGISTRY:STRING=NEVER -DPython_INCLUDE_DIR:PATH=/usr/include/python3.10 -DPython3_EXECUTABLE:PATH=/usr/bin/python3 -DPython3_ROOT_DIR:PATH=/usr -DPython3_FIND_REGISTRY:STRING=NEVER -DPython3_INCLUDE_DIR:PATH=/usr/include/python3.10 -DCMAKE_MAKE_PROGRAM:FILEPATH=/tmp/pip-build-env-ifaczk0u/overlay/local/lib/python3.10/dist-packages/ninja/data/bin/ninja -DLLAMA_CUBLAS=on -DCMAKE_BUILD_TYPE:STRING=Release -DLLAMA_CUBLAS=on

  Not searching for unused variables given on the command line.
  -- The C compiler identification is GNU 11.4.0
  -- The CXX compiler identification is GNU 11.4.0
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: /usr/bin/cc - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: /usr/bin/c++ - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Found Git: /usr/bin/git (found version "2.34.1")
  fatal: not a git repository (or any of the parent directories): .git
  fatal: not a git repository (or any of the parent directories): .git
  CMake Warning at vendor/llama.cpp/CMakeLists.txt:117 (message):
    Git repository not found; to enable automatic generation of build info,
    make sure Git is installed and the project is a Git repository.


  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
  -- Found Threads: TRUE
  -- Found CUDAToolkit: /usr/local/cuda/include (found version "11.8.89")
  -- cuBLAS found
  -- The CUDA compiler identification is NVIDIA 11.8.89
  -- Detecting CUDA compiler ABI info
  -- Detecting CUDA compiler ABI info - done
  -- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
  -- Detecting CUDA compile features
  -- Detecting CUDA compile features - done
  -- Using CUDA architectures: 52;61;70
  -- CMAKE_SYSTEM_PROCESSOR: x86_64
  -- x86 detected
  -- Configuring done (4.4s)
  -- Generating done (0.0s)
  -- Build files have been written to: /tmp/pip-install-dbo7qis9/llama-cpp-python_3ec82e89c33f424aa6e3429df1afc9fe/_skbuild/linux-x86_64-3.10/cmake-build
  [1/9] Building C object vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-alloc.c.o
  [2/9] Building C object vendor/llama.cpp/CMakeFiles/ggml.dir/k_quants.c.o
  [3/9] Building C object vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o
  [4/9] Building CXX object vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o
  [5/9] Building CUDA object vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o
  [6/9] Linking CUDA shared library vendor/llama.cpp/libggml_shared.so
  [7/9] Linking CXX shared library vendor/llama.cpp/libllama.so
  [8/9] Linking CUDA static library vendor/llama.cpp/libggml_static.a
  [8/9] Install the project...
  -- Install configuration: "Release"
  -- Installing: /tmp/pip-install-dbo7qis9/llama-cpp-python_3ec82e89c33f424aa6e3429df1afc9fe/_skbuild/linux-x86_64-3.10/cmake-install/lib/libggml_shared.so
  -- Installing: /tmp/pip-install-dbo7qis9/llama-cpp-python_3ec82e89c33f424aa6e3429df1afc9fe/_skbuild/linux-x86_64-3.10/cmake-install/lib/libllama.so
  -- Set runtime path of "/tmp/pip-install-dbo7qis9/llama-cpp-python_3ec82e89c33f424aa6e3429df1afc9fe/_skbuild/linux-x86_64-3.10/cmake-install/lib/libllama.so" to ""
  -- Installing: /tmp/pip-install-dbo7qis9/llama-cpp-python_3ec82e89c33f424aa6e3429df1afc9fe/_skbuild/linux-x86_64-3.10/cmake-install/bin/convert.py
  -- Installing: /tmp/pip-install-dbo7qis9/llama-cpp-python_3ec82e89c33f424aa6e3429df1afc9fe/_skbuild/linux-x86_64-3.10/cmake-install/bin/convert-lora-to-ggml.py
  -- Installing: /tmp/pip-install-dbo7qis9/llama-cpp-python_3ec82e89c33f424aa6e3429df1afc9fe/_skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/libllama.so
  -- Set runtime path of "/tmp/pip-install-dbo7qis9/llama-cpp-python_3ec82e89c33f424aa6e3429df1afc9fe/_skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/libllama.so" to ""

  copying llama_cpp/llama_grammar.py -> _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/llama_grammar.py
  copying llama_cpp/llama_types.py -> _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/llama_types.py
  copying llama_cpp/llama.py -> _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/llama.py
  copying llama_cpp/llama_cpp.py -> _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/llama_cpp.py
  copying llama_cpp/__init__.py -> _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/__init__.py
  copying llama_cpp/utils.py -> _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/utils.py
  creating directory _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/server
  copying llama_cpp/server/__main__.py -> _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/server/__main__.py
  copying llama_cpp/server/app.py -> _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/server/app.py
  copying llama_cpp/server/__init__.py -> _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/server/__init__.py
  copying /tmp/pip-install-dbo7qis9/llama-cpp-python_3ec82e89c33f424aa6e3429df1afc9fe/llama_cpp/py.typed -> _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/py.typed

  running bdist_wheel
  running build
  running build_py
  creating _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310
  creating _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp
  copying _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/llama_grammar.py -> _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp
  copying _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/llama_types.py -> _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp
  copying _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/llama.py -> _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp
  copying _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/llama_cpp.py -> _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp
  copying _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/__init__.py -> _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp
  copying _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/utils.py -> _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp
  creating _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp/server
  copying _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/server/__main__.py -> _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp/server
  copying _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/server/app.py -> _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp/server
  copying _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/server/__init__.py -> _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp/server
  copying _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/py.typed -> _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp
  copying _skbuild/linux-x86_64-3.10/cmake-install/llama_cpp/libllama.so -> _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp
  copied 9 files
  running build_ext
  installing to _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel
  running install
  running install_lib
  creating _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64
  creating _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel
  creating _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp
  copying _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp/libllama.so -> _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp
  copying _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp/llama_grammar.py -> _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp
  copying _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp/py.typed -> _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp
  copying _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp/llama_types.py -> _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp
  creating _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp/server
  copying _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp/server/__main__.py -> _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp/server
  copying _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp/server/app.py -> _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp/server
  copying _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp/server/__init__.py -> _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp/server
  copying _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp/llama.py -> _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp
  copying _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp/llama_cpp.py -> _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp
  copying _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp/__init__.py -> _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp
  copying _skbuild/linux-x86_64-3.10/setuptools/lib.linux-x86_64-cpython-310/llama_cpp/utils.py -> _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp
  copied 11 files
  running install_data
  creating _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp_python-0.1.78.data
  creating _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp_python-0.1.78.data/data
  creating _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp_python-0.1.78.data/data/lib
  copying _skbuild/linux-x86_64-3.10/cmake-install/lib/libllama.so -> _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp_python-0.1.78.data/data/lib
  copying _skbuild/linux-x86_64-3.10/cmake-install/lib/libggml_shared.so -> _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp_python-0.1.78.data/data/lib
  creating _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp_python-0.1.78.data/data/bin
  copying _skbuild/linux-x86_64-3.10/cmake-install/bin/convert.py -> _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp_python-0.1.78.data/data/bin
  copying _skbuild/linux-x86_64-3.10/cmake-install/bin/convert-lora-to-ggml.py -> _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp_python-0.1.78.data/data/bin
  running install_egg_info
  running egg_info
  writing llama_cpp_python.egg-info/PKG-INFO
  writing dependency_links to llama_cpp_python.egg-info/dependency_links.txt
  writing requirements to llama_cpp_python.egg-info/requires.txt
  writing top-level names to llama_cpp_python.egg-info/top_level.txt
  reading manifest file 'llama_cpp_python.egg-info/SOURCES.txt'
  adding license file 'LICENSE.md'
  writing manifest file 'llama_cpp_python.egg-info/SOURCES.txt'
  Copying llama_cpp_python.egg-info to _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp_python-0.1.78-py3.10.egg-info
  running install_scripts
  copied 0 files
  creating _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel/llama_cpp_python-0.1.78.dist-info/WHEEL
  creating '/tmp/pip-wheel-c7a5_9gh/.tmp-d8770i0i/llama_cpp_python-0.1.78-cp310-cp310-linux_x86_64.whl' and adding '_skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel' to it
  adding 'llama_cpp/__init__.py'
  adding 'llama_cpp/libllama.so'
  adding 'llama_cpp/llama.py'
  adding 'llama_cpp/llama_cpp.py'
  adding 'llama_cpp/llama_grammar.py'
  adding 'llama_cpp/llama_types.py'
  adding 'llama_cpp/py.typed'
  adding 'llama_cpp/utils.py'
  adding 'llama_cpp/server/__init__.py'
  adding 'llama_cpp/server/__main__.py'
  adding 'llama_cpp/server/app.py'
  adding 'llama_cpp_python-0.1.78.data/data/bin/convert-lora-to-ggml.py'
  adding 'llama_cpp_python-0.1.78.data/data/bin/convert.py'
  adding 'llama_cpp_python-0.1.78.data/data/lib/libggml_shared.so'
  adding 'llama_cpp_python-0.1.78.data/data/lib/libllama.so'
  adding 'llama_cpp_python-0.1.78.dist-info/LICENSE.md'
  adding 'llama_cpp_python-0.1.78.dist-info/METADATA'
  adding 'llama_cpp_python-0.1.78.dist-info/WHEEL'
  adding 'llama_cpp_python-0.1.78.dist-info/top_level.txt'
  adding 'llama_cpp_python-0.1.78.dist-info/RECORD'
  removing _skbuild/linux-x86_64-3.10/setuptools/bdist.linux-x86_64/wheel
  Building wheel for llama-cpp-python (pyproject.toml) ... done
  Created wheel for llama-cpp-python: filename=llama_cpp_python-0.1.78-cp310-cp310-linux_x86_64.whl size=5822234 sha256=cef82008170dbb655b35422a1c55bc0278a3e76c9d5be583018751f8a049a7d7
  Stored in directory: /tmp/pip-ephem-wheel-cache-nlsm63b_/wheels/61/f9/20/9ca660a9d3f2a47e44217059409478865948b5c8a1cba70030
Successfully built llama-cpp-python
Installing collected packages: typing-extensions, numpy, diskcache, llama-cpp-python
  Attempting uninstall: typing-extensions
    Found existing installation: typing_extensions 4.5.0
    Uninstalling typing_extensions-4.5.0:
      Removing file or directory /usr/local/lib/python3.10/dist-packages/__pycache__/typing_extensions.cpython-310.pyc
      Removing file or directory /usr/local/lib/python3.10/dist-packages/typing_extensions-4.5.0.dist-info/
      Removing file or directory /usr/local/lib/python3.10/dist-packages/typing_extensions.py
      Successfully uninstalled typing_extensions-4.5.0
  Attempting uninstall: numpy
    Found existing installation: numpy 1.23.5
    Uninstalling numpy-1.23.5:
      Removing file or directory /usr/local/bin/f2py
      Removing file or directory /usr/local/bin/f2py3
      Removing file or directory /usr/local/bin/f2py3.10
      Removing file or directory /usr/local/lib/python3.10/dist-packages/numpy-1.23.5.dist-info/
      Removing file or directory /usr/local/lib/python3.10/dist-packages/numpy.libs/
      Removing file or directory /usr/local/lib/python3.10/dist-packages/numpy/
      Successfully uninstalled numpy-1.23.5
  changing mode of /usr/local/bin/f2py to 755
  changing mode of /usr/local/bin/f2py3 to 755
  changing mode of /usr/local/bin/f2py3.10 to 755
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow 2.13.0 requires typing-extensions<4.6.0,>=3.6.6, but you have typing-extensions 4.7.1 which is incompatible.
Successfully installed diskcache-5.6.3 llama-cpp-python-0.1.78 numpy-1.23.4 typing-extensions-4.7.1
Collecting huggingface_hub
  Downloading huggingface_hub-0.17.0-py3-none-any.whl (294 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 294.8/294.8 kB 5.1 MB/s eta 0:00:00
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from huggingface_hub) (3.12.2)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from huggingface_hub) (2023.6.0)
Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from huggingface_hub) (2.31.0)
Requirement already satisfied: tqdm>=4.42.1 in /usr/local/lib/python3.10/dist-packages (from huggingface_hub) (4.66.1)
Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from huggingface_hub) (6.0.1)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface_hub) (4.7.1)
Requirement already satisfied: packaging>=20.9 in /usr/local/lib/python3.10/dist-packages (from huggingface_hub) (23.1)
Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->huggingface_hub) (3.2.0)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->huggingface_hub) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->huggingface_hub) (2.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->huggingface_hub) (2023.7.22)
Installing collected packages: huggingface_hub
Successfully installed huggingface_hub-0.17.0
Requirement already satisfied: llama-cpp-python==0.1.78 in /usr/local/lib/python3.10/dist-packages (0.1.78)
Requirement already satisfied: typing-extensions>=4.5.0 in /usr/local/lib/python3.10/dist-packages (from llama-cpp-python==0.1.78) (4.7.1)
Requirement already satisfied: numpy>=1.20.0 in /usr/local/lib/python3.10/dist-packages (from llama-cpp-python==0.1.78) (1.23.4)
Requirement already satisfied: diskcache>=5.6.1 in /usr/local/lib/python3.10/dist-packages (from llama-cpp-python==0.1.78) (5.6.3)
Requirement already satisfied: numpy==1.23.4 in /usr/local/lib/python3.10/dist-packages (1.23.4)
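
Before downloading the model, it is worth confirming that the Colab runtime actually has a GPU attached (the build log above should also report -- cuBLAS found). A quick check:

!nvidia-smi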
model_name_or_path = "TheBloke/Llama-2-13B-chat-GGML"
model_basename = "llama-2-13b-chat.ggmlv3.q5_1.bin" # the q5_1 GGML weights in .bin format

Step 2: Import all the libraries

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

Step 3: Download the model

model_path = hf_hub_download(repo_id=model_name_or_path, filename=model_basename)
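
hf_hub_download returns the local path of the cached weights file (several gigabytes for this 13B q5_1 variant), which is what we hand to llama.cpp in the next step:

print(model_path)  # the exact path depends on the Hugging Face cache layout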

Step 4: Load the model

# GPU
lcpp_llm = None
lcpp_llm = Llama(
    model_path=model_path,
    n_threads=2, # CPU cores
    n_batch=512, # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
    n_gpu_layers=32 # Change this value based on your model and your GPU VRAM pool.
    )
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 | 
# See the number of layers in GPU
lcpp_llm.params.n_gpu_layers
32
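
n_gpu_layers controls how many of the model's transformer layers are offloaded to the GPU (LLaMA-13B has 40 in total); the remaining layers run on the CPU. If the T4 runs out of VRAM, reloading with fewer offloaded layers is a reasonable fallback, sketched here with hypothetical values:

from llama_cpp import Llama

# Reload with fewer layers offloaded if you hit CUDA out-of-memory errors
lcpp_llm = Llama(
    model_path=model_path,
    n_threads=2,
    n_batch=512,
    n_gpu_layers=24 # hypothetical: offload fewer layers to save VRAM
    )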

Step 5: Create a prompt template

prompt = "Escribe una función en python que me de los n primeros números primos"
prompt_template=f'''SYSTEM: Eres una asistente personal que busca ayudar lo mejor posible.

USER: {prompt}

ASSISTANT:
'''
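
Note that the Llama-2-chat models were fine-tuned on a specific prompt format built from [INST] and <<SYS>> tags; the simplified SYSTEM/USER/ASSISTANT template above works reasonably well in practice, but a template closer to the official one would look like this (same system and user text as above):

prompt_template = f'''[INST] <<SYS>>
Eres una asistente personal que busca ayudar lo mejor posible.
<</SYS>>

{prompt} [/INST]'''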

Step 6: Generate the responses

response=lcpp_llm(prompt=prompt_template, max_tokens=256, temperature=0.3, top_p=0.95,
                  repeat_penalty=1.2, top_k=150,
                  echo=True)
Llama.generate: prefix-match hit
print(response["choices"][0]["text"])
SYSTEM: Eres una asistente personal que busca ayudar lo mejor posible.

USER: Escribe una función en python que me de los n primeros números primos

ASSISTANT:

¡Claro! Aquí te dejo un ejemplo de cómo podrías hacerlo:
```
def first_n_primes(n):
    # Función que devuelve una lista con los primeras n números primos
    
    # Definimos una función recursiva para encontrar los números primos
    def is_prime(x):
        if x <= 1:
            return False
        for i in range(2, int(x ** 0.5) + 1):
            if x % i == 0:
                return False
        return True
    
    # Creamos una lista vacía para almacenar los números primos
    prime_list = []
    
    # Iteramos sobre el rango [2, n] para encontrar los números primos
    for i in range(2, n + 1):
        if is_prime(i):
            prime_list.append(i)
    
    return prime_list[:n]
```
¡Eso es todo! La función `first_n_primes` toma
def first_n_primes(n):
    # Función que devuelve una lista con los primeras n números primos

    # Definimos una función recursiva para encontrar los números primos
    def is_prime(x):
        if x <= 1:
            return False
        for i in range(2, int(x ** 0.5) + 1):
            if x % i == 0:
                return False
        return True

    # Creamos una lista vacía para almacenar los números primos
    prime_list = []

    # Iteramos sobre el rango [2, n] para encontrar los números primos
    for i in range(2, n + 1):
        if is_prime(i):
            prime_list.append(i)

    return prime_list[:n]


a = first_n_primes(20)
a
[2, 3, 5, 7, 11, 13, 17, 19]
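
Note that the generated function does not quite do what the prompt asked: iterating over range(2, n + 1) returns the primes up to n rather than the first n primes, which is why first_n_primes(20) yields only eight values instead of twenty. A corrected sketch:

def first_n_primes(n):
    # Return a list with the first n prime numbers
    def is_prime(x):
        if x <= 1:
            return False
        for i in range(2, int(x ** 0.5) + 1):
            if x % i == 0:
                return False
        return True

    primes = []
    candidate = 2
    while len(primes) < n: # keep searching until we have n primes
        if is_prime(candidate):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_n_primes(20)) # 20 values, ending in 71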
prompt = "Manda un saludo a los seguidores de AlexFocus!"
prompt_template=f'''SYSTEM: Eres una asistente personal que busca ayudar lo mejor posible.

USER: {prompt}

ASSISTANT:
'''
response=lcpp_llm(prompt=prompt_template, max_tokens=256, temperature=0.5, top_p=0.95,
                  repeat_penalty=1.2, top_k=150,
                  echo=True)
print(response["choices"][0]["text"])
Llama.generate: prefix-match hit
SYSTEM: Eres una asistente personal que busca ayudar lo mejor posible.

USER: Manda un saludo a los seguidores de AlexFocus!

ASSISTANT:
¡Hola a todos mis amigos y seguidores de AlexFocus! Espero que estén teniendo un día increíble. ¡Estoy aquí para ayudar en lo que necesiten! ¿Hay algún tema o pregunta que quieran platicar? 😊