#### Custom Dockerfile

The customized build below exists mainly so that a conda virtual environment can be used inside Docker.
```dockerfile
FROM nvcr.io/nvidia/pytorch:23.11-py3
LABEL maintainer="transformers"

ARG DEBIAN_FRONTEND=noninteractive
ARG PYTORCH='2.1.0'
# Example: cu102, cu113, etc.
ARG CUDA='cu121'

RUN apt-get update && \
    apt-get install -y libaio-dev wget bzip2 ca-certificates curl git git-lfs unzip mlocate usbutils \
    vim tmux g++ gcc build-essential cmake checkinstall lsb-release && \
    rm -rf /var/lib/apt/lists/* && \
    apt-get clean

RUN python3 -m pip uninstall -y torch torchvision torchaudio torch-tensorrt transformer-engine apex

SHELL ["/bin/bash", "--login", "-c"]

RUN cd / && wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O /miniconda.sh && \
    /bin/bash /miniconda.sh -b -p /opt/conda && \
    ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \
    echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \
    /bin/bash -c "source ~/.bashrc" && \
    /opt/conda/bin/conda update -n base -c defaults conda -y && \
    /opt/conda/bin/conda config --set ssl_verify no && \
    /opt/conda/bin/conda config --add channels conda-forge && \
    /opt/conda/bin/conda create -n ai python=3.10 -y

ENV PATH $PATH:/opt/conda/envs/ai/bin

RUN conda init bash && \
    echo "conda activate ai" >> ~/.bashrc && \
    conda activate ai && \
    pip install --upgrade pip -i https://mirror.baidu.com/pypi/simple && \
    pip config set global.index-url https://mirror.baidu.com/pypi/simple && \
    # Install latest release PyTorch
    # (PyTorch must be installed before pre-compiling any DeepSpeed c++/cuda ops.)
    # (https://www.deepspeed.ai/tutorials/advanced-install/#pre-install-deepspeed-ops)
    pip install --no-cache-dir -U torch==$PYTORCH torchvision torchaudio \
        --extra-index-url https://download.pytorch.org/whl/$CUDA && \
    pip install -U numpy opencv-python onnx onnxoptimizer onnxruntime -i https://mirror.baidu.com/pypi/simple

ARG REF=main
RUN conda activate ai && \
    cd && \
    git clone https://github.com/huggingface/transformers && cd transformers && git checkout $REF && \
    cd .. && \
    pip install --no-cache-dir ./transformers[deepspeed-testing] && \
    pip install --no-cache-dir git+https://github.com/huggingface/accelerate@main#egg=accelerate

# recompile apex
# pip uninstall -y apex
RUN git clone https://github.com/NVIDIA/apex
# MAX_JOBS=1 disables parallel building to avoid cpu memory OOM when building image on GitHub Action (standard) runners
# TODO: check if there is alternative way to install latest apex
RUN cd apex && MAX_JOBS=1 python3 -m pip install --global-option="--cpp_ext" --global-option="--cuda_ext" --no-cache -v --disable-pip-version-check .

# Pre-build latest DeepSpeed, so it would be ready for testing (otherwise, the 1st deepspeed test will timeout)
# pip uninstall -y deepspeed
# This has to be run (again) inside the GPU VMs running the tests.
# The installation works here, but some tests fail, if we don't pre-build deepspeed again in the VMs running the tests.
# TODO: Find out why tests fail.
RUN conda activate ai && \
    DS_BUILD_CPU_ADAM=1 DS_BUILD_FUSED_ADAM=1 pip install deepspeed --global-option="build_ext" \
    --global-option="-j8" --no-cache -v --disable-pip-version-check 2>&1

# When installing in editable mode, transformers is not recognized as a package.
# This line must be added in order for python to be aware of transformers.
RUN conda activate ai && \
    cd && \
    cd transformers && python3 setup.py develop

# The base image ships with pydantic==1.8.2 which is not working - i.e. the next command fails
RUN conda activate ai && \
    pip install -U --no-cache-dir "pydantic<2"

RUN conda activate ai && \
    python3 -c "from deepspeed.launcher.runner import main"

RUN apt-get update && \
    rm -rf /var/lib/apt/lists/* && \
    apt-get clean
```
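If you save the above as `Dockerfile` in an otherwise empty directory, the image can be built with `docker build -t pytorch-conda-ai .` and entered with `docker run --gpus all -it pytorch-conda-ai bash` (the tag `pytorch-conda-ai` is just a placeholder; `--gpus all` requires the NVIDIA Container Toolkit on the host).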
### Cache setup

Pretrained models are downloaded and locally cached at `~/.cache/huggingface/hub`. This is the default directory given by the environment variable `TRANSFORMERS_CACHE`. On Windows, the default directory is `C:\Users\username\.cache\huggingface\hub`. You can change the environment variables shown below, in order of priority, to specify a different cache directory:

1. Environment variable (default): `HUGGINGFACE_HUB_CACHE` or `TRANSFORMERS_CACHE`.
2. Environment variable: `HF_HOME`.
3. Environment variable: `XDG_CACHE_HOME` + `/huggingface`.

Unless you explicitly set the environment variable `TRANSFORMERS_CACHE`, 🤗 Transformers will fall back to the environment variables `PYTORCH_TRANSFORMERS_CACHE` or `PYTORCH_PRETRAINED_BERT_CACHE` if they were set by earlier versions of the library.
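As a quick illustration, here is a minimal sketch of redirecting the cache from within Python (`/data/hf_home` is a hypothetical path); the variable must be set before `transformers` is imported, since the cache location is resolved at import time:

```python
import os

# Hypothetical cache root; set it before importing transformers.
os.environ["HF_HOME"] = "/data/hf_home"

from transformers import AutoTokenizer

# Downloaded files are now cached under /data/hf_home/hub
# instead of ~/.cache/huggingface/hub.
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
```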
### Offline mode

🤗 Transformers can run in a firewalled or offline environment using only local files. Set the environment variable `TRANSFORMERS_OFFLINE=1` to enable this behavior.

Add [🤗 Datasets]( ) to your offline training workflow by setting the environment variable `HF_DATASETS_OFFLINE=1`.

For example, on a normal network firewalled to external instances, you would usually run a program with a command like this:

```bash
python examples/pytorch/translation/run_translation.py --model_name_or_path google-t5/t5-small --dataset_name wmt16 --dataset_config ro-en ...
```

Run the same program in an offline instance with:

```bash
HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \
python examples/pytorch/translation/run_translation.py --model_name_or_path google-t5/t5-small --dataset_name wmt16 --dataset_config ro-en ...
```

The script should now run without hanging or waiting to time out, because it knows it should only look for local files.
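The same restriction can also be applied per call: `from_pretrained` accepts a `local_files_only` parameter that, when set to `True`, loads files only from the local cache instead of contacting the Hub. A minimal sketch:

```python
from transformers import AutoModelForSeq2SeqLM

# Fails with an error instead of downloading if the files
# are not already present in the local cache.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google-t5/t5-small", local_files_only=True
)
```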
#### Fetching models and tokenizers to use offline

Another option for using 🤗 Transformers offline is to download the files ahead of time, then point to their local path when you need to use them offline. There are three ways to do this:

* Download a file through the user interface on the [Model Hub]( ) by clicking on the ↓ icon.
* Use the [`PreTrainedModel.from_pretrained`] and [`PreTrainedModel.save_pretrained`] workflow:
    1. Download your files ahead of time with [`PreTrainedModel.from_pretrained`]:

        ```python
        from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

        tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
        model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")
        ```

    2. Save your files to a specified directory with [`PreTrainedModel.save_pretrained`]:

        ```python
        tokenizer.save_pretrained("./your/path/bigscience_t0")
        model.save_pretrained("./your/path/bigscience_t0")
        ```

    3. Now when you are offline, reload your files with [`PreTrainedModel.from_pretrained`] from the specified directory:

        ```python
        tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0")
        model = AutoModelForSeq2SeqLM.from_pretrained("./your/path/bigscience_t0")
        ```

* Programmatically download files with the [huggingface_hub]( ) library:
    1. Install the `huggingface_hub` library in your virtual environment:

        ```bash
        python -m pip install huggingface_hub
        ```

    2. Use the [`hf_hub_download`]( ) function to download a file to a specific path. For example, the following command downloads the `config.json` file from the [T0]( ) model to your desired path:

        ```python
        from huggingface_hub import hf_hub_download

        hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./your/path/bigscience_t0")
        ```
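Once the file is downloaded and locally cached, it can be loaded from that local path like any other pretrained artifact, for example:

```python
from transformers import AutoConfig

# Load the model configuration from the locally downloaded file.
config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json")
```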