DeepSeek V2#

Released in June 2024, DeepSeek V2 is a Mixture-of-Experts (MoE) model featuring architectural innovations such as Multi-head Latent Attention (MLA) and device-limited expert routing. DeepSeek V2 has 21B active parameters out of 236B total parameters. DeepSeek V2 Lite, a smaller variant, has 2.4B active parameters out of 15.7B total parameters.

We provide pre-defined recipes for pretraining and finetuning DeepSeek V2 models using NeMo 2.0 and NeMo-Run. These recipes configure a run.Partial for one of the nemo.collections.llm API functions introduced in NeMo 2.0. The recipes are hosted in deepseek_v2 and deepseek_v2_lite.
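
For instance, the object returned by a recipe function can be inspected and modified before it is executed. The snippet below is a minimal sketch; the recipe name and checkpoint directory are placeholders:

from nemo.collections import llm
import nemo_run as run

# Build the default DeepSeek V2 Lite pretraining recipe. The returned object is a
# run.Partial that wraps the llm.pretrain API function with pre-configured arguments.
recipe = llm.deepseek_v2_lite.pretrain_recipe(name="inspect_only", dir="/path/to/checkpoints")
assert isinstance(recipe, run.Partial)
print(recipe)  # display the recipe's configured arguments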

NeMo 2.0 Pretraining Recipes#

Note

The pretraining recipes use the MockDataModule for the data argument. You are expected to replace the MockDataModule with your custom dataset.

We provide an example below of how to invoke the default recipe and override the data argument:

from nemo.collections import llm

pretrain = llm.deepseek_v2_lite.pretrain_recipe(
    name="deepseek_v2_lite_pretraining",
    dir="/path/to/checkpoints",
    num_nodes=1,
    num_gpus_per_node=8,
)

# # To override the data argument
# dataloader = a_function_that_configures_your_custom_dataset(
#     gbs=gbs,
#     mbs=mbs,
#     seq_length=pretrain.model.config.seq_length,
# )
# pretrain.data = dataloader
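
For example, a custom pretraining dataset could be configured with NeMo's llm.PreTrainingDataModule. The snippet below is only a sketch: the data path, batch sizes, and the choice of PreTrainingDataModule itself are placeholders to adapt to your setup.

import nemo_run as run

# Sketch of a custom pretraining data module (placeholder path and batch sizes).
dataloader = run.Config(
    llm.PreTrainingDataModule,
    paths=["/path/to/tokenized/data"],  # prefix of your pre-tokenized dataset
    seq_length=pretrain.model.config.seq_length,
    global_batch_size=512,  # placeholder value
    micro_batch_size=1,  # placeholder value
)
pretrain.data = dataloader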

NeMo 2.0 Finetuning Recipes#

Note

The finetuning recipes use the SquadDataModule for the data argument. You can replace the SquadDataModule with your custom dataset.

To import the HF model and convert it to NeMo 2.0 format, run the following command (this only needs to be done once):

from nemo.collections import llm
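# Import the Hugging Face checkpoint and convert it to NeMo 2.0 format (only needs to be done once).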
llm.import_ckpt(model=llm.DeepSeekModel(llm.DeepSeekV2LiteConfig()), source='hf://deepseek-ai/DeepSeek-V2-Lite')

By default, the non-instruct version of the model is loaded. To load a different model, set finetune.resume.restore_config.path=nemo://<hf model id> or finetune.resume.restore_config.path=<local model path>.
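
A minimal sketch of this override, assuming recipe is a finetuning recipe object such as the one created in the example below (both paths are placeholders):

# Restore from a different starting checkpoint before running the recipe.
recipe.resume.restore_config.path = "nemo://<hf model id>"
# or use a checkpoint on disk:
# recipe.resume.restore_config.path = "<local model path>"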

We provide an example below of how to invoke the default recipe and override the data argument:

from nemo.collections import llm

recipe = llm.deepseek_v2_lite.finetune_recipe(
    name="deepseek_v2_lite_finetuning",
    dir="/path/to/checkpoints",
    num_nodes=1,
    num_gpus_per_node=8,
    peft_scheme='lora',  # 'lora', 'dora', 'none'
)

# # To override the data argument
# dataloader = a_function_that_configures_your_custom_dataset(
#     gbs=gbs,
#     mbs=mbs,
#     seq_length=recipe.model.config.seq_length,
# )
# recipe.data = dataloader
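
For instance, a custom instruction-tuning dataset could be configured with NeMo's llm.FineTuningDataModule. The snippet below is only a sketch: the dataset root, batch sizes, and the choice of FineTuningDataModule itself are placeholders to adapt to your setup.

import nemo_run as run

# Sketch of a custom finetuning data module (placeholder dataset root and batch sizes).
dataloader = run.Config(
    llm.FineTuningDataModule,
    dataset_root="/path/to/your/dataset",  # directory containing your prepared data splits
    seq_length=recipe.model.config.seq_length,
    global_batch_size=128,  # placeholder value
    micro_batch_size=1,  # placeholder value
)
recipe.data = dataloader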

By default, the finetuning recipe runs LoRA finetuning, with LoRA applied to all linear layers in MLA (and none in the MoE layers). To finetune the entire model without LoRA, set peft_scheme='none' in the recipe.

Note

The configuration in the recipes is done using the NeMo-Run run.Config and run.Partial configuration objects. Please review the NeMo-Run documentation to learn more about its configuration and execution system.
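
For example, because the recipe is built from these configuration objects, nested fields can be overridden with attribute access before execution. The field names below are assumptions for illustration; inspect your recipe for the actual layout.

# Override nested fields on the recipe (assumed field names).
recipe.trainer.max_steps = 100  # e.g. shorten the run for a quick smoke test
recipe.optim.config.lr = 1e-4  # assumed optimizer config layout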

Once you have your final configuration ready, you can execute it on any of the NeMo-Run supported executors. The simplest is the local executor, which just runs the recipe locally in a separate process. You can use it as follows:

import nemo_run as run

run.run(recipe, executor=run.LocalExecutor())

Alternatively, you can run it directly in the same Python process as follows:

run.run(recipe, direct=True)