Compose Applications#
Overview of Compose Applications#
Creating a Compose Application in a Workbench project requires technical experience with multi-container applications.
You should be familiar with how containers are configured and started
You should be familiar with compose files and the compose specification
You should be able to test and debug multi-container applications
However, using a compose application in a Workbench project should be straightforward.
Read the relevant section of the project README to understand any required process
Make sure any necessary environment variables are configured
Go to Environment > Compose and start the application
Note
The user experience of the compose application is limited by what the application creator has configured.
However, because it is code in the repository, you can always edit things to create an experience to your liking.
Compose File Names and Locations#
Workbench expects the compose file to have a specific name and to be in a specific location in the project.
The file name must be one of the following:
compose.yaml
compose.yml
docker-compose.yml
docker-compose.yaml
In general, the file must be in the root of the project, /project/, or in a /project/deploy folder. However, you can specify a different location by editing the project specification file (spec.yaml). You must add a field, environment:compose-file-path, to the specification file. The value for this field should be the relative path to the compose file in the project repository.
You can see more about the project specification file in The Project Specification documentation.
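As a minimal sketch of that field (the surrounding structure of your spec.yaml may differ, and the path shown is hypothetical):

```yaml
# Hypothetical excerpt from a project spec.yaml.
# The value is a path to the compose file, relative to the repository root.
environment:
  compose-file-path: code/my-app/compose.yaml
```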
Service Profiles#
You can adjust a multi-container application to run on different hardware setups by using service profiles. A service that has a profile runs only when you select that profile before you start the compose environment.
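As a minimal sketch (service and profile names here are illustrative), a service without a profile always runs, while a service tagged with a profile starts only when that profile is selected:

```yaml
services:
  app:
    # No profile, so this service runs on every start
    image: hashicorp/http-echo
    command: ["-text=default path"]

  app-gpu:
    # Runs only when the "gpu" profile is selected before starting
    image: hashicorp/http-echo
    profiles: [gpu]
    command: ["-text=gpu path", "-listen=:5679"]
```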
Using the Compose Feature#
Creating and Starting a Compose Application#
You can do everything from within Project Tab > Environment > Compose in the Desktop App
Open a project in the AI Workbench desktop application
Click Environment to open the environment page
Click Compose or scroll to the compose section
Click Create compose file. The Create compose file window appears
In the Create compose file window, edit your compose file. When complete, click Save
(Optional) For Profile select one or more profiles that you want to use when you start the compose environment
Click Start to start the compose environment for your project
Click Stop to stop the compose environment for your project
Tip
The edit feature for the compose file has a Cheat Sheet that you can use to help you edit the compose file.
Versioning a Compose Application#
As long as the compose file is in a tracked folder in the project repository, it will be versioned by git.
Once you get the compose application working, you should make a commit to save that version
However, be aware that the container images used by the services are not versioned by the project repository
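One way to mitigate this (a sketch; the tag shown is illustrative) is to pin each service image to an explicit tag or digest, so the versioned compose file at least records which image the committed version was tested against:

```yaml
services:
  web1:
    # A mutable tag like "latest" can change underneath you.
    # An explicit tag (or an @sha256 digest) keeps the reference stable across commits.
    image: hashicorp/http-echo:1.0.0
```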
Mounts and Volumes#
AI Workbench does not manage setting bind mount values for compose containers. AI Workbench creates a shared volume that all containers can use, including the project container. The mount is available at /nvwb-shared-volume.
Tip
If you run different containers as different users, you might need to modify the permissions of files your project creates, so that all containers can read and write as necessary.
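For example (a sketch; whether you need this depends on which users your containers run as, and the service shown is illustrative), a service can relax permissions on files it writes to the shared volume so other containers can read and modify them:

```yaml
services:
  producer:
    image: busybox
    # Write a file to the shared volume, then open up its permissions
    # so containers running as other users can read and write it.
    command: ["sh", "-c", "echo ready > /nvwb-shared-volume/status.txt && chmod 666 /nvwb-shared-volume/status.txt"]
```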
Sample Docker Compose Files#
Example of a Simple Compose File#
The following is a sample Docker compose file that includes a web app service. There is no profile on this service, so it always runs.
services:

  web1:
    # Using build: builds the image from a local dockerfile in the project
    image: hashicorp/http-echo
    environment:
      # Setting the NVWB_TRIM_PREFIX env var causes this service to be routed through the proxy.
      # NVWB_TRIM_PREFIX=true trims the proxy prefix.
      # The env var PROXY_PREFIX is injected into the service if you need it.
      - NVWB_TRIM_PREFIX=true
    ports:
      - '5678:5678'
    command: ["-text=hello from service 1"]
Example of a Web App That Requires a GPU#
The following is a sample Docker compose file that includes two web app services. Service 1 always runs. Service 2 requires a GPU and only runs when you select the gpu-service profile.
services:

  web1:
    # Using build: builds the image from a local dockerfile in the project
    image: hashicorp/http-echo
    environment:
      # Setting the NVWB_TRIM_PREFIX env var causes this service to be routed through the proxy.
      # NVWB_TRIM_PREFIX=true trims the proxy prefix.
      # The env var PROXY_PREFIX is injected into the service if you need it.
      - NVWB_TRIM_PREFIX=true
    ports:
      - '5678:5678'
    command: ["-text=hello from service 1"]

  web2:
    image: hashicorp/http-echo
    profiles: [gpu-service]
    environment:
      - NVWB_TRIM_PREFIX=true
    ports:
      - '5679:5679'
    # Specify GPU requests in this format.
    # AI Workbench manages reservations and explicitly passes GPUs into each container,
    # so you don't have to worry about collisions.
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    command: ["-text=hello from service 2", "-listen=:5679"]
Example of a Compose With Environment Variables and Secrets#
The following is a sample Docker compose file that includes a web app service. This compose file includes an environment variable and a secret. Create the variable TEST_VAR and the secret TEST_SECRET in your AI Workbench project before you use this example. For more information, see Environment Variables. There is no profile on this service, so it always runs.
services:

  web3:
    # Using build: builds the image from a local dockerfile in the project
    image: hashicorp/http-echo
    environment:
      # Setting the NVWB_TRIM_PREFIX env var causes this service to be routed through the proxy.
      # NVWB_TRIM_PREFIX=true trims the proxy prefix.
      # The env var PROXY_PREFIX is injected into the service if you need it.
      - NVWB_TRIM_PREFIX=true
      # Environment variables set in the project in AI Workbench are available by interpolation like this
      - TEST_ENV_VAR=${TEST_VAR}
      # Secrets are also available by interpolation if you prefer that over the file
      - TEST_SECRET_FROM_ENV_VAR=${TEST_SECRET}
    ports:
      - '5678:5678'
    command: ["-text=${TEST_VAR}"]
Example of Compose Secrets#
The following is a sample Docker compose file that includes two web app services. Service 1 always runs. Service 4 uses compose secrets and only runs when you select the compose-secret profile. You need to set the secret TEST_SECRET in your AI Workbench project for this service to run.
services:

  web1:
    # Using build: builds the image from a local dockerfile in the project
    image: hashicorp/http-echo
    environment:
      # Setting the NVWB_TRIM_PREFIX env var causes this service to be routed through the proxy.
      # NVWB_TRIM_PREFIX=true trims the proxy prefix.
      # The env var PROXY_PREFIX is injected into the service if you need it.
      - NVWB_TRIM_PREFIX=true
    ports:
      - '5678:5678'
    command: ["-text=hello from service 1"]

  web4:
    image: hashicorp/http-echo
    profiles: [compose-secret]
    environment:
      - NVWB_TRIM_PREFIX=true
      # This is an example of how you can use the secret as a file.
      # Compose mounts the secret there for you.
      - TEST_SECRET_FILE=/run/secrets/TEST_SECRET
    ports:
      - '5680:5680'
    # To activate a compose secret on a service, list it under the service's secrets key.
    # The name should match the secret name in the AI Workbench project.
    secrets:
      - TEST_SECRET
    command: ["-text=hello from service 4", "-listen=:5680"]

# If you want to use compose secrets, set this global value so compose file validation works;
# AI Workbench automatically replaces the value.
# The name should match the name in AI Workbench (e.g. TEST_SECRET).
secrets:
  TEST_SECRET:
    environment: "HOME"
FAQs#
Can I manage a compose application with the Workbench CLI?#
Yes. The CLI offers the same functionality as the Desktop App for this.
See the following sections of this guide: