Command Line Tools

The Open Data Cube provides a command-line interface (CLI) for common administrative tasks.

datacube

Data Cube command-line interface

datacube [OPTIONS] COMMAND [ARGS]...

Options

--version
-v, --verbose

Use multiple times for more verbosity

--log-file <log_file>

Specify log file

-E, --env <env>
-C, --config, --config_file <config>
--log-queries

Print database queries.
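
For example, to run a subcommand with extra verbosity, an explicit config file, and query logging (the config path is a placeholder):

datacube -v --config <path/to/datacube.conf> --log-queries system check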

dataset

Dataset management commands

datacube dataset [OPTIONS] COMMAND [ARGS]...

add

Add datasets to the Data Cube

datacube dataset add [OPTIONS] [DATASET_PATHS]...

Options

-p, --product <product_names>

Only match against products specified with this option. You can supply several products by repeating this option with a new product name.

-x, --exclude-product <exclude_product_names>

Attempt to match against all products in the DB except for products specified with this option. You can supply several products by repeating this option with a new product name.

--auto-add-lineage, --no-auto-add-lineage

The default behaviour is to automatically add lineage datasets if they are missing from the database. This can be disabled if lineage is expected to already be present in the DB, in which case add will abort when it encounters a missing lineage dataset.

--verify-lineage, --no-verify-lineage

Lineage referenced in the metadata document should match the lineage in the DB. The default behaviour is to skip those top-level datasets whose lineage data differs from the version in the DB. This option allows the verification step to be omitted.

--dry-run

Check that everything is OK without making any changes

--ignore-lineage

Pretend that there is no lineage data in the datasets being indexed

--confirm-ignore-lineage

Pretend that there is no lineage data in the datasets being indexed, without confirmation

Arguments

DATASET_PATHS

Optional argument(s)
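
For example, to check and then index dataset metadata documents against a single product (product name and path are placeholders):

datacube dataset add --dry-run --product <product_name> <path/to/dataset-metadata.yaml>
datacube dataset add --product <product_name> <path/to/dataset-metadata.yaml>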

archive

Archive datasets

datacube dataset archive [OPTIONS] [IDS]...

Options

-d, --archive-derived

Also recursively archive derived datasets

--dry-run

Don’t archive. Display datasets that would get archived

--all

Ignore id list - archive ALL non-archived datasets (warning: may be slow on large databases)

Arguments

IDS

Optional argument(s)
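
For example, to preview archiving a dataset together with its derived datasets (<dataset-id> is a placeholder UUID):

datacube dataset archive --dry-run --archive-derived <dataset-id>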

info

Display dataset information

datacube dataset info [OPTIONS] [IDS]...

Options

--show-sources

Also show source datasets

--show-derived

Also show derived datasets

-f <f>

Output format

Default

yaml

Options

csv | yaml

--max-depth <max_depth>

Maximum sources/derived depth to traverse

Arguments

IDS

Optional argument(s)
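
For example, to show a dataset with its source datasets up to two levels deep (<dataset-id> is a placeholder UUID):

datacube dataset info --show-sources --max-depth 2 <dataset-id>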

purge

Purge archived datasets

datacube dataset purge [OPTIONS] [IDS]...

Options

--dry-run

Don’t purge. Display datasets that would get purged

--all

Ignore id list - purge ALL archived datasets (warning: may be slow on large databases)

Arguments

IDS

Optional argument(s)
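
For example, to preview purging every archived dataset before committing to it:

datacube dataset purge --dry-run --all
datacube dataset purge --all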

restore

Restore datasets

datacube dataset restore [OPTIONS] [IDS]...

Options

-d, --restore-derived

Also recursively restore derived datasets

--dry-run

Don’t restore. Display datasets that would get restored

--derived-tolerance-seconds <derived_tolerance_seconds>

Only restore derived datasets that were archived within this many seconds of the original dataset

--all

Ignore id list - restore ALL archived datasets (warning: may be slow on large databases)

Arguments

IDS

Optional argument(s)
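
For example, to restore a dataset along with any derived datasets archived within ten minutes of it (<dataset-id> is a placeholder UUID):

datacube dataset restore --restore-derived --derived-tolerance-seconds 600 <dataset-id>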

update

Update datasets in the Data Cube

datacube dataset update [OPTIONS] [DATASET_PATHS]...

Options

--allow-any <keys_that_can_change>

Allow any changes to the specified key (a.b.c)

--dry-run

Check that everything is OK without making any changes

--location-policy <location_policy>

What to do with previously recorded dataset location(s)

- ‘keep’: keep as alternative location [default]
- ‘archive’: mark as archived
- ‘forget’: remove from the index

Options

keep | archive | forget

Arguments

DATASET_PATHS

Optional argument(s)
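
For example, to update a dataset document, allowing changes under one metadata key and archiving its previously recorded locations (key path and file are placeholders):

datacube dataset update --allow-any <a.b.c> --location-policy archive <path/to/dataset-metadata.yaml>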

ingest

Ingest datasets

datacube ingest [OPTIONS]

Options

-c, --config-file <config_file>

Ingest configuration file

--year <year>

Limit the process to a particular year

--queue-size <queue_size>

Task queue size

--save-tasks <save_tasks>

Save tasks to the specified file

--load-tasks <load_tasks>

Load tasks from the specified file

-d, --dry-run

Check that everything is OK without making any changes

--allow-product-changes

Allow the output product definition to be updated if it differs.

--executor <executor>

Run parallelized, either locally or distributed, e.g. --executor multiproc 4 or --executor distributed 10.0.0.8:8888
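
For example, to dry-run an ingestion of a single year with four local worker processes (the config file name is a placeholder):

datacube ingest -c <ingest_config.yaml> --year 2020 --dry-run --executor multiproc 4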

metadata

Metadata type commands

datacube metadata [OPTIONS] COMMAND [ARGS]...

add

Add or update metadata types in the index

datacube metadata add [OPTIONS] [FILES]...

Options

--allow-exclusive-lock, --forbid-exclusive-lock

Allow index to be locked from other users while updating (default: false)

Arguments

FILES

Optional argument(s)
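
For example, to add a metadata type from a definition document (file name is a placeholder):

datacube metadata add <metadata_type.yaml>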

list

List metadata types that are defined in the generic index.

datacube metadata list [OPTIONS]

show

Show information about a metadata type.

datacube metadata show [OPTIONS] [METADATA_TYPE_NAME]...

Options

-f <output_format>

Output format

Default

yaml

Options

yaml | json

Arguments

METADATA_TYPE_NAME

Optional argument(s)

update

Update existing metadata types.

An error will be thrown if a change is potentially unsafe.

(An unsafe change is anything that may potentially make the metadata type incompatible with existing types of the same name)

datacube metadata update [OPTIONS] [FILES]...

Options

--allow-unsafe, --forbid-unsafe

Allow unsafe updates (default: false)

--allow-exclusive-lock, --forbid-exclusive-lock

Allow index to be locked from other users while updating (default: false)

-d, --dry-run

Check that everything is OK without making any changes

Arguments

FILES

Optional argument(s)
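
For example, to preview an update and then apply it even if it is flagged unsafe (file name is a placeholder):

datacube metadata update --dry-run <metadata_type.yaml>
datacube metadata update --allow-unsafe <metadata_type.yaml>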

product

Product commands

datacube product [OPTIONS] COMMAND [ARGS]...

add

Add or update products in the generic index.

datacube product add [OPTIONS] [FILES]...

Options

--allow-exclusive-lock, --forbid-exclusive-lock

Allow index to be locked from other users while updating (default: false)

Arguments

FILES

Optional argument(s)
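
For example, to add a product from a definition document (file name is a placeholder):

datacube product add <product_definition.yaml>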

list

List products that are defined in the generic index.

datacube product list [OPTIONS]

Options

-f <output_format>

Output format

Default

default

Options

default | csv | yaml | tab
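
For example, to list all defined products as CSV:

datacube product list -f csv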

show

Show details about a product in the generic index.

datacube product show [OPTIONS] [PRODUCT_NAME]...

Options

-f <output_format>

Output format

Default

yaml

Options

yaml | json

Arguments

PRODUCT_NAME

Optional argument(s)

update

Update existing products.

An error will be thrown if a change is potentially unsafe.

(An unsafe change is anything that may potentially make the product incompatible with existing datasets of that type)

datacube product update [OPTIONS] [FILES]...

Options

--allow-unsafe, --forbid-unsafe

Allow unsafe updates (default: false)

--allow-exclusive-lock, --forbid-exclusive-lock

Allow index to be locked from other users while updating (default: false)

-d, --dry-run

Check that everything is OK without making any changes

Arguments

FILES

Optional argument(s)
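
For example, to check a product update first and then apply it while allowing unsafe changes (file name is a placeholder):

datacube product update --dry-run <product_definition.yaml>
datacube product update --allow-unsafe <product_definition.yaml>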

system

System commands

datacube system [OPTIONS] COMMAND [ARGS]...

check

Check and display current configuration

datacube system check [OPTIONS]

init

Initialise the database

datacube system init [OPTIONS]

Options

--default-types, --no-default-types

Add default types? (default: true)

--init-users, --no-init-users

Include user roles and grants. (default: true)

--recreate-views, --no-recreate-views

Recreate dynamic views

--rebuild, --no-rebuild

Rebuild all dynamic fields (caution: slow)

--lock-table, --no-lock-table

Allow table to be locked (e.g. while creating missing indexes)
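
For example, to initialise a database without adding the default metadata types:

datacube system init --no-default-types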

user

User management commands

datacube user [OPTIONS] COMMAND [ARGS]...

create

Create a User

datacube user create [OPTIONS] [user|ingest|manage|admin] USER

Options

--description <description>

Arguments

ROLE

Required argument

USER

Required argument
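
For example, to create a user with the manage role (the user name is a placeholder):

datacube user create --description "Example account" manage <username>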

delete

Delete a User

datacube user delete [OPTIONS] [USERS]...

Arguments

USERS

Optional argument(s)

grant

Grant a role to users

datacube user grant [OPTIONS] [user|ingest|manage|admin] [USERS]...

Arguments

ROLE

Required argument

USERS

Optional argument(s)
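
For example, to grant the ingest role to two users (user names are placeholders):

datacube user grant ingest <user1> <user2>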

list

List users

datacube user list [OPTIONS]

Options

-f <f>

Output format

Default

yaml

Options

csv | yaml

datacube-worker

datacube-worker [OPTIONS]

Options

--executor <executor>

(distributed | dask (alias for distributed) | celery) host:port

--nprocs <nprocs>

Number of worker processes to launch
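
For example, to start four worker processes connected to a distributed scheduler (the address is a placeholder):

datacube-worker --executor distributed <host:port> --nprocs 4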