List all datasets in the current BigQuery project
bq ls

Create a new BigQuery dataset
bq mk DATASET_NAME

Display schema and metadata for a BigQuery table
bq show DATASET.TABLE

Run a BigQuery query using standard SQL (GoogleSQL)
bq query --use_legacy_sql=false 'SELECT * FROM DATASET.TABLE'

Load a CSV file from Cloud Storage into a BigQuery table using a schema file
bq load DATASET.TABLE gs://BUCKET/FILE.csv SCHEMA.json

Export a BigQuery table to a CSV file in Cloud Storage
bq extract DATASET.TABLE gs://BUCKET/export.csv

Delete a BigQuery table without prompting for confirmation
bq rm -f DATASET.TABLE

Display the first rows of a BigQuery table
bq head DATASET.TABLE

Copy a BigQuery table to a new destination table
bq cp SOURCE_DATASET.SOURCE_TABLE DEST_DATASET.DEST_TABLE

Create a BigQuery table with an explicit schema file
bq mk --table DATASET.TABLE SCHEMA.json

Estimate the bytes processed by a BigQuery query without running it
bq query --dry_run --use_legacy_sql=false 'SELECT * FROM DATASET.TABLE'

Run a BigQuery query and write the results to a destination table
bq query --use_legacy_sql=false --destination_table=DATASET.RESULTS_TABLE 'SELECT * FROM DATASET.TABLE'

Load newline-delimited JSON data from Cloud Storage into BigQuery
bq load --source_format=NEWLINE_DELIMITED_JSON DATASET.TABLE gs://BUCKET/FILE.json SCHEMA.json

Create a BigQuery view with a standard SQL query
bq mk --use_legacy_sql=false --view 'SELECT id, name FROM DATASET.TABLE' DATASET.VIEW_NAME

Create a BigQuery dataset in a specific region such as the EU
bq mk --location=EU DATASET_NAME

Export a BigQuery table to compressed CSV files in Cloud Storage
bq extract --compression=GZIP --destination_format=CSV DATASET.TABLE gs://BUCKET/export_*.csv.gz

Run a BigQuery query that fails if it would process more than 1 GB of data
bq query --use_legacy_sql=false --maximum_bytes_billed=1000000000 'SELECT * FROM DATASET.TABLE'

Set a 30-day expiry on a BigQuery table (value in seconds)
bq update --expiration=2592000 DATASET.TABLE

Load Parquet files from Cloud Storage into BigQuery with schema autodetection
bq load --source_format=PARQUET --autodetect DATASET.TABLE gs://BUCKET/*.parquet

Create a BigQuery table partitioned by day using a schema file
bq mk --table --time_partitioning_type=DAY DATASET.PARTITIONED_TABLE SCHEMA.json
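The --expiration value above is a duration in seconds. A quick sanity check that 2592000 really is 30 days:

```shell
# 30 days x 24 hours x 60 minutes x 60 seconds
echo $((30 * 24 * 60 * 60))
# → 2592000
```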
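Several commands above (bq load, bq mk --table) take a SCHEMA.json file. BigQuery expects a JSON array of column definitions, each with a name, a type, and an optional mode; a minimal sketch with hypothetical field names:

```json
[
  {"name": "id", "type": "INTEGER", "mode": "REQUIRED"},
  {"name": "name", "type": "STRING", "mode": "NULLABLE"},
  {"name": "created_at", "type": "TIMESTAMP", "mode": "NULLABLE"}
]
```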