[
{
"name": "ABS",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nABS(X)\n```\n\n **Description** \n\nComputes absolute value. Returns an error if the argument is an integer and the\noutput value cannot be represented as the same type; this happens only for the\nlargest negative input value, which has no positive representation.\n\n| X | ABS(X) |\n| --- | --- |\n| 25 | 25 |\n| -25 | 25 |\n| `+inf` | `+inf` |\n| `-inf` | `+inf` |\n\n **Return Data Type** \n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| --- | --- | --- | --- | --- |\n| OUTPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n\n\n\n"
},
{
"name": "ACOS",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nACOS(X)\n```\n\n **Description** \n\nComputes the principal value of the inverse cosine of X. The return value is in\nthe range [0,π]. Generates an error if X is a value outside of the\nrange [-1, 1].\n\n| X | ACOS(X) |\n| --- | --- |\n| `+inf` | `NaN` |\n| `-inf` | `NaN` |\n| `NaN` | `NaN` |\n| X < -1 | Error |\n| X > 1 | Error |\n\n\n\n"
},
{
"name": "ACOSH",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nACOSH(X)\n```\n\n **Description** \n\nComputes the inverse hyperbolic cosine of X. Generates an error if X is a value\nless than 1.\n\n| X | ACOSH(X) |\n| --- | --- |\n| `+inf` | `+inf` |\n| `-inf` | `NaN` |\n| `NaN` | `NaN` |\n| X < 1 | Error |\n\n\n\n"
},
{
"name": "AEAD.DECRYPT_BYTES",
"arguments": [],
"category": "AEAD_encryption",
"description_markdown": "```\nAEAD.DECRYPT_BYTES(keyset, ciphertext, additional_data)\n```\n\n **Description** \n\nUses the matching key from `keyset` to decrypt `ciphertext` and verifies the\nintegrity of the data using `additional_data`. Returns an error if decryption or\nverification fails.\n\n`keyset` is a serialized `BYTES` value returned by one of the `KEYS` functions or a `STRUCT` returned by `KEYS.KEYSET_CHAIN`. `keyset` must contain the key that was used to\nencrypt `ciphertext`, and the key must be in an `'ENABLED'` state, or else the\nfunction returns an error. `AEAD.DECRYPT_BYTES` identifies the matching key\nin `keyset` by finding the key with the key ID that matches the one encrypted in `ciphertext`.\n\n`ciphertext` is a `BYTES` value that is the result of\na call to `AEAD.ENCRYPT` where the input `plaintext` was of type `BYTES`.\n\nIf `ciphertext` includes an initialization vector (IV),\nit should be the first bytes of `ciphertext`. If `ciphertext` includes an\nauthentication tag, it should be the last bytes of `ciphertext`. If the\nIV and authentication tag are combined (SIV), they should be the first bytes of `ciphertext`. The IV and authentication tag commonly require 16 bytes, but may\nvary in size.\n\n`additional_data` is a `STRING` or `BYTES` value that binds the ciphertext to\nits context. This forces the ciphertext to be decrypted in the same context in\nwhich it was encrypted. This function casts any `STRING` value to `BYTES`.\nThis must be the same as the `additional_data` provided to `AEAD.ENCRYPT` to\nencrypt `ciphertext`, ignoring its type, or else the function returns an error.\n\n **Return Data Type** \n\n`BYTES`\n\n **Example** \n\nThis example creates a table of unique IDs with associated plaintext values and\nkeysets. Then it uses these keysets to encrypt the plaintext values as `BYTES` and store them in a new table. Finally, it\nuses `AEAD.DECRYPT_BYTES` to decrypt the encrypted values and display them as\nplaintext.\n\nThe following statement creates a table `CustomerKeysets` containing a column of\nunique IDs, a column of `AEAD_AES_GCM_256` keysets, and a column of favorite\nanimals.\n\n```\nCREATE TABLE aead.CustomerKeysets AS\nSELECT\n 1 AS customer_id,\n KEYS.NEW_KEYSET('AEAD_AES_GCM_256') AS keyset,\n b'jaguar' AS favorite_animal\nUNION ALL\nSELECT\n 2 AS customer_id,\n KEYS.NEW_KEYSET('AEAD_AES_GCM_256') AS keyset,\n b'zebra' AS favorite_animal\nUNION ALL\nSELECT\n 3 AS customer_id,\n KEYS.NEW_KEYSET('AEAD_AES_GCM_256') AS keyset,\n b'nautilus' AS favorite_animal;\n```\n\nThe following statement creates a table `EncryptedCustomerData` containing a\ncolumn of unique IDs and a column of ciphertext. The statement encrypts the\nplaintext `favorite_animal` using the keyset value from `CustomerKeysets` corresponding to each unique ID.\n\n```\nCREATE TABLE aead.EncryptedCustomerData AS\nSELECT\n customer_id,\n AEAD.ENCRYPT(keyset, favorite_animal, CAST(CAST(customer_id AS STRING) AS BYTES))\n AS encrypted_animal\nFROM\n aead.CustomerKeysets AS ck;\n```\n\nThe following query uses the keysets in the `CustomerKeysets` table to decrypt\ndata in the `EncryptedCustomerData` table.\n\n```\nSELECT\n ecd.customer_id,\n AEAD.DECRYPT_BYTES(\n (SELECT ck.keyset\n FROM aead.CustomerKeysets AS ck\n WHERE ecd.customer_id = ck.customer_id),\n ecd.encrypted_animal,\n CAST(CAST(customer_id AS STRING) AS BYTES)\n ) AS favorite_animal\nFROM aead.EncryptedCustomerData AS ecd;\n```\n\n\n"
},
{
"name": "AEAD.DECRYPT_STRING",
"arguments": [],
"category": "AEAD_encryption",
"description_markdown": "```\nAEAD.DECRYPT_STRING(keyset, ciphertext, additional_data)\n```\n\n **Description** \n\nLike [AEAD.DECRYPT_BYTES](#aeaddecrypt_bytes), but where the decrypted `plaintext` is\nof type `STRING`.\n\n **Return Data Type** \n\n`STRING`\n\n\n\n"
},
{
"name": "AEAD.ENCRYPT",
"arguments": [],
"category": "AEAD_encryption",
"description_markdown": "```\nAEAD.ENCRYPT(keyset, plaintext, additional_data)\n```\n\n **Description** \n\nEncrypts `plaintext` using the primary cryptographic key in `keyset`. The\nalgorithm of the primary key must be `AEAD_AES_GCM_256`. Binds the ciphertext to\nthe context defined by `additional_data`. Returns `NULL` if any input is `NULL`.\n\n`keyset` is a serialized `BYTES` value returned by one of the `KEYS` functions or a `STRUCT` returned by `KEYS.KEYSET_CHAIN`.\n\n`plaintext` is the `STRING` or `BYTES` value to be encrypted.\n\n`additional_data` is a `STRING` or `BYTES` value that binds the ciphertext to\nits context. This forces the ciphertext to be decrypted in the same context in\nwhich it was encrypted. `plaintext` and `additional_data` must be of the same\ntype. `AEAD.ENCRYPT(keyset, string1, string2)` is equivalent to `AEAD.ENCRYPT(keyset, CAST(string1 AS BYTES), CAST(string2 AS BYTES))`.\n\nThe output is ciphertext `BYTES`. The ciphertext contains a [Tink-specific](https://github.com/google/tink/blob/master/docs/KEY-MANAGEMENT.md) prefix indicating the key used to perform the encryption.\n\n **Return Data Type** \n\n`BYTES`\n\n **Example** \n\nThe following query uses the keysets for each `customer_id` in the `CustomerKeysets` table to encrypt the value of the plaintext `favorite_animal` in the `PlaintextCustomerData` table corresponding to that `customer_id`. The\noutput contains a column of `customer_id` values and a column of\ncorresponding ciphertext output as `BYTES`.\n\n```\nWITH CustomerKeysets AS (\n SELECT 1 AS customer_id, KEYS.NEW_KEYSET('AEAD_AES_GCM_256') AS keyset UNION ALL\n SELECT 2, KEYS.NEW_KEYSET('AEAD_AES_GCM_256') UNION ALL\n SELECT 3, KEYS.NEW_KEYSET('AEAD_AES_GCM_256')\n), PlaintextCustomerData AS (\n SELECT 1 AS customer_id, 'elephant' AS favorite_animal UNION ALL\n SELECT 2, 'walrus' UNION ALL\n SELECT 3, 'leopard'\n)\nSELECT\n pcd.customer_id,\n AEAD.ENCRYPT(\n (SELECT keyset\n FROM CustomerKeysets AS ck\n WHERE ck.customer_id = pcd.customer_id),\n pcd.favorite_animal,\n CAST(pcd.customer_id AS STRING)\n ) AS encrypted_animal\nFROM PlaintextCustomerData AS pcd;\n```\n\n\n"
},
{
"name": "ANY_VALUE",
"arguments": [],
"category": "Aggregate",
"description_markdown": "```\nANY_VALUE(\n expression\n [ HAVING { MAX | MIN } expression2 ]\n)\n[ OVER over_clause ]\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n [ window_frame_clause ]\n```\n\n **Description** \n\nReturns `expression` for some row chosen from the group. Which row is chosen is\nnondeterministic, not random. Returns `NULL` when the input produces no\nrows. Returns `NULL` when `expression` or `expression2` is `NULL` for all rows in the group.\n\n`ANY_VALUE` behaves as if `IGNORE NULLS` is specified;\nrows for which `expression` is `NULL` are not considered and won't be\nselected.\n\nIf the `HAVING` clause is included in the `ANY_VALUE` function, the `OVER` clause can't be used with this function.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see [Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nTo learn more about the `OVER` clause and how to use it, see [Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Supported Argument Types** \n\nAny\n\n **Returned Data Types** \n\nMatches the input data type.\n\n **Examples** \n\n```\nSELECT ANY_VALUE(fruit) as any_value\nFROM UNNEST([\"apple\", \"banana\", \"pear\"]) as fruit;\n\n/*-----------*\n | any_value |\n +-----------+\n | apple |\n *-----------*/\n```\n\n```\nSELECT\n fruit,\n ANY_VALUE(fruit) OVER (ORDER BY LENGTH(fruit) ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) AS any_value\nFROM UNNEST([\"apple\", \"banana\", \"pear\"]) as fruit;\n\n/*--------+-----------*\n | fruit | any_value |\n +--------+-----------+\n | pear | pear |\n | apple | pear |\n | banana | apple |\n *--------+-----------*/\n```\n\n```\nWITH\n Store AS (\n SELECT 20 AS sold, \"apples\" AS fruit\n UNION ALL\n SELECT 30 AS sold, \"pears\" AS fruit\n UNION ALL\n SELECT 30 AS sold, \"bananas\" AS fruit\n UNION ALL\n SELECT 10 AS sold, \"oranges\" AS fruit\n )\nSELECT ANY_VALUE(fruit HAVING MAX sold) AS a_highest_selling_fruit FROM Store;\n\n/*-------------------------*\n | a_highest_selling_fruit |\n +-------------------------+\n | pears |\n *-------------------------*/\n```\n\n```\nWITH\n Store AS (\n SELECT 20 AS sold, \"apples\" AS fruit\n UNION ALL\n SELECT 30 AS sold, \"pears\" AS fruit\n UNION ALL\n SELECT 30 AS sold, \"bananas\" AS fruit\n UNION ALL\n SELECT 10 AS sold, \"oranges\" AS fruit\n )\nSELECT ANY_VALUE(fruit HAVING MIN sold) AS a_lowest_selling_fruit FROM Store;\n\n/*-------------------------*\n | a_lowest_selling_fruit |\n +-------------------------+\n | oranges |\n *-------------------------*/\n```\n\n\n"
},
{
"name": "APPENDS",
"arguments": [],
"category": "Table",
"description_markdown": "Gets all rows that are appended to a table for a given time range.\nFor more information, see [APPENDS TVF](/bigquery/docs/change-history#appends-tvf).\n\n\n\n"
},
{
"name": "APPROX_COUNT_DISTINCT",
"arguments": [],
"category": "Approximate_aggregate",
"description_markdown": "```\nAPPROX_COUNT_DISTINCT(\n expression\n)\n```\n\n **Description** \n\nReturns the approximate result for `COUNT(DISTINCT expression)`. The value\nreturned is a statistical estimate, not necessarily the actual value.\n\nThis function is less accurate than `COUNT(DISTINCT expression)`, but performs\nbetter on huge input.\n\n **Supported Argument Types** \n\nAny data type **except**:\n\n- `ARRAY`\n- `STRUCT`\n- `INTERVAL`\n\n **Returned Data Types** \n\n`INT64`\n\n **Examples** \n\n```\nSELECT APPROX_COUNT_DISTINCT(x) as approx_distinct\nFROM UNNEST([0, 1, 1, 2, 3, 5]) as x;\n\n/*-----------------*\n | approx_distinct |\n +-----------------+\n | 5 |\n *-----------------*/\n```\n\n\n"
},
{
"name": "APPROX_QUANTILES",
"arguments": [],
"category": "Approximate_aggregate",
"description_markdown": "```\nAPPROX_QUANTILES(\n [ DISTINCT ]\n expression, number\n [ { IGNORE | RESPECT } NULLS ]\n)\n```\n\n **Description** \n\nReturns the approximate boundaries for a group of `expression` values, where `number` represents the number of quantiles to create. This function returns an\narray of `number` + 1 elements, sorted in ascending order, where the\nfirst element is the approximate minimum and the last element is the approximate\nmaximum.\n\nReturns `NULL` if there are zero input rows or `expression` evaluates to `NULL` for all rows.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see [Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\n **Supported Argument Types** \n\n- `expression`: Any supported data type **except**:\n\n - `ARRAY`\n - `STRUCT`\n - `INTERVAL`\n- `number`: `INT64` literal or query parameter.\n\n **Returned Data Types** \n\n`ARRAY<T>` where `T` is the type specified by `expression`.\n\n **Examples** \n\n```\nSELECT APPROX_QUANTILES(x, 2) AS approx_quantiles\nFROM UNNEST([1, 1, 1, 4, 5, 6, 7, 8, 9, 10]) AS x;\n\n/*------------------*\n | approx_quantiles |\n +------------------+\n | [1, 5, 10] |\n *------------------*/\n```\n\n```\nSELECT APPROX_QUANTILES(x, 100)[OFFSET(90)] AS percentile_90\nFROM UNNEST([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) AS x;\n\n/*---------------*\n | percentile_90 |\n +---------------+\n | 9 |\n *---------------*/\n```\n\n```\nSELECT APPROX_QUANTILES(DISTINCT x, 2) AS approx_quantiles\nFROM UNNEST([1, 1, 1, 4, 5, 6, 7, 8, 9, 10]) AS x;\n\n/*------------------*\n | approx_quantiles |\n +------------------+\n | [1, 6, 10] |\n *------------------*/\n```\n\n```\nSELECT FORMAT(\"%T\", APPROX_QUANTILES(x, 2 RESPECT NULLS)) AS approx_quantiles\nFROM UNNEST([NULL, NULL, 1, 1, 1, 4, 5, 6, 7, 8, 9, 10]) AS x;\n\n/*------------------*\n | approx_quantiles |\n +------------------+\n | [NULL, 4, 10] |\n *------------------*/\n```\n\n```\nSELECT FORMAT(\"%T\", APPROX_QUANTILES(DISTINCT x, 2 RESPECT NULLS)) AS approx_quantiles\nFROM UNNEST([NULL, NULL, 1, 1, 1, 4, 5, 6, 7, 8, 9, 10]) AS x;\n\n/*------------------*\n | approx_quantiles |\n +------------------+\n | [NULL, 6, 10] |\n *------------------*/\n```\n\n\n"
},
{
"name": "APPROX_TOP_COUNT",
"arguments": [],
"category": "Approximate_aggregate",
"description_markdown": "```\nAPPROX_TOP_COUNT(\n expression, number\n)\n```\n\n **Description** \n\nReturns the approximate top elements of `expression` as an array of `STRUCT`s.\nThe `number` parameter specifies the number of elements returned.\n\nEach `STRUCT` contains two fields. The first field (named `value`) contains an\ninput value. The second field (named `count`) contains an `INT64` specifying the\nnumber of times the value was returned.\n\nReturns `NULL` if there are zero input rows.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see [Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\n **Supported Argument Types** \n\n- `expression`: Any data type that the `GROUP BY` clause supports.\n- `number`: `INT64` literal or query parameter.\n\n **Returned Data Types** \n\n`ARRAY<STRUCT>`\n\n **Examples** \n\n```\nSELECT APPROX_TOP_COUNT(x, 2) as approx_top_count\nFROM UNNEST([\"apple\", \"apple\", \"pear\", \"pear\", \"pear\", \"banana\"]) as x;\n\n/*-------------------------*\n | approx_top_count |\n +-------------------------+\n | [{pear, 3}, {apple, 2}] |\n *-------------------------*/\n```\n\n **NULL handling** \n\n`APPROX_TOP_COUNT` does not ignore `NULL`s in the input. For example:\n\n```\nSELECT APPROX_TOP_COUNT(x, 2) as approx_top_count\nFROM UNNEST([NULL, \"pear\", \"pear\", \"pear\", \"apple\", NULL]) as x;\n\n/*------------------------*\n | approx_top_count |\n +------------------------+\n | [{pear, 3}, {NULL, 2}] |\n *------------------------*/\n```\n\n\n"
},
{
"name": "APPROX_TOP_SUM",
"arguments": [],
"category": "Approximate_aggregate",
"description_markdown": "```\nAPPROX_TOP_SUM(\n expression, weight, number\n)\n```\n\n **Description** \n\nReturns the approximate top elements of `expression`, based on the sum of an\nassigned `weight`. The `number` parameter specifies the number of elements\nreturned.\n\nIf the `weight` input is negative or `NaN`, this function returns an error.\n\nThe elements are returned as an array of `STRUCT`s.\nEach `STRUCT` contains two fields: `value` and `sum`.\nThe `value` field contains the value of the input expression. The `sum` field is\nthe same type as `weight`, and is the approximate sum of the input weight\nassociated with the `value` field.\n\nReturns `NULL` if there are zero input rows.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see [Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\n **Supported Argument Types** \n\n- `expression`: Any data type that the `GROUP BY` clause supports.\n- `weight`: One of the following:\n\n - `INT64`\n - `NUMERIC`\n - `BIGNUMERIC`\n - `FLOAT64`\n- `number`: `INT64` literal or query parameter.\n\n **Returned Data Types** \n\n`ARRAY<STRUCT>`\n\n **Examples** \n\n```\nSELECT APPROX_TOP_SUM(x, weight, 2) AS approx_top_sum FROM\nUNNEST([\n STRUCT(\"apple\" AS x, 3 AS weight),\n (\"pear\", 2),\n (\"apple\", 0),\n (\"banana\", 5),\n (\"pear\", 4)\n]);\n\n/*--------------------------*\n | approx_top_sum |\n +--------------------------+\n | [{pear, 6}, {banana, 5}] |\n *--------------------------*/\n```\n\n **NULL handling** \n\n`APPROX_TOP_SUM` does not ignore `NULL` values for the `expression` and `weight` parameters.\n\n```\nSELECT APPROX_TOP_SUM(x, weight, 2) AS approx_top_sum FROM\nUNNEST([STRUCT(\"apple\" AS x, NULL AS weight), (\"pear\", 0), (\"pear\", NULL)]);\n\n/*----------------------------*\n | approx_top_sum |\n +----------------------------+\n | [{pear, 0}, {apple, NULL}] |\n *----------------------------*/\n```\n\n```\nSELECT APPROX_TOP_SUM(x, weight, 2) AS approx_top_sum FROM\nUNNEST([STRUCT(\"apple\" AS x, 0 AS weight), (NULL, 2)]);\n\n/*-------------------------*\n | approx_top_sum |\n +-------------------------+\n | [{NULL, 2}, {apple, 0}] |\n *-------------------------*/\n```\n\n```\nSELECT APPROX_TOP_SUM(x, weight, 2) AS approx_top_sum FROM\nUNNEST([STRUCT(\"apple\" AS x, 0 AS weight), (NULL, NULL)]);\n\n/*----------------------------*\n | approx_top_sum |\n +----------------------------+\n | [{apple, 0}, {NULL, NULL}] |\n *----------------------------*/\n```\n\n\n"
},
{
"name": "ARRAY",
"arguments": [],
"category": "Array",
"description_markdown": "```\nARRAY(subquery)\n```\n\n **Description** \n\nThe `ARRAY` function returns an `ARRAY` with one element for each row in a [subquery](/bigquery/docs/reference/standard-sql/subqueries).\n\nIf `subquery` produces a\nSQL table,\nthe table must have exactly one column. Each element in the output `ARRAY` is\nthe value of the single column of a row in the table.\n\nIf `subquery` produces a\nvalue table,\nthen each element in the output `ARRAY` is the entire corresponding row of the\nvalue table.\n\n **Constraints** \n\n- Subqueries are unordered, so the elements of the output `ARRAY` are not\nguaranteed to preserve any order in the source table for the subquery. However,\nif the subquery includes an `ORDER BY` clause, the `ARRAY` function will return\nan `ARRAY` that honors that clause.\n- If the subquery returns more than one column, the `ARRAY` function returns an\nerror.\n- If the subquery returns an `ARRAY`-typed column or `ARRAY`-typed rows, the `ARRAY` function returns an error because GoogleSQL does not support `ARRAY`s with elements of type [ARRAY](/bigquery/docs/reference/standard-sql/data-types#array_type).\n- If the subquery returns zero rows, the `ARRAY` function returns an empty `ARRAY`. It never returns a `NULL` `ARRAY`.\n\n **Return type** \n\n`ARRAY`\n\n **Examples** \n\n```\nSELECT ARRAY\n (SELECT 1 UNION ALL\n SELECT 2 UNION ALL\n SELECT 3) AS new_array;\n\n/*-----------*\n | new_array |\n +-----------+\n | [1, 2, 3] |\n *-----------*/\n```\n\nTo construct an `ARRAY` from a subquery that contains multiple\ncolumns, change the subquery to use `SELECT AS STRUCT`. Now\nthe `ARRAY` function will return an `ARRAY` of `STRUCT`s. The `ARRAY` will\ncontain one `STRUCT` for each row in the subquery, and each of these `STRUCT`s\nwill contain a field for each column in that row.\n\n```\nSELECT\n ARRAY\n (SELECT AS STRUCT 1, 2, 3\n UNION ALL SELECT AS STRUCT 4, 5, 6) AS new_array;\n\n/*------------------------*\n | new_array |\n +------------------------+\n | [{1, 2, 3}, {4, 5, 6}] |\n *------------------------*/\n```\n\nSimilarly, to construct an `ARRAY` from a subquery that contains\none or more `ARRAY`s, change the subquery to use `SELECT AS STRUCT`.\n\n```\nSELECT ARRAY\n (SELECT AS STRUCT [1, 2, 3] UNION ALL\n SELECT AS STRUCT [4, 5, 6]) AS new_array;\n\n/*----------------------------*\n | new_array |\n +----------------------------+\n | [{[1, 2, 3]}, {[4, 5, 6]}] |\n *----------------------------*/\n```\n\n\n"
},
{
"name": "ARRAY_AGG",
"arguments": [],
"category": "Aggregate",
"description_markdown": "```\nARRAY_AGG(\n [ DISTINCT ]\n expression\n [ { IGNORE | RESPECT } NULLS ]\n [ ORDER BY key [ { ASC | DESC } ] [, ... ] ]\n [ LIMIT n ]\n)\n[ OVER over_clause ]\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n [ window_frame_clause ]\n```\n\n **Description** \n\nReturns an `ARRAY` of `expression` values.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see [Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nIf this function is used with the `OVER` clause, it's part of a\nwindow function call. In a window function call,\naggregate function clauses can't be used.\nTo learn more about the `OVER` clause and how to use it, see [Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\nAn error is raised if an array in the final query result contains a `NULL` element.\n\n **Supported Argument Types** \n\nAll data types except `ARRAY`.\n\n **Returned Data Types** \n\n`ARRAY`\n\nIf there are zero input rows, this function returns `NULL`.\n\n **Examples** \n\n```\nSELECT ARRAY_AGG(x) AS array_agg FROM UNNEST([2, 1, -2, 3, -2, 1, 2]) AS x;\n\n/*-------------------------*\n | array_agg |\n +-------------------------+\n | [2, 1, -2, 3, -2, 1, 2] |\n *-------------------------*/\n```\n\n```\nSELECT ARRAY_AGG(DISTINCT x) AS array_agg\nFROM UNNEST([2, 1, -2, 3, -2, 1, 2]) AS x;\n\n/*---------------*\n | array_agg |\n +---------------+\n | [2, 1, -2, 3] |\n *---------------*/\n```\n\n```\nSELECT ARRAY_AGG(x IGNORE NULLS) AS array_agg\nFROM UNNEST([NULL, 1, -2, 3, -2, 1, NULL]) AS x;\n\n/*-------------------*\n | array_agg |\n +-------------------+\n | [1, -2, 3, -2, 1] |\n *-------------------*/\n```\n\n```\nSELECT ARRAY_AGG(x ORDER BY ABS(x)) AS array_agg\nFROM UNNEST([2, 1, -2, 3, -2, 1, 2]) AS x;\n\n/*-------------------------*\n | array_agg |\n +-------------------------+\n | [1, 1, 2, -2, -2, 2, 3] |\n *-------------------------*/\n```\n\n```\nSELECT ARRAY_AGG(x LIMIT 5) AS array_agg\nFROM UNNEST([2, 1, -2, 3, -2, 1, 2]) AS x;\n\n/*-------------------*\n | array_agg |\n +-------------------+\n | [2, 1, -2, 3, -2] |\n *-------------------*/\n```\n\n```\nWITH vals AS\n (\n SELECT 1 x UNION ALL\n SELECT -2 x UNION ALL\n SELECT 3 x UNION ALL\n SELECT -2 x UNION ALL\n SELECT 1 x\n )\nSELECT ARRAY_AGG(DISTINCT x ORDER BY x) as array_agg\nFROM vals;\n\n/*------------*\n | array_agg |\n +------------+\n | [-2, 1, 3] |\n *------------*/\n```\n\n```\nWITH vals AS\n (\n SELECT 1 x, 'a' y UNION ALL\n SELECT 1 x, 'b' y UNION ALL\n SELECT 2 x, 'a' y UNION ALL\n SELECT 2 x, 'c' y\n )\nSELECT x, ARRAY_AGG(y) as array_agg\nFROM vals\nGROUP BY x;\n\n/*---------------*\n | x | array_agg |\n +---------------+\n | 1 | [a, b] |\n | 2 | [a, c] |\n *---------------*/\n```\n\n```\nSELECT\n x,\n ARRAY_AGG(x) OVER (ORDER BY ABS(x)) AS array_agg\nFROM UNNEST([2, 1, -2, 3, -2, 1, 2]) AS x;\n\n/*----+-------------------------*\n | x | array_agg |\n +----+-------------------------+\n | 1 | [1, 1] |\n | 1 | [1, 1] |\n | 2 | [1, 1, 2, -2, -2, 2] |\n | -2 | [1, 1, 2, -2, -2, 2] |\n | -2 | [1, 1, 2, -2, -2, 2] |\n | 2 | [1, 1, 2, -2, -2, 2] |\n | 3 | [1, 1, 2, -2, -2, 2, 3] |\n *----+-------------------------*/\n```\n\n\n"
},
{
"name": "ARRAY_CONCAT",
"arguments": [],
"category": "Array",
"description_markdown": "```\nARRAY_CONCAT(array_expression[, ...])\n```\n\n **Description** \n\nConcatenates one or more arrays with the same element type into a single array.\n\nThe function returns `NULL` if any input argument is `NULL`.\n\n **Note:** You can also use the [|| concatenation operator](#operators) to concatenate arrays.\n\n **Return type** \n\n`ARRAY`\n\n **Examples** \n\n```\nSELECT ARRAY_CONCAT([1, 2], [3, 4], [5, 6]) as count_to_six;\n\n/*--------------------------------------------------*\n | count_to_six |\n +--------------------------------------------------+\n | [1, 2, 3, 4, 5, 6] |\n *--------------------------------------------------*/\n```\n\n\n"
},
{
"name": "ARRAY_CONCAT_AGG",
"arguments": [],
"category": "Aggregate",
"description_markdown": "```\nARRAY_CONCAT_AGG(\n expression\n [ ORDER BY key [ { ASC | DESC } ] [, ... ] ]\n [ LIMIT n ]\n)\n```\n\n **Description** \n\nConcatenates elements from `expression` of type `ARRAY`, returning a single\narray as a result.\n\nThis function ignores `NULL` input arrays, but respects the `NULL` elements in\nnon-`NULL` input arrays. An\nerror is raised, however, if an array in the final query result contains a `NULL` element. Returns `NULL` if there are zero input rows or `expression` evaluates to `NULL` for all rows.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see [Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\n **Supported Argument Types** \n\n`ARRAY`\n\n **Returned Data Types** \n\n`ARRAY`\n\n **Examples** \n\n```\nSELECT FORMAT(\"%T\", ARRAY_CONCAT_AGG(x)) AS array_concat_agg FROM (\n SELECT [NULL, 1, 2, 3, 4] AS x\n UNION ALL SELECT NULL\n UNION ALL SELECT [5, 6]\n UNION ALL SELECT [7, 8, 9]\n);\n\n/*-----------------------------------*\n | array_concat_agg |\n +-----------------------------------+\n | [NULL, 1, 2, 3, 4, 5, 6, 7, 8, 9] |\n *-----------------------------------*/\n```\n\n```\nSELECT FORMAT(\"%T\", ARRAY_CONCAT_AGG(x ORDER BY ARRAY_LENGTH(x))) AS array_concat_agg FROM (\n SELECT [1, 2, 3, 4] AS x\n UNION ALL SELECT [5, 6]\n UNION ALL SELECT [7, 8, 9]\n);\n\n/*-----------------------------------*\n | array_concat_agg |\n +-----------------------------------+\n | [5, 6, 7, 8, 9, 1, 2, 3, 4] |\n *-----------------------------------*/\n```\n\n```\nSELECT FORMAT(\"%T\", ARRAY_CONCAT_AGG(x LIMIT 2)) AS array_concat_agg FROM (\n SELECT [1, 2, 3, 4] AS x\n UNION ALL SELECT [5, 6]\n UNION ALL SELECT [7, 8, 9]\n);\n\n/*--------------------------*\n | array_concat_agg |\n +--------------------------+\n | [1, 2, 3, 4, 5, 6] |\n *--------------------------*/\n```\n\n```\nSELECT FORMAT(\"%T\", ARRAY_CONCAT_AGG(x ORDER BY ARRAY_LENGTH(x) LIMIT 2)) AS array_concat_agg FROM (\n SELECT [1, 2, 3, 4] AS x\n UNION ALL SELECT [5, 6]\n UNION ALL SELECT [7, 8, 9]\n);\n\n/*------------------*\n | array_concat_agg |\n +------------------+\n | [5, 6, 7, 8, 9] |\n *------------------*/\n```\n\n\n"
},
{
"name": "ARRAY_LENGTH",
"arguments": [],
"category": "Array",
"description_markdown": "```\nARRAY_LENGTH(array_expression)\n```\n\n **Description** \n\nReturns the size of the array. Returns 0 for an empty array. Returns `NULL` if\nthe `array_expression` is `NULL`.\n\n **Return type** \n\n`INT64`\n\n **Examples** \n\n```\nWITH items AS\n (SELECT [\"coffee\", NULL, \"milk\" ] as list\n UNION ALL\n SELECT [\"cake\", \"pie\"] as list)\nSELECT ARRAY_TO_STRING(list, ', ', 'NULL'), ARRAY_LENGTH(list) AS size\nFROM items\nORDER BY size DESC;\n\n/*--------------------+------*\n | list | size |\n +--------------------+------+\n | coffee, NULL, milk | 3 |\n | cake, pie | 2 |\n *--------------------+------*/\n```\n\n\n"
},
{
"name": "ARRAY_REVERSE",
"arguments": [],
"category": "Array",
"description_markdown": "```\nARRAY_REVERSE(value)\n```\n\n **Description** \n\nReturns the input `ARRAY` with elements in reverse order.\n\n **Return type** \n\n`ARRAY`\n\n **Examples** \n\n```\nWITH example AS (\n SELECT [1, 2, 3] AS arr UNION ALL\n SELECT [4, 5] AS arr UNION ALL\n SELECT [] AS arr\n)\nSELECT\n arr,\n ARRAY_REVERSE(arr) AS reverse_arr\nFROM example;\n\n/*-----------+-------------*\n | arr | reverse_arr |\n +-----------+-------------+\n | [1, 2, 3] | [3, 2, 1] |\n | [4, 5] | [5, 4] |\n | [] | [] |\n *-----------+-------------*/\n```\n\n\n"
},
{
"name": "ARRAY_TO_STRING",
"arguments": [],
"category": "Array",
"description_markdown": "```\nARRAY_TO_STRING(array_expression, delimiter[, null_text])\n```\n\n **Description** \n\nReturns a concatenation of the elements in `array_expression` as a `STRING`. The value for `array_expression` can either be an array of `STRING` or `BYTES` data types.\n\nIf the `null_text` parameter is used, the function replaces any `NULL` values in\nthe array with the value of `null_text`.\n\nIf the `null_text` parameter is not used, the function omits the `NULL` value\nand its preceding delimiter.\n\n **Return type** \n\n`STRING`\n\n **Examples** \n\n```\nWITH items AS\n (SELECT ['coffee', 'tea', 'milk' ] as list\n UNION ALL\n SELECT ['cake', 'pie', NULL] as list)\n\nSELECT ARRAY_TO_STRING(list, '--') AS text\nFROM items;\n\n/*--------------------------------*\n | text |\n +--------------------------------+\n | coffee--tea--milk |\n | cake--pie |\n *--------------------------------*/\n```\n\n```\nWITH items AS\n (SELECT ['coffee', 'tea', 'milk' ] as list\n UNION ALL\n SELECT ['cake', 'pie', NULL] as list)\n\nSELECT ARRAY_TO_STRING(list, '--', 'MISSING') AS text\nFROM items;\n\n/*--------------------------------*\n | text |\n +--------------------------------+\n | coffee--tea--milk |\n | cake--pie--MISSING |\n *--------------------------------*/\n```\n\n\n"
},
{
"name": "ASCII",
"arguments": [],
"category": "String",
"description_markdown": "```\nASCII(value)\n```\n\n **Description** \n\nReturns the ASCII code for the first character or byte in `value`. Returns `0` if `value` is empty or the ASCII code is `0` for the first character\nor byte.\n\n **Return type** \n\n`INT64`\n\n **Examples** \n\n```\nSELECT ASCII('abcd') as A, ASCII('a') as B, ASCII('') as C, ASCII(NULL) as D;\n\n/*-------+-------+-------+-------*\n | A | B | C | D |\n +-------+-------+-------+-------+\n | 97 | 97 | 0 | NULL |\n *-------+-------+-------+-------*/\n```\n\n\n"
},
{
"name": "ASIN",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nASIN(X)\n```\n\n **Description** \n\nComputes the principal value of the inverse sine of X. The return value is in\nthe range [-π/2,π/2]. Generates an error if X is outside of\nthe range [-1, 1].\n\n| X | ASIN(X) |\n| --- | --- |\n| `+inf` | `NaN` |\n| `-inf` | `NaN` |\n| `NaN` | `NaN` |\n| X < -1 | Error |\n| X > 1 | Error |\n\n\n\n"
},
{
"name": "ASINH",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nASINH(X)\n```\n\n **Description** \n\nComputes the inverse hyperbolic sine of X. Does not fail.\n\n| X | ASINH(X) |\n| --- | --- |\n| `+inf` | `+inf` |\n| `-inf` | `-inf` |\n| `NaN` | `NaN` |\n\n\n\n"
},
{
"name": "ATAN",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nATAN(X)\n```\n\n **Description** \n\nComputes the principal value of the inverse tangent of X. The return value is\nin the range [-π/2,π/2]. Does not fail.\n\n| X | ATAN(X) |\n| --- | --- |\n| `+inf` | π/2 |\n| `-inf` | -π/2 |\n| `NaN` | `NaN` |\n\n\n\n"
},
{
"name": "ATAN2",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nATAN2(X, Y)\n```\n\n **Description** \n\nCalculates the principal value of the inverse tangent of X/Y using the signs of\nthe two arguments to determine the quadrant. The return value is in the range\n[-π,π].\n\n| X | Y | ATAN2(X, Y) |\n| --- | --- | --- |\n| `NaN` | Any value | `NaN` |\n| Any value | `NaN` | `NaN` |\n| 0.0 | 0.0 | 0.0 |\n| Positive Finite value | `-inf` | π |\n| Negative Finite value | `-inf` | -π |\n| Finite value | `+inf` | 0.0 |\n| `+inf` | Finite value | π/2 |\n| `-inf` | Finite value | -π/2 |\n| `+inf` | `-inf` | ¾π |\n| `-inf` | `-inf` | -¾π |\n| `+inf` | `+inf` | π/4 |\n| `-inf` | `+inf` | -π/4 |\n\n\n\n"
},
{
"name": "ATANH",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nATANH(X)\n```\n\n **Description** \n\nComputes the inverse hyperbolic tangent of X. Generates an error if X is outside\nof the range (-1, 1).\n\n| X | ATANH(X) |\n| --- | --- |\n| `+inf` | `NaN` |\n| `-inf` | `NaN` |\n| `NaN` | `NaN` |\n| X < -1 | Error |\n| X > 1 | Error |\n\n\n\n"
},
{
"name": "AVG",
"arguments": [],
"category": "Aggregate",
"description_markdown": "```\nAVG(\n [ DISTINCT ]\n expression\n)\n[ OVER over_clause ]\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n [ window_frame_clause ]\n```\n\n **Description** \n\nReturns the average of non-`NULL`values in an aggregated group.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nThis function can be used with the[AGGREGATION_THRESHOLD clause](/bigquery/docs/reference/standard-sql/query-syntax#agg_threshold_clause).\n\nIf this function is used with the`OVER`clause, it's part of a\nwindow function call. In a window function call,\naggregate function clauses can't be used.\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n`AVG`can be used with differential privacy. 
For more information, see[Differentially private aggregate functions](#aggregate-dp-functions).\n\nCaveats:\n\n- If the aggregated group is empty or the argument is` NULL`for all rows in\nthe group, returns` NULL`.\n- If the argument is` NaN`for any row in the group, returns` NaN`.\n- If the argument is` [+|-]Infinity`for any row in the group, returns either` [+|-]Infinity`or` NaN`.\n- If there is numeric overflow, produces an error.\n- If a[floating-point type](/bigquery/docs/reference/standard-sql/data-types#floating_point_types)is returned, the result is[non-deterministic](/bigquery/docs/reference/standard-sql/data-types#floating-point-semantics), which means you might receive a\ndifferent result each time you use this function.\n\n **Supported Argument Types** \n\n- Any numeric input type\n- ` INTERVAL`\n\n **Returned Data Types** \n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` | `INTERVAL` |\n| --- | --- | --- | --- | --- | --- |\n| OUTPUT | `FLOAT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` | `INTERVAL` |\n\n **Examples** \n\n```\nSELECT AVG(x) as avg\nFROM UNNEST([0, 2, 4, 4, 5]) as x;\n\n/*-----*\n | avg |\n +-----+\n | 3 |\n *-----*/\n```\n\n```\nSELECT AVG(DISTINCT x) AS avg\nFROM UNNEST([0, 2, 4, 4, 5]) AS x;\n\n/*------*\n | avg |\n +------+\n | 2.75 |\n *------*/\n```\n\n```\nSELECT\n x,\n AVG(x) OVER (ORDER BY x ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) AS avg\nFROM UNNEST([0, 2, NULL, 4, 4, 5]) AS x;\n\n/*------+------*\n | x | avg |\n +------+------+\n | NULL | NULL |\n | 0 | 0 |\n | 2 | 1 |\n | 4 | 3 |\n | 4 | 4 |\n | 5 | 4.5 |\n *------+------*/\n```\n\n\n"
},
{
"name": "BIT_AND",
"arguments": [],
"category": "Aggregate",
"description_markdown": "```\nBIT_AND(\n expression\n)\n```\n\n **Description** \n\nPerforms a bitwise AND operation on`expression`and returns the result.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\n **Supported Argument Types** \n\n- INT64\n\n **Returned Data Types** \n\nINT64\n\n **Examples** \n\n```\nSELECT BIT_AND(x) as bit_and FROM UNNEST([0xF001, 0x00A1]) as x;\n\n/*---------*\n | bit_and |\n +---------+\n | 1 |\n *---------*/\n```\n\n\n"
},
{
"name": "BIT_COUNT",
"arguments": [],
"category": "Bit",
"description_markdown": "```\nBIT_COUNT(expression)\n```\n\n **Description** \n\nThe input,`expression`, must be an\ninteger or`BYTES`.\n\nReturns the number of bits that are set in the input`expression`.\nFor signed integers, this is the number of bits in two's complement form.\n\n **Return Data Type** \n\n`INT64`\n\n **Example** \n\n```\nSELECT a, BIT_COUNT(a) AS a_bits, FORMAT(\"%T\", b) as b, BIT_COUNT(b) AS b_bits\nFROM UNNEST([\n STRUCT(0 AS a, b'' AS b), (0, b'\\x00'), (5, b'\\x05'), (8, b'\\x00\\x08'),\n (0xFFFF, b'\\xFF\\xFF'), (-2, b'\\xFF\\xFF\\xFF\\xFF\\xFF\\xFF\\xFF\\xFE'),\n (-1, b'\\xFF\\xFF\\xFF\\xFF\\xFF\\xFF\\xFF\\xFF'),\n (NULL, b'\\xFF\\xFF\\xFF\\xFF\\xFF\\xFF\\xFF\\xFF\\xFF\\xFF')\n]) AS x;\n\n/*-------+--------+---------------------------------------------+--------*\n | a | a_bits | b | b_bits |\n +-------+--------+---------------------------------------------+--------+\n | 0 | 0 | b\"\" | 0 |\n | 0 | 0 | b\"\\x00\" | 0 |\n | 5 | 2 | b\"\\x05\" | 2 |\n | 8 | 1 | b\"\\x00\\x08\" | 1 |\n | 65535 | 16 | b\"\\xff\\xff\" | 16 |\n | -2 | 63 | b\"\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xfe\" | 63 |\n | -1 | 64 | b\"\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\" | 64 |\n | NULL | NULL | b\"\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\" | 80 |\n *-------+--------+---------------------------------------------+--------*/\n```\n\n\n"
},
{
"name": "BIT_OR",
"arguments": [],
"category": "Aggregate",
"description_markdown": "```\nBIT_OR(\n expression\n)\n```\n\n **Description** \n\nPerforms a bitwise OR operation on`expression`and returns the result.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\n **Supported Argument Types** \n\n- INT64\n\n **Returned Data Types** \n\nINT64\n\n **Examples** \n\n```\nSELECT BIT_OR(x) as bit_or FROM UNNEST([0xF001, 0x00A1]) as x;\n\n/*--------*\n | bit_or |\n +--------+\n | 61601 |\n *--------*/\n```\n\n\n"
},
{
"name": "BIT_XOR",
"arguments": [],
"category": "Aggregate",
"description_markdown": "```\nBIT_XOR(\n [ DISTINCT ]\n expression\n)\n```\n\n **Description** \n\nPerforms a bitwise XOR operation on`expression`and returns the result.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\n **Supported Argument Types** \n\n- INT64\n\n **Returned Data Types** \n\nINT64\n\n **Examples** \n\n```\nSELECT BIT_XOR(x) AS bit_xor FROM UNNEST([5678, 1234]) AS x;\n\n/*---------*\n | bit_xor |\n +---------+\n | 4860 |\n *---------*/\n```\n\n```\nSELECT BIT_XOR(x) AS bit_xor FROM UNNEST([1234, 5678, 1234]) AS x;\n\n/*---------*\n | bit_xor |\n +---------+\n | 5678 |\n *---------*/\n```\n\n```\nSELECT BIT_XOR(DISTINCT x) AS bit_xor FROM UNNEST([1234, 5678, 1234]) AS x;\n\n/*---------*\n | bit_xor |\n +---------+\n | 4860 |\n *---------*/\n```\n\n\n"
},
{
"name": "BOOL",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nBOOL(json_expr)\n```\n\n **Description** \n\nConverts a JSON boolean to a SQL`BOOL`value.\n\nArguments:\n\n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON 'true'\n ```\n \n If the JSON value is not a boolean, an error is produced. If the expression\nis SQL` NULL`, the function returns SQL` NULL`.\n \n \n\n **Return type** \n\n`BOOL`\n\n **Examples** \n\n```\nSELECT BOOL(JSON 'true') AS vacancy;\n\n/*---------*\n | vacancy |\n +---------+\n | true |\n *---------*/\n```\n\n```\nSELECT BOOL(JSON_QUERY(JSON '{\"hotel class\": \"5-star\", \"vacancy\": true}', \"$.vacancy\")) AS vacancy;\n\n/*---------*\n | vacancy |\n +---------+\n | true |\n *---------*/\n```\n\nThe following examples show how invalid requests are handled:\n\n```\n-- An error is thrown if JSON is not of type bool.\nSELECT BOOL(JSON '123') AS result; -- Throws an error\nSELECT BOOL(JSON 'null') AS result; -- Throws an error\nSELECT SAFE.BOOL(JSON '123') AS result; -- Returns a SQL NULL\n```\n\n\n"
},
{
"name": "BYTE_LENGTH",
"arguments": [],
"category": "String",
"description_markdown": "```\nBYTE_LENGTH(value)\n```\n\n **Description** \n\nGets the number of`BYTES`in a`STRING`or`BYTES`value,\nregardless of whether the value is a`STRING`or`BYTES`type.\n\n **Return type** \n\n`INT64`\n\n **Examples** \n\n```\nWITH example AS\n (SELECT 'абвгд' AS characters, b'абвгд' AS bytes)\n\nSELECT\n characters,\n BYTE_LENGTH(characters) AS string_example,\n bytes,\n BYTE_LENGTH(bytes) AS bytes_example\nFROM example;\n\n/*------------+----------------+-------+---------------*\n | characters | string_example | bytes | bytes_example |\n +------------+----------------+-------+---------------+\n | абвгд | 10 | абвгд | 10 |\n *------------+----------------+-------+---------------*/\n```\n\n\n"
},
{
"name": "CAST",
"arguments": [],
"category": "Conversion",
"description_markdown": "```\nCAST(expression AS typename [format_clause])\n```\n\n **Description** \n\nCast syntax is used in a query to indicate that the result type of an\nexpression should be converted to some other type.\n\nWhen using`CAST`, a query can fail if GoogleSQL is unable to perform\nthe cast. If you want to protect your queries from these types of errors, you\ncan use[SAFE_CAST](#safe_casting).\n\nCasts between supported types that do not successfully map from the original\nvalue to the target domain produce runtime errors. For example, casting`BYTES`to`STRING`where the byte sequence is not valid UTF-8 results in a\nruntime error.\n\nSome casts can include a[format clause](/bigquery/docs/reference/standard-sql/format-elements#formatting_syntax), which provides\ninstructions for how to conduct the\ncast. For example, you could\ninstruct a cast to convert a sequence of bytes to a BASE64-encoded string\ninstead of a UTF-8-encoded string.\n\nThe structure of the format clause is unique to each type of cast and more\ninformation is available in the section for that cast.\n\n **Examples** \n\nThe following query results in`\"true\"`if`x`is`1`,`\"false\"`for any other\nnon-`NULL`value, and`NULL`if`x`is`NULL`.\n\n```\nCAST(x=1 AS STRING)\n```\n\n\n"
},
{
"name": "CATEGORIES",
"arguments": [],
"category": "Geography",
"description_markdown": "The geography functions are grouped into the following categories based on their\nbehavior:\n\n| Category | Functions | Description |\n| --- | --- | --- |\n| Constructors | [ST_GEOGPOINT](#st_geogpoint) \n[ST_MAKELINE](#st_makeline) \n[ST_MAKEPOLYGON](#st_makepolygon) \n[ST_MAKEPOLYGONORIENTED](#st_makepolygonoriented) | Functions that build new\n geography values from coordinates\n or existing geographies. |\n| Parsers | [ST_GEOGFROM](#st_geogfrom) \n[ST_GEOGFROMGEOJSON](#st_geogfromgeojson) \n[ST_GEOGFROMTEXT](#st_geogfromtext) \n[ST_GEOGFROMWKB](#st_geogfromwkb) \n[ST_GEOGPOINTFROMGEOHASH](#st_geogpointfromgeohash) \n | Functions that create geographies\n from an external format such as[WKT](https://en.wikipedia.org/wiki/Well-known_text)and[GeoJSON](https://en.wikipedia.org/wiki/GeoJSON). |\n| Formatters | [ST_ASBINARY](#st_asbinary) \n[ST_ASGEOJSON](#st_asgeojson) \n[ST_ASTEXT](#st_astext) \n[ST_GEOHASH](#st_geohash) | Functions that export geographies\n to an external format such as WKT. |\n| Transformations | [ST_BOUNDARY](#st_boundary) \n[ST_BUFFER](#st_buffer) \n[ST_BUFFERWITHTOLERANCE](#st_bufferwithtolerance) \n[ST_CENTROID](#st_centroid) \n[ST_CENTROID_AGG](#st_centroid_agg)(Aggregate) \n[ST_CLOSESTPOINT](#st_closestpoint) \n[ST_CONVEXHULL](#st_convexhull) \n[ST_DIFFERENCE](#st_difference) \n[ST_EXTERIORRING](#st_exteriorring) \n[ST_INTERIORRINGS](#st_interiorrings) \n[ST_INTERSECTION](#st_intersection) \n[ST_LINEINTERPOLATEPOINT](#st_lineinterpolatepoint) \n[ST_LINESUBSTRING](#st_linesubstring) \n[ST_SIMPLIFY](#st_simplify) \n[ST_SNAPTOGRID](#st_snaptogrid) \n[ST_UNION](#st_union) \n[ST_UNION_AGG](#st_union_agg)(Aggregate) \n | Functions that generate a new\n geography based on input. 
|\n| Accessors | [ST_DIMENSION](#st_dimension) \n[ST_DUMP](#st_dump) \n[ST_ENDPOINT](#st_endpoint) \n[ST_GEOMETRYTYPE](#st_geometrytype) \n[ST_ISCLOSED](#st_isclosed) \n[ST_ISCOLLECTION](#st_iscollection) \n[ST_ISEMPTY](#st_isempty) \n[ST_ISRING](#st_isring) \n[ST_NPOINTS](#st_npoints) \n[ST_NUMGEOMETRIES](#st_numgeometries) \n[ST_NUMPOINTS](#st_numpoints) \n[ST_POINTN](#st_pointn) \n[ST_STARTPOINT](#st_startpoint) \n[ST_X](#st_x) \n[ST_Y](#st_y) \n | Functions that provide access to\n properties of a geography without\n side-effects. |\n| Predicates | [ST_CONTAINS](#st_contains) \n[ST_COVEREDBY](#st_coveredby) \n[ST_COVERS](#st_covers) \n[ST_DISJOINT](#st_disjoint) \n[ST_DWITHIN](#st_dwithin) \n[ST_EQUALS](#st_equals) \n[ST_INTERSECTS](#st_intersects) \n[ST_INTERSECTSBOX](#st_intersectsbox) \n[ST_TOUCHES](#st_touches) \n[ST_WITHIN](#st_within) \n | Functions that return`TRUE`or`FALSE`for some spatial\n relationship between two\n geographies or some property of\n a geography. These functions\n are commonly used in filter\n clauses. |\n| Measures | [ST_ANGLE](#st_angle) \n[ST_AREA](#st_area) \n[ST_AZIMUTH](#st_azimuth) \n[ST_BOUNDINGBOX](#st_boundingbox) \n[ST_DISTANCE](#st_distance) \n[ST_EXTENT](#st_extent)(Aggregate) \n[ST_HAUSDORFFDISTANCE](#st_hausdorffdistance) \n[ST_LINELOCATEPOINT](#st_linelocatepoint) \n[ST_LENGTH](#st_length) \n[ST_MAXDISTANCE](#st_maxdistance) \n[ST_PERIMETER](#st_perimeter) \n | Functions that compute measurements\n of one or more geographies. |\n| Clustering | [ST_CLUSTERDBSCAN](#st_clusterdbscan) | Functions that perform clustering on geographies. |\n| S2 functions | [S2_CELLIDFROMPOINT](#s2_cellidfrompoint) \n[S2_COVERINGCELLIDS](#s2_coveringcellids) \n | Functions for working with S2 cell coverings of GEOGRAPHY. |\n\n\n\n"
},
{
"name": "CBRT",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nCBRT(X)\n```\n\n **Description** \n\nComputes the cube root of`X`.`X`can be any data type\nthat[coerces to FLOAT64](/bigquery/docs/reference/standard-sql/conversion_rules#conversion_rules).\nSupports the`SAFE.`prefix.\n\n| X | CBRT(X) |\n| --- | --- |\n| `+inf` | `inf` |\n| `-inf` | `-inf` |\n| `NaN` | `NaN` |\n| `0` | `0` |\n| `NULL` | `NULL` |\n\n **Return Data Type** \n\n`FLOAT64`\n\n **Example** \n\n```\nSELECT CBRT(27) AS cube_root;\n\n/*--------------------*\n | cube_root |\n +--------------------+\n | 3.0000000000000004 |\n *--------------------*/\n```\n\n\n"
},
{
"name": "CEIL",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nCEIL(X)\n```\n\n **Description** \n\nReturns the smallest integral value that is not less than X.\n\n| X | CEIL(X) |\n| --- | --- |\n| 2.0 | 2.0 |\n| 2.3 | 3.0 |\n| 2.8 | 3.0 |\n| 2.5 | 3.0 |\n| -2.3 | -2.0 |\n| -2.8 | -2.0 |\n| -2.5 | -2.0 |\n| 0 | 0 |\n| `+inf` | `+inf` |\n| `-inf` | `-inf` |\n| `NaN` | `NaN` |\n\n **Return Data Type** \n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| --- | --- | --- | --- | --- |\n| OUTPUT | `FLOAT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n\n\n\n"
},
{
"name": "CEILING",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nCEILING(X)\n```\n\n **Description** \n\nSynonym of `CEIL(X)`.\n\n\n\n"
},
{
"name": "CHARACTER_LENGTH",
"arguments": [],
"category": "String",
"description_markdown": "```\nCHARACTER_LENGTH(value)\n```\n\n **Description** \n\nSynonym for[CHAR_LENGTH](#char_length).\n\n **Return type** \n\n`INT64`\n\n **Examples** \n\n```\nWITH example AS\n (SELECT 'абвгд' AS characters)\n\nSELECT\n characters,\n CHARACTER_LENGTH(characters) AS char_length_example\nFROM example;\n\n/*------------+---------------------*\n | characters | char_length_example |\n +------------+---------------------+\n | абвгд | 5 |\n *------------+---------------------*/\n```\n\n\n"
},
{
"name": "CHAR_LENGTH",
"arguments": [],
"category": "String",
"description_markdown": "```\nCHAR_LENGTH(value)\n```\n\n **Description** \n\nGets the number of characters in a`STRING`value.\n\n **Return type** \n\n`INT64`\n\n **Examples** \n\n```\nWITH example AS\n (SELECT 'абвгд' AS characters)\n\nSELECT\n characters,\n CHAR_LENGTH(characters) AS char_length_example\nFROM example;\n\n/*------------+---------------------*\n | characters | char_length_example |\n +------------+---------------------+\n | абвгд | 5 |\n *------------+---------------------*/\n```\n\n\n"
},
{
"name": "CHR",
"arguments": [],
"category": "String",
"description_markdown": "```\nCHR(value)\n```\n\n **Description** \n\nTakes a Unicode[code point](https://en.wikipedia.org/wiki/Code_point)and returns\nthe character that matches the code point. Each valid code point should fall\nwithin the range of [0, 0xD7FF] and [0xE000, 0x10FFFF]. Returns an empty string\nif the code point is`0`. If an invalid Unicode code point is specified, an\nerror is returned.\n\nTo work with an array of Unicode code points, see[CODE_POINTS_TO_STRING](#code_points_to_string)\n\n **Return type** \n\n`STRING`\n\n **Examples** \n\n```\nSELECT CHR(65) AS A, CHR(255) AS B, CHR(513) AS C, CHR(1024) AS D;\n\n/*-------+-------+-------+-------*\n | A | B | C | D |\n +-------+-------+-------+-------+\n | A | ÿ | ȁ | Ѐ |\n *-------+-------+-------+-------*/\n```\n\n```\nSELECT CHR(97) AS A, CHR(0xF9B5) AS B, CHR(0) AS C, CHR(NULL) AS D;\n\n/*-------+-------+-------+-------*\n | A | B | C | D |\n +-------+-------+-------+-------+\n | a | 例 | | NULL |\n *-------+-------+-------+-------*/\n```\n\n\n"
},
{
"name": "CODE_POINTS_TO_BYTES",
"arguments": [],
"category": "String",
"description_markdown": "```\nCODE_POINTS_TO_BYTES(ascii_code_points)\n```\n\n **Description** \n\nTakes an array of extended ASCII[code points](https://en.wikipedia.org/wiki/Code_point)as`ARRAY<INT64>`and returns`BYTES`.\n\nTo convert from`BYTES`to an array of code points, see[TO_CODE_POINTS](#to_code_points).\n\n **Return type** \n\n`BYTES`\n\n **Examples** \n\nThe following is a basic example using`CODE_POINTS_TO_BYTES`.\n\n```\nSELECT CODE_POINTS_TO_BYTES([65, 98, 67, 100]) AS bytes;\n\n-- Note that the result of CODE_POINTS_TO_BYTES is of type BYTES, displayed as a base64-encoded string.\n-- In BYTES format, b'AbCd' is the result.\n/*----------*\n | bytes |\n +----------+\n | QWJDZA== |\n *----------*/\n```\n\nThe following example uses a rotate-by-13 places (ROT13) algorithm to encode a\nstring.\n\n```\nSELECT CODE_POINTS_TO_BYTES(ARRAY_AGG(\n (SELECT\n CASE\n WHEN chr BETWEEN b'a' and b'z'\n THEN TO_CODE_POINTS(b'a')[offset(0)] +\n MOD(code+13-TO_CODE_POINTS(b'a')[offset(0)],26)\n WHEN chr BETWEEN b'A' and b'Z'\n THEN TO_CODE_POINTS(b'A')[offset(0)] +\n MOD(code+13-TO_CODE_POINTS(b'A')[offset(0)],26)\n ELSE code\n END\n FROM\n (SELECT code, CODE_POINTS_TO_BYTES([code]) chr)\n ) ORDER BY OFFSET)) AS encoded_string\nFROM UNNEST(TO_CODE_POINTS(b'Test String!')) code WITH OFFSET;\n\n-- Note that the result of CODE_POINTS_TO_BYTES is of type BYTES, displayed as a base64-encoded string.\n-- In BYTES format, b'Grfg Fgevat!' is the result.\n/*------------------*\n | encoded_string |\n +------------------+\n | R3JmZyBGZ2V2YXQh |\n *------------------*/\n```\n\n\n"
},
{
"name": "CODE_POINTS_TO_STRING",
"arguments": [],
"category": "String",
"description_markdown": "```\nCODE_POINTS_TO_STRING(unicode_code_points)\n```\n\n **Description** \n\nTakes an array of Unicode[code points](https://en.wikipedia.org/wiki/Code_point)as`ARRAY<INT64>`and returns a`STRING`.\n\nTo convert from a string to an array of code points, see[TO_CODE_POINTS](#to_code_points).\n\n **Return type** \n\n`STRING`\n\n **Examples** \n\nThe following are basic examples using`CODE_POINTS_TO_STRING`.\n\n```\nSELECT CODE_POINTS_TO_STRING([65, 255, 513, 1024]) AS string;\n\n/*--------*\n | string |\n +--------+\n | AÿȁЀ |\n *--------*/\n```\n\n```\nSELECT CODE_POINTS_TO_STRING([97, 0, 0xF9B5]) AS string;\n\n/*--------*\n | string |\n +--------+\n | a例 |\n *--------*/\n```\n\n```\nSELECT CODE_POINTS_TO_STRING([65, 255, NULL, 1024]) AS string;\n\n/*--------*\n | string |\n +--------+\n | NULL |\n *--------*/\n```\n\nThe following example computes the frequency of letters in a set of words.\n\n```\nWITH Words AS (\n SELECT word\n FROM UNNEST(['foo', 'bar', 'baz', 'giraffe', 'llama']) AS word\n)\nSELECT\n CODE_POINTS_TO_STRING([code_point]) AS letter,\n COUNT(*) AS letter_count\nFROM Words,\n UNNEST(TO_CODE_POINTS(word)) AS code_point\nGROUP BY 1\nORDER BY 2 DESC;\n\n/*--------+--------------*\n | letter | letter_count |\n +--------+--------------+\n | a | 5 |\n | f | 3 |\n | r | 2 |\n | b | 2 |\n | l | 2 |\n | o | 2 |\n | g | 1 |\n | z | 1 |\n | e | 1 |\n | m | 1 |\n | i | 1 |\n *--------+--------------*/\n```\n\n\n"
},
{
"name": "COLLATE",
"arguments": [],
"category": "String",
"description_markdown": "```\nCOLLATE(value, collate_specification)\n```\n\n **Description** \n\nTakes a`STRING`and a[collation specification](/bigquery/docs/reference/standard-sql/collation-concepts#collate_spec_details). Returns\na`STRING`with a collation specification. If`collate_specification`is empty,\nreturns a value with collation removed from the`STRING`.\n\nThe collation specification defines how the resulting`STRING`can be compared\nand sorted. To learn more, see[Working with collation](/bigquery/docs/reference/standard-sql/collation-concepts#working_with_collation).\n\n- ` collate_specification`must be a string literal, otherwise an error is\nthrown.\n- Returns` NULL`if` value`is` NULL`.\n\n **Return type** \n\n`STRING`\n\n **Examples** \n\nIn this example, the weight of`a`is less than the weight of`Z`. This\nis because the collate specification`und:ci`assigns more weight to`Z`.\n\n```\nWITH Words AS (\n SELECT\n COLLATE('a', 'und:ci') AS char1,\n COLLATE('Z', 'und:ci') AS char2\n)\nSELECT ( Words.char1 < Words.char2 ) AS a_less_than_Z\nFROM Words;\n\n/*----------------*\n | a_less_than_Z |\n +----------------+\n | TRUE |\n *----------------*/\n```\n\nIn this example, the weight of`a`is greater than the weight of`Z`. This\nis because the default collate specification assigns more weight to`a`.\n\n```\nWITH Words AS (\n SELECT\n 'a' AS char1,\n 'Z' AS char2\n)\nSELECT ( Words.char1 < Words.char2 ) AS a_less_than_Z\nFROM Words;\n\n/*----------------*\n | a_less_than_Z |\n +----------------+\n | FALSE |\n *----------------*/\n```\n\n\n"
},
{
"name": "CONCAT",
"arguments": [],
"category": "String",
"description_markdown": "```\nCONCAT(value1[, ...])\n```\n\n **Description** \n\nConcatenates one or more values into a single result. All values must be`BYTES`or data types that can be cast to`STRING`.\n\nThe function returns`NULL`if any input argument is`NULL`.\n\n **Note:** You can also use the[|| concatenation operator](#operators)to concatenate\nvalues into a string. **Return type** \n\n`STRING`or`BYTES`\n\n **Examples** \n\n```\nSELECT CONCAT('T.P.', ' ', 'Bar') as author;\n\n/*---------------------*\n | author |\n +---------------------+\n | T.P. Bar |\n *---------------------*/\n```\n\n```\nSELECT CONCAT('Summer', ' ', 1923) as release_date;\n\n/*---------------------*\n | release_date |\n +---------------------+\n | Summer 1923 |\n *---------------------*/\n```\n\n```\nWith Employees AS\n (SELECT\n 'John' AS first_name,\n 'Doe' AS last_name\n UNION ALL\n SELECT\n 'Jane' AS first_name,\n 'Smith' AS last_name\n UNION ALL\n SELECT\n 'Joe' AS first_name,\n 'Jackson' AS last_name)\n\nSELECT\n CONCAT(first_name, ' ', last_name)\n AS full_name\nFROM Employees;\n\n/*---------------------*\n | full_name |\n +---------------------+\n | John Doe |\n | Jane Smith |\n | Joe Jackson |\n *---------------------*/\n```\n\n\n"
},
{
"name": "CONTAINS_SUBSTR",
"arguments": [],
"category": "String",
"description_markdown": "```\nCONTAINS_SUBSTR(expression, search_value_literal[, json_scope=>json_scope_value])\n\njson_scope_value:\n { 'JSON_VALUES' | 'JSON_KEYS' | 'JSON_KEYS_AND_VALUES' }\n```\n\n **Description** \n\nPerforms a normalized, case-insensitive search to see if a value exists as a\nsubstring in an expression. Returns`TRUE`if the value exists, otherwise\nreturns`FALSE`.\n\nBefore values are compared, they are[normalized and case folded with NFKC normalization](#normalize_and_casefold). Wildcard searches are not\nsupported.\n\n **Arguments** \n\n- ` search_value_literal`: The value to search for. It must be a` STRING`literal or a` STRING`constant expression.\n- ` expression`: The data to search over. The expression can be a column or\ntable reference. A table reference is evaluated as a` STRUCT`whose fields\nare the columns of the table. A column reference is evaluated as one of the\nfollowing data types:\n \n \n - ` STRING`\n - ` INT64`\n - ` BOOL`\n - ` NUMERIC`\n - ` BIGNUMERIC`\n - ` TIMESTAMP`\n - ` TIME`\n - ` DATE`\n - ` DATETIME`\n - ` ARRAY`\n - ` STRUCT`When the expression is evaluated, the result is cast to a` STRING`, and then\nthe function looks for the search value in the result.\n \n You can perform a cross-field search on an expression that evaluates to a` STRUCT`or` ARRAY`. If the expression evaluates to a` STRUCT`, the\ncross-field search is recursive and includes all subfields inside the` STRUCT`.\n \n In a cross-field search, each field and subfield is individually converted\nto a string and searched for the value. 
The function returns` TRUE`if at\nleast one field includes the search value; otherwise, if at least one field\nis` NULL`, it returns` NULL`; otherwise, if the search value is not found\nand all fields are non-` NULL`, it returns` FALSE`.\n \n If the expression is` NULL`, the return value is` NULL`.\n \n \n- ` json_scope`: This optional[mandatory-named argument](/bigquery/docs/reference/standard-sql/functions-reference#named_arguments)takes one of the following values to indicate the scope of` JSON`data to be\nsearched. It has no effect if` expression`is not` JSON`or does not\ncontain a` JSON`field.\n \n \n - ` 'JSON_VALUES'`: Only the` JSON`values are searched. If` json_scope`is\nnot provided, this is used by default.\n - ` 'JSON_KEYS'`: Only the` JSON`keys are searched.\n - ` 'JSON_KEYS_AND_VALUES'`: The` JSON`keys and values are searched.\n\n **Return type** \n\n`BOOL`\n\n **Examples** \n\nThe following query returns`TRUE`because this case-insensitive match\nwas found:`blue house`and`Blue house`.\n\n```\nSELECT CONTAINS_SUBSTR('the blue house', 'Blue house') AS result;\n\n/*--------*\n | result |\n +--------+\n | true |\n *--------*/\n```\n\nThe following query returns`TRUE`similar to the above example, but in this\ncase the search value is a constant expression with CONCAT function.\n\n```\nSELECT CONTAINS_SUBSTR('the blue house', CONCAT('Blue ', 'house')) AS result;\n\n/*--------*\n | result |\n +--------+\n | true |\n *--------*/\n```\n\nThe following query returns`FALSE`because`blue`was not found\nin`the red house`.\n\n```\nSELECT CONTAINS_SUBSTR('the red house', 'blue') AS result;\n\n/*--------*\n | result |\n +--------+\n | false |\n *--------*/\n```\n\nThe following query returns`TRUE`because`Ⅸ`and`IX`represent the same\nnormalized value.\n\n```\nSELECT '\\u2168 day' AS a, 'IX' AS b, CONTAINS_SUBSTR('\\u2168', 'IX') AS result;\n\n/*----------------------*\n | a | b | result |\n +----------------------+\n | Ⅸ day | IX | true |\n 
*----------------------*/\n```\n\nThe following query returns`TRUE`because`35`was found inside a`STRUCT`field.\n\n```\nSELECT CONTAINS_SUBSTR((23, 35, 41), '35') AS result;\n\n/*--------*\n | result |\n +--------+\n | true |\n *--------*/\n```\n\nThe following query returns`TRUE`because`jk`was found during a\nrecursive search inside a`STRUCT`.\n\n```\nSELECT CONTAINS_SUBSTR(('abc', ['def', 'ghi', 'jkl'], 'mno'), 'jk');\n\n/*--------*\n | result |\n +--------+\n | true |\n *--------*/\n```\n\nThe following query returns`TRUE`because`NULL`s are ignored when\na match is found inside a`STRUCT`or`ARRAY`.\n\n```\nSELECT CONTAINS_SUBSTR((23, NULL, 41), '41') AS result;\n\n/*--------*\n | result |\n +--------+\n | true |\n *--------*/\n```\n\nThe following query returns`NULL`because a`NULL`existed in a`STRUCT`that\ndid not result in a match.\n\n```\nSELECT CONTAINS_SUBSTR((23, NULL, 41), '35') AS result;\n\n/*--------*\n | result |\n +--------+\n | null |\n *--------*/\n```\n\nIn the following query, an error is thrown because the search value cannot be\na literal`NULL`.\n\n```\nSELECT CONTAINS_SUBSTR('hello', NULL) AS result;\n-- Throws an error\n```\n\nThe following examples reference a table called`Recipes`that you can emulate\nwith a`WITH`clause like this:\n\n```\nWITH Recipes AS\n (SELECT 'Blueberry pancakes' as Breakfast, 'Egg salad sandwich' as Lunch, 'Potato dumplings' as Dinner UNION ALL\n SELECT 'Potato pancakes', 'Toasted cheese sandwich', 'Beef stroganoff' UNION ALL\n SELECT 'Ham scramble', 'Steak avocado salad', 'Tomato pasta' UNION ALL\n SELECT 'Avocado toast', 'Tomato soup', 'Blueberry salmon' UNION ALL\n SELECT 'Corned beef hash', 'Lentil potato soup', 'Glazed ham')\nSELECT * FROM Recipes;\n\n/*-------------------+-------------------------+------------------*\n | Breakfast | Lunch | Dinner |\n +-------------------+-------------------------+------------------+\n | Blueberry pancakes | Egg salad sandwich | Potato dumplings |\n | Potato pancakes | Toasted 
cheese sandwich | Beef stroganoff |\n | Ham scramble | Steak avocado salad | Tomato pasta |\n | Avocado toast | Tomato soup | Blueberry salmon |\n | Corned beef hash | Lentil potato soup | Glazed ham |\n *-------------------+-------------------------+------------------*/\n```\n\nThe following query searches across all columns of the`Recipes`table for the\nvalue`toast`and returns the rows that contain this value.\n\n```\nSELECT * FROM Recipes WHERE CONTAINS_SUBSTR(Recipes, 'toast');\n\n/*-------------------+-------------------------+------------------*\n | Breakfast | Lunch | Dinner |\n +-------------------+-------------------------+------------------+\n | Potato pancakes | Toasted cheese sandwich | Beef stroganoff |\n | Avocado toast | Tomato soup | Blueberry salmon |\n *-------------------+-------------------------+------------------*/\n```\n\nThe following query searches the`Lunch`and`Dinner`columns of the`Recipes`table for the value`potato`and returns the row if either column\ncontains this value.\n\n```\nSELECT * FROM Recipes WHERE CONTAINS_SUBSTR((Lunch, Dinner), 'potato');\n\n/*-------------------+-------------------------+------------------*\n | Breakfast | Lunch | Dinner |\n +-------------------+-------------------------+------------------+\n | Blueberry pancakes | Egg salad sandwich | Potato dumplings |\n | Corned beef hash | Lentil potato soup | Glazed ham |\n *-------------------+-------------------------+------------------*/\n```\n\nThe following query searches across all columns of the`Recipes`table\nexcept for the`Lunch`and`Dinner`columns. 
It returns the rows of any\ncolumns other than`Lunch`or`Dinner`that contain the value`potato`.\n\n```\nSELECT *\nFROM Recipes\nWHERE CONTAINS_SUBSTR(\n (SELECT AS STRUCT Recipes.* EXCEPT (Lunch, Dinner)),\n 'potato'\n);\n\n/*-------------------+-------------------------+------------------*\n | Breakfast | Lunch | Dinner |\n +-------------------+-------------------------+------------------+\n | Potato pancakes | Toasted cheese sandwich | Beef stroganoff |\n *-------------------+-------------------------+------------------*/\n```\n\nThe following query searches for the value`lunch`in the JSON`{\"lunch\":\"soup\"}`and returns`FALSE`because the default`json_scope`is`\"JSON_VALUES\"`, and`lunch`is a`JSON`key, not a`JSON`value.\n\n```\nSELECT CONTAINS_SUBSTR(JSON '{\"lunch\":\"soup\"}',\"lunch\") AS result;\n\n/*--------*\n | result |\n +--------+\n | FALSE |\n *--------*/\n```\n\nThe following query searches for the value`lunch`in the values of the JSON`{\"lunch\":\"soup\"}`and returns`FALSE`because`lunch`is a`JSON`key, not a`JSON`value.\n\n```\nSELECT CONTAINS_SUBSTR(JSON '{\"lunch\":\"soup\"}',\n \"lunch\",\n json_scope=>\"JSON_VALUES\") AS result;\n\n/*--------*\n | result |\n +--------+\n | FALSE |\n *--------*/\n```\n\nThe following query searches for the value`lunch`in the keys and values of the\nJSON`{\"lunch\":\"soup\"}`and returns`TRUE`because`lunch`is a`JSON`key.\n\n```\nSELECT CONTAINS_SUBSTR(JSON '{\"lunch\":\"soup\"}',\n \"lunch\",\n json_scope=>\"JSON_KEYS_AND_VALUES\") AS result;\n\n/*--------*\n | result |\n +--------+\n | TRUE |\n *--------*/\n```\n\nThe following query searches for the value`lunch`in the keys of the JSON`{\"lunch\":\"soup\"}`and returns`TRUE`because`lunch`is a`JSON`key.\n\n```\nSELECT CONTAINS_SUBSTR(JSON '{\"lunch\":\"soup\"}',\n \"lunch\",\n json_scope=>\"JSON_KEYS\") AS result;\n\n/*--------*\n | result |\n +--------+\n | TRUE |\n *--------*/\n```\n\n\n"
},
{
"name": "CORR",
"arguments": [],
"category": "Statistical_aggregate",
"description_markdown": "```\nCORR(\n X1, X2\n)\n[ OVER over_clause ]\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n [ window_frame_clause ]\n```\n\n **Description** \n\nReturns the[Pearson coefficient](https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient)of correlation of a set of number pairs. For each number pair, the first number\nis the dependent variable and the second number is the independent variable.\nThe return result is between`-1`and`1`. A result of`0`indicates no\ncorrelation.\n\nAll numeric types are supported. If the\ninput is`NUMERIC`or`BIGNUMERIC`then the internal aggregation is\nstable with the final output converted to a`FLOAT64`.\nOtherwise the input is converted to a`FLOAT64`before aggregation, resulting in a potentially unstable result.\n\nThis function ignores any input pairs that contain one or more`NULL`values. 
If\nthere are fewer than two input pairs without`NULL`values, this function\nreturns`NULL`.\n\n`NaN`is produced if:\n\n- Any input value is` NaN`\n- Any input value is positive infinity or negative infinity.\n- The variance of` X1`or` X2`is` 0`.\n- The covariance of` X1`and` X2`is` 0`.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Return Data Type** \n\n`FLOAT64`\n\n **Examples** \n\n```\nSELECT CORR(y, x) AS results\nFROM\n UNNEST(\n [\n STRUCT(1.0 AS y, 5.0 AS x),\n (3.0, 9.0),\n (4.0, 7.0)]);\n\n/*--------------------*\n | results |\n +--------------------+\n | 0.6546536707079772 |\n *--------------------*/\n```\n\n```\nSELECT CORR(y, x) AS results\nFROM\n UNNEST(\n [\n STRUCT(1.0 AS y, 5.0 AS x),\n (3.0, 9.0),\n (4.0, NULL)]);\n\n/*---------*\n | results |\n +---------+\n | 1 |\n *---------*/\n```\n\n```\nSELECT CORR(y, x) AS results\nFROM UNNEST([STRUCT(1.0 AS y, NULL AS x),(9.0, 3.0)])\n\n/*---------*\n | results |\n +---------+\n | NULL |\n *---------*/\n```\n\n```\nSELECT CORR(y, x) AS results\nFROM UNNEST([STRUCT(1.0 AS y, NULL AS x),(9.0, NULL)])\n\n/*---------*\n | results |\n +---------+\n | NULL |\n *---------*/\n```\n\n```\nSELECT CORR(y, x) AS results\nFROM\n UNNEST(\n [\n STRUCT(1.0 AS y, 5.0 AS x),\n (3.0, 9.0),\n (4.0, 7.0),\n (5.0, 1.0),\n (7.0, CAST('Infinity' as FLOAT64))])\n\n/*---------*\n | results |\n +---------+\n | NaN |\n *---------*/\n```\n\n```\nSELECT CORR(x, y) AS results\nFROM\n (\n SELECT 0 AS x, 0 AS y\n UNION ALL\n SELECT 0 AS x, 0 AS y\n )\n\n/*---------*\n | results |\n +---------+\n | NaN |\n *---------*/\n```\n\n\n"
},
{
"name": "COS",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nCOS(X)\n```\n\n **Description** \n\nComputes the cosine of X where X is specified in radians. Never fails.\n\n| X | COS(X) |\n| --- | --- |\n| `+inf` | `NaN` |\n| `-inf` | `NaN` |\n| `NaN` | `NaN` |\n\n\n\n"
},
{
"name": "COSH",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nCOSH(X)\n```\n\n **Description** \n\nComputes the hyperbolic cosine of X where X is specified in radians.\nGenerates an error if overflow occurs.\n\n| X | COSH(X) |\n| --- | --- |\n| `+inf` | `+inf` |\n| `-inf` | `+inf` |\n| `NaN` | `NaN` |\n\n\n\n"
},
{
"name": "COSINE_DISTANCE",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nCOSINE_DISTANCE(vector1, vector2)\n```\n\n **Description** \n\nComputes the[cosine distance](https://en.wikipedia.org/wiki/Cosine_similarity#Cosine_distance)between two vectors.\n\n **Definitions** \n\n- ` vector1`: A vector that is represented by an` ARRAY<T>`value or a sparse vector that is\nrepresented by an` ARRAY<STRUCT<dimension,magnitude>>`value.\n- ` vector2`: A vector that is represented by an` ARRAY<T>`value or a sparse vector that is\nrepresented by an` ARRAY<STRUCT<dimension,magnitude>>`value.\n\n **Details** \n\n- ` ARRAY<T>`can be used to represent a vector. Each zero-based index in this\narray represents a dimension. The value for each element in this array\nrepresents a magnitude.\n \n ` T`can represent the following and must be the same for both\nvectors:\n \n \n - ` FLOAT64`In the following example vector, there are four dimensions. The magnitude\nis` 10.0`for dimension` 0`,` 55.0`for dimension` 1`,` 40.0`for\ndimension` 2`, and` 34.0`for dimension` 3`:\n \n \n ```\n [10.0, 55.0, 40.0, 34.0]\n ```\n \n \n- ` ARRAY<STRUCT<dimension,magnitude>>`can be used to represent a\nsparse vector. With a sparse vector, you only need to include\ndimension-magnitude pairs for non-zero magnitudes. If a magnitude isn't\npresent in the sparse vector, the magnitude is implicitly understood to be\nzero.\n \n For example, if you have a vector with 10,000 dimensions, but only 10\ndimensions have non-zero magnitudes, then the vector is a sparse vector.\nAs a result, it's more efficient to describe a sparse vector by only\nmentioning its non-zero magnitudes.\n \n In` ARRAY<STRUCT<dimension,magnitude>>`,` STRUCT<dimension,magnitude>`represents a dimension-magnitude pair for each non-zero magnitude in a\nsparse vector. 
These parts need to be included for each dimension-magnitude\npair:\n \n \n - ` dimension`: A` STRING`or` INT64`value that represents a\ndimension in a vector.\n \n \n - ` magnitude`: A` FLOAT64`value that represents a\nnon-zero magnitude for a specific dimension in a vector.\n \n You don't need to include zero-magnitude dimension-magnitude pairs in a\nsparse vector. For example, the following sparse vector and\nnon-sparse vector are equivalent:\n \n \n ```\n -- sparse vector ARRAY<STRUCT<INT64, FLOAT64>> [(1, 10.0), (2, 30.0), (5, 40.0)]\n ```\n \n \n ```\n -- vector ARRAY<FLOAT64> [0.0, 10.0, 30.0, 0.0, 0.0, 40.0]\n ```\n \n In a sparse vector, dimension-magnitude pairs don't need to be in any\nparticular order. The following sparse vectors are equivalent:\n \n \n ```\n [('a', 10.0), ('b', 30.0), ('d', 40.0)]\n ```\n \n \n ```\n [('d', 40.0), ('a', 10.0), ('b', 30.0)]\n ```\n \n \n- Both non-sparse vectors\nin this function must share the same dimensions, and if they don't, an error\nis produced.\n \n \n- A vector can't be a zero vector. A vector is a zero vector if it has\nno dimensions or all dimensions have a magnitude of` 0`, such as` []`or` [0.0, 0.0]`. 
If a zero vector is encountered, an error is produced.\n \n \n- An error is produced if a magnitude in a vector is` NULL`.\n \n \n- If a vector is` NULL`,` NULL`is returned.\n \n \n\n **Return type** \n\n`FLOAT64`\n\n **Examples** \n\nIn the following example, non-sparse vectors\nare used to compute the cosine distance:\n\n```\nSELECT COSINE_DISTANCE([1.0, 2.0], [3.0, 4.0]) AS results;\n\n/*----------*\n | results |\n +----------+\n | 0.016130 |\n *----------*/\n```\n\nIn the following example, sparse vectors are used to compute the\ncosine distance:\n\n```\nSELECT COSINE_DISTANCE(\n [(1, 1.0), (2, 2.0)],\n [(2, 4.0), (1, 3.0)]) AS results;\n\n /*----------*\n | results |\n +----------+\n | 0.016130 |\n *----------*/\n```\n\nThe ordering of numeric values in a vector doesn't impact the results\nproduced by this function. For example, these queries produce the same results\neven though the numeric values in each vector are in a different order:\n\n```\nSELECT COSINE_DISTANCE([1.0, 2.0], [3.0, 4.0]) AS results;\n```\n\n```\nSELECT COSINE_DISTANCE([2.0, 1.0], [4.0, 3.0]) AS results;\n```\n\n```\nSELECT COSINE_DISTANCE([(1, 1.0), (2, 2.0)], [(1, 3.0), (2, 4.0)]) AS results;\n```\n\n```\n/*----------*\n | results |\n +----------+\n | 0.016130 |\n *----------*/\n```\n\nIn the following example, the function can't compute cosine distance against\nthe first vector, which is a zero vector:\n\n```\n-- ERROR\nSELECT COSINE_DISTANCE([0.0, 0.0], [3.0, 4.0]) AS results;\n```\n\n```\n-- ERROR\nSELECT COSINE_DISTANCE([(1, 0.0), (2, 0.0)], [(1, 3.0), (2, 4.0)]) AS results;\n```\n\nBoth non-sparse vectors must have the same\ndimensions. If not, an error is produced. 
In the following example, the\nfirst vector has two dimensions and the second vector has three:\n\n```\n-- ERROR\nSELECT COSINE_DISTANCE([9.0, 7.0], [8.0, 4.0, 5.0]) AS results;\n```\n\nIf you use sparse vectors and you repeat a dimension, an error is\nproduced:\n\n```\n-- ERROR\nSELECT COSINE_DISTANCE(\n [(1, 9.0), (2, 7.0), (2, 8.0)], [(1, 8.0), (2, 4.0), (3, 5.0)]) AS results;\n```\n\n\n"
},
{
"name": "COT",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nCOT(X)\n```\n\n **Description** \n\nComputes the cotangent for the angle of`X`, where`X`is specified in radians.`X`can be any data type\nthat[coerces to FLOAT64](/bigquery/docs/reference/standard-sql/conversion_rules#conversion_rules).\nSupports the`SAFE.`prefix.\n\n| X | COT(X) |\n| --- | --- |\n| `+inf` | `NaN` |\n| `-inf` | `NaN` |\n| `NaN` | `NaN` |\n| `0` | `Error` |\n| `NULL` | `NULL` |\n\n **Return Data Type** \n\n`FLOAT64`\n\n **Example** \n\n```\nSELECT COT(1) AS a, SAFE.COT(0) AS b;\n\n/*---------------------+------*\n | a | b |\n +---------------------+------+\n | 0.64209261593433065 | NULL |\n *---------------------+------*/\n```\n\n\n"
},
{
"name": "COTH",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nCOTH(X)\n```\n\n **Description** \n\nComputes the hyperbolic cotangent for the angle of`X`, where`X`is specified\nin radians.`X`can be any data type\nthat[coerces to FLOAT64](/bigquery/docs/reference/standard-sql/conversion_rules#conversion_rules).\nSupports the`SAFE.`prefix.\n\n| X | COTH(X) |\n| --- | --- |\n| `+inf` | `1` |\n| `-inf` | `-1` |\n| `NaN` | `NaN` |\n| `0` | `Error` |\n| `NULL` | `NULL` |\n\n **Return Data Type** \n\n`FLOAT64`\n\n **Example** \n\n```\nSELECT COTH(1) AS a, SAFE.COTH(0) AS b;\n\n/*----------------+------*\n | a | b |\n +----------------+------+\n | 1.313035285499 | NULL |\n *----------------+------*/\n```\n\n\n"
},
{
"name": "COUNT",
"arguments": [],
"category": "Aggregate",
"description_markdown": "1.\n\n```\nCOUNT(*)\n[OVER over_clause]\n```\n\n2.\n\n```\nCOUNT(\n [ DISTINCT ]\n expression\n)\n[ OVER over_clause ]\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n [ window_frame_clause ]\n```\n\n **Description** \n\n1. Returns the number of rows in the input.\n1. Returns the number of rows with` expression`evaluated to any value other\nthan` NULL`.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nThis function can be used with the[AGGREGATION_THRESHOLD clause](/bigquery/docs/reference/standard-sql/query-syntax#agg_threshold_clause).\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\nThis function with DISTINCT supports specifying[collation](/bigquery/docs/reference/standard-sql/collation-concepts#collate_about).\n\n`COUNT`can be used with differential privacy. For more information, see[Differentially private aggregate functions](#aggregate-dp-functions).\n\n **Supported Argument Types** \n\n`expression`can be any data type. If`DISTINCT`is present,`expression`can only be a data type that is[groupable](/bigquery/docs/reference/standard-sql/data-types#data_type_properties).\n\n **Return Data Types** \n\nINT64\n\n **Examples** \n\nYou can use the`COUNT`function to return the number of rows in a table or the\nnumber of distinct values of an expression. 
For example:\n\n```\nSELECT\n COUNT(*) AS count_star,\n COUNT(DISTINCT x) AS count_dist_x\nFROM UNNEST([1, 4, 4, 5]) AS x;\n\n/*------------+--------------*\n | count_star | count_dist_x |\n +------------+--------------+\n | 4 | 3 |\n *------------+--------------*/\n```\n\n```\nSELECT\n x,\n COUNT(*) OVER (PARTITION BY MOD(x, 3)) AS count_star,\n COUNT(DISTINCT x) OVER (PARTITION BY MOD(x, 3)) AS count_dist_x\nFROM UNNEST([1, 4, 4, 5]) AS x;\n\n/*------+------------+--------------*\n | x | count_star | count_dist_x |\n +------+------------+--------------+\n | 1 | 3 | 2 |\n | 4 | 3 | 2 |\n | 4 | 3 | 2 |\n | 5 | 1 | 1 |\n *------+------------+--------------*/\n```\n\n```\nSELECT\n x,\n COUNT(*) OVER (PARTITION BY MOD(x, 3)) AS count_star,\n COUNT(x) OVER (PARTITION BY MOD(x, 3)) AS count_x\nFROM UNNEST([1, 4, NULL, 4, 5]) AS x;\n\n/*------+------------+---------*\n | x | count_star | count_x |\n +------+------------+---------+\n | NULL | 1 | 0 |\n | 1 | 3 | 3 |\n | 4 | 3 | 3 |\n | 4 | 3 | 3 |\n | 5 | 1 | 1 |\n *------+------------+---------*/\n```\n\nIf you want to count the number of distinct values of an expression for which a\ncertain condition is satisfied, this is one recipe that you can use:\n\n```\nCOUNT(DISTINCT IF(condition, expression, NULL))\n```\n\nHere,`IF`will return the value of`expression`if`condition`is`TRUE`, or`NULL`otherwise. 
The surrounding`COUNT(DISTINCT ...)`will ignore the`NULL`values, so it will count only the distinct values of`expression`for which`condition`is`TRUE`.\n\nFor example, to count the number of distinct positive values of`x`:\n\n```\nSELECT COUNT(DISTINCT IF(x > 0, x, NULL)) AS distinct_positive\nFROM UNNEST([1, -2, 4, 1, -5, 4, 1, 3, -6, 1]) AS x;\n\n/*-------------------*\n | distinct_positive |\n +-------------------+\n | 3 |\n *-------------------*/\n```\n\nOr to count the number of distinct dates on which a certain kind of event\noccurred:\n\n```\nWITH Events AS (\n SELECT DATE '2021-01-01' AS event_date, 'SUCCESS' AS event_type\n UNION ALL\n SELECT DATE '2021-01-02' AS event_date, 'SUCCESS' AS event_type\n UNION ALL\n SELECT DATE '2021-01-02' AS event_date, 'FAILURE' AS event_type\n UNION ALL\n SELECT DATE '2021-01-03' AS event_date, 'SUCCESS' AS event_type\n UNION ALL\n SELECT DATE '2021-01-04' AS event_date, 'FAILURE' AS event_type\n UNION ALL\n SELECT DATE '2021-01-04' AS event_date, 'FAILURE' AS event_type\n)\nSELECT\n COUNT(DISTINCT IF(event_type = 'FAILURE', event_date, NULL))\n AS distinct_dates_with_failures\nFROM Events;\n\n/*------------------------------*\n | distinct_dates_with_failures |\n +------------------------------+\n | 2 |\n *------------------------------*/\n```\n\n\n"
},
{
"name": "COUNTIF",
"arguments": [],
"category": "Aggregate",
"description_markdown": "```\nCOUNTIF(\n [ DISTINCT ]\n expression\n)\n[ OVER over_clause ]\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n [ window_frame_clause ]\n```\n\n **Description** \n\nReturns the count of`TRUE`values for`expression`. Returns`0`if there are\nzero input rows, or if`expression`evaluates to`FALSE`or`NULL`for all rows.\n\nSince`expression`must be a`BOOL`, the form`COUNTIF(DISTINCT ...)`is\ngenerally not useful: there is only one distinct value of`TRUE`. So`COUNTIF(DISTINCT ...)`will return 1 if`expression`evaluates to`TRUE`for\none or more input rows, or 0 otherwise.\nUsually when someone wants to combine`COUNTIF`and`DISTINCT`, they\nwant to count the number of distinct values of an expression for which a certain\ncondition is satisfied. One recipe to achieve this is the following:\n\n```\nCOUNT(DISTINCT IF(condition, expression, NULL))\n```\n\nNote that this uses`COUNT`, not`COUNTIF`; the`IF`part has been moved inside.\nTo learn more, see the examples for[COUNT](#count).\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nThis function can be used with the[AGGREGATION_THRESHOLD clause](/bigquery/docs/reference/standard-sql/query-syntax#agg_threshold_clause).\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Supported Argument Types** \n\nBOOL\n\n **Return Data Types** \n\nINT64\n\n **Examples** \n\n```\nSELECT COUNTIF(x<0) AS num_negative, COUNTIF(x>0) AS num_positive\nFROM UNNEST([5, -2, 3, 6, -10, -7, 4, 0]) AS x;\n\n/*--------------+--------------*\n | num_negative | num_positive |\n +--------------+--------------+\n | 3 | 4 |\n 
*--------------+--------------*/\n```\n\n```\nSELECT\n x,\n COUNTIF(x<0) OVER (ORDER BY ABS(x) ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS num_negative\nFROM UNNEST([5, -2, 3, 6, -10, NULL, -7, 4, 0]) AS x;\n\n/*------+--------------*\n | x | num_negative |\n +------+--------------+\n | NULL | 0 |\n | 0 | 1 |\n | -2 | 1 |\n | 3 | 1 |\n | 4 | 0 |\n | 5 | 0 |\n | 6 | 1 |\n | -7 | 2 |\n | -10 | 2 |\n *------+--------------*/\n```\n\n\n"
},
{
"name": "COVAR_POP",
"arguments": [],
"category": "Statistical_aggregate",
"description_markdown": "```\nCOVAR_POP(\n X1, X2\n)\n[ OVER over_clause ]\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n [ window_frame_clause ]\n```\n\n **Description** \n\nReturns the population[covariance](https://en.wikipedia.org/wiki/Covariance)of\na set of number pairs. The first number is the dependent variable; the second\nnumber is the independent variable. The return result is between`-Inf`and`+Inf`.\n\nAll numeric types are supported. If the\ninput is`NUMERIC`or`BIGNUMERIC`then the internal aggregation is\nstable with the final output converted to a`FLOAT64`.\nOtherwise the input is converted to a`FLOAT64`before aggregation, resulting in a potentially unstable result.\n\nThis function ignores any input pairs that contain one or more`NULL`values. If\nthere is no input pair without`NULL`values, this function returns`NULL`.\nIf there is exactly one input pair without`NULL`values, this function returns`0`.\n\n`NaN`is produced if:\n\n- Any input value is` NaN`\n- Any input value is positive infinity or negative infinity.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nThis function can be used with the[AGGREGATION_THRESHOLD clause](/bigquery/docs/reference/standard-sql/query-syntax#agg_threshold_clause).\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Return Data Type** \n\n`FLOAT64`\n\n **Examples** \n\n```\nSELECT COVAR_POP(y, x) AS results\nFROM\n UNNEST(\n [\n STRUCT(1.0 AS y, 1.0 AS x),\n (2.0, 6.0),\n (9.0, 3.0),\n (2.0, 6.0),\n (9.0, 3.0)])\n\n/*---------------------*\n | results |\n +---------------------+\n | -1.6800000000000002 |\n 
*---------------------*/\n```\n\n```\nSELECT COVAR_POP(y, x) AS results\nFROM UNNEST([STRUCT(1.0 AS y, NULL AS x),(9.0, 3.0)])\n\n/*---------*\n | results |\n +---------+\n | 0 |\n *---------*/\n```\n\n```\nSELECT COVAR_POP(y, x) AS results\nFROM UNNEST([STRUCT(1.0 AS y, NULL AS x),(9.0, NULL)])\n\n/*---------*\n | results |\n +---------+\n | NULL |\n *---------*/\n```\n\n```\nSELECT COVAR_POP(y, x) AS results\nFROM\n UNNEST(\n [\n STRUCT(1.0 AS y, 1.0 AS x),\n (2.0, 6.0),\n (9.0, 3.0),\n (2.0, 6.0),\n (NULL, 3.0)])\n\n/*---------*\n | results |\n +---------+\n | -1 |\n *---------*/\n```\n\n```\nSELECT COVAR_POP(y, x) AS results\nFROM\n UNNEST(\n [\n STRUCT(1.0 AS y, 1.0 AS x),\n (2.0, 6.0),\n (9.0, 3.0),\n (2.0, 6.0),\n (CAST('Infinity' as FLOAT64), 3.0)])\n\n/*---------*\n | results |\n +---------+\n | NaN |\n *---------*/\n```\n\n\n"
},
{
"name": "COVAR_SAMP",
"arguments": [],
"category": "Statistical_aggregate",
"description_markdown": "```\nCOVAR_SAMP(\n X1, X2\n)\n[ OVER over_clause ]\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n [ window_frame_clause ]\n```\n\n **Description** \n\nReturns the sample[covariance](https://en.wikipedia.org/wiki/Covariance)of a\nset of number pairs. The first number is the dependent variable; the second\nnumber is the independent variable. The return result is between`-Inf`and`+Inf`.\n\nAll numeric types are supported. If the\ninput is`NUMERIC`or`BIGNUMERIC`then the internal aggregation is\nstable with the final output converted to a`FLOAT64`.\nOtherwise the input is converted to a`FLOAT64`before aggregation, resulting in a potentially unstable result.\n\nThis function ignores any input pairs that contain one or more`NULL`values. If\nthere are fewer than two input pairs without`NULL`values, this function\nreturns`NULL`.\n\n`NaN`is produced if:\n\n- Any input value is` NaN`\n- Any input value is positive infinity or negative infinity.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nThis function can be used with the[AGGREGATION_THRESHOLD clause](/bigquery/docs/reference/standard-sql/query-syntax#agg_threshold_clause).\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Return Data Type** \n\n`FLOAT64`\n\n **Examples** \n\n```\nSELECT COVAR_SAMP(y, x) AS results\nFROM\n UNNEST(\n [\n STRUCT(1.0 AS y, 1.0 AS x),\n (2.0, 6.0),\n (9.0, 3.0),\n (2.0, 6.0),\n (9.0, 3.0)])\n\n/*---------*\n | results |\n +---------+\n | -2.1 |\n *---------*/\n```\n\n```\nSELECT COVAR_SAMP(y, x) AS results\nFROM\n UNNEST(\n [\n STRUCT(1.0 AS y, 1.0 AS x),\n (2.0, 
6.0),\n (9.0, 3.0),\n (2.0, 6.0),\n (NULL, 3.0)])\n\n/*---------------------*\n | results |\n +---------------------+\n | -1.3333333333333333 |\n *---------------------*/\n```\n\n```\nSELECT COVAR_SAMP(y, x) AS results\nFROM UNNEST([STRUCT(1.0 AS y, NULL AS x),(9.0, 3.0)])\n\n/*---------*\n | results |\n +---------+\n | NULL |\n *---------*/\n```\n\n```\nSELECT COVAR_SAMP(y, x) AS results\nFROM UNNEST([STRUCT(1.0 AS y, NULL AS x),(9.0, NULL)])\n\n/*---------*\n | results |\n +---------+\n | NULL |\n *---------*/\n```\n\n```\nSELECT COVAR_SAMP(y, x) AS results\nFROM\n UNNEST(\n [\n STRUCT(1.0 AS y, 1.0 AS x),\n (2.0, 6.0),\n (9.0, 3.0),\n (2.0, 6.0),\n (CAST('Infinity' as FLOAT64), 3.0)])\n\n/*---------*\n | results |\n +---------+\n | NaN |\n *---------*/\n```\n\n\n"
},
{
"name": "CSC",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nCSC(X)\n```\n\n **Description** \n\nComputes the cosecant of the input angle, which is in radians.`X`can be any data type\nthat[coerces to FLOAT64](/bigquery/docs/reference/standard-sql/conversion_rules#conversion_rules).\nSupports the`SAFE.`prefix.\n\n| X | CSC(X) |\n| --- | --- |\n| `+inf` | `NaN` |\n| `-inf` | `NaN` |\n| `NaN` | `NaN` |\n| `0` | `Error` |\n| `NULL` | `NULL` |\n\n **Return Data Type** \n\n`FLOAT64`\n\n **Example** \n\n```\nSELECT CSC(100) AS a, CSC(-1) AS b, SAFE.CSC(0) AS c;\n\n/*----------------+-----------------+------*\n | a | b | c |\n +----------------+-----------------+------+\n | -1.97485753142 | -1.188395105778 | NULL |\n *----------------+-----------------+------*/\n```\n\n\n"
},
{
"name": "CSCH",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nCSCH(X)\n```\n\n **Description** \n\nComputes the hyperbolic cosecant of the input angle, which is in radians.`X`can be any data type\nthat[coerces to FLOAT64](/bigquery/docs/reference/standard-sql/conversion_rules#conversion_rules).\nSupports the`SAFE.`prefix.\n\n| X | CSCH(X) |\n| --- | --- |\n| `+inf` | `0` |\n| `-inf` | `0` |\n| `NaN` | `NaN` |\n| `0` | `Error` |\n| `NULL` | `NULL` |\n\n **Return Data Type** \n\n`FLOAT64`\n\n **Example** \n\n```\nSELECT CSCH(0.5) AS a, CSCH(-2) AS b, SAFE.CSCH(0) AS c;\n\n/*----------------+----------------+------*\n | a | b | c |\n +----------------+----------------+------+\n | 1.919034751334 | -0.27572056477 | NULL |\n *----------------+----------------+------*/\n```\n\n\n"
},
{
"name": "CUME_DIST",
"arguments": [],
"category": "Numbering",
"description_markdown": "```\nCUME_DIST()\nOVER over_clause\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n ORDER BY expression [ { ASC | DESC } ] [, ...]\n```\n\n **Description** \n\nReturn the relative rank of a row defined as NP/NR. NP is defined to be the\nnumber of rows that either precede or are peers with the current row. NR is the\nnumber of rows in the partition.\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Return Type** \n\n`FLOAT64`\n\n **Example** \n\n```\nWITH finishers AS\n (SELECT 'Sophia Liu' as name,\n TIMESTAMP '2016-10-18 2:51:45' as finish_time,\n 'F30-34' as division\n UNION ALL SELECT 'Lisa Stelzner', TIMESTAMP '2016-10-18 2:54:11', 'F35-39'\n UNION ALL SELECT 'Nikki Leith', TIMESTAMP '2016-10-18 2:59:01', 'F30-34'\n UNION ALL SELECT 'Lauren Matthews', TIMESTAMP '2016-10-18 3:01:17', 'F35-39'\n UNION ALL SELECT 'Desiree Berry', TIMESTAMP '2016-10-18 3:05:42', 'F35-39'\n UNION ALL SELECT 'Suzy Slane', TIMESTAMP '2016-10-18 3:06:24', 'F35-39'\n UNION ALL SELECT 'Jen Edwards', TIMESTAMP '2016-10-18 3:06:36', 'F30-34'\n UNION ALL SELECT 'Meghan Lederer', TIMESTAMP '2016-10-18 2:59:01', 'F30-34')\nSELECT name,\n finish_time,\n division,\n CUME_DIST() OVER (PARTITION BY division ORDER BY finish_time ASC) AS finish_rank\nFROM finishers;\n\n/*-----------------+------------------------+----------+-------------*\n | name | finish_time | division | finish_rank |\n +-----------------+------------------------+----------+-------------+\n | Sophia Liu | 2016-10-18 09:51:45+00 | F30-34 | 0.25 |\n | Meghan Lederer | 2016-10-18 09:59:01+00 | F30-34 | 0.75 |\n | Nikki Leith | 2016-10-18 09:59:01+00 | F30-34 | 0.75 |\n | Jen Edwards | 2016-10-18 10:06:36+00 | F30-34 | 1 |\n | Lisa Stelzner | 2016-10-18 09:54:11+00 | F35-39 | 0.25 |\n | Lauren 
Matthews | 2016-10-18 10:01:17+00 | F35-39 | 0.5 |\n | Desiree Berry | 2016-10-18 10:05:42+00 | F35-39 | 0.75 |\n | Suzy Slane | 2016-10-18 10:06:24+00 | F35-39 | 1 |\n *-----------------+------------------------+----------+-------------*/\n```\n\n\n"
},
{
"name": "CURRENT_DATE",
"arguments": [],
"category": "Date",
"description_markdown": "```\nCURRENT_DATE()\n```\n\n```\nCURRENT_DATE(time_zone_expression)\n```\n\n```\nCURRENT_DATE\n```\n\n **Description** \n\nReturns the current date as a`DATE`object. Parentheses are optional when\ncalled with no arguments.\n\nThis function supports the following arguments:\n\n- ` time_zone_expression`: A` STRING`expression that represents a[time zone](#timezone_definitions). If no time zone is specified, the\ndefault time zone, UTC, is used. If this expression is\nused and it evaluates to` NULL`, this function returns` NULL`.\n\nThe current date is recorded at the start of the query\nstatement which contains this function, not when this specific function is\nevaluated.\n\n **Return Data Type** \n\n`DATE`\n\n **Examples** \n\nThe following query produces the current date in the default time zone:\n\n```\nSELECT CURRENT_DATE() AS the_date;\n\n/*--------------*\n | the_date |\n +--------------+\n | 2016-12-25 |\n *--------------*/\n```\n\nThe following queries produce the current date in a specified time zone:\n\n```\nSELECT CURRENT_DATE('America/Los_Angeles') AS the_date;\n\n/*--------------*\n | the_date |\n +--------------+\n | 2016-12-25 |\n *--------------*/\n```\n\n```\nSELECT CURRENT_DATE('-08') AS the_date;\n\n/*--------------*\n | the_date |\n +--------------+\n | 2016-12-25 |\n *--------------*/\n```\n\nThe following query produces the current date in the default time zone.\nParentheses are not needed if the function has no arguments.\n\n```\nSELECT CURRENT_DATE AS the_date;\n\n/*--------------*\n | the_date |\n +--------------+\n | 2016-12-25 |\n *--------------*/\n```\n\nWhen a column named`current_date`is present, the column name and the function\ncall without parentheses are ambiguous. To ensure the function call, add\nparentheses; to ensure the column name, qualify it with its[range variable](/bigquery/docs/reference/standard-sql/query-syntax#range_variables). 
For example, the\nfollowing query will select the function in the`the_date`column and the table\ncolumn in the`current_date`column.\n\n```\nWITH t AS (SELECT 'column value' AS `current_date`)\nSELECT current_date() AS the_date, t.current_date FROM t;\n\n/*------------+--------------*\n | the_date | current_date |\n +------------+--------------+\n | 2016-12-25 | column value |\n *------------+--------------*/\n```\n\n\n"
},
{
"name": "CURRENT_DATETIME",
"arguments": [],
"category": "Datetime",
"description_markdown": "```\nCURRENT_DATETIME([time_zone])\n```\n\n```\nCURRENT_DATETIME\n```\n\n **Description** \n\nReturns the current time as a`DATETIME`object. Parentheses are optional when\ncalled with no arguments.\n\nThis function supports an optional`time_zone`parameter.\nSee[Time zone definitions](#timezone_definitions)for\ninformation on how to specify a time zone.\n\nThe current date and time is recorded at the start of the query\nstatement which contains this function, not when this specific function is\nevaluated.\n\n **Return Data Type** \n\n`DATETIME`\n\n **Example** \n\n```\nSELECT CURRENT_DATETIME() as now;\n\n/*----------------------------*\n | now |\n +----------------------------+\n | 2016-05-19T10:38:47.046465 |\n *----------------------------*/\n```\n\nWhen a column named`current_datetime`is present, the column name and the\nfunction call without parentheses are ambiguous. To ensure the function call,\nadd parentheses; to ensure the column name, qualify it with its[range variable](/bigquery/docs/reference/standard-sql/query-syntax#range_variables). For example, the\nfollowing query will select the function in the`now`column and the table\ncolumn in the`current_datetime`column.\n\n```\nWITH t AS (SELECT 'column value' AS `current_datetime`)\nSELECT current_datetime() as now, t.current_datetime FROM t;\n\n/*----------------------------+------------------*\n | now | current_datetime |\n +----------------------------+------------------+\n | 2016-05-19T10:38:47.046465 | column value |\n *----------------------------+------------------*/\n```\n\n\n"
},
{
"name": "CURRENT_TIME",
"arguments": [],
"category": "Time",
"description_markdown": "```\nCURRENT_TIME([time_zone])\n```\n\n```\nCURRENT_TIME\n```\n\n **Description** \n\nReturns the current time as a`TIME`object. Parentheses are optional when\ncalled with no arguments.\n\nThis function supports an optional`time_zone`parameter.\nSee[Time zone definitions](#timezone_definitions)for information\non how to specify a time zone.\n\nThe current time is recorded at the start of the query\nstatement which contains this function, not when this specific function is\nevaluated.\n\n **Return Data Type** \n\n`TIME`\n\n **Example** \n\n```\nSELECT CURRENT_TIME() as now;\n\n/*----------------------------*\n | now |\n +----------------------------+\n | 15:31:38.776361 |\n *----------------------------*/\n```\n\nWhen a column named`current_time`is present, the column name and the function\ncall without parentheses are ambiguous. To ensure the function call, add\nparentheses; to ensure the column name, qualify it with its[range variable](/bigquery/docs/reference/standard-sql/query-syntax#range_variables). For example, the\nfollowing query will select the function in the`now`column and the table\ncolumn in the`current_time`column.\n\n```\nWITH t AS (SELECT 'column value' AS `current_time`)\nSELECT current_time() as now, t.current_time FROM t;\n\n/*-----------------+--------------*\n | now | current_time |\n +-----------------+--------------+\n | 15:31:38.776361 | column value |\n *-----------------+--------------*/\n```\n\n\n"
},
{
"name": "CURRENT_TIMESTAMP",
"arguments": [],
"category": "Timestamp",
"description_markdown": "```\nCURRENT_TIMESTAMP()\n```\n\n```\nCURRENT_TIMESTAMP\n```\n\n **Description** \n\nReturns the current date and time as a timestamp object. The timestamp is\ncontinuous, non-ambiguous, has exactly 60 seconds per minute and does not repeat\nvalues over the leap second. Parentheses are optional.\n\nThis function handles leap seconds by smearing them across a window of 20 hours\naround the inserted leap second.\n\nThe current date and time is recorded at the start of the query\nstatement which contains this function, not when this specific function is\nevaluated.\n\n **Supported Input Types** \n\nNot applicable\n\n **Result Data Type** \n\n`TIMESTAMP`\n\n **Examples** \n\n```\nSELECT CURRENT_TIMESTAMP() AS now;\n\n/*--------------------------------*\n | now |\n +--------------------------------+\n | 2020-06-02 23:57:12.120174 UTC |\n *--------------------------------*/\n```\n\nWhen a column named`current_timestamp`is present, the column name and the\nfunction call without parentheses are ambiguous. To ensure the function call,\nadd parentheses; to ensure the column name, qualify it with its[range variable](/bigquery/docs/reference/standard-sql/query-syntax#range_variables). For example, the\nfollowing query selects the function in the`now`column and the table\ncolumn in the`current_timestamp`column.\n\n```\nWITH t AS (SELECT 'column value' AS `current_timestamp`)\nSELECT current_timestamp() AS now, t.current_timestamp FROM t;\n\n/*--------------------------------+-------------------*\n | now | current_timestamp |\n +--------------------------------+-------------------+\n | 2020-06-02 23:57:12.120174 UTC | column value |\n *--------------------------------+-------------------*/\n```\n\n\n"
},
{
"name": "DATE",
"arguments": [],
"category": "Date",
"description_markdown": "```\nDATE(year, month, day)\n```\n\n```\nDATE(timestamp_expression)\n```\n\n```\nDATE(timestamp_expression, time_zone_expression)\n```\n\n```\nDATE(datetime_expression)\n```\n\n **Description** \n\nConstructs or extracts a date.\n\nThis function supports the following arguments:\n\n- ` year`: The` INT64`value for year.\n- ` month`: The` INT64`value for month.\n- ` day`: The` INT64`value for day.\n- ` timestamp_expression`: A` TIMESTAMP`expression that contains the date.\n- ` time_zone_expression`: A` STRING`expression that represents a[time zone](#timezone_definitions). If no time zone is specified with` timestamp_expression`, the default time zone, UTC, is\nused.\n- ` datetime_expression`: A` DATETIME`expression that contains the date.\n\n **Return Data Type** \n\n`DATE`\n\n **Example** \n\n```\nSELECT\n DATE(2016, 12, 25) AS date_ymd,\n DATE(DATETIME '2016-12-25 23:59:59') AS date_dt,\n DATE(TIMESTAMP '2016-12-25 05:30:00+07', 'America/Los_Angeles') AS date_tstz;\n\n/*------------+------------+------------*\n | date_ymd | date_dt | date_tstz |\n +------------+------------+------------+\n | 2016-12-25 | 2016-12-25 | 2016-12-24 |\n *------------+------------+------------*/\n```\n\n\n"
},
{
"name": "DATETIME",
"arguments": [],
"category": "Datetime",
"description_markdown": "```\n1. DATETIME(year, month, day, hour, minute, second)\n2. DATETIME(date_expression[, time_expression])\n3. DATETIME(timestamp_expression [, time_zone])\n```\n\n **Description** \n\n1. Constructs a` DATETIME`object using` INT64`values\nrepresenting the year, month, day, hour, minute, and second.\n1. Constructs a` DATETIME`object using a DATE object and an optional` TIME`object.\n1. Constructs a` DATETIME`object using a` TIMESTAMP`object. It supports an\noptional parameter to[specify a time zone](#timezone_definitions).\nIf no time zone is specified, the default time zone, UTC,\nis used.\n\n **Return Data Type** \n\n`DATETIME`\n\n **Example** \n\n```\nSELECT\n DATETIME(2008, 12, 25, 05, 30, 00) as datetime_ymdhms,\n DATETIME(TIMESTAMP \"2008-12-25 05:30:00+00\", \"America/Los_Angeles\") as datetime_tstz;\n\n/*---------------------+---------------------*\n | datetime_ymdhms | datetime_tstz |\n +---------------------+---------------------+\n | 2008-12-25T05:30:00 | 2008-12-24T21:30:00 |\n *---------------------+---------------------*/\n```\n\n\n"
},
{
"name": "DATETIME_ADD",
"arguments": [],
"category": "Datetime",
"description_markdown": "```\nDATETIME_ADD(datetime_expression, INTERVAL int64_expression part)\n```\n\n **Description** \n\nAdds`int64_expression`units of`part`to the`DATETIME`object.\n\n`DATETIME_ADD`supports the following values for`part`:\n\n- ` MICROSECOND`\n- ` MILLISECOND`\n- ` SECOND`\n- ` MINUTE`\n- ` HOUR`\n- ` DAY`\n- ` WEEK`. Equivalent to 7` DAY`s.\n- ` MONTH`\n- ` QUARTER`\n- ` YEAR`\n\nSpecial handling is required for MONTH, QUARTER, and YEAR parts when the\ndate is at (or near) the last day of the month. If the resulting month has fewer\ndays than the original DATETIME's day, then the result day is the last day of\nthe new month.\n\n **Return Data Type** \n\n`DATETIME`\n\n **Example** \n\n```\nSELECT\n DATETIME \"2008-12-25 15:30:00\" as original_date,\n DATETIME_ADD(DATETIME \"2008-12-25 15:30:00\", INTERVAL 10 MINUTE) as later;\n\n/*-----------------------------+------------------------*\n | original_date | later |\n +-----------------------------+------------------------+\n | 2008-12-25T15:30:00 | 2008-12-25T15:40:00 |\n *-----------------------------+------------------------*/\n```\n\n\n"
},
{
"name": "DATETIME_BUCKET",
"arguments": [],
"category": "Time_series",
"description_markdown": " **Preview** \n\nThis product or feature is subject to the \"Pre-GA Offerings Terms\"\n in the General Service Terms section of the[Service Specific Terms](/terms/service-terms).\n Pre-GA products and features are available \"as is\" and might have\n limited support. For more information, see the[launch stage descriptions](/products#product-launch-stages).\n\n **Note:** To provide feedback or request support for this feature, send an email to[bigquery-time-series-preview-support@google.com](mailto:bigquery-time-series-preview-support@google.com).```\nDATETIME_BUCKET(datetime_in_bucket, bucket_width)\n```\n\n```\nDATETIME_BUCKET(datetime_in_bucket, bucket_width, bucket_origin_datetime)\n```\n\n **Description** \n\nGets the lower bound of the datetime bucket that contains a datetime.\n\n **Definitions** \n\n- ` datetime_in_bucket`: A` DATETIME`value that you can use to look up a\ndatetime bucket.\n- ` bucket_width`: An` INTERVAL`value that represents the width of\na datetime bucket. A[single interval](/bigquery/docs/reference/standard-sql/data-types#single_datetime_part_interval)with[date and time parts](/bigquery/docs/reference/standard-sql/data-types#interval_datetime_parts)is supported.\n- ` bucket_origin_datetime`: A` DATETIME`value that represents a point in\ntime. All buckets expand left and right from this point. If this argument\nis not set,` 1950-01-01 00:00:00`is used by default.\n\n **Return type** \n\n`DATETIME`\n\n **Examples** \n\nIn the following example, the origin is omitted and the default origin,`1950-01-01 00:00:00`is used. All buckets expand in both directions from the\norigin, and the size of each bucket is 12 hours. 
The lower bound of the bucket\nin which`my_datetime`belongs is returned:\n\n```\nWITH some_datetimes AS (\n SELECT DATETIME '1949-12-30 13:00:00' AS my_datetime UNION ALL\n SELECT DATETIME '1949-12-31 00:00:00' UNION ALL\n SELECT DATETIME '1949-12-31 13:00:00' UNION ALL\n SELECT DATETIME '1950-01-01 00:00:00' UNION ALL\n SELECT DATETIME '1950-01-01 13:00:00' UNION ALL\n SELECT DATETIME '1950-01-02 00:00:00'\n)\nSELECT DATETIME_BUCKET(my_datetime, INTERVAL 12 HOUR) AS bucket_lower_bound\nFROM some_datetimes;\n\n/*---------------------+\n | bucket_lower_bound |\n +---------------------+\n | 1949-12-30T12:00:00 |\n | 1949-12-31T00:00:00 |\n | 1949-12-31T12:00:00 |\n | 1950-01-01T00:00:00 |\n | 1950-01-01T12:00:00 |\n | 1950-01-02T00:00:00 |\n +---------------------*/\n\n-- Some datetime buckets that originate from 1950-01-01 00:00:00:\n-- + Bucket: ...\n-- + Bucket: [1949-12-31 00:00:00, 1949-12-31 12:00:00)\n-- + Bucket: [1949-12-31 12:00:00, 1950-01-01 00:00:00)\n-- + Origin: [1950-01-01 00:00:00]\n-- + Bucket: [1950-01-01 00:00:00, 1950-01-01 12:00:00)\n-- + Bucket: [1950-01-01 12:00:00, 1950-01-02 00:00:00)\n-- + Bucket: ...\n```\n\nIn the following example, the origin has been changed to`2000-12-22 12:00:00`,\nand all buckets expand in both directions from this point. The size of each\nbucket is seven days. 
The lower bound of the bucket in which`my_datetime`belongs is returned:\n\n```\nWITH some_datetimes AS (\n SELECT DATETIME '2000-12-20 00:00:00' AS my_datetime UNION ALL\n SELECT DATETIME '2000-12-21 00:00:00' UNION ALL\n SELECT DATETIME '2000-12-22 00:00:00' UNION ALL\n SELECT DATETIME '2000-12-23 00:00:00' UNION ALL\n SELECT DATETIME '2000-12-24 00:00:00' UNION ALL\n SELECT DATETIME '2000-12-25 00:00:00'\n)\nSELECT DATETIME_BUCKET(\n my_datetime,\n INTERVAL 7 DAY,\n DATETIME '2000-12-22 12:00:00') AS bucket_lower_bound\nFROM some_datetimes;\n\n/*--------------------+\n | bucket_lower_bound |\n +--------------------+\n | 2000-12-15T12:00:00 |\n | 2000-12-15T12:00:00 |\n | 2000-12-15T12:00:00 |\n | 2000-12-22T12:00:00 |\n | 2000-12-22T12:00:00 |\n | 2000-12-22T12:00:00 |\n +--------------------*/\n\n-- Some datetime buckets that originate from 2000-12-22 12:00:00:\n-- + Bucket: ...\n-- + Bucket: [2000-12-08 12:00:00, 2000-12-15 12:00:00)\n-- + Bucket: [2000-12-15 12:00:00, 2000-12-22 12:00:00)\n-- + Origin: [2000-12-22 12:00:00]\n-- + Bucket: [2000-12-22 12:00:00, 2000-12-29 12:00:00)\n-- + Bucket: [2000-12-29 12:00:00, 2001-01-05 12:00:00)\n-- + Bucket: ...\n```\n\n\n"
},
{
"name": "DATETIME_DIFF",
"arguments": [],
"category": "Datetime",
"description_markdown": "```\nDATETIME_DIFF(end_datetime, start_datetime, granularity)\n```\n\n **Description** \n\nGets the number of unit boundaries between two`DATETIME`values\n(`end_datetime`-`start_datetime`) at a particular time granularity.\n\n **Definitions** \n\n- ` start_datetime`: The starting` DATETIME`value.\n- ` end_datetime`: The ending` DATETIME`value.\n- ` granularity`: The datetime part that represents the granularity.\nThis can be:\n \n \n - ` MICROSECOND`\n - ` MILLISECOND`\n - ` SECOND`\n - ` MINUTE`\n - ` HOUR`\n - ` DAY`\n - ` WEEK`: This date part begins on Sunday.\n - ` WEEK(<WEEKDAY>)`: This date part begins on` WEEKDAY`. Valid values for` WEEKDAY`are` SUNDAY`,` MONDAY`,` TUESDAY`,` WEDNESDAY`,` THURSDAY`,` FRIDAY`, and` SATURDAY`.\n - ` ISOWEEK`: Uses[ISO 8601 week](https://en.wikipedia.org/wiki/ISO_week_date)boundaries. ISO weeks begin on Monday.\n - ` MONTH`, except when the first two\narguments are` TIMESTAMP`values.\n - ` QUARTER`\n - ` YEAR`\n - ` ISOYEAR`: Uses the[ISO 8601](https://en.wikipedia.org/wiki/ISO_8601)week-numbering year boundary. The ISO year boundary is the Monday of the\nfirst week whose Thursday belongs to the corresponding Gregorian calendar\nyear.\n\n **Details** \n\nIf`end_datetime`is earlier than`start_datetime`, the output is negative.\nProduces an error if the computation overflows, such as if the difference\nin microseconds\nbetween the two`DATETIME`values overflows.\n\n **Note:** The behavior of this function follows the type of the arguments passed in.\nFor example,`DATETIME_DIFF(TIMESTAMP, TIMESTAMP, PART)`behaves like`TIMESTAMP_DIFF(TIMESTAMP, TIMESTAMP, PART)`. 
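\n\nAs noted in the details above, if`end_datetime`is earlier than`start_datetime`, the output is negative. The following query is an illustrative sketch (not from the original reference) showing the sign flip when the argument order is swapped at`MINUTE`granularity:\n\n```\nSELECT\n DATETIME_DIFF(DATETIME '2020-01-01 01:00:00',\n DATETIME '2020-01-01 00:30:00', MINUTE) as positive_diff,\n DATETIME_DIFF(DATETIME '2020-01-01 00:30:00',\n DATETIME '2020-01-01 01:00:00', MINUTE) as negative_diff;\n\n/*---------------+---------------*\n | positive_diff | negative_diff |\n +---------------+---------------+\n | 30 | -30 |\n *---------------+---------------*/\n```\n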
**Return Data Type** \n\n`INT64`\n\n **Example** \n\n```\nSELECT\n DATETIME \"2010-07-07 10:20:00\" as first_datetime,\n DATETIME \"2008-12-25 15:30:00\" as second_datetime,\n DATETIME_DIFF(DATETIME \"2010-07-07 10:20:00\",\n DATETIME \"2008-12-25 15:30:00\", DAY) as difference;\n\n/*----------------------------+------------------------+------------------------*\n | first_datetime | second_datetime | difference |\n +----------------------------+------------------------+------------------------+\n | 2010-07-07T10:20:00 | 2008-12-25T15:30:00 | 559 |\n *----------------------------+------------------------+------------------------*/\n```\n\n```\nSELECT\n DATETIME_DIFF(DATETIME '2017-10-15 00:00:00',\n DATETIME '2017-10-14 00:00:00', DAY) as days_diff,\n DATETIME_DIFF(DATETIME '2017-10-15 00:00:00',\n DATETIME '2017-10-14 00:00:00', WEEK) as weeks_diff;\n\n/*-----------+------------*\n | days_diff | weeks_diff |\n +-----------+------------+\n | 1 | 1 |\n *-----------+------------*/\n```\n\nThe example above shows the result of`DATETIME_DIFF`for two`DATETIME`s that\nare 24 hours apart.`DATETIME_DIFF`with the part`WEEK`returns 1 because`DATETIME_DIFF`counts the number of part boundaries in this range of`DATETIME`s. Each`WEEK`begins on Sunday, so there is one part boundary between\nSaturday,`2017-10-14 00:00:00`and Sunday,`2017-10-15 00:00:00`.\n\nThe following example shows the result of`DATETIME_DIFF`for two dates in\ndifferent years.`DATETIME_DIFF`with the date part`YEAR`returns 3 because it\ncounts the number of Gregorian calendar year boundaries between the two`DATETIME`s.`DATETIME_DIFF`with the date part`ISOYEAR`returns 2 because the\nsecond`DATETIME`belongs to the ISO year 2015. 
The first Thursday of the 2015\ncalendar year was 2015-01-01, so the ISO year 2015 begins on the preceding\nMonday, 2014-12-29.\n\n```\nSELECT\n DATETIME_DIFF('2017-12-30 00:00:00',\n '2014-12-30 00:00:00', YEAR) AS year_diff,\n DATETIME_DIFF('2017-12-30 00:00:00',\n '2014-12-30 00:00:00', ISOYEAR) AS isoyear_diff;\n\n/*-----------+--------------*\n | year_diff | isoyear_diff |\n +-----------+--------------+\n | 3 | 2 |\n *-----------+--------------*/\n```\n\nThe following example shows the result of`DATETIME_DIFF`for two days in\nsuccession. The first date falls on a Monday and the second date falls on a\nSunday.`DATETIME_DIFF`with the date part`WEEK`returns 0 because this time\npart uses weeks that begin on Sunday.`DATETIME_DIFF`with the date part`WEEK(MONDAY)`returns 1.`DATETIME_DIFF`with the date part`ISOWEEK`also returns 1 because ISO weeks begin on Monday.\n\n```\nSELECT\n DATETIME_DIFF('2017-12-18', '2017-12-17', WEEK) AS week_diff,\n DATETIME_DIFF('2017-12-18', '2017-12-17', WEEK(MONDAY)) AS week_weekday_diff,\n DATETIME_DIFF('2017-12-18', '2017-12-17', ISOWEEK) AS isoweek_diff;\n\n/*-----------+-------------------+--------------*\n | week_diff | week_weekday_diff | isoweek_diff |\n +-----------+-------------------+--------------+\n | 0 | 1 | 1 |\n *-----------+-------------------+--------------*/\n```\n\n\n"
},
{
"name": "DATETIME_SUB",
"arguments": [],
"category": "Datetime",
"description_markdown": "```\nDATETIME_SUB(datetime_expression, INTERVAL int64_expression part)\n```\n\n **Description** \n\nSubtracts`int64_expression`units of`part`from the`DATETIME`.\n\n`DATETIME_SUB`supports the following values for`part`:\n\n- ` MICROSECOND`\n- ` MILLISECOND`\n- ` SECOND`\n- ` MINUTE`\n- ` HOUR`\n- ` DAY`\n- ` WEEK`. Equivalent to 7` DAY`s.\n- ` MONTH`\n- ` QUARTER`\n- ` YEAR`\n\nSpecial handling is required for`MONTH`,`QUARTER`, and`YEAR`parts when the\ndate is at (or near) the last day of the month. If the resulting month has fewer\ndays than the original`DATETIME`'s day, then the result day is the last day of\nthe new month.\n\n **Return Data Type** \n\n`DATETIME`\n\n **Example** \n\n```\nSELECT\n DATETIME \"2008-12-25 15:30:00\" as original_date,\n DATETIME_SUB(DATETIME \"2008-12-25 15:30:00\", INTERVAL 10 MINUTE) as earlier;\n\n/*-----------------------------+------------------------*\n | original_date | earlier |\n +-----------------------------+------------------------+\n | 2008-12-25T15:30:00 | 2008-12-25T15:20:00 |\n *-----------------------------+------------------------*/\n```\n\n\n"
},
{
"name": "DATETIME_TRUNC",
"arguments": [],
"category": "Datetime",
"description_markdown": "```\nDATETIME_TRUNC(datetime_expression, date_time_part)\n```\n\n **Description** \n\nTruncates a`DATETIME`value to the granularity of`date_time_part`.\nThe`DATETIME`value is always rounded to the beginning of`date_time_part`,\nwhich can be one of the following:\n\n- ` MICROSECOND`: If used, nothing is truncated from the value.\n- ` MILLISECOND`: The nearest lesser or equal millisecond.\n- ` SECOND`: The nearest lesser or equal second.\n- ` MINUTE`: The nearest lesser or equal minute.\n- ` HOUR`: The nearest lesser or equal hour.\n- ` DAY`: The day in the Gregorian calendar year that contains the` DATETIME`value.\n- ` WEEK`: The first day of the week in the week that contains the` DATETIME`value. Weeks begin on Sundays.` WEEK`is equivalent to` WEEK(SUNDAY)`.\n- ` WEEK(WEEKDAY)`: The first day of the week in the week that contains the` DATETIME`value. Weeks begin on` WEEKDAY`.` WEEKDAY`must be one of the\nfollowing:` SUNDAY`,` MONDAY`,` TUESDAY`,` WEDNESDAY`,` THURSDAY`,` FRIDAY`,\nor` SATURDAY`.\n- ` ISOWEEK`: The first day of the[ISO 8601 week](https://en.wikipedia.org/wiki/ISO_week_date)in the\nISO week that contains the` DATETIME`value. The ISO week begins on\nMonday. The first ISO week of each ISO year contains the first Thursday of the\ncorresponding Gregorian calendar year.\n- ` MONTH`: The first day of the month in the month that contains the` DATETIME`value.\n- ` QUARTER`: The first day of the quarter in the quarter that contains the` DATETIME`value.\n- ` YEAR`: The first day of the year in the year that contains the` DATETIME`value.\n- ` ISOYEAR`: The first day of the[ISO 8601](https://en.wikipedia.org/wiki/ISO_8601)week-numbering year\nin the ISO year that contains the` DATETIME`value. 
The ISO year boundary is the\nMonday of the first week whose Thursday belongs to the corresponding\nGregorian calendar year.\n\n **Return Data Type** \n\n`DATETIME`\n\n **Examples** \n\n```\nSELECT\n DATETIME \"2008-12-25 15:30:00\" as original,\n DATETIME_TRUNC(DATETIME \"2008-12-25 15:30:00\", DAY) as truncated;\n\n/*----------------------------+------------------------*\n | original | truncated |\n +----------------------------+------------------------+\n | 2008-12-25T15:30:00 | 2008-12-25T00:00:00 |\n *----------------------------+------------------------*/\n```\n\nIn the following example, the original`DATETIME`falls on a Sunday. Because the`part`is`WEEK(MONDAY)`,`DATETIME_TRUNC`returns the`DATETIME`for the\npreceding Monday.\n\n```\nSELECT\n datetime AS original,\n DATETIME_TRUNC(datetime, WEEK(MONDAY)) AS truncated\nFROM (SELECT DATETIME(TIMESTAMP \"2017-11-05 00:00:00+00\", \"UTC\") AS datetime);\n\n/*---------------------+---------------------*\n | original | truncated |\n +---------------------+---------------------+\n | 2017-11-05T00:00:00 | 2017-10-30T00:00:00 |\n *---------------------+---------------------*/\n```\n\nIn the following example, the original`datetime_expression`is in the Gregorian\ncalendar year 2015. However,`DATETIME_TRUNC`with the`ISOYEAR`date part\ntruncates the`datetime_expression`to the beginning of the ISO year, not the\nGregorian calendar year. The first Thursday of the 2015 calendar year was\n2015-01-01, so the ISO year 2015 begins on the preceding Monday, 2014-12-29.\nTherefore the ISO year boundary preceding the`datetime_expression`2015-06-15 00:00:00 is 2014-12-29.\n\n```\nSELECT\n DATETIME_TRUNC('2015-06-15 00:00:00', ISOYEAR) AS isoyear_boundary,\n EXTRACT(ISOYEAR FROM DATETIME '2015-06-15 00:00:00') AS isoyear_number;\n\n/*---------------------+----------------*\n | isoyear_boundary | isoyear_number |\n +---------------------+----------------+\n | 2014-12-29T00:00:00 | 2015 |\n *---------------------+----------------*/\n```\n\n\n"
},
{
"name": "DATE_ADD",
"arguments": [],
"category": "Date",
"description_markdown": "```\nDATE_ADD(date_expression, INTERVAL int64_expression date_part)\n```\n\n **Description** \n\nAdds a specified time interval to a DATE.\n\n`DATE_ADD`supports the following`date_part`values:\n\n- ` DAY`\n- ` WEEK`. Equivalent to 7` DAY`s.\n- ` MONTH`\n- ` QUARTER`\n- ` YEAR`\n\nSpecial handling is required for MONTH, QUARTER, and YEAR parts when\nthe date is at (or near) the last day of the month. If the resulting\nmonth has fewer days than the original date's day, then the resulting\ndate is the last date of that month.\n\n **Return Data Type** \n\nDATE\n\n **Example** \n\n```\nSELECT DATE_ADD(DATE '2008-12-25', INTERVAL 5 DAY) AS five_days_later;\n\n/*--------------------*\n | five_days_later |\n +--------------------+\n | 2008-12-30 |\n *--------------------*/\n```\n\n\n"
},
{
"name": "DATE_BUCKET",
"arguments": [],
"category": "Time_series",
"description_markdown": " **Preview** \n\nThis product or feature is subject to the \"Pre-GA Offerings Terms\"\n in the General Service Terms section of the[Service Specific Terms](/terms/service-terms).\n Pre-GA products and features are available \"as is\" and might have\n limited support. For more information, see the[launch stage descriptions](/products#product-launch-stages).\n\n **Note:** To provide feedback or request support for this feature, send an email to[bigquery-time-series-preview-support@google.com](mailto:bigquery-time-series-preview-support@google.com).```\nDATE_BUCKET(date_in_bucket, bucket_width)\n```\n\n```\nDATE_BUCKET(date_in_bucket, bucket_width, bucket_origin_date)\n```\n\n **Description** \n\nGets the lower bound of the date bucket that contains a date.\n\n **Definitions** \n\n- ` date_in_bucket`: A` DATE`value that you can use to look up a date bucket.\n- ` bucket_width`: An` INTERVAL`value that represents the width of\na date bucket. A[single interval](/bigquery/docs/reference/standard-sql/data-types#single_datetime_part_interval)with[date parts](/bigquery/docs/reference/standard-sql/data-types#interval_datetime_parts)is supported.\n- ` bucket_origin_date`: A` DATE`value that represents a point in time. All\nbuckets expand left and right from this point. If this argument is not set,` 1950-01-01`is used by default.\n\n **Return type** \n\n`DATE`\n\n **Examples** \n\nIn the following example, the origin is omitted and the default origin,`1950-01-01`is used. All buckets expand in both directions from the origin,\nand the size of each bucket is two days. 
The lower bound of the bucket in\nwhich`my_date`belongs is returned.\n\n```\nWITH some_dates AS (\n SELECT DATE '1949-12-29' AS my_date UNION ALL\n SELECT DATE '1949-12-30' UNION ALL\n SELECT DATE '1949-12-31' UNION ALL\n SELECT DATE '1950-01-01' UNION ALL\n SELECT DATE '1950-01-02' UNION ALL\n SELECT DATE '1950-01-03'\n)\nSELECT DATE_BUCKET(my_date, INTERVAL 2 DAY) AS bucket_lower_bound\nFROM some_dates;\n\n/*--------------------+\n | bucket_lower_bound |\n +--------------------+\n | 1949-12-28 |\n | 1949-12-30 |\n | 1949-12-30 |\n | 1950-01-01 |\n | 1950-01-01 |\n | 1950-01-03 |\n +--------------------*/\n\n-- Some date buckets that originate from 1950-01-01:\n-- + Bucket: ...\n-- + Bucket: [1949-12-28, 1949-12-30)\n-- + Bucket: [1949-12-30, 1950-01-01)\n-- + Origin: [1950-01-01]\n-- + Bucket: [1950-01-01, 1950-01-03)\n-- + Bucket: [1950-01-03, 1950-01-05)\n-- + Bucket: ...\n```\n\nIn the following example, the origin has been changed to`2000-12-24`,\nand all buckets expand in both directions from this point. The size of each\nbucket is seven days. The lower bound of the bucket in which`my_date`belongs\nis returned:\n\n```\nWITH some_dates AS (\n SELECT DATE '2000-12-20' AS my_date UNION ALL\n SELECT DATE '2000-12-21' UNION ALL\n SELECT DATE '2000-12-22' UNION ALL\n SELECT DATE '2000-12-23' UNION ALL\n SELECT DATE '2000-12-24' UNION ALL\n SELECT DATE '2000-12-25'\n)\nSELECT DATE_BUCKET(\n my_date,\n INTERVAL 7 DAY,\n DATE '2000-12-24') AS bucket_lower_bound\nFROM some_dates;\n\n/*--------------------+\n | bucket_lower_bound |\n +--------------------+\n | 2000-12-17 |\n | 2000-12-17 |\n | 2000-12-17 |\n | 2000-12-17 |\n | 2000-12-24 |\n | 2000-12-24 |\n +--------------------*/\n\n-- Some date buckets that originate from 2000-12-24:\n-- + Bucket: ...\n-- + Bucket: [2000-12-10, 2000-12-17)\n-- + Bucket: [2000-12-17, 2000-12-24)\n-- + Origin: [2000-12-24]\n-- + Bucket: [2000-12-24, 2000-12-31)\n-- + Bucket: [2000-12-31, 2001-01-07)\n-- + Bucket: ...\n```\n\n\n"
},
{
"name": "DATE_DIFF",
"arguments": [],
"category": "Date",
"description_markdown": "```\nDATE_DIFF(end_date, start_date, granularity)\n```\n\n **Description** \n\nGets the number of unit boundaries between two`DATE`values (`end_date`-`start_date`) at a particular time granularity.\n\n **Definitions** \n\n- ` start_date`: The starting` DATE`value.\n- ` end_date`: The ending` DATE`value.\n- ` granularity`: The date part that represents the granularity. This can be:\n \n \n - ` DAY`\n - ` WEEK`: This date part begins on Sunday.\n - ` WEEK(<WEEKDAY>)`: This date part begins on` WEEKDAY`. Valid values for` WEEKDAY`are` SUNDAY`,` MONDAY`,` TUESDAY`,` WEDNESDAY`,` THURSDAY`,` FRIDAY`, and` SATURDAY`.\n - ` ISOWEEK`: Uses[ISO 8601 week](https://en.wikipedia.org/wiki/ISO_week_date)boundaries. ISO weeks\nbegin on Monday.\n - ` MONTH`, except when the first two\narguments are` TIMESTAMP`values.\n - ` QUARTER`\n - ` YEAR`\n - ` ISOYEAR`: Uses the[ISO 8601](https://en.wikipedia.org/wiki/ISO_8601)week-numbering year boundary.\nThe ISO year boundary is the Monday of the first week whose Thursday\nbelongs to the corresponding Gregorian calendar year.\n\n **Details** \n\nIf`end_date`is earlier than`start_date`, the output is negative.\n\n **Note:** The behavior of this function follows the type of the arguments passed in.\nFor example,`DATE_DIFF(TIMESTAMP, TIMESTAMP, PART)`behaves like`TIMESTAMP_DIFF(TIMESTAMP, TIMESTAMP, PART)`. 
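\n\nAs noted in the details above, if`end_date`is earlier than`start_date`, the output is negative. The following query is an illustrative sketch (not from the original reference); 2020-01-01 is 60 days (31 days of January plus 29 days of leap-year February) before 2020-03-01:\n\n```\nSELECT DATE_DIFF(DATE '2020-01-01', DATE '2020-03-01', DAY) AS negative_days_diff;\n\n/*--------------------*\n | negative_days_diff |\n +--------------------+\n | -60 |\n *--------------------*/\n```\n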
**Return Data Type** \n\n`INT64`\n\n **Example** \n\n```\nSELECT DATE_DIFF(DATE '2010-07-07', DATE '2008-12-25', DAY) AS days_diff;\n\n/*-----------*\n | days_diff |\n +-----------+\n | 559 |\n *-----------*/\n```\n\n```\nSELECT\n DATE_DIFF(DATE '2017-10-15', DATE '2017-10-14', DAY) AS days_diff,\n DATE_DIFF(DATE '2017-10-15', DATE '2017-10-14', WEEK) AS weeks_diff;\n\n/*-----------+------------*\n | days_diff | weeks_diff |\n +-----------+------------+\n | 1 | 1 |\n *-----------+------------*/\n```\n\nThe example above shows the result of`DATE_DIFF`for two days in succession.`DATE_DIFF`with the date part`WEEK`returns 1 because`DATE_DIFF`counts the\nnumber of date part boundaries in this range of dates. Each`WEEK`begins on\nSunday, so there is one date part boundary between Saturday, 2017-10-14\nand Sunday, 2017-10-15.\n\nThe following example shows the result of`DATE_DIFF`for two dates in different\nyears.`DATE_DIFF`with the date part`YEAR`returns 3 because it counts the\nnumber of Gregorian calendar year boundaries between the two dates.`DATE_DIFF`with the date part`ISOYEAR`returns 2 because the second date belongs to the\nISO year 2015. The first Thursday of the 2015 calendar year was 2015-01-01, so\nthe ISO year 2015 begins on the preceding Monday, 2014-12-29.\n\n```\nSELECT\n DATE_DIFF('2017-12-30', '2014-12-30', YEAR) AS year_diff,\n DATE_DIFF('2017-12-30', '2014-12-30', ISOYEAR) AS isoyear_diff;\n\n/*-----------+--------------*\n | year_diff | isoyear_diff |\n +-----------+--------------+\n | 3 | 2 |\n *-----------+--------------*/\n```\n\nThe following example shows the result of`DATE_DIFF`for two days in\nsuccession. 
The first date falls on a Monday and the second date falls on a\nSunday.`DATE_DIFF`with the date part`WEEK`returns 0 because this date part\nuses weeks that begin on Sunday.`DATE_DIFF`with the date part`WEEK(MONDAY)`returns 1.`DATE_DIFF`with the date part`ISOWEEK`also returns 1 because\nISO weeks begin on Monday.\n\n```\nSELECT\n DATE_DIFF('2017-12-18', '2017-12-17', WEEK) AS week_diff,\n DATE_DIFF('2017-12-18', '2017-12-17', WEEK(MONDAY)) AS week_weekday_diff,\n DATE_DIFF('2017-12-18', '2017-12-17', ISOWEEK) AS isoweek_diff;\n\n/*-----------+-------------------+--------------*\n | week_diff | week_weekday_diff | isoweek_diff |\n +-----------+-------------------+--------------+\n | 0 | 1 | 1 |\n *-----------+-------------------+--------------*/\n```\n\n\n"
},
{
"name": "DATE_FROM_UNIX_DATE",
"arguments": [],
"category": "Date",
"description_markdown": "```\nDATE_FROM_UNIX_DATE(int64_expression)\n```\n\n **Description** \n\nInterprets`int64_expression`as the number of days since 1970-01-01.\n\n **Return Data Type** \n\nDATE\n\n **Example** \n\n```\nSELECT DATE_FROM_UNIX_DATE(14238) AS date_from_epoch;\n\n/*-----------------*\n | date_from_epoch |\n +-----------------+\n | 2008-12-25 |\n *-----------------*/\n```\n\n\n"
},
{
"name": "DATE_SUB",
"arguments": [],
"category": "Date",
"description_markdown": "```\nDATE_SUB(date_expression, INTERVAL int64_expression date_part)\n```\n\n **Description** \n\nSubtracts a specified time interval from a DATE.\n\n`DATE_SUB`supports the following`date_part`values:\n\n- ` DAY`\n- ` WEEK`. Equivalent to 7` DAY`s.\n- ` MONTH`\n- ` QUARTER`\n- ` YEAR`\n\nSpecial handling is required for MONTH, QUARTER, and YEAR parts when\nthe date is at (or near) the last day of the month. If the resulting\nmonth has fewer days than the original date's day, then the resulting\ndate is the last date of that month.\n\n **Return Data Type** \n\nDATE\n\n **Example** \n\n```\nSELECT DATE_SUB(DATE '2008-12-25', INTERVAL 5 DAY) AS five_days_ago;\n\n/*---------------*\n | five_days_ago |\n +---------------+\n | 2008-12-20 |\n *---------------*/\n```\n\n\n"
},
{
"name": "DATE_TRUNC",
"arguments": [],
"category": "Date",
"description_markdown": "```\nDATE_TRUNC(date_expression, date_part)\n```\n\n **Description** \n\nTruncates a`DATE`value to the granularity of`date_part`. The`DATE`value\nis always rounded to the beginning of`date_part`, which can be one of the\nfollowing:\n\n- ` DAY`: The day in the Gregorian calendar year that contains the` DATE`value.\n- ` WEEK`: The first day of the week in the week that contains the` DATE`value. Weeks begin on Sundays.` WEEK`is equivalent to` WEEK(SUNDAY)`.\n- ` WEEK(WEEKDAY)`: The first day of the week in the week that contains the` DATE`value. Weeks begin on` WEEKDAY`.` WEEKDAY`must be one of the\nfollowing:` SUNDAY`,` MONDAY`,` TUESDAY`,` WEDNESDAY`,` THURSDAY`,` FRIDAY`,\nor` SATURDAY`.\n- ` ISOWEEK`: The first day of the[ISO 8601 week](https://en.wikipedia.org/wiki/ISO_week_date)in the\nISO week that contains the` DATE`value. The ISO week begins on\nMonday. The first ISO week of each ISO year contains the first Thursday of the\ncorresponding Gregorian calendar year.\n- ` MONTH`: The first day of the month in the month that contains the` DATE`value.\n- ` QUARTER`: The first day of the quarter in the quarter that contains the` DATE`value.\n- ` YEAR`: The first day of the year in the year that contains the` DATE`value.\n- ` ISOYEAR`: The first day of the[ISO 8601](https://en.wikipedia.org/wiki/ISO_8601)week-numbering year\nin the ISO year that contains the` DATE`value. The ISO year is the\nMonday of the first week whose Thursday belongs to the corresponding\nGregorian calendar year.\n\n **Return Data Type** \n\nDATE\n\n **Examples** \n\n```\nSELECT DATE_TRUNC(DATE '2008-12-25', MONTH) AS month;\n\n/*------------*\n | month |\n +------------+\n | 2008-12-01 |\n *------------*/\n```\n\nIn the following example, the original date falls on a Sunday. 
Because\nthe`date_part`is`WEEK(MONDAY)`,`DATE_TRUNC`returns the`DATE`for the\npreceding Monday.\n\n```\nSELECT date AS original, DATE_TRUNC(date, WEEK(MONDAY)) AS truncated\nFROM (SELECT DATE('2017-11-05') AS date);\n\n/*------------+------------*\n | original | truncated |\n +------------+------------+\n | 2017-11-05 | 2017-10-30 |\n *------------+------------*/\n```\n\nIn the following example, the original`date_expression`is in the Gregorian\ncalendar year 2015. However,`DATE_TRUNC`with the`ISOYEAR`date part\ntruncates the`date_expression`to the beginning of the ISO year, not the\nGregorian calendar year. The first Thursday of the 2015 calendar year was\n2015-01-01, so the ISO year 2015 begins on the preceding Monday, 2014-12-29.\nTherefore the ISO year boundary preceding the`date_expression`2015-06-15 is\n2014-12-29.\n\n```\nSELECT\n DATE_TRUNC('2015-06-15', ISOYEAR) AS isoyear_boundary,\n EXTRACT(ISOYEAR FROM DATE '2015-06-15') AS isoyear_number;\n\n/*------------------+----------------*\n | isoyear_boundary | isoyear_number |\n +------------------+----------------+\n | 2014-12-29 | 2015 |\n *------------------+----------------*/\n```\n\n\n"
},
{
"name": "DENSE_RANK",
"arguments": [],
"category": "Numbering",
"description_markdown": "```\nDENSE_RANK()\nOVER over_clause\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n ORDER BY expression [ { ASC | DESC } ] [, ...]\n```\n\n **Description** \n\nReturns the ordinal (1-based) rank of each row within the window partition.\nAll peer rows receive the same rank value, and the subsequent rank value is\nincremented by one.\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Return Type** \n\n`INT64`\n\n **Examples** \n\n```\nWITH Numbers AS\n (SELECT 1 as x\n UNION ALL SELECT 2\n UNION ALL SELECT 2\n UNION ALL SELECT 5\n UNION ALL SELECT 8\n UNION ALL SELECT 10\n UNION ALL SELECT 10\n)\nSELECT x,\n DENSE_RANK() OVER (ORDER BY x ASC) AS dense_rank\nFROM Numbers\n\n/*-------------------------*\n | x | dense_rank |\n +-------------------------+\n | 1 | 1 |\n | 2 | 2 |\n | 2 | 2 |\n | 5 | 3 |\n | 8 | 4 |\n | 10 | 5 |\n | 10 | 5 |\n *-------------------------*/\n```\n\n```\nWITH finishers AS\n (SELECT 'Sophia Liu' as name,\n TIMESTAMP '2016-10-18 2:51:45' as finish_time,\n 'F30-34' as division\n UNION ALL SELECT 'Lisa Stelzner', TIMESTAMP '2016-10-18 2:54:11', 'F35-39'\n UNION ALL SELECT 'Nikki Leith', TIMESTAMP '2016-10-18 2:59:01', 'F30-34'\n UNION ALL SELECT 'Lauren Matthews', TIMESTAMP '2016-10-18 3:01:17', 'F35-39'\n UNION ALL SELECT 'Desiree Berry', TIMESTAMP '2016-10-18 3:05:42', 'F35-39'\n UNION ALL SELECT 'Suzy Slane', TIMESTAMP '2016-10-18 3:06:24', 'F35-39'\n UNION ALL SELECT 'Jen Edwards', TIMESTAMP '2016-10-18 3:06:36', 'F30-34'\n UNION ALL SELECT 'Meghan Lederer', TIMESTAMP '2016-10-18 2:59:01', 'F30-34')\nSELECT name,\n finish_time,\n division,\n DENSE_RANK() OVER (PARTITION BY division ORDER BY finish_time ASC) AS finish_rank\nFROM 
finishers;\n\n/*-----------------+------------------------+----------+-------------*\n | name | finish_time | division | finish_rank |\n +-----------------+------------------------+----------+-------------+\n | Sophia Liu | 2016-10-18 09:51:45+00 | F30-34 | 1 |\n | Meghan Lederer | 2016-10-18 09:59:01+00 | F30-34 | 2 |\n | Nikki Leith | 2016-10-18 09:59:01+00 | F30-34 | 2 |\n | Jen Edwards | 2016-10-18 10:06:36+00 | F30-34 | 3 |\n | Lisa Stelzner | 2016-10-18 09:54:11+00 | F35-39 | 1 |\n | Lauren Matthews | 2016-10-18 10:01:17+00 | F35-39 | 2 |\n | Desiree Berry | 2016-10-18 10:05:42+00 | F35-39 | 3 |\n | Suzy Slane | 2016-10-18 10:06:24+00 | F35-39 | 4 |\n *-----------------+------------------------+----------+-------------*/\n```\n\n\n"
},
{
"name": "DETERMINISTIC_DECRYPT_BYTES",
"arguments": [],
"category": "AEAD_encryption",
"description_markdown": "```\nDETERMINISTIC_DECRYPT_BYTES(keyset, ciphertext, additional_data)\n```\n\n **Description** \n\nUses the matching key from`keyset`to decrypt`ciphertext`and verifies the\nintegrity of the data using`additional_data`. Returns an error if decryption\nfails.\n\n`keyset`is a serialized`BYTES`value or a`STRUCT`value returned by one of the`KEYS`functions.`keyset`must contain\nthe key that was used to encrypt`ciphertext`, the key must be in an`'ENABLED'`state, and the key must be of type`DETERMINISTIC_AEAD_AES_SIV_CMAC_256`, or\nelse the function returns an error.`DETERMINISTIC_DECRYPT_BYTES`identifies the\nmatching key in`keyset`by finding the key with the key ID that matches the one\nencrypted in`ciphertext`.\n\n`ciphertext`is a`BYTES`value that is the result of a call to`DETERMINISTIC_ENCRYPT`where the input`plaintext`was of type`BYTES`.\n\nThe ciphertext must follow Tink's[wire format](https://developers.google.com/tink/wire-format#deterministic_aead). The first\nbyte of`ciphertext`should contain a Tink key version followed by a 4 byte key\nhint. If`ciphertext`includes an initialization vector (IV), it should be the\nnext bytes of`ciphertext`. If`ciphertext`includes an authentication tag, it\nshould be the last bytes of`ciphertext`. If the IV and authentic tag are one\n(SIV), it should be the first bytes of`ciphertext`. The IV and authentication\ntag commonly require 16 bytes, but may vary in size.\n\n`additional_data`is a`STRING`or`BYTES`value that binds the ciphertext to\nits context. This forces the ciphertext to be decrypted in the same context in\nwhich it was encrypted. This function casts any`STRING`value to`BYTES`. This\nmust be the same as the`additional_data`provided to`DETERMINISTIC_ENCRYPT`to\nencrypt`ciphertext`, ignoring its type, or else the function returns an error.\n\n **Return Data Type** \n\n`BYTES`\n\n **Example** \n\nThis example creates a table of unique IDs with associated plaintext values and\nkeysets. 
Then it uses these keysets to encrypt the plaintext values as`BYTES`and store them in a new table. Finally, it uses`DETERMINISTIC_DECRYPT_BYTES`to\ndecrypt the encrypted values and display them as plaintext.\n\nThe following statement creates a table`CustomerKeysets`containing a column of\nunique IDs, a column of`DETERMINISTIC_AEAD_AES_SIV_CMAC_256`keysets, and a\ncolumn of favorite animals.\n\n```\nCREATE TABLE deterministic.CustomerKeysets AS\nSELECT\n 1 AS customer_id,\n KEYS.NEW_KEYSET('DETERMINISTIC_AEAD_AES_SIV_CMAC_256') AS keyset,\n b'jaguar' AS favorite_animal\nUNION ALL\nSELECT\n 2 AS customer_id,\n KEYS.NEW_KEYSET('DETERMINISTIC_AEAD_AES_SIV_CMAC_256') AS keyset,\n b'zebra' AS favorite_animal\nUNION ALL\nSELECT\n 3 AS customer_id,\n KEYS.NEW_KEYSET('DETERMINISTIC_AEAD_AES_SIV_CMAC_256') AS keyset,\n b'nautilus' AS favorite_animal;\n```\n\nThe following statement creates a table`EncryptedCustomerData`containing a\ncolumn of unique IDs and a column of ciphertext. The statement encrypts the\nplaintext`favorite_animal`using the keyset value from`CustomerKeysets`corresponding to each unique ID.\n\n```\nCREATE TABLE deterministic.EncryptedCustomerData AS\nSELECT\n customer_id,\n DETERMINISTIC_ENCRYPT(ck.keyset, favorite_animal, CAST(CAST(customer_id AS STRING) AS BYTES))\n AS encrypted_animal\nFROM\n deterministic.CustomerKeysets AS ck;\n```\n\nThe following query uses the keysets in the`CustomerKeysets`table to decrypt\ndata in the`EncryptedCustomerData`table.\n\n```\nSELECT\n ecd.customer_id,\n DETERMINISTIC_DECRYPT_BYTES(\n (SELECT ck.keyset\n FROM deterministic.CustomerKeysets AS ck\n WHERE ecd.customer_id = ck.customer_id),\n ecd.encrypted_animal,\n CAST(CAST(ecd.customer_id AS STRING) AS BYTES)\n ) AS favorite_animal\nFROM deterministic.EncryptedCustomerData AS ecd;\n```\n\n\n"
},
{
"name": "DETERMINISTIC_DECRYPT_STRING",
"arguments": [],
"category": "AEAD_encryption",
"description_markdown": "```\nDETERMINISTIC_DECRYPT_STRING(keyset, ciphertext, additional_data)\n```\n\n **Description** \n\nLike[DETERMINISTIC_DECRYPT_BYTES](#deterministic_decrypt_bytes), but where`plaintext`is of type`STRING`.\n\n **Return Data Type** \n\n`STRING`\n\n\n\n"
},
{
"name": "DETERMINISTIC_ENCRYPT",
"arguments": [],
"category": "AEAD_encryption",
"description_markdown": "```\nDETERMINISTIC_ENCRYPT(keyset, plaintext, additional_data)\n```\n\n **Description** \n\nEncrypts`plaintext`using the primary cryptographic key in`keyset`using[deterministic AEAD](https://developers.google.com/tink/deterministic-aead). The algorithm of the primary key must\nbe`DETERMINISTIC_AEAD_AES_SIV_CMAC_256`. Binds the ciphertext to the context\ndefined by`additional_data`. Returns`NULL`if any input is`NULL`.\n\n`keyset`is a serialized`BYTES`value or a`STRUCT`value returned by one of the`KEYS`functions.\n\n`plaintext`is the`STRING`or`BYTES`value to be encrypted.\n\n`additional_data`is a`STRING`or`BYTES`value that binds the ciphertext to\nits context. This forces the ciphertext to be decrypted in the same context in\nwhich it was encrypted.`plaintext`and`additional_data`must be of the same\ntype.`DETERMINISTIC_ENCRYPT(keyset, string1, string2)`is equivalent to`DETERMINISTIC_ENCRYPT(keyset, CAST(string1 AS BYTES), CAST(string2 AS BYTES))`.\n\nThe output is ciphertext`BYTES`. The ciphertext contains a[Tink-specific](https://github.com/google/tink/blob/master/docs/KEY-MANAGEMENT.md)prefix indicating the key used to perform the encryption.\nGiven an identical`keyset`and`plaintext`, this function returns the same\nciphertext each time it is invoked (including across queries).\n\n **Return Data Type** \n\n`BYTES`\n\n **Example** \n\nThe following query uses the keysets for each`customer_id`in the`CustomerKeysets`table to encrypt the value of the plaintext`favorite_animal`in the`PlaintextCustomerData`table corresponding to that`customer_id`. 
The\noutput contains a column of`customer_id`values and a column of corresponding\nciphertext output as`BYTES`.\n\n```\nWITH CustomerKeysets AS (\n SELECT 1 AS customer_id,\n KEYS.NEW_KEYSET('DETERMINISTIC_AEAD_AES_SIV_CMAC_256') AS keyset UNION ALL\n SELECT 2, KEYS.NEW_KEYSET('DETERMINISTIC_AEAD_AES_SIV_CMAC_256') UNION ALL\n SELECT 3, KEYS.NEW_KEYSET('DETERMINISTIC_AEAD_AES_SIV_CMAC_256')\n), PlaintextCustomerData AS (\n SELECT 1 AS customer_id, 'elephant' AS favorite_animal UNION ALL\n SELECT 2, 'walrus' UNION ALL\n SELECT 3, 'leopard'\n)\nSELECT\n pcd.customer_id,\n DETERMINISTIC_ENCRYPT(\n (SELECT keyset\n FROM CustomerKeysets AS ck\n WHERE ck.customer_id = pcd.customer_id),\n pcd.favorite_animal,\n CAST(pcd.customer_id AS STRING)\n ) AS encrypted_animal\nFROM PlaintextCustomerData AS pcd;\n```\n\n\n"
},
{
"name": "DIV",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nDIV(X, Y)\n```\n\n **Description** \n\nReturns the result of integer division of X by Y. Division by zero returns\nan error. Division by -1 may overflow.\n\n| X | Y | DIV(X, Y) |\n| --- | --- | --- |\n| 20 | 4 | 5 |\n| 12 | -7 | -1 |\n| 20 | 3 | 6 |\n| 0 | 20 | 0 |\n| 20 | 0 | Error |\n\n **Return Data Type** \n\nThe return data type is determined by the argument types with the following\ntable.\n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` |\n| --- | --- | --- | --- |\n| `INT64` | `INT64` | `NUMERIC` | `BIGNUMERIC` |\n| `NUMERIC` | `NUMERIC` | `NUMERIC` | `BIGNUMERIC` |\n| `BIGNUMERIC` | `BIGNUMERIC` | `BIGNUMERIC` | `BIGNUMERIC` |\n\n\n\n"
},
{
"name": "DLP_DETERMINISTIC_DECRYPT",
"arguments": [],
"category": "DLP_encryption",
"description_markdown": "```\nDLP_DETERMINISTIC_DECRYPT(key, ciphertext, surrogate)\n```\n\n```\nDLP_DETERMINISTIC_DECRYPT(key, ciphertext, surrogate, context)\n```\n\n **Description** \n\nThis function decrypts`ciphertext`using an encryption key derived from`key`and`context`. You can use`surrogate`to prepend the decryption\nresult. To use DLP functions, you need a[new cryptographic key and then use that key to get a wrapped key](/bigquery/docs/column-key-encrypt#wrapped-key-dlp-functions).\n\n **Definitions** \n\n- ` key`: A serialized` BYTES`value returned by[DLP_KEY_CHAIN](#dlp_key_chain).` key`must be set to` ENABLED`in Cloud KMS. For\ninformation about how to generate a wrapped key, see[gcloud kms encrypt](https://cloud.google.com/sdk/gcloud/reference/kms/encrypt).\n- ` ciphertext`: The` STRING`value to decrypt.\n- ` surrogate`: A` STRING`value that you can prepend to output. If you don't\nwant to use` surrogate`, pass an empty string (enclosed in` \"\"`).\n- ` context`: A` STRING`value that is used with a\nCloud KMS key to derive a data encryption key. 
For more information,\nsee[CryptoDeterministicConfig:context](https://cloud.google.com/dlp/docs/reference/rest/v2/projects.deidentifyTemplates#cryptodeterministicconfig).\n\n **Return data type** \n\n`STRING`\n\n **Examples** \n\nIn the following query, the wrapped key is presented in a`BYTES`literal format:\n\n```\nSELECT\n DLP_DETERMINISTIC_DECRYPT(\n DLP_KEY_CHAIN(\n 'gcp-kms://projects/myproject/locations/us/keyRings/kms-test/cryptoKeys/test-Kek',\n b'\\012\\044\\000\\325\\155\\264\\153\\246\\071\\172\\130\\372\\305\\103\\047\\342\\356\\061\\077\\014\\030\\126\\147\\041\\126\\150\\012\\036\\020\\202\\215\\044\\267\\310\\331\\014\\116\\233\\022\\071\\000\\363\\344\\230\\067\\274\\007\\340\\273\\016\\212\\151\\226\\064\\200\\377\\303\\207\\103\\147\\052\\267\\035\\350\\004\\147\\365\\251\\271\\133\\062\\251\\246\\152\\177\\017\\005\\270\\044\\141\\211\\116\\337\\043\\035\\263\\122\\340\\110\\333\\266\\220\\377\\247\\204\\215\\233'),\n 'AWDeSznl9C7+NzTaCgiqiEAZ8Y55fZSuvCQ=',\n '',\n 'aad') AS results;\n\n/*--------------------------------------*\n | results |\n +--------------------------------------+\n | Plaintext |\n *--------------------------------------*/\n```\n\nIn the following query, the wrapped key is presented in the base64 format:\n\n```\nDECLARE DLP_KEY_VALUE BYTES;\n\nSET DLP_KEY_VALUE =\n FROM_BASE64(\n 'CiQA1W20a6Y5elj6xUMn4u4xPwwYVmchVmgKHhCCjSS3yNkMTpsSOQDz5Jg3vAfguw6KaZY0gP/Dh0NnKrcd6ARn9am5WzKppmp/DwW4JGGJTt8jHbNS4EjbtpD/p4SNmw==');\n\nSELECT\n DLP_DETERMINISTIC_DECRYPT(\n DLP_KEY_CHAIN(\n 'gcp-kms://projects/myproject/locations/us/keyRings/kms-test/cryptoKeys/test-Kek',\n DLP_KEY_VALUE),\n 'your_surrogate(36):AWDeSznl9C7+NzTaCgiqiEAZ8Y55fZSuvCQ=',\n 'your_surrogate',\n 'aad') AS results;\n\n/*--------------------------------------*\n | results |\n +--------------------------------------+\n | Plaintext |\n *--------------------------------------*/\n```\n\n\n"
},
{
"name": "DLP_DETERMINISTIC_ENCRYPT",
"arguments": [],
"category": "DLP_encryption",
"description_markdown": "```\nDLP_DETERMINISTIC_ENCRYPT(key, plaintext, surrogate)\n```\n\n```\nDLP_DETERMINISTIC_ENCRYPT(key, plaintext, surrogate, context)\n```\n\n **Description** \n\nThis function derives a data encryption key from`key`and`context`, and then\nencrypts`plaintext`. You can use`surrogate`to prepend the\nencryption result. To use DLP functions, you need a[new cryptographic key and then use that key to get a wrapped key](/bigquery/docs/column-key-encrypt#wrapped-key-dlp-functions).\n\n **Definitions** \n\n- ` key`: A serialized` BYTES`value that is returned by[DLP_KEY_CHAIN](#dlp_key_chain).` key`must be set to` ENABLED`in Cloud KMS. For\ninformation about how to generate a wrapped key, see[gcloud kms encrypt](https://cloud.google.com/sdk/gcloud/reference/kms/encrypt).\n- ` plaintext`: The` STRING`value to encrypt.\n- ` surrogate`: A` STRING`value that you can prepend to output. If you don't\nwant to use` surrogate`, pass an empty string (enclosed in` \"\"`).\n- ` context`: A user-provided` STRING`value that is used with a\nCloud KMS key to derive a data encryption key. 
For more information,\nsee[CryptoDeterministicConfig:context](https://cloud.google.com/dlp/docs/reference/rest/v2/projects.deidentifyTemplates#cryptodeterministicconfig).\n\n **Return data type** \n\n`STRING`\n\n **Examples** \n\nIn the following query, the wrapped key is presented in a`BYTES`literal format:\n\n```\nSELECT\n DLP_DETERMINISTIC_ENCRYPT(\n DLP_KEY_CHAIN(\n 'gcp-kms://projects/myproject/locations/us/keyRings/kms-test/cryptoKeys/test-KEK',\n b'\\012\\044\\000\\325\\155\\264\\153\\246\\071\\172\\130\\372\\305\\103\\047\\342\\356\\061\\077\\014\\030\\126\\147\\041\\126\\150\\012\\036\\020\\202\\215\\044\\267\\310\\331\\014\\116\\233\\022\\071\\000\\363\\344\\230\\067\\274\\007\\340\\273\\016\\212\\151\\226\\064\\200\\377\\303\\207\\103\\147\\052\\267\\035\\350\\004\\147\\365\\251\\271\\133\\062\\251\\246\\152\\177\\017\\005\\270\\044\\141\\211\\116\\337\\043\\035\\263\\122\\340\\110\\333\\266\\220\\377\\247\\204\\215\\233'),\n 'Plaintext',\n '',\n 'aad') AS results;\n\n/*--------------------------------------*\n | results |\n +--------------------------------------+\n | AWDeSznl9C7+NzTaCgiqiEAZ8Y55fZSuvCQ= |\n *--------------------------------------*/\n```\n\nIn the following query, the wrapped key is presented in the base64 format:\n\n```\nDECLARE DLP_KEY_VALUE BYTES;\n\nSET DLP_KEY_VALUE =\n FROM_BASE64(\n 'CiQA1W20a6Y5elj6xUMn4u4xPwwYVmchVmgKHhCCjSS3yNkMTpsSOQDz5Jg3vAfguw6KaZY0gP/Dh0NnKrcd6ARn9am5WzKppmp/DwW4JGGJTt8jHbNS4EjbtpD/p4SNmw==');\n\nSELECT\n DLP_DETERMINISTIC_ENCRYPT(\n DLP_KEY_CHAIN(\n 'gcp-kms://projects/myproject/locations/us/keyRings/kms-test/cryptoKeys/test-Kek',\n DLP_KEY_VALUE),\n 'Plaintext',\n 'your_surrogate',\n 'aad') AS results;\n\n/*---------------------------------------------------------*\n | results |\n +---------------------------------------------------------+\n | your_surrogate(36):AWDeSznl9C7+NzTaCgiqiEAZ8Y55fZSuvCQ= |\n *---------------------------------------------------------*/\n```\n\n\n"
},
{
"name": "DLP_KEY_CHAIN",
"arguments": [],
"category": "DLP_encryption",
"description_markdown": "```\nDLP_KEY_CHAIN(kms_resource_name, wrapped_key)\n```\n\n **Description** \n\nYou can use this function instead of the`key`argument for\nDLP deterministic encryption functions. This function lets\nyou use the[AES-SIV encryption functions](https://cloud.google.com/dlp/docs/pseudonymization#aes-siv)without including`plaintext`keys in a query. To use DLP functions, you need a[new cryptographic key and then use that key to get a wrapped key](/bigquery/docs/column-key-encrypt#wrapped-key-dlp-functions).\n\n **Definitions** \n\n- ` kms_resource_name`: A` STRING`literal that contains the resource path to the\nCloud KMS key.` kms_resource_name`cannot be` NULL`and must reside\nin the same Cloud region where this function is executed. This argument is\nused to derive the data encryption key in the` DLP_DETERMINISTIC_DECRYPT`and` DLP_DETERMINISTIC_ENCRYPT`functions. A Cloud KMS key looks like\nthis:\n \n \n ```\n gcp-kms://projects/my-project/locations/us/keyRings/my-key-ring/cryptoKeys/my-crypto-key\n ```\n \n \n- ` wrapped_key`: A` BYTES`literal that represents a secret text chosen by the\nuser. This secret text can be 16, 24, or 32 bytes. 
For information about\nhow to generate a wrapped key, see[gcloud kms encrypt](https://cloud.google.com/sdk/gcloud/reference/kms/encrypt).\n \n \n\n **Return data type** \n\n`STRUCT`\n\n **Examples** \n\nIn the following query, the wrapped key is presented in a`BYTES`literal format:\n\n```\nSELECT\n DLP_DETERMINISTIC_ENCRYPT(\n DLP_KEY_CHAIN(\n 'gcp-kms://projects/myproject/locations/us/keyRings/kms-test/cryptoKeys/test-Kek',\n b'\\012\\044\\000\\325\\155\\264\\153\\246\\071\\172\\130\\372\\305\\103\\047\\342\\356\\061\\077\\014\\030\\126\\147\\041\\126\\150\\012\\036\\020\\202\\215\\044\\267\\310\\331\\014\\116\\233\\022\\071\\000\\363\\344\\230\\067\\274\\007\\340\\273\\016\\212\\151\\226\\064\\200\\377\\303\\207\\103\\147\\052\\267\\035\\350\\004\\147\\365\\251\\271\\133\\062\\251\\246\\152\\177\\017\\005\\270\\044\\141\\211\\116\\337\\043\\035\\263\\122\\340\\110\\333\\266\\220\\377\\247\\204\\215\\233'),\n 'Plaintext',\n '',\n 'aad') AS results;\n\n/*--------------------------------------*\n | results |\n +--------------------------------------+\n | AWDeSznl9C7+NzTaCgiqiEAZ8Y55fZSuvCQ= |\n *--------------------------------------*/\n```\n\nIn the following query, the wrapped key is presented in the base64 format:\n\n```\nDECLARE DLP_KEY_VALUE BYTES;\n\nSET DLP_KEY_VALUE =\n FROM_BASE64(\n 'CiQA1W20a6Y5elj6xUMn4u4xPwwYVmchVmgKHhCCjSS3yNkMTpsSOQDz5Jg3vAfguw6KaZY0gP/Dh0NnKrcd6ARn9am5WzKppmp/DwW4JGGJTt8jHbNS4EjbtpD/p4SNmw==');\n\nSELECT\n DLP_DETERMINISTIC_ENCRYPT(\n DLP_KEY_CHAIN(\n 'gcp-kms://projects/myproject/locations/us/keyRings/kms-test/cryptoKeys/test-Kek',\n DLP_KEY_VALUE),\n 'Plaintext',\n '',\n 'aad') AS results;\n\n/*--------------------------------------*\n | results |\n +--------------------------------------+\n | AWDeSznl9C7+NzTaCgiqiEAZ8Y55fZSuvCQ= |\n *--------------------------------------*/\n```\n\n\n<span id=\"geography_functions\">\n## Geography functions\n\n</span>\nGoogleSQL for BigQuery supports geography functions.\nGeography functions 
operate on or generate GoogleSQL`GEOGRAPHY`values. The signature of most geography\nfunctions starts with`ST_`. GoogleSQL for BigQuery supports the following functions\nthat can be used to analyze geographical data, determine spatial relationships\nbetween geographical features, and construct or manipulate`GEOGRAPHY`s.\n\nAll GoogleSQL geography functions return`NULL`if any input argument\nis`NULL`.\n\n\n\n"
},
{
"name": "EDIT_DISTANCE",
"arguments": [],
"category": "String",
"description_markdown": "```\nEDIT_DISTANCE(value1, value2, [max_distance => max_distance_value])\n```\n\n **Description** \n\nComputes the[Levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance)between two`STRING`or`BYTES`values.\n\n **Definitions** \n\n- ` value1`: The first` STRING`or` BYTES`value to compare.\n- ` value2`: The second` STRING`or` BYTES`value to compare.\n- ` max_distance`: Optional mandatory-named argument. Takes a non-negative` INT64`value that represents the maximum distance between the two values\nto compute.\n \n If this distance is exceeded, the function returns this value.\nThe default value for this argument is the maximum size of` value1`and` value2`.\n \n \n\n **Details** \n\nIf`value1`or`value2`is`NULL`,`NULL`is returned.\n\nYou can only compare values of the same type. Otherwise, an error is produced.\n\n **Return type** \n\n`INT64`\n\n **Examples** \n\nIn the following example, the first character in both strings is different:\n\n```\nSELECT EDIT_DISTANCE('a', 'b') AS results;\n\n/*---------*\n | results |\n +---------+\n | 1 |\n *---------*/\n```\n\nIn the following example, the first and second characters in both strings are\ndifferent:\n\n```\nSELECT EDIT_DISTANCE('aa', 'b') AS results;\n\n/*---------*\n | results |\n +---------+\n | 2 |\n *---------*/\n```\n\nIn the following example, only the first character in both strings is\ndifferent:\n\n```\nSELECT EDIT_DISTANCE('aa', 'ba') AS results;\n\n/*---------*\n | results |\n +---------+\n | 1 |\n *---------*/\n```\n\nIn the following example, the last six characters are different, but because\nthe maximum distance is`2`, this function exits early and returns`2`, the\nmaximum distance:\n\n```\nSELECT EDIT_DISTANCE('abcdefg', 'a', max_distance => 2) AS results;\n\n/*---------*\n | results |\n +---------+\n | 2 |\n *---------*/\n```\n\n\n"
},
{
"name": "ENDS_WITH",
"arguments": [],
"category": "String",
"description_markdown": "```\nENDS_WITH(value, suffix)\n```\n\n **Description** \n\nTakes two`STRING`or`BYTES`values. Returns`TRUE`if`suffix`is a suffix of`value`.\n\nThis function supports specifying[collation](/bigquery/docs/reference/standard-sql/collation-concepts#collate_about).\n\n **Return type** \n\n`BOOL`\n\n **Examples** \n\n```\nWITH items AS\n (SELECT 'apple' as item\n UNION ALL\n SELECT 'banana' as item\n UNION ALL\n SELECT 'orange' as item)\n\nSELECT\n ENDS_WITH(item, 'e') as example\nFROM items;\n\n/*---------*\n | example |\n +---------+\n | True |\n | False |\n | True |\n *---------*/\n```\n\n\n"
},
{
"name": "ERROR",
"arguments": [],
"category": "Debugging",
"description_markdown": "```\nERROR(error_message)\n```\n\n **Description** \n\nReturns an error.\n\n **Definitions** \n\n- ` error_message`: A` STRING`value that represents the error message to\nproduce. Any whitespace characters beyond a\nsingle space are trimmed from the results.\n\n **Details** \n\n`ERROR`is treated like any other expression that may\nresult in an error: there is no special guarantee of evaluation order.\n\n **Return Data Type** \n\nGoogleSQL infers the return type in context.\n\n **Examples** \n\nIn the following example, the query returns an error message if the value of the\nrow does not match one of two defined values.\n\n```\nSELECT\n CASE\n WHEN value = 'foo' THEN 'Value is foo.'\n WHEN value = 'bar' THEN 'Value is bar.'\n ELSE ERROR(CONCAT('Found unexpected value: ', value))\n END AS new_value\nFROM (\n SELECT 'foo' AS value UNION ALL\n SELECT 'bar' AS value UNION ALL\n SELECT 'baz' AS value);\n\n-- Found unexpected value: baz\n```\n\nIn the following example, GoogleSQL may evaluate the`ERROR`function\nbefore or after thecondition, because GoogleSQL\ngenerally provides no ordering guarantees between`WHERE`clause conditions and\nthere are no special guarantees for the`ERROR`function.\n\n```\nSELECT *\nFROM (SELECT -1 AS x)\nWHERE x > 0 AND ERROR('Example error');\n```\n\nIn the next example, the`WHERE`clause evaluates an`IF`condition, which\nensures that GoogleSQL only evaluates the`ERROR`function if the\ncondition fails.\n\n```\nSELECT *\nFROM (SELECT -1 AS x)\nWHERE IF(x > 0, true, ERROR(FORMAT('Error: x must be positive but is %t', x)));\n\n-- Error: x must be positive but is -1\n```\n\n\n<span id=\"aggregate-dp-functions\">\n## Differentially private aggregate functions\n\n</span>\nGoogleSQL for BigQuery supports differentially private aggregate functions.\nFor an explanation of how aggregate functions work, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nYou can only use 
differentially private aggregate functions with[differentially private queries](/bigquery/docs/differential-privacy)in a[differential privacy clause](/bigquery/docs/reference/standard-sql/query-syntax#dp_clause).\n\n **Note:** In this topic, the privacy parameters in the examples are not\nrecommendations. You should work with your privacy or security officer to\ndetermine the optimal privacy parameters for your dataset and organization.\n"
},
{
"name": "EUCLIDEAN_DISTANCE",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nEUCLIDEAN_DISTANCE(vector1, vector2)\n```\n\n **Description** \n\nComputes the[Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance)between two vectors.\n\n **Definitions** \n\n- ` vector1`: A vector that is represented by an` ARRAY<T>`value or a sparse vector that is\nrepresented by an` ARRAY<STRUCT<dimension,magnitude>>`value.\n- ` vector2`: A vector that is represented by an` ARRAY<T>`value or a sparse vector that is\nrepresented by an` ARRAY<STRUCT<dimension,magnitude>>`value.\n\n **Details** \n\n- ` ARRAY<T>`can be used to represent a vector. Each zero-based index in this\narray represents a dimension. The value for each element in this array\nrepresents a magnitude.\n \n ` T`can represent the following and must be the same for both\nvectors:\n \n \n - ` FLOAT64`In the following example vector, there are four dimensions. The magnitude\nis` 10.0`for dimension` 0`,` 55.0`for dimension` 1`,` 40.0`for\ndimension` 2`, and` 34.0`for dimension` 3`:\n \n \n ```\n [10.0, 55.0, 40.0, 34.0]\n ```\n \n \n- ` ARRAY<STRUCT<dimension,magnitude>>`can be used to represent a\nsparse vector. With a sparse vector, you only need to include\ndimension-magnitude pairs for non-zero magnitudes. If a magnitude isn't\npresent in the sparse vector, the magnitude is implicitly understood to be\nzero.\n \n For example, if you have a vector with 10,000 dimensions, but only 10\ndimensions have non-zero magnitudes, then the vector is a sparse vector.\nAs a result, it's more efficient to describe a sparse vector by only\nmentioning its non-zero magnitudes.\n \n In` ARRAY<STRUCT<dimension,magnitude>>`,` STRUCT<dimension,magnitude>`represents a dimension-magnitude pair for each non-zero magnitude in a\nsparse vector. 
These parts need to be included for each dimension-magnitude\npair:\n\n - `dimension`: A `STRING` or `INT64` value that represents a\ndimension in a vector.\n\n - `magnitude`: A `FLOAT64` value that represents a\nnon-zero magnitude for a specific dimension in a vector.\n\n You don't need to include dimension-magnitude pairs with zero magnitudes in a\nsparse vector. For example, the following sparse vector and\nnon-sparse vector are equivalent:\n\n ```\n -- sparse vector ARRAY<STRUCT<INT64, FLOAT64>>\n [(1, 10.0), (2, 30.0), (5, 40.0)]\n ```\n\n ```\n -- vector ARRAY<FLOAT64>\n [0.0, 10.0, 30.0, 0.0, 0.0, 40.0]\n ```\n\n In a sparse vector, dimension-magnitude pairs don't need to be in any\nparticular order. The following sparse vectors are equivalent:\n\n ```\n [('a', 10.0), ('b', 30.0), ('d', 40.0)]\n ```\n\n ```\n [('d', 40.0), ('a', 10.0), ('b', 30.0)]\n ```\n\n- Both non-sparse vectors\nin this function must share the same dimensions, and if they don't, an error\nis produced.\n\n- A vector can be a zero vector. A vector is a zero vector if it has\nno dimensions or all dimensions have a magnitude of `0`, such as `[]` or `[0.0, 0.0]`.\n\n- An error is produced if a magnitude in a vector is `NULL`.\n\n- If a vector is `NULL`, `NULL` is returned.\n\n **Return type** \n\n`FLOAT64`\n\n **Examples** \n\nIn the following example, non-sparse vectors\nare used to compute the Euclidean distance:\n\n```\nSELECT EUCLIDEAN_DISTANCE([1.0, 2.0], [3.0, 4.0]) AS results;\n\n/*----------*\n | results |\n +----------+\n | 2.828 |\n *----------*/\n```\n\nIn the following example, sparse vectors are used to compute the\nEuclidean distance:\n\n```\nSELECT EUCLIDEAN_DISTANCE(\n [(1, 1.0), (2, 2.0)],\n [(2, 4.0), (1, 3.0)]) AS results;\n\n /*----------*\n | results |\n +----------+\n | 2.828 |\n *----------*/\n```\n\nThe ordering of magnitudes in a vector doesn't impact the results\nproduced by this function. For example, these queries produce the same results\neven though the magnitudes in each vector are in a different order:\n\n```\nSELECT EUCLIDEAN_DISTANCE([1.0, 2.0], [3.0, 4.0]);\n```\n\n```\nSELECT EUCLIDEAN_DISTANCE([2.0, 1.0], [4.0, 3.0]);\n```\n\n```\nSELECT EUCLIDEAN_DISTANCE([(1, 1.0), (2, 2.0)], [(1, 3.0), (2, 4.0)]) AS results;\n```\n\n```\n/*----------*\n | results |\n +----------+\n | 2.828 |\n *----------*/\n```\n\nBoth non-sparse vectors must have the same\ndimensions. If not, an error is produced. In the following example, the first\nvector has two dimensions and the second vector has three:\n\n```\n-- ERROR\nSELECT EUCLIDEAN_DISTANCE([9.0, 7.0], [8.0, 4.0, 5.0]) AS results;\n```\n\nIf you use sparse vectors and you repeat a dimension, an error is\nproduced:\n\n```\n-- ERROR\nSELECT EUCLIDEAN_DISTANCE(\n [(1, 9.0), (2, 7.0), (2, 8.0)], [(1, 8.0), (2, 4.0), (3, 5.0)]) AS results;\n```\n\n\n"
},
{
"name": "EXP",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nEXP(X)\n```\n\n **Description** \n\nComputes *e* to the power of X, also called the natural exponential function. If\nthe result underflows, this function returns a zero. Generates an error if the\nresult overflows.\n\n| X | EXP(X) |\n| --- | --- |\n| 0.0 | 1.0 |\n| `+inf` | `+inf` |\n| `-inf` | 0.0 |\n\n **Return Data Type** \n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| --- | --- | --- | --- | --- |\n| OUTPUT | `FLOAT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n\n\n\n"
},
{
"name": "EXTERNAL_OBJECT_TRANSFORM",
"arguments": [],
"category": "Table",
"description_markdown": "```\nEXTERNAL_OBJECT_TRANSFORM(TABLE object_table_name, transform_types_array)\n```\n\n **Description** \n\nThis function returns a transformed object table with the original columns plus\none or more additional columns, depending on the`transform_types`values\nspecified.\n\nThis function only supports[object tables](https://cloud.google.com/bigquery/docs/object-table-introduction)as inputs. Subqueries or any other types of tables are not supported.\n\n`object_table_name`is the name of the object table to be transformed, in\nthe format`dataset_name.object_table_name`.\n\n`transform_types_array`is an array of`STRING`literals. Currently, the only\nsupported`transform_types_array`value is`SIGNED_URL`. Specifying`SIGNED_URL`creates read-only signed URLs for the objects in the identified object table,\nwhich are returned in a`signed_url`column. Generated signed URLs are\nvalid for 6 hours.\n\n **Return Type** \n\nTABLE\n\n **Example** \n\nRun the following query to return URIs and signed URLs for the objects in the`mydataset.myobjecttable`object table.\n\n```\nSELECT uri, signed_url\nFROM EXTERNAL_OBJECT_TRANSFORM(TABLE mydataset.myobjecttable, ['SIGNED_URL']);\n\n--The preceding statement returns results similar to the following:\n/*-----------------------------------------------------------------------------------------------------------------------------*\n | uri | signed_url |\n +-----------------------------------------------------------------------------------------------------------------------------+\n | gs://myobjecttable/1234_Main_St.jpeg | https://storage.googleapis.com/mybucket/1234_Main_St.jpeg?X-Goog-Algorithm=1234abcd… |\n +-----------------------------------------------------------------------------------------------------------------------------+\n | gs://myobjecttable/345_River_Rd.jpeg | https://storage.googleapis.com/mybucket/345_River_Rd.jpeg?X-Goog-Algorithm=2345bcde… |\n 
*-----------------------------------------------------------------------------------------------------------------------------*/\n```\n\n\n"
},
{
"name": "EXTRACT",
"arguments": [],
"category": "Date",
"description_markdown": "```\nEXTRACT(part FROM date_expression)\n```\n\n **Description** \n\nReturns the value corresponding to the specified date part. The`part`must\nbe one of:\n\n- ` DAYOFWEEK`: Returns values in the range [1,7] with Sunday as the first day\nof the week.\n- ` DAY`\n- ` DAYOFYEAR`\n- ` WEEK`: Returns the week number of the date in the range [0, 53]. Weeks begin\nwith Sunday, and dates prior to the first Sunday of the year are in week 0.\n- ` WEEK(<WEEKDAY>)`: Returns the week number of the date in the range [0, 53].\nWeeks begin on` WEEKDAY`. Dates prior to\nthe first` WEEKDAY`of the year are in week 0. Valid values for` WEEKDAY`are` SUNDAY`,` MONDAY`,` TUESDAY`,` WEDNESDAY`,` THURSDAY`,` FRIDAY`, and` SATURDAY`.\n- ` ISOWEEK`: Returns the[ISO 8601 week](https://en.wikipedia.org/wiki/ISO_week_date)number of the` date_expression`.` ISOWEEK`s begin on Monday. Return values\nare in the range [1, 53]. The first` ISOWEEK`of each ISO year begins on the\nMonday before the first Thursday of the Gregorian calendar year.\n- ` MONTH`\n- ` QUARTER`: Returns values in the range [1,4].\n- ` YEAR`\n- ` ISOYEAR`: Returns the[ISO 8601](https://en.wikipedia.org/wiki/ISO_8601)week-numbering year, which is the Gregorian calendar year containing the\nThursday of the week to which` date_expression`belongs.\n\n **Return Data Type** \n\nINT64\n\n **Examples** \n\nIn the following example,`EXTRACT`returns a value corresponding to the`DAY`date part.\n\n```\nSELECT EXTRACT(DAY FROM DATE '2013-12-25') AS the_day;\n\n/*---------*\n | the_day |\n +---------+\n | 25 |\n *---------*/\n```\n\nIn the following example,`EXTRACT`returns values corresponding to different\ndate parts from a column of dates near the end of the year.\n\n```\nSELECT\n date,\n EXTRACT(ISOYEAR FROM date) AS isoyear,\n EXTRACT(ISOWEEK FROM date) AS isoweek,\n EXTRACT(YEAR FROM date) AS year,\n EXTRACT(WEEK FROM date) AS week\nFROM UNNEST(GENERATE_DATE_ARRAY('2015-12-23', '2016-01-09')) AS date\nORDER BY 
date;\n\n/*------------+---------+---------+------+------*\n | date | isoyear | isoweek | year | week |\n +------------+---------+---------+------+------+\n | 2015-12-23 | 2015 | 52 | 2015 | 51 |\n | 2015-12-24 | 2015 | 52 | 2015 | 51 |\n | 2015-12-25 | 2015 | 52 | 2015 | 51 |\n | 2015-12-26 | 2015 | 52 | 2015 | 51 |\n | 2015-12-27 | 2015 | 52 | 2015 | 52 |\n | 2015-12-28 | 2015 | 53 | 2015 | 52 |\n | 2015-12-29 | 2015 | 53 | 2015 | 52 |\n | 2015-12-30 | 2015 | 53 | 2015 | 52 |\n | 2015-12-31 | 2015 | 53 | 2015 | 52 |\n | 2016-01-01 | 2015 | 53 | 2016 | 0 |\n | 2016-01-02 | 2015 | 53 | 2016 | 0 |\n | 2016-01-03 | 2015 | 53 | 2016 | 1 |\n | 2016-01-04 | 2016 | 1 | 2016 | 1 |\n | 2016-01-05 | 2016 | 1 | 2016 | 1 |\n | 2016-01-06 | 2016 | 1 | 2016 | 1 |\n | 2016-01-07 | 2016 | 1 | 2016 | 1 |\n | 2016-01-08 | 2016 | 1 | 2016 | 1 |\n | 2016-01-09 | 2016 | 1 | 2016 | 1 |\n *------------+---------+---------+------+------*/\n```\n\nIn the following example,`date_expression`falls on a Sunday.`EXTRACT`calculates the first column using weeks that begin on Sunday, and it calculates\nthe second column using weeks that begin on Monday.\n\n```\nWITH table AS (SELECT DATE('2017-11-05') AS date)\nSELECT\n date,\n EXTRACT(WEEK(SUNDAY) FROM date) AS week_sunday,\n EXTRACT(WEEK(MONDAY) FROM date) AS week_monday FROM table;\n\n/*------------+-------------+-------------*\n | date | week_sunday | week_monday |\n +------------+-------------+-------------+\n | 2017-11-05 | 45 | 44 |\n *------------+-------------+-------------*/\n```\n\n\n"
},
{
"name": "FARM_FINGERPRINT",
"arguments": [],
"category": "Hash",
"description_markdown": "```\nFARM_FINGERPRINT(value)\n```\n\n **Description** \n\nComputes the fingerprint of the`STRING`or`BYTES`input using the`Fingerprint64`function from the[open-source FarmHash library](https://github.com/google/farmhash). The output\nof this function for a particular input will never change.\n\n **Return type** \n\nINT64\n\n **Examples** \n\n```\nWITH example AS (\n SELECT 1 AS x, \"foo\" AS y, true AS z UNION ALL\n SELECT 2 AS x, \"apple\" AS y, false AS z UNION ALL\n SELECT 3 AS x, \"\" AS y, true AS z\n)\nSELECT\n *,\n FARM_FINGERPRINT(CONCAT(CAST(x AS STRING), y, CAST(z AS STRING)))\n AS row_fingerprint\nFROM example;\n/*---+-------+-------+----------------------*\n | x | y | z | row_fingerprint |\n +---+-------+-------+----------------------+\n | 1 | foo | true | -1541654101129638711 |\n | 2 | apple | false | 2794438866806483259 |\n | 3 | | true | -4880158226897771312 |\n *---+-------+-------+----------------------*/\n```\n\n\n"
},
{
"name": "FIRST_VALUE",
"arguments": [],
"category": "Navigation",
"description_markdown": "```\nFIRST_VALUE (value_expression [{RESPECT | IGNORE} NULLS])\nOVER over_clause\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n ORDER BY expression [ { ASC | DESC } ] [, ...]\n [ window_frame_clause ]\n```\n\n **Description** \n\nReturns the value of the`value_expression`for the first row in the current\nwindow frame.\n\nThis function includes`NULL`values in the calculation unless`IGNORE NULLS`is\npresent. If`IGNORE NULLS`is present, the function excludes`NULL`values from\nthe calculation.\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Supported Argument Types** \n\n`value_expression`can be any data type that an expression can return.\n\n **Return Data Type** \n\nSame type as`value_expression`.\n\n **Examples** \n\nThe following example computes the fastest time for each division.\n\n```\nWITH finishers AS\n (SELECT 'Sophia Liu' as name,\n TIMESTAMP '2016-10-18 2:51:45' as finish_time,\n 'F30-34' as division\n UNION ALL SELECT 'Lisa Stelzner', TIMESTAMP '2016-10-18 2:54:11', 'F35-39'\n UNION ALL SELECT 'Nikki Leith', TIMESTAMP '2016-10-18 2:59:01', 'F30-34'\n UNION ALL SELECT 'Lauren Matthews', TIMESTAMP '2016-10-18 3:01:17', 'F35-39'\n UNION ALL SELECT 'Desiree Berry', TIMESTAMP '2016-10-18 3:05:42', 'F35-39'\n UNION ALL SELECT 'Suzy Slane', TIMESTAMP '2016-10-18 3:06:24', 'F35-39'\n UNION ALL SELECT 'Jen Edwards', TIMESTAMP '2016-10-18 3:06:36', 'F30-34'\n UNION ALL SELECT 'Meghan Lederer', TIMESTAMP '2016-10-18 3:07:41', 'F30-34'\n UNION ALL SELECT 'Carly Forte', TIMESTAMP '2016-10-18 3:08:58', 'F25-29'\n UNION ALL SELECT 'Lauren Reasoner', TIMESTAMP '2016-10-18 3:10:14', 'F30-34')\nSELECT name,\n FORMAT_TIMESTAMP('%X', finish_time) AS finish_time,\n division,\n FORMAT_TIMESTAMP('%X', fastest_time) AS fastest_time,\n 
TIMESTAMP_DIFF(finish_time, fastest_time, SECOND) AS delta_in_seconds\nFROM (\n SELECT name,\n finish_time,\n division,\n FIRST_VALUE(finish_time)\n OVER (PARTITION BY division ORDER BY finish_time ASC\n ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS fastest_time\n FROM finishers);\n\n/*-----------------+-------------+----------+--------------+------------------*\n | name | finish_time | division | fastest_time | delta_in_seconds |\n +-----------------+-------------+----------+--------------+------------------+\n | Carly Forte | 03:08:58 | F25-29 | 03:08:58 | 0 |\n | Sophia Liu | 02:51:45 | F30-34 | 02:51:45 | 0 |\n | Nikki Leith | 02:59:01 | F30-34 | 02:51:45 | 436 |\n | Jen Edwards | 03:06:36 | F30-34 | 02:51:45 | 891 |\n | Meghan Lederer | 03:07:41 | F30-34 | 02:51:45 | 956 |\n | Lauren Reasoner | 03:10:14 | F30-34 | 02:51:45 | 1109 |\n | Lisa Stelzner | 02:54:11 | F35-39 | 02:54:11 | 0 |\n | Lauren Matthews | 03:01:17 | F35-39 | 02:54:11 | 426 |\n | Desiree Berry | 03:05:42 | F35-39 | 02:54:11 | 691 |\n | Suzy Slane | 03:06:24 | F35-39 | 02:54:11 | 733 |\n *-----------------+-------------+----------+--------------+------------------*/\n```\n\n\n"
},
{
"name": "FLOAT64",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nFLOAT64(json_expr[, wide_number_mode=>{ 'exact' | 'round' }])\n```\n\n **Description** \n\nConverts a JSON number to a SQL`FLOAT64`value.\n\nArguments:\n\n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '9.8'\n ```\n \n If the JSON value is not a number, an error is produced. If the expression\nis a SQL` NULL`, the function returns SQL` NULL`.\n \n \n- ` wide_number_mode`: Optional mandatory-named argument,\nwhich defines what happens with a number that cannot be\nrepresented as a` FLOAT64`without loss of\nprecision. This argument accepts one of the two case-sensitive values:\n \n \n - ` exact`: The function fails if the result cannot be represented as a` FLOAT64`without loss of precision.\n - ` round`(default): The numeric value stored in JSON will be rounded to` FLOAT64`. If such rounding is not possible,\nthe function fails.\n\n **Return type** \n\n`FLOAT64`\n\n **Examples** \n\n```\nSELECT FLOAT64(JSON '9.8') AS velocity;\n\n/*----------*\n | velocity |\n +----------+\n | 9.8 |\n *----------*/\n```\n\n```\nSELECT FLOAT64(JSON_QUERY(JSON '{\"vo2_max\": 39.1, \"age\": 18}', \"$.vo2_max\")) AS vo2_max;\n\n/*---------*\n | vo2_max |\n +---------+\n | 39.1 |\n *---------*/\n```\n\n```\nSELECT FLOAT64(JSON '18446744073709551615', wide_number_mode=>'round') as result;\n\n/*------------------------*\n | result |\n +------------------------+\n | 1.8446744073709552e+19 |\n *------------------------*/\n```\n\n```\nSELECT FLOAT64(JSON '18446744073709551615') as result;\n\n/*------------------------*\n | result |\n +------------------------+\n | 1.8446744073709552e+19 |\n *------------------------*/\n```\n\nThe following examples show how invalid requests are handled:\n\n```\n-- An error is thrown if JSON is not of type FLOAT64.\nSELECT FLOAT64(JSON '\"strawberry\"') AS result;\nSELECT FLOAT64(JSON 'null') AS result;\n\n-- An error is thrown because `wide_number_mode` is case-sensitive and not \"exact\" or \"round\".\nSELECT FLOAT64(JSON 
'123.4', wide_number_mode=>'EXACT') as result;\nSELECT FLOAT64(JSON '123.4', wide_number_mode=>'exac') as result;\n\n-- An error is thrown because the number cannot be converted to FLOAT64 without loss of precision.\nSELECT FLOAT64(JSON '18446744073709551615', wide_number_mode=>'exact') as result;\n\n-- Returns a SQL NULL\nSELECT SAFE.FLOAT64(JSON '\"strawberry\"') AS result;\n```\n\n\n"
},
{
"name": "FLOOR",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nFLOOR(X)\n```\n\n **Description** \n\nReturns the largest integral value that is not greater than X.\n\n| X | FLOOR(X) |\n| --- | --- |\n| 2.0 | 2.0 |\n| 2.3 | 2.0 |\n| 2.8 | 2.0 |\n| 2.5 | 2.0 |\n| -2.3 | -3.0 |\n| -2.8 | -3.0 |\n| -2.5 | -3.0 |\n| 0 | 0 |\n| `+inf` | `+inf` |\n| `-inf` | `-inf` |\n| `NaN` | `NaN` |\n\n **Return Data Type** \n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| --- | --- | --- | --- | --- |\n| OUTPUT | `FLOAT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n\n\n\n"
},
{
"name": "FORMAT",
"arguments": [],
"category": "String",
"description_markdown": "```\nFORMAT(format_string_expression, data_type_expression[, ...])\n```\n\n **Description** \n\n`FORMAT`formats a data type expression as a string.\n\n- ` format_string_expression`: Can contain zero or more[format specifiers](#format_specifiers). Each format specifier is introduced\nby the` %`symbol, and must map to one or more of the remaining arguments.\nIn general, this is a one-to-one mapping, except when the` *`specifier is\npresent. For example,` %.*i`maps to two arguments—a length argument\nand a signed integer argument. If the number of arguments expected by the\nformat specifiers does not match the number of arguments provided, an error occurs.\n- ` data_type_expression`: The value to format as a string. This can be any\nGoogleSQL data type.\n\n **Return type** \n\n`STRING`\n\n **Examples** \n\n| Description | Statement | Result |\n| --- | --- | --- |\n| Simple integer | FORMAT('%d', 10) | 10 |\n| Integer with left blank padding | FORMAT('|%10d|', 11) | | 11| |\n| Integer with left zero padding | FORMAT('+%010d+', 12) | +0000000012+ |\n| Integer with commas | FORMAT(\"%'d\", 123456789) | 123,456,789 |\n| STRING | FORMAT('-%s-', 'abcd efg') | -abcd efg- |\n| FLOAT64 | FORMAT('%f %E', 1.1, 2.2) | 1.100000 2.200000E+00 |\n| DATE | FORMAT('%t', date '2015-09-01') | 2015-09-01 |\n| TIMESTAMP | FORMAT('%t', timestamp '2015-09-01 12:34:56\nAmerica/Los_Angeles') | 2015‑09‑01 19:34:56+00 |\n\nThe`FORMAT()`function does not provide fully customizable formatting for all\ntypes and values, nor formatting that is sensitive to locale.\n\nIf custom formatting is necessary for a type, you must first format it using\ntype-specific format functions, such as`FORMAT_DATE()`or`FORMAT_TIMESTAMP()`.\nFor example:\n\n```\nSELECT FORMAT('date: %s!', FORMAT_DATE('%B %d, %Y', date '2015-01-02'));\n```\n\nReturns\n\n```\ndate: January 02, 2015!\n```\n\n\n<span id=\"format_specifiers\">\n#### Supported format 
specifiers\n\n</span>\n```\n%[flags][width][.precision]specifier\n```\n\nA[format specifier](#format_specifier_list)adds formatting when casting a\nvalue to a string. It can optionally contain these sub-specifiers:\n\n- [Flags](#flags)\n- [Width](#width)\n- [Precision](#precision)\n\nAdditional information about format specifiers:\n\n- [%g and %G behavior](#g_and_g_behavior)\n- [%p and %P behavior](#p_and_p_behavior)\n- [%t and %T behavior](#t_and_t_behavior)\n- [Error conditions](#error_format_specifiers)\n- [NULL argument handling](#null_format_specifiers)\n- [Additional semantic rules](#rules_format_specifiers)\n\n\n<span id=\"format_specifier_list\">\n##### Format specifiers\n\n</span>\n| Specifier | Description | Examples | Types |\n| --- | --- | --- | --- |\n| `d`or`i` | Decimal integer | 392 | `INT64` \n |\n| `o` | Octal \n \nNote: If an`INT64`value is negative, an error is produced. | 610 | `INT64` \n |\n| `x` | Hexadecimal integer \n \nNote: If an`INT64`value is negative, an error is produced. | 7fa | `INT64` \n |\n| `X` | Hexadecimal integer (uppercase) \n \nNote: If an`INT64`value is negative, an error is produced. | 7FA | `INT64` \n |\n| `f` | Decimal notation, in [-](integer part).(fractional part) for finite\n values, and in lowercase for non-finite values | 392.650000 \ninf \nnan | `NUMERIC` \n`BIGNUMERIC` \n`FLOAT64` \n |\n| `F` | Decimal notation, in [-](integer part).(fractional part) for finite\n values, and in uppercase for non-finite values | 392.650000 \nINF \nNAN | `NUMERIC` \n`BIGNUMERIC` \n`FLOAT64` \n |\n| `e` | Scientific notation (mantissa/exponent), lowercase | 3.926500e+02 \ninf \nnan | `NUMERIC` \n`BIGNUMERIC` \n`FLOAT64` \n |\n| `E` | Scientific notation (mantissa/exponent), uppercase | 3.926500E+02 \nINF \nNAN | `NUMERIC` \n`BIGNUMERIC` \n`FLOAT64` \n |\n| `g` | Either decimal notation or scientific notation, depending on the input\n value's exponent and the specified precision. 
Lowercase.\n See[%g and %G behavior](#g_and_g_behavior)for details. | 392.65 \n3.9265e+07 \ninf \nnan | `NUMERIC` \n`BIGNUMERIC` \n`FLOAT64` \n |\n| `G` | Either decimal notation or scientific notation, depending on the input\n value's exponent and the specified precision. Uppercase.\n See[%g and %G behavior](#g_and_g_behavior)for details. | 392.65 \n3.9265E+07 \nINF \nNAN | `NUMERIC` \n`BIGNUMERIC` \n`FLOAT64` \n |\n| `p` | Produces a one-line printable string representing JSON.\n \n See[%p and %P behavior](#p_and_p_behavior). | ```\n{\"month\":10,\"year\":2019}\n```\n\n | `JSON` \n |\n| `P` | Produces a multi-line printable string representing JSON.\n \n See[%p and %P behavior](#p_and_p_behavior). | ```\n{\n \"month\": 10,\n \"year\": 2019\n}\n```\n\n | `JSON` \n |\n| `s` | String of characters | sample | `STRING` \n |\n| `t` | Returns a printable string representing the value. Often looks\n similar to casting the argument to`STRING`.\n See[%t and %T behavior](#t_and_t_behavior). | sample \n2014‑01‑01 | Any type |\n| `T` | Produces a string that is a valid GoogleSQL constant with a\n similar type to the value's type (maybe wider, or maybe string).\n See[%t and %T behavior](#t_and_t_behavior). | 'sample' \nb'bytes sample' \n1234 \n2.3 \ndate '2014‑01‑01' | Any type |\n| `%` | '%%' produces a single '%' | % | n/a |\n\nThe format specifier can optionally contain the sub-specifiers identified above\nin the specifier prototype.\n\nThese sub-specifiers must comply with the following specifications.\n\n\n<span id=\"flags\">\n##### Flags\n\n</span>\n| Flags | Description |\n| --- | --- |\n| `-` | Left-justify within the given field width; Right justification is the\ndefault (see width sub-specifier) |\n| `+` | Forces to precede the result with a plus or minus sign (`+`or`-`) even for positive numbers. 
By default, only negative numbers\nare preceded with a`-`sign |\n| <space> | If no sign is going to be written, a blank space is inserted before the\nvalue |\n| `#` | - For `%o`, `%x`, and `%X`, this flag means to precede the\n value with 0, 0x or 0X respectively for values different than zero.\n- For `%f`, `%F`, `%e`, and `%E`, this flag means to add the decimal\n point even when there is no fractional part, unless the value\n is non-finite.\n- For `%g` and `%G`, this flag means to add the decimal point even\n when there is no fractional part unless the value is non-finite, and\n never remove the trailing zeros after the decimal point.\n\n |\n| `0` | Left-pads the number with zeroes (0) instead of spaces when padding is\n specified (see width sub-specifier) |\n| `'` | Formats integers using the appropriate grouping character.\n For example,`FORMAT(\"%'d\", 123456789)`returns`123,456,789`.\n\n |\n\nFlags may be specified in any order. Duplicate flags are not an error. When\nflags are not relevant for some element type, they are ignored.\n\n\n<span id=\"width\">\n##### Width\n\n</span>\n| Width | Description |\n| --- | --- |\n| <number> | Minimum number of characters to be printed. If the value to be printed\n is shorter than this number, the result is padded with blank spaces.\n The value is not truncated even if the result is larger |\n| `*` | The width is not specified in the format string, but as an additional\n integer value argument preceding the argument that has to be formatted |\n\n\n<span id=\"precision\">\n##### Precision\n\n</span>\n| Precision | Description |\n| --- | --- |\n| `.`<number> | - For integer specifiers `%d`, `%i`, `%o`, `%u`, `%x`, and `%X`:\n precision specifies the\n minimum number of digits to be written. If the value to be written is\n shorter than this number, the result is padded with leading zeros.\n The value is not truncated even if the result is longer. 
A precision\n of 0 means that no character is written for the value 0.\n- For specifiers `%a`, `%A`, `%e`, `%E`, `%f`, and `%F`: this is the\n number of digits to be printed after the decimal point. The default\n value is 6.\n- For specifiers `%g` and `%G`: this is the number of significant digits\n to be printed, before the removal of the trailing zeros after the\n decimal point. The default value is 6.\n\n |\n| `.*` | The precision is not specified in the format string, but as an\n additional integer value argument preceding the argument that has to be\n formatted |\n\n\n<span id=\"g_and_g_behavior\">\n##### %g and %G behavior\n\n</span>\nThe`%g`and`%G`format specifiers choose either the decimal notation (like\nthe`%f`and`%F`specifiers) or the scientific notation (like the`%e`and`%E`specifiers), depending on the input value's exponent and the specified[precision](#precision).\n\nLet p stand for the specified[precision](#precision)(defaults to 6; 1 if the\nspecified precision is less than 1). The input value is first converted to\nscientific notation with precision = (p - 1). If the resulting exponent part x\nis less than -4 or no less than p, the scientific notation with precision =\n(p - 1) is used; otherwise the decimal notation with precision = (p - 1 - x) is\nused.\n\nUnless[# flag](#flags)is present, the trailing zeros after the decimal point\nare removed, and the decimal point is also removed if there is no digit after\nit.\n\n\n<span id=\"p_and_p_behavior\">\n##### %p and %P behavior\n\n</span>\nThe`%p`format specifier produces a one-line printable string. The`%P`format specifier produces a multi-line printable string. 
You can use these\nformat specifiers with the following data types:\n\n| **Type** | **%p** | **%P** |\n| --- | --- | --- |\n| JSON | JSON input:\n\n```\nJSON '\n{\n \"month\": 10,\n \"year\": 2019\n}\n'\n```\n\nProduces a one-line printable string representing JSON:\n\n```\n{\"month\":10,\"year\":2019}\n```\n\n | JSON input:\n\n```\nJSON '\n{\n \"month\": 10,\n \"year\": 2019\n}\n'\n```\n\nProduces a multi-line printable string representing JSON:\n\n```\n{\n \"month\": 10,\n \"year\": 2019\n}\n```\n\n |\n\n\n<span id=\"t_and_t_behavior\">\n##### %t and %T behavior\n\n</span>\nThe`%t`and`%T`format specifiers are defined for all types. The[width](#width),[precision](#precision), and[flags](#flags)act as they do\nfor`%s`: the[width](#width)is the minimum width and the`STRING`will be\npadded to that size, and[precision](#precision)is the maximum width\nof content to show and the`STRING`will be truncated to that size, prior to\npadding to width.\n\nThe`%t`specifier is always meant to be a readable form of the value.\n\nThe`%T`specifier is always a valid SQL literal of a similar type, such as a\nwider numeric type.\nThe literal will not include casts or a type name, except for the special case\nof non-finite floating point values.\n\nThe`STRING`is formatted as follows:\n\n| **Type** | **%t** | **%T** |\n| --- | --- | --- |\n| `NULL`of any type | NULL | NULL |\n| `INT64` \n | 123 | 123 |\n| NUMERIC | 123.0 *(always with .0)* | NUMERIC \"123.0\" |\n| FLOAT64 | 123.0 *(always with .0)* \n123e+10 \n`inf` \n`-inf` \n`NaN` | 123.0 *(always with .0)* \n123e+10 \nCAST(\"inf\" AS <type>) \nCAST(\"-inf\" AS <type>) \nCAST(\"nan\" AS <type>) |\n| STRING | unquoted string value | quoted string literal |\n| BYTES | unquoted escaped bytes \ne.g., abc\\x01\\x02 | quoted bytes literal \ne.g., b\"abc\\x01\\x02\" |\n| BOOL | boolean value | boolean value |\n| DATE | 2011-02-03 | DATE \"2011-02-03\" |\n| TIMESTAMP | 2011-02-03 04:05:06+00 | TIMESTAMP \"2011-02-03 04:05:06+00\" |\n| 
INTERVAL | 1-2 3 4:5:6.789 | INTERVAL \"1-2 3 4:5:6.789\" YEAR TO SECOND |\n| ARRAY | [value, value, ...] \nwhere values are formatted with %t | [value, value, ...] \nwhere values are formatted with %T |\n| STRUCT | (value, value, ...) \nwhere fields are formatted with %t | (value, value, ...) \nwhere fields are formatted with %T \n \nSpecial cases: \nZero fields: STRUCT() \nOne field: STRUCT(value) |\n| JSON | one-line printable string representing JSON. \n```\n{\"name\":\"apple\",\"stock\":3}\n```\n\n | one-line printable string representing a JSON literal. \n```\nJSON '{\"name\":\"apple\",\"stock\":3}'\n```\n\n |\n\n\n<span id=\"error_format_specifiers\">\n##### Error conditions\n\n</span>\nIf a format specifier is invalid, or is not compatible with the related\nargument type, or the wrong number of arguments is provided, then an error is\nproduced. For example, the following`<format_string>`expressions are invalid:\n\n```\nFORMAT('%s', 1)\n```\n\n```\nFORMAT('%')\n```\n\n\n<span id=\"null_format_specifiers\">\n##### NULL argument handling\n\n</span>\nA`NULL`format string results in a`NULL`output`STRING`. Any other arguments\nare ignored in this case.\n\nThe function generally produces a`NULL`value if a`NULL`argument is present.\nFor example,`FORMAT('%i', NULL_expression)`produces a`NULL STRING`as\noutput.\n\nHowever, there are some exceptions: if the format specifier is %t or %T\n(both of which produce`STRING`s that effectively match CAST and literal value\nsemantics), a`NULL`value produces 'NULL' (without the quotes) in the result`STRING`. For example, the function:\n\n```\nFORMAT('00-%t-00', NULL_expression);\n```\n\nReturns\n\n```\n00-NULL-00\n```\n\n\n<span id=\"rules_format_specifiers\">\n##### Additional semantic rules\n\n</span>\n`FLOAT64`values can be`+/-inf`or`NaN`.\nWhen an argument has one of those values, the result of the format specifiers`%f`,`%F`,`%e`,`%E`,`%g`,`%G`, and`%t`is`inf`,`-inf`, or`nan`(or the same in uppercase) as appropriate. 
This is consistent with how\nGoogleSQL casts these values to`STRING`. For`%T`,\nGoogleSQL returns quoted strings for`FLOAT64`values that don't have non-string literal\nrepresentations.\n\n\n\n"
},
{
"name": "FORMAT_DATE",
"arguments": [],
"category": "Date",
"description_markdown": "```\nFORMAT_DATE(format_string, date_expr)\n```\n\n **Description** \n\nFormats the`date_expr`according to the specified`format_string`.\n\nSee[Supported Format Elements For DATE](/bigquery/docs/reference/standard-sql/format-elements#format_elements_date_time)for a list of format elements that this function supports.\n\n **Return Data Type** \n\nSTRING\n\n **Examples** \n\n```\nSELECT FORMAT_DATE('%x', DATE '2008-12-25') AS US_format;\n\n/*------------*\n | US_format |\n +------------+\n | 12/25/08 |\n *------------*/\n```\n\n```\nSELECT FORMAT_DATE('%b-%d-%Y', DATE '2008-12-25') AS formatted;\n\n/*-------------*\n | formatted |\n +-------------+\n | Dec-25-2008 |\n *-------------*/\n```\n\n```\nSELECT FORMAT_DATE('%b %Y', DATE '2008-12-25') AS formatted;\n\n/*-------------*\n | formatted |\n +-------------+\n | Dec 2008 |\n *-------------*/\n```\n\n\n"
},
{
"name": "FORMAT_DATETIME",
"arguments": [],
"category": "Datetime",
"description_markdown": "```\nFORMAT_DATETIME(format_string, datetime_expression)\n```\n\n **Description** \n\nFormats a`DATETIME`object according to the specified`format_string`. See[Supported Format Elements For DATETIME](/bigquery/docs/reference/standard-sql/format-elements#format_elements_date_time)for a list of format elements that this function supports.\n\n **Return Data Type** \n\n`STRING`\n\n **Examples** \n\n```\nSELECT\n FORMAT_DATETIME(\"%c\", DATETIME \"2008-12-25 15:30:00\")\n AS formatted;\n\n/*--------------------------*\n | formatted |\n +--------------------------+\n | Thu Dec 25 15:30:00 2008 |\n *--------------------------*/\n```\n\n```\nSELECT\n FORMAT_DATETIME(\"%b-%d-%Y\", DATETIME \"2008-12-25 15:30:00\")\n AS formatted;\n\n/*-------------*\n | formatted |\n +-------------+\n | Dec-25-2008 |\n *-------------*/\n```\n\n```\nSELECT\n FORMAT_DATETIME(\"%b %Y\", DATETIME \"2008-12-25 15:30:00\")\n AS formatted;\n\n/*-------------*\n | formatted |\n +-------------+\n | Dec 2008 |\n *-------------*/\n```\n\n\n"
},
{
"name": "FORMAT_TIME",
"arguments": [],
"category": "Time",
"description_markdown": "```\nFORMAT_TIME(format_string, time_object)\n```\n\n **Description** Formats a`TIME`object according to the specified`format_string`. See[Supported Format Elements For TIME](/bigquery/docs/reference/standard-sql/format-elements#format_elements_date_time)for a list of format elements that this function supports.\n\n **Return Data Type** \n\n`STRING`\n\n **Example** \n\n```\nSELECT FORMAT_TIME(\"%R\", TIME \"15:30:00\") as formatted_time;\n\n/*----------------*\n | formatted_time |\n +----------------+\n | 15:30 |\n *----------------*/\n```\n\n\n"
},
{
"name": "FORMAT_TIMESTAMP",
"arguments": [],
"category": "Timestamp",
"description_markdown": "```\nFORMAT_TIMESTAMP(format_string, timestamp[, time_zone])\n```\n\n **Description** \n\nFormats a timestamp according to the specified`format_string`.\n\nSee[Format elements for date and time parts](/bigquery/docs/reference/standard-sql/format-elements#format_elements_date_time)for a list of format elements that this function supports.\n\n **Return Data Type** \n\n`STRING`\n\n **Example** \n\n```\nSELECT FORMAT_TIMESTAMP(\"%c\", TIMESTAMP \"2050-12-25 15:30:55+00\", \"UTC\")\n AS formatted;\n\n/*--------------------------*\n | formatted |\n +--------------------------+\n | Sun Dec 25 15:30:55 2050 |\n *--------------------------*/\n```\n\n```\nSELECT FORMAT_TIMESTAMP(\"%b-%d-%Y\", TIMESTAMP \"2050-12-25 15:30:55+00\")\n AS formatted;\n\n/*-------------*\n | formatted |\n +-------------+\n | Dec-25-2050 |\n *-------------*/\n```\n\n```\nSELECT FORMAT_TIMESTAMP(\"%b %Y\", TIMESTAMP \"2050-12-25 15:30:55+00\")\n AS formatted;\n\n/*-------------*\n | formatted |\n +-------------+\n | Dec 2050 |\n *-------------*/\n```\n\n```\nSELECT FORMAT_TIMESTAMP(\"%Y-%m-%dT%H:%M:%SZ\", TIMESTAMP \"2050-12-25 15:30:55\", \"UTC\")\n AS formatted;\n\n/*+---------------------*\n | formatted |\n +----------------------+\n | 2050-12-25T15:30:55Z |\n *----------------------*/\n```\n\n\n"
},
{
"name": "FROM_BASE32",
"arguments": [],
"category": "String",
"description_markdown": "```\nFROM_BASE32(string_expr)\n```\n\n **Description** \n\nConverts the base32-encoded input`string_expr`into`BYTES`format. To convert`BYTES`to a base32-encoded`STRING`, use[TO_BASE32](#to_base32).\n\n **Return type** \n\n`BYTES`\n\n **Example** \n\n```\nSELECT FROM_BASE32('MFRGGZDF74======') AS byte_data;\n\n-- Note that the result of FROM_BASE32 is of type BYTES, displayed as a base64-encoded string.\n/*-----------*\n | byte_data |\n +-----------+\n | YWJjZGX/ |\n *-----------*/\n```\n\n\n"
},
{
"name": "FROM_BASE64",
"arguments": [],
"category": "String",
"description_markdown": "```\nFROM_BASE64(string_expr)\n```\n\n **Description** \n\nConverts the base64-encoded input`string_expr`into`BYTES`format. To convert`BYTES`to a base64-encoded`STRING`,\nuse [TO_BASE64](#to_base64).\n\nThere are several base64 encodings in common use that vary in exactly which\nalphabet of 65 ASCII characters is used to encode the 64 digits and padding.\nSee[RFC 4648](https://tools.ietf.org/html/rfc4648#section-4)for details. This\nfunction expects the alphabet`[A-Za-z0-9+/=]`.\n\n **Return type** \n\n`BYTES`\n\n **Example** \n\n```\nSELECT FROM_BASE64('/+A=') AS byte_data;\n\n-- Note that the result of FROM_BASE64 is of type BYTES, displayed as a base64-encoded string.\n/*-----------*\n | byte_data |\n +-----------+\n | /+A= |\n *-----------*/\n```\n\nTo work with an encoding using a different base64 alphabet, you might need to\ncompose`FROM_BASE64`with the`REPLACE`function. For instance, the`base64url`url-safe and filename-safe encoding commonly used in web programming\nuses`-_=`as the last characters rather than`+/=`. To decode a`base64url`-encoded string, replace`-`and`_`with`+`and`/`respectively.\n\n```\nSELECT FROM_BASE64(REPLACE(REPLACE('_-A=', '-', '+'), '_', '/')) AS binary;\n\n-- Note that the result of FROM_BASE64 is of type BYTES, displayed as a base64-encoded string.\n/*--------*\n | binary |\n +--------+\n | /+A= |\n *--------*/\n```\n\n\n"
},
{
"name": "FROM_HEX",
"arguments": [],
"category": "String",
"description_markdown": "```\nFROM_HEX(string)\n```\n\n **Description** \n\nConverts a hexadecimal-encoded`STRING`into`BYTES`format. Returns an error\nif the input`STRING`contains characters outside the range`(0..9, A..F, a..f)`. The lettercase of the characters does not matter. If the\ninput`STRING`has an odd number of characters, the function acts as if the\ninput has an additional leading`0`. To convert`BYTES`to a hexadecimal-encoded`STRING`, use[TO_HEX](#to_hex).\n\n **Return type** \n\n`BYTES`\n\n **Example** \n\n```\nWITH Input AS (\n SELECT '00010203aaeeefff' AS hex_str UNION ALL\n SELECT '0AF' UNION ALL\n SELECT '666f6f626172'\n)\nSELECT hex_str, FROM_HEX(hex_str) AS bytes_str\nFROM Input;\n\n-- Note that the result of FROM_HEX is of type BYTES, displayed as a base64-encoded string.\n/*------------------+--------------*\n | hex_str | bytes_str |\n +------------------+--------------+\n | 0AF | AK8= |\n | 00010203aaeeefff | AAECA6ru7/8= |\n | 666f6f626172 | Zm9vYmFy |\n *------------------+--------------*/\n```\n\n\n"
},
{
"name": "GAP_FILL",
"arguments": [],
"category": "Table",
"description_markdown": "Finds and fills gaps in a time series.\nFor more information, see[GAP_FILL](#gap_fill)in\nTime series functions.\n\n\n\n"
},
{
"name": "GENERATE_ARRAY",
"arguments": [],
"category": "Array",
"description_markdown": "```\nGENERATE_ARRAY(start_expression, end_expression[, step_expression])\n```\n\n **Description** \n\nReturns an array of values. The`start_expression`and`end_expression`parameters determine the inclusive start and end of the array.\n\nThe`GENERATE_ARRAY`function accepts the following data types as inputs:\n\n- ` INT64`\n- ` NUMERIC`\n- ` BIGNUMERIC`\n- ` FLOAT64`\n\nThe`step_expression`parameter determines the increment used to\ngenerate array values. The default value for this parameter is`1`.\n\nThis function returns an error if`step_expression`is set to 0, or if any\ninput is`NaN`.\n\nIf any argument is`NULL`, the function will return a`NULL`array.\n\n **Return Data Type** \n\n`ARRAY`\n\n **Examples** \n\nThe following returns an array of integers, with a default step of 1.\n\n```\nSELECT GENERATE_ARRAY(1, 5) AS example_array;\n\n/*-----------------*\n | example_array |\n +-----------------+\n | [1, 2, 3, 4, 5] |\n *-----------------*/\n```\n\nThe following returns an array using a user-specified step size.\n\n```\nSELECT GENERATE_ARRAY(0, 10, 3) AS example_array;\n\n/*---------------*\n | example_array |\n +---------------+\n | [0, 3, 6, 9] |\n *---------------*/\n```\n\nThe following returns an array using a negative value,`-3`for its step size.\n\n```\nSELECT GENERATE_ARRAY(10, 0, -3) AS example_array;\n\n/*---------------*\n | example_array |\n +---------------+\n | [10, 7, 4, 1] |\n *---------------*/\n```\n\nThe following returns an array using the same value for the`start_expression`and`end_expression`.\n\n```\nSELECT GENERATE_ARRAY(4, 4, 10) AS example_array;\n\n/*---------------*\n | example_array |\n +---------------+\n | [4] |\n *---------------*/\n```\n\nThe following returns an empty array, because the`start_expression`is greater\nthan the`end_expression`, and the`step_expression`value is positive.\n\n```\nSELECT GENERATE_ARRAY(10, 0, 3) AS example_array;\n\n/*---------------*\n | example_array |\n +---------------+\n | [] 
|\n *---------------*/\n```\n\nThe following returns a`NULL`array because`end_expression`is`NULL`.\n\n```\nSELECT GENERATE_ARRAY(5, NULL, 1) AS example_array;\n\n/*---------------*\n | example_array |\n +---------------+\n | NULL |\n *---------------*/\n```\n\nThe following returns multiple arrays.\n\n```\nSELECT GENERATE_ARRAY(start, 5) AS example_array\nFROM UNNEST([3, 4, 5]) AS start;\n\n/*---------------*\n | example_array |\n +---------------+\n | [3, 4, 5] |\n | [4, 5] |\n | [5] |\n +---------------*/\n```\n\n\n"
},
{
"name": "GENERATE_DATE_ARRAY",
"arguments": [],
"category": "Array",
"description_markdown": "```\nGENERATE_DATE_ARRAY(start_date, end_date[, INTERVAL INT64_expr date_part])\n```\n\n **Description** \n\nReturns an array of dates. The`start_date`and`end_date`parameters determine the inclusive start and end of the array.\n\nThe`GENERATE_DATE_ARRAY`function accepts the following data types as inputs:\n\n- ` start_date`must be a` DATE`.\n- ` end_date`must be a` DATE`.\n- ` INT64_expr`must be an` INT64`.\n- ` date_part`must be either DAY, WEEK, MONTH, QUARTER, or YEAR.\n\nThe`INT64_expr`parameter determines the increment used to generate dates. The\ndefault value for this parameter is 1 day.\n\nThis function returns an error if`INT64_expr`is set to 0.\n\n **Return Data Type** \n\n`ARRAY`containing 0 or more`DATE`values.\n\n **Examples** \n\nThe following returns an array of dates, with a default step of 1.\n\n```\nSELECT GENERATE_DATE_ARRAY('2016-10-05', '2016-10-08') AS example;\n\n/*--------------------------------------------------*\n | example |\n +--------------------------------------------------+\n | [2016-10-05, 2016-10-06, 2016-10-07, 2016-10-08] |\n *--------------------------------------------------*/\n```\n\nThe following returns an array using a user-specified step size.\n\n```\nSELECT GENERATE_DATE_ARRAY(\n '2016-10-05', '2016-10-09', INTERVAL 2 DAY) AS example;\n\n/*--------------------------------------*\n | example |\n +--------------------------------------+\n | [2016-10-05, 2016-10-07, 2016-10-09] |\n *--------------------------------------*/\n```\n\nThe following returns an array using a negative value,`-3`for its step size.\n\n```\nSELECT GENERATE_DATE_ARRAY('2016-10-05',\n '2016-10-01', INTERVAL -3 DAY) AS example;\n\n/*--------------------------*\n | example |\n +--------------------------+\n | [2016-10-05, 2016-10-02] |\n *--------------------------*/\n```\n\nThe following returns an array using the same value for the`start_date`and`end_date`.\n\n```\nSELECT GENERATE_DATE_ARRAY('2016-10-05',\n '2016-10-05', 
INTERVAL 8 DAY) AS example;\n\n/*--------------*\n | example |\n +--------------+\n | [2016-10-05] |\n *--------------*/\n```\n\nThe following returns an empty array, because the`start_date`is greater\nthan the`end_date`, and the`step`value is positive.\n\n```\nSELECT GENERATE_DATE_ARRAY('2016-10-05',\n '2016-10-01', INTERVAL 1 DAY) AS example;\n\n/*---------*\n | example |\n +---------+\n | [] |\n *---------*/\n```\n\nThe following returns a`NULL`array, because one of its inputs is`NULL`.\n\n```\nSELECT GENERATE_DATE_ARRAY('2016-10-05', NULL) AS example;\n\n/*---------*\n | example |\n +---------+\n | NULL |\n *---------*/\n```\n\nThe following returns an array of dates, using MONTH as the`date_part`interval:\n\n```\nSELECT GENERATE_DATE_ARRAY('2016-01-01',\n '2016-12-31', INTERVAL 2 MONTH) AS example;\n\n/*--------------------------------------------------------------------------*\n | example |\n +--------------------------------------------------------------------------+\n | [2016-01-01, 2016-03-01, 2016-05-01, 2016-07-01, 2016-09-01, 2016-11-01] |\n *--------------------------------------------------------------------------*/\n```\n\nThe following uses non-constant dates to generate an array.\n\n```\nSELECT GENERATE_DATE_ARRAY(date_start, date_end, INTERVAL 1 WEEK) AS date_range\nFROM (\n SELECT DATE '2016-01-01' AS date_start, DATE '2016-01-31' AS date_end\n UNION ALL SELECT DATE \"2016-04-01\", DATE \"2016-04-30\"\n UNION ALL SELECT DATE \"2016-07-01\", DATE \"2016-07-31\"\n UNION ALL SELECT DATE \"2016-10-01\", DATE \"2016-10-31\"\n) AS items;\n\n/*--------------------------------------------------------------*\n | date_range |\n +--------------------------------------------------------------+\n | [2016-01-01, 2016-01-08, 2016-01-15, 2016-01-22, 2016-01-29] |\n | [2016-04-01, 2016-04-08, 2016-04-15, 2016-04-22, 2016-04-29] |\n | [2016-07-01, 2016-07-08, 2016-07-15, 2016-07-22, 2016-07-29] |\n | [2016-10-01, 2016-10-08, 2016-10-15, 2016-10-22, 2016-10-29] |\n 
*--------------------------------------------------------------*/\n```\n\n\n"
},
{
"name": "GENERATE_RANGE_ARRAY",
"arguments": [],
"category": "Range",
"description_markdown": " **Preview** \n\nThis product or feature is subject to the \"Pre-GA Offerings Terms\"\n in the General Service Terms section of the[Service Specific Terms](/terms/service-terms).\n Pre-GA products and features are available \"as is\" and might have\n limited support. For more information, see the[launch stage descriptions](/products#product-launch-stages).\n\n **Note:** To provide feedback or request support for this feature, send an email to[bigquery-time-series-preview-support@google.com](mailto:bigquery-time-series-preview-support@google.com).```\nGENERATE_RANGE_ARRAY(range_to_split, step_interval)\n```\n\n```\nGENERATE_RANGE_ARRAY(range_to_split, step_interval, include_last_partial_range)\n```\n\n **Description** \n\nSplits a range into an array of subranges.\n\n **Definitions** \n\n- ` range_to_split`: The` RANGE<T>`value to split.\n- ` step_interval`: The` INTERVAL`value, which determines the maximum size of\neach subrange in the resulting array. An[interval single date and time part](/bigquery/docs/reference/standard-sql/data-types#single_datetime_part_interval)is supported, but an interval range of date and time parts is not.\n \n \n - If` range_to_split`is` RANGE<DATE>`, these interval\ndate parts are supported:` YEAR`to` DAY`.\n \n \n - If` range_to_split`is` RANGE<DATETIME>`, these interval\ndate and time parts are supported:` YEAR`to` SECOND`.\n \n \n - If` range_to_split`is` RANGE<TIMESTAMP>`, these interval\ndate and time parts are supported:` DAY`to` SECOND`.\n \n \n- ` include_last_partial_range`: A` BOOL`value, which determines whether or\nnot to include the last subrange if it's a partial subrange.\nIf this argument is not specified, the default value is` TRUE`.\n \n \n - ` TRUE`(default): The last subrange is included, even if it's\nsmaller than` step_interval`.\n \n \n - ` FALSE`: Exclude the last subrange if it's smaller than` step_interval`.\n \n \n\n **Details** \n\nReturns`NULL`if any input is`NULL`.\n\n **Return 
type** \n\n`ARRAY<RANGE<T>>`\n\n **Examples** \n\nIn the following example, a date range between`2020-01-01`and`2020-01-06`is split into an array of subranges that are one day long. There are\nno partial ranges.\n\n```\nSELECT GENERATE_RANGE_ARRAY(\n RANGE(DATE '2020-01-01', DATE '2020-01-06'),\n INTERVAL 1 DAY) AS results;\n\n/*----------------------------+\n | results |\n +----------------------------+\n | [ |\n | [2020-01-01, 2020-01-02), |\n | [2020-01-02, 2020-01-03), |\n | [2020-01-03, 2020-01-04), |\n | [2020-01-04, 2020-01-05), |\n | [2020-01-05, 2020-01-06), |\n | ] |\n +----------------------------*/\n```\n\nIn the following examples, a date range between`2020-01-01`and`2020-01-06`is split into an array of subranges that are two days long. The final subrange\nis smaller than two days:\n\n```\nSELECT GENERATE_RANGE_ARRAY(\n RANGE(DATE '2020-01-01', DATE '2020-01-06'),\n INTERVAL 2 DAY) AS results;\n\n/*----------------------------+\n | results |\n +----------------------------+\n | [ |\n | [2020-01-01, 2020-01-03), |\n | [2020-01-03, 2020-01-05), |\n | [2020-01-05, 2020-01-06) |\n | ] |\n +----------------------------*/\n```\n\n```\nSELECT GENERATE_RANGE_ARRAY(\n RANGE(DATE '2020-01-01', DATE '2020-01-06'),\n INTERVAL 2 DAY,\n TRUE) AS results;\n\n/*----------------------------+\n | results |\n +----------------------------+\n | [ |\n | [2020-01-01, 2020-01-03), |\n | [2020-01-03, 2020-01-05), |\n | [2020-01-05, 2020-01-06) |\n | ] |\n +----------------------------*/\n```\n\nIn the following example, a date range between`2020-01-01`and`2020-01-06`is split into an array of subranges that are two days long, but the final\nsubrange is excluded because it's smaller than two days:\n\n```\nSELECT GENERATE_RANGE_ARRAY(\n RANGE(DATE '2020-01-01', DATE '2020-01-06'),\n INTERVAL 2 DAY,\n FALSE) AS results;\n\n/*----------------------------+\n | results |\n +----------------------------+\n | [ |\n | [2020-01-01, 2020-01-03), |\n | [2020-01-03, 2020-01-05) |\n | ] |\n 
+----------------------------*/\n```\n\n\n"
},
{
"name": "GENERATE_TIMESTAMP_ARRAY",
"arguments": [],
"category": "Array",
"description_markdown": "```\nGENERATE_TIMESTAMP_ARRAY(start_timestamp, end_timestamp,\n INTERVAL step_expression date_part)\n```\n\n **Description** \n\nReturns an`ARRAY`of`TIMESTAMP`values separated by a given interval. The`start_timestamp`and`end_timestamp`parameters determine the inclusive\nlower and upper bounds of the`ARRAY`.\n\nThe`GENERATE_TIMESTAMP_ARRAY`function accepts the following data types as\ninputs:\n\n- ` start_timestamp`:` TIMESTAMP`\n- ` end_timestamp`:` TIMESTAMP`\n- ` step_expression`:` INT64`\n- Allowed` date_part`values are:` MICROSECOND`,` MILLISECOND`,` SECOND`,` MINUTE`,` HOUR`, or` DAY`.\n\nThe`step_expression`parameter determines the increment used to generate\ntimestamps.\n\n **Return Data Type** \n\nAn`ARRAY`containing 0 or more`TIMESTAMP`values.\n\n **Examples** \n\nThe following example returns an`ARRAY`of`TIMESTAMP`s at intervals of 1 day.\n\n```\nSELECT GENERATE_TIMESTAMP_ARRAY('2016-10-05 00:00:00', '2016-10-07 00:00:00',\n INTERVAL 1 DAY) AS timestamp_array;\n\n/*--------------------------------------------------------------------------*\n | timestamp_array |\n +--------------------------------------------------------------------------+\n | [2016-10-05 00:00:00+00, 2016-10-06 00:00:00+00, 2016-10-07 00:00:00+00] |\n *--------------------------------------------------------------------------*/\n```\n\nThe following example returns an`ARRAY`of`TIMESTAMP`s at intervals of 1\nsecond.\n\n```\nSELECT GENERATE_TIMESTAMP_ARRAY('2016-10-05 00:00:00', '2016-10-05 00:00:02',\n INTERVAL 1 SECOND) AS timestamp_array;\n\n/*--------------------------------------------------------------------------*\n | timestamp_array |\n +--------------------------------------------------------------------------+\n | [2016-10-05 00:00:00+00, 2016-10-05 00:00:01+00, 2016-10-05 00:00:02+00] |\n *--------------------------------------------------------------------------*/\n```\n\nThe following example returns an`ARRAY`of`TIMESTAMP`s with a 
negative\ninterval.\n\n```\nSELECT GENERATE_TIMESTAMP_ARRAY('2016-10-06 00:00:00', '2016-10-01 00:00:00',\n INTERVAL -2 DAY) AS timestamp_array;\n\n/*--------------------------------------------------------------------------*\n | timestamp_array |\n +--------------------------------------------------------------------------+\n | [2016-10-06 00:00:00+00, 2016-10-04 00:00:00+00, 2016-10-02 00:00:00+00] |\n *--------------------------------------------------------------------------*/\n```\n\nThe following example returns an`ARRAY`with a single element, because`start_timestamp`and`end_timestamp`have the same value.\n\n```\nSELECT GENERATE_TIMESTAMP_ARRAY('2016-10-05 00:00:00', '2016-10-05 00:00:00',\n INTERVAL 1 HOUR) AS timestamp_array;\n\n/*--------------------------*\n | timestamp_array |\n +--------------------------+\n | [2016-10-05 00:00:00+00] |\n *--------------------------*/\n```\n\nThe following example returns an empty`ARRAY`, because`start_timestamp`is\nlater than`end_timestamp`.\n\n```\nSELECT GENERATE_TIMESTAMP_ARRAY('2016-10-06 00:00:00', '2016-10-05 00:00:00',\n INTERVAL 1 HOUR) AS timestamp_array;\n\n/*-----------------*\n | timestamp_array |\n +-----------------+\n | [] |\n *-----------------*/\n```\n\nThe following example returns a null`ARRAY`, because one of the inputs is`NULL`.\n\n```\nSELECT GENERATE_TIMESTAMP_ARRAY('2016-10-05 00:00:00', NULL, INTERVAL 1 HOUR)\n AS timestamp_array;\n\n/*-----------------*\n | timestamp_array |\n +-----------------+\n | NULL |\n *-----------------*/\n```\n\nThe following example generates`ARRAY`s of`TIMESTAMP`s from columns containing\nvalues for`start_timestamp`and`end_timestamp`.\n\n```\nSELECT GENERATE_TIMESTAMP_ARRAY(start_timestamp, end_timestamp, INTERVAL 1 HOUR)\n AS timestamp_array\nFROM\n (SELECT\n TIMESTAMP '2016-10-05 00:00:00' AS start_timestamp,\n TIMESTAMP '2016-10-05 02:00:00' AS end_timestamp\n UNION ALL\n SELECT\n TIMESTAMP '2016-10-05 12:00:00' AS start_timestamp,\n TIMESTAMP '2016-10-05 
14:00:00' AS end_timestamp\n UNION ALL\n SELECT\n TIMESTAMP '2016-10-05 23:59:00' AS start_timestamp,\n TIMESTAMP '2016-10-06 01:59:00' AS end_timestamp);\n\n/*--------------------------------------------------------------------------*\n | timestamp_array |\n +--------------------------------------------------------------------------+\n | [2016-10-05 00:00:00+00, 2016-10-05 01:00:00+00, 2016-10-05 02:00:00+00] |\n | [2016-10-05 12:00:00+00, 2016-10-05 13:00:00+00, 2016-10-05 14:00:00+00] |\n | [2016-10-05 23:59:00+00, 2016-10-06 00:59:00+00, 2016-10-06 01:59:00+00] |\n *--------------------------------------------------------------------------*/\n```\n\n\n"
},
{
"name": "GENERATE_UUID",
"arguments": [],
"category": "Utility",
"description_markdown": "```\nGENERATE_UUID()\n```\n\n **Description** \n\nReturns a random universally unique identifier (UUID) as a`STRING`.\nThe returned`STRING`consists of 32 hexadecimal\ndigits in five groups separated by hyphens in the form 8-4-4-4-12. The\nhexadecimal digits represent 122 random bits and 6 fixed bits, in compliance\nwith[RFC 4122 section 4.4](https://tools.ietf.org/html/rfc4122#section-4.4).\nThe returned`STRING`is lowercase.\n\n **Return Data Type** \n\n`STRING`\n\n **Example** \n\nThe following query generates a random UUID.\n\n```\nSELECT GENERATE_UUID() AS uuid;\n\n/*--------------------------------------*\n | uuid |\n +--------------------------------------+\n | 4192bff0-e1e0-43ce-a4db-912808c32493 |\n *--------------------------------------*/\n```\n\n\n"
},
{
"name": "GREATEST",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nGREATEST(X1,...,XN)\n```\n\n **Description** \n\nReturns the greatest value among`X1,...,XN`. If any argument is`NULL`, returns`NULL`. Otherwise, in the case of floating-point arguments, if any argument is`NaN`, returns`NaN`. In all other cases, returns the value among`X1,...,XN`that has the greatest value according to the ordering used by the`ORDER BY`clause. The arguments`X1, ..., XN`must be coercible to a common supertype, and\nthe supertype must support ordering.\n\n| X1,...,XN | GREATEST(X1,...,XN) |\n| --- | --- |\n| 3,5,1 | 5 |\n\nThis function supports specifying[collation](/bigquery/docs/reference/standard-sql/collation-concepts#collate_about).\n\n **Return Data Types** \n\nData type of the input values.\n\n\n\n"
},
{
"name": "GROUPING",
"arguments": [],
"category": "Aggregate",
"description_markdown": "```\nGROUPING(groupable_value)\n```\n\n **Description** \n\nIf a groupable item in the[GROUP BY clause](/bigquery/docs/reference/standard-sql/query-syntax#group_by_clause)is aggregated\n(and thus not grouped), this function returns`1`. Otherwise,\nthis function returns`0`.\n\nDefinitions:\n\n- ` groupable_value`: An expression that represents a value that can be grouped\nin the` GROUP BY`clause.\n\nDetails:\n\nThe`GROUPING`function is helpful if you need to determine which rows are\nproduced by which grouping sets. A grouping set is a group of columns by which\nrows can be grouped together. So, if you need to filter rows by\na few specific grouping sets, you can use the`GROUPING`function to identify\nwhich grouping sets grouped which rows by creating a matrix of the results.\n\nIn addition, you can use the`GROUPING`function to determine the type of`NULL`produced by the`GROUP BY`clause. In some cases, the`GROUP BY`clause\nproduces a`NULL`placeholder. This placeholder represents all groupable items\nthat are aggregated (not grouped) in the current grouping set. This is different\nfrom a standard`NULL`, which can also be produced by a query.\n\nFor more information, see the following examples.\n\n **Returned Data Type** \n\n`INT64`\n\n **Examples** \n\nIn the following example, it's difficult to determine which rows are grouped by\nthe grouping value`product_type`or`product_name`. The`GROUPING`function\nmakes this easier to determine.\n\nPay close attention to what's in the`product_type_agg`and`product_name_agg`column matrix. This determines how the rows are grouped.\n\n| `product_type_agg` | `product_name_agg` | Notes |\n| --- | --- | --- |\n| 1 | 0 | Rows are grouped by`product_name`. |\n| 0 | 1 | Rows are grouped by`product_type`. |\n| 0 | 0 | Rows are grouped by`product_type`and`product_name`. |\n| 1 | 1 | Grand total row. 
|\n\n```\nWITH\n Products AS (\n SELECT 'shirt' AS product_type, 't-shirt' AS product_name, 3 AS product_count UNION ALL\n SELECT 'shirt', 't-shirt', 8 UNION ALL\n SELECT 'shirt', 'polo', 25 UNION ALL\n SELECT 'pants', 'jeans', 6\n )\nSELECT\n product_type,\n product_name,\n SUM(product_count) AS product_sum,\n GROUPING(product_type) AS product_type_agg,\n GROUPING(product_name) AS product_name_agg,\nFROM Products\nGROUP BY GROUPING SETS(product_type, product_name, ())\nORDER BY product_name;\n\n/*--------------+--------------+-------------+------------------+------------------+\n | product_type | product_name | product_sum | product_type_agg | product_name_agg |\n +--------------+--------------+-------------+------------------+------------------+\n | NULL | NULL | 42 | 1 | 1 |\n | shirt | NULL | 36 | 0 | 1 |\n | pants | NULL | 6 | 0 | 1 |\n | NULL | jeans | 6 | 1 | 0 |\n | NULL | polo | 25 | 1 | 0 |\n | NULL | t-shirt | 11 | 1 | 0 |\n +--------------+--------------+-------------+------------------+------------------*/\n```\n\nIn the following example, it's difficult to determine\nif`NULL`represents a`NULL`placeholder or a standard`NULL`value in the`product_type`column. The`GROUPING`function makes it easier to\ndetermine what type of`NULL`is being produced. 
If`product_type_is_aggregated`is`1`, the`NULL`value for\nthe`product_type`column is a`NULL`placeholder.\n\n```\nWITH\n Products AS (\n SELECT 'shirt' AS product_type, 't-shirt' AS product_name, 3 AS product_count UNION ALL\n SELECT 'shirt', 't-shirt', 8 UNION ALL\n SELECT NULL, 'polo', 25 UNION ALL\n SELECT 'pants', 'jeans', 6\n )\nSELECT\n product_type,\n product_name,\n SUM(product_count) AS product_sum,\n GROUPING(product_type) AS product_type_is_aggregated\nFROM Products\nGROUP BY GROUPING SETS(product_type, product_name)\nORDER BY product_name;\n\n/*--------------+--------------+-------------+----------------------------+\n | product_type | product_name | product_sum | product_type_is_aggregated |\n +--------------+--------------+-------------+----------------------------+\n | shirt | NULL | 11 | 0 |\n | NULL | NULL | 25 | 0 |\n | pants | NULL | 6 | 0 |\n | NULL | jeans | 6 | 1 |\n | NULL | polo | 25 | 1 |\n | NULL | t-shirt | 11 | 1 |\n +--------------+--------------+-------------+----------------------------*/\n```\n\n\n"
},
{
"name": "HLL_COUNT.EXTRACT",
"arguments": [],
"category": "HyperLogLog",
"description_markdown": "```\nHLL_COUNT.EXTRACT(sketch)\n```\n\n **Description** \n\nA scalar function that extracts a cardinality estimate of a single[HLL++](https://research.google.com/pubs/pub40671.html)sketch.\n\nIf`sketch`is`NULL`, this function returns a cardinality estimate of`0`.\n\n **Supported input types** \n\n`BYTES`\n\n **Return type** \n\n`INT64`\n\n **Example** \n\nThe following query returns the number of distinct users for each country who\nhave at least one invoice.\n\n```\nSELECT\n country,\n HLL_COUNT.EXTRACT(HLL_sketch) AS distinct_customers_with_open_invoice\nFROM\n (\n SELECT\n country,\n HLL_COUNT.INIT(customer_id) AS hll_sketch\n FROM\n UNNEST(\n ARRAY<STRUCT<country STRING, customer_id STRING, invoice_id STRING>>[\n ('UA', 'customer_id_1', 'invoice_id_11'),\n ('BR', 'customer_id_3', 'invoice_id_31'),\n ('CZ', 'customer_id_2', 'invoice_id_22'),\n ('CZ', 'customer_id_2', 'invoice_id_23'),\n ('BR', 'customer_id_3', 'invoice_id_31'),\n ('UA', 'customer_id_2', 'invoice_id_24')])\n GROUP BY country\n );\n\n/*---------+--------------------------------------*\n | country | distinct_customers_with_open_invoice |\n +---------+--------------------------------------+\n | UA | 2 |\n | BR | 1 |\n | CZ | 1 |\n *---------+--------------------------------------*/\n```\n\n\n"
},
{
"name": "HLL_COUNT.INIT",
"arguments": [],
"category": "HyperLogLog",
"description_markdown": "```\nHLL_COUNT.INIT(input [, precision])\n```\n\n **Description** \n\nAn aggregate function that takes one or more`input`values and aggregates them\ninto a[HLL++](https://research.google.com/pubs/pub40671.html)sketch. Each sketch\nis represented using the`BYTES`data type. You can then merge sketches using`HLL_COUNT.MERGE`or`HLL_COUNT.MERGE_PARTIAL`. If no merging is needed,\nyou can extract the final count of distinct values from the sketch using`HLL_COUNT.EXTRACT`.\n\nThis function supports an optional parameter,`precision`. This parameter\ndefines the accuracy of the estimate at the cost of additional memory required\nto process the sketches or store them on disk. The range for this value is`10`to`24`. The default value is`15`. For more information about precision,\nsee[Precision for sketches](/bigquery/docs/sketches#precision_hll).\n\nIf the input is`NULL`, this function returns`NULL`.\n\nFor more information, see[HyperLogLog in Practice: Algorithmic Engineering of\na State of The Art Cardinality Estimation Algorithm](https://research.google.com/pubs/pub40671.html).\n\n **Supported input types** \n\n- ` INT64`\n- ` NUMERIC`\n- ` BIGNUMERIC`\n- ` STRING`\n- ` BYTES`\n\n **Return type** \n\n`BYTES`\n\n **Example** \n\nThe following query creates HLL++ sketches that count the number of distinct\nusers with at least one invoice per country.\n\n```\nSELECT\n country,\n HLL_COUNT.INIT(customer_id, 10)\n AS hll_sketch\nFROM\n UNNEST(\n ARRAY<STRUCT<country STRING, customer_id STRING, invoice_id STRING>>[\n ('UA', 'customer_id_1', 'invoice_id_11'),\n ('CZ', 'customer_id_2', 'invoice_id_22'),\n ('CZ', 'customer_id_2', 'invoice_id_23'),\n ('BR', 'customer_id_3', 'invoice_id_31'),\n ('UA', 'customer_id_2', 'invoice_id_24')])\nGROUP BY country;\n\n/*---------+------------------------------------------------------------------------------------*\n | country | hll_sketch |\n 
+---------+------------------------------------------------------------------------------------+\n | UA | \"\\010p\\020\\002\\030\\002 \\013\\202\\007\\r\\020\\002\\030\\n \\0172\\005\\371\\344\\001\\315\\010\" |\n | CZ | \"\\010p\\020\\002\\030\\002 \\013\\202\\007\\013\\020\\001\\030\\n \\0172\\003\\371\\344\\001\" |\n | BR | \"\\010p\\020\\001\\030\\002 \\013\\202\\007\\013\\020\\001\\030\\n \\0172\\003\\202\\341\\001\" |\n *---------+------------------------------------------------------------------------------------*/\n```\n\n\n"
},
{
"name": "HLL_COUNT.MERGE",
"arguments": [],
"category": "HyperLogLog",
"description_markdown": "```\nHLL_COUNT.MERGE(sketch)\n```\n\n **Description** \n\nAn aggregate function that returns the cardinality of several[HLL++](https://research.google.com/pubs/pub40671.html)sketches by computing their union.\n\nEach`sketch`must be initialized on the same type. Attempts to merge sketches\nfor different types results in an error. For example, you cannot merge a sketch\ninitialized from`INT64`data with one initialized from`STRING`data.\n\nIf the merged sketches were initialized with different precisions, the precision\nwill be downgraded to the lowest precision involved in the merge.\n\nThis function ignores`NULL`values when merging sketches. If the merge happens\nover zero rows or only over`NULL`values, the function returns`0`.\n\n **Supported input types** \n\n`BYTES`\n\n **Return type** \n\n`INT64`\n\n **Example** \n\nThe following query counts the number of distinct users across all countries\n who have at least one invoice.\n\n```\nSELECT HLL_COUNT.MERGE(hll_sketch) AS distinct_customers_with_open_invoice\nFROM\n (\n SELECT\n country,\n HLL_COUNT.INIT(customer_id) AS hll_sketch\n FROM\n UNNEST(\n ARRAY<STRUCT<country STRING, customer_id STRING, invoice_id STRING>>[\n ('UA', 'customer_id_1', 'invoice_id_11'),\n ('BR', 'customer_id_3', 'invoice_id_31'),\n ('CZ', 'customer_id_2', 'invoice_id_22'),\n ('CZ', 'customer_id_2', 'invoice_id_23'),\n ('BR', 'customer_id_3', 'invoice_id_31'),\n ('UA', 'customer_id_2', 'invoice_id_24')])\n GROUP BY country\n );\n\n/*--------------------------------------*\n | distinct_customers_with_open_invoice |\n +--------------------------------------+\n | 3 |\n *--------------------------------------*/\n```\n\n\n"
},
{
"name": "HLL_COUNT.MERGE_PARTIAL",
"arguments": [],
"category": "HyperLogLog",
"description_markdown": "```\nHLL_COUNT.MERGE_PARTIAL(sketch)\n```\n\n **Description** \n\nAn aggregate function that takes one or more[HLL++](https://research.google.com/pubs/pub40671.html)`sketch`inputs and merges them into a new sketch.\n\nEach`sketch`must be initialized on the same type. Attempts to merge sketches\nfor different types results in an error. For example, you cannot merge a sketch\ninitialized from`INT64`data with one initialized from`STRING`data.\n\nIf the merged sketches were initialized with different precisions, the precision\nwill be downgraded to the lowest precision involved in the merge. For example,\nif`MERGE_PARTIAL`encounters sketches of precision 14 and 15, the returned new\nsketch will have precision 14.\n\nThis function returns`NULL`if there is no input or all inputs are`NULL`.\n\n **Supported input types** \n\n`BYTES`\n\n **Return type** \n\n`BYTES`\n\n **Example** \n\nThe following query returns an HLL++ sketch that counts the number of distinct\nusers who have at least one invoice across all countries.\n\n```\nSELECT HLL_COUNT.MERGE_PARTIAL(HLL_sketch) AS distinct_customers_with_open_invoice\nFROM\n (\n SELECT\n country,\n HLL_COUNT.INIT(customer_id) AS hll_sketch\n FROM\n UNNEST(\n ARRAY<STRUCT<country STRING, customer_id STRING, invoice_id STRING>>[\n ('UA', 'customer_id_1', 'invoice_id_11'),\n ('BR', 'customer_id_3', 'invoice_id_31'),\n ('CZ', 'customer_id_2', 'invoice_id_22'),\n ('CZ', 'customer_id_2', 'invoice_id_23'),\n ('BR', 'customer_id_3', 'invoice_id_31'),\n ('UA', 'customer_id_2', 'invoice_id_24')])\n GROUP BY country\n );\n\n/*----------------------------------------------------------------------------------------------*\n | distinct_customers_with_open_invoice |\n +----------------------------------------------------------------------------------------------+\n | \"\\010p\\020\\006\\030\\002 \\013\\202\\007\\020\\020\\003\\030\\017 \\0242\\010\\320\\2408\\352}\\244\\223\\002\" |\n 
 *----------------------------------------------------------------------------------------------*/\n```\n\n\n"
},
{
"name": "IEEE_DIVIDE",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nIEEE_DIVIDE(X, Y)\n```\n\n **Description** \n\nDivides X by Y; this function never fails. Returns`FLOAT64`. Unlike the division operator (/),\nthis function does not generate errors for division by zero or overflow.\n\n| X | Y | IEEE_DIVIDE(X, Y) |\n| --- | --- | --- |\n| 20.0 | 4.0 | 5.0 |\n| 0.0 | 25.0 | 0.0 |\n| 25.0 | 0.0 | `+inf` |\n| -25.0 | 0.0 | `-inf` |\n| 0.0 | 0.0 | `NaN` |\n| 0.0 | `NaN` | `NaN` |\n| `NaN` | 0.0 | `NaN` |\n| `+inf` | `+inf` | `NaN` |\n| `-inf` | `-inf` | `NaN` |\n\n\n\n"
},
{
"name": "INITCAP",
"arguments": [],
"category": "String",
"description_markdown": "```\nINITCAP(value[, delimiters])\n```\n\n **Description** \n\nTakes a`STRING`and returns it with the first character in each word in\nuppercase and all other characters in lowercase. Non-alphabetic characters\nremain the same.\n\n`delimiters`is an optional string argument that is used to override the default\nset of characters used to separate words. If`delimiters`is not specified, it\ndefaults to the following characters: \n`<whitespace> [ ] ( ) { } / | \\ < > ! ? @ \" ^ # $ & ~ _ , . : ; * % + -`\n\nIf`value`or`delimiters`is`NULL`, the function returns`NULL`.\n\n **Return type** \n\n`STRING`\n\n **Examples** \n\n```\nWITH example AS\n(\n SELECT 'Hello World-everyone!' AS value UNION ALL\n SELECT 'tHe dog BARKS loudly+friendly' AS value UNION ALL\n SELECT 'apples&oranges;&pears' AS value UNION ALL\n SELECT 'καθίσματα ταινιών' AS value\n)\nSELECT value, INITCAP(value) AS initcap_value FROM example\n\n/*-------------------------------+-------------------------------*\n | value | initcap_value |\n +-------------------------------+-------------------------------+\n | Hello World-everyone! | Hello World-Everyone! |\n | tHe dog BARKS loudly+friendly | The Dog Barks Loudly+Friendly |\n | apples&oranges;&pears | Apples&Oranges;&Pears |\n | καθίσματα ταινιών | Καθίσματα Ταινιών |\n *-------------------------------+-------------------------------*/\n\nWITH example AS\n(\n SELECT 'hello WORLD!' AS value, '' AS delimiters UNION ALL\n SELECT 'καθίσματα ταιντιώ@ν' AS value, 'τ@' AS delimiters UNION ALL\n SELECT 'Apples1oranges2pears' AS value, '12' AS delimiters UNION ALL\n SELECT 'tHisEisEaESentence' AS value, 'E' AS delimiters\n)\nSELECT value, delimiters, INITCAP(value, delimiters) AS initcap_value FROM example;\n\n/*----------------------+------------+----------------------*\n | value | delimiters | initcap_value |\n +----------------------+------------+----------------------+\n | hello WORLD! | | Hello world! 
|\n | καθίσματα ταιντιώ@ν | τ@ | ΚαθίσματΑ τΑιντΙώ@Ν |\n | Apples1oranges2pears | 12 | Apples1Oranges2Pears |\n | tHisEisEaESentence | E | ThisEIsEAESentence |\n *----------------------+------------+----------------------*/\n```\n\n\n"
},
{
"name": "INSTR",
"arguments": [],
"category": "String",
"description_markdown": "```\nINSTR(value, subvalue[, position[, occurrence]])\n```\n\n **Description** \n\nReturns the lowest 1-based position of`subvalue`in`value`.`value`and`subvalue`must be the same type, either`STRING`or`BYTES`.\n\nIf`position`is specified, the search starts at this position in`value`, otherwise it starts at`1`, which is the beginning of`value`. If`position`is negative, the function searches backwards\nfrom the end of`value`, with`-1`indicating the last character.`position`is of type`INT64`and cannot be`0`.\n\nIf`occurrence`is specified, the search returns the position of a specific\ninstance of`subvalue`in`value`. If not specified,`occurrence`defaults to`1`and returns the position of the first occurrence.\nFor`occurrence`>`1`, the function includes overlapping occurrences.`occurrence`is of type`INT64`and must be positive.\n\nThis function supports specifying[collation](/bigquery/docs/reference/standard-sql/collation-concepts#collate_about).\n\nReturns`0`if:\n\n- No match is found.\n- If` occurrence`is greater than the number of matches found.\n- If` position`is greater than the length of` value`.\n\nReturns`NULL`if:\n\n- Any input argument is` NULL`.\n\nReturns an error if:\n\n- ` position`is` 0`.\n- ` occurrence`is` 0`or negative.\n\n **Return type** \n\n`INT64`\n\n **Examples** \n\n```\nWITH example AS\n(SELECT 'banana' as value, 'an' as subvalue, 1 as position, 1 as\noccurrence UNION ALL\nSELECT 'banana' as value, 'an' as subvalue, 1 as position, 2 as\noccurrence UNION ALL\nSELECT 'banana' as value, 'an' as subvalue, 1 as position, 3 as\noccurrence UNION ALL\nSELECT 'banana' as value, 'an' as subvalue, 3 as position, 1 as\noccurrence UNION ALL\nSELECT 'banana' as value, 'an' as subvalue, -1 as position, 1 as\noccurrence UNION ALL\nSELECT 'banana' as value, 'an' as subvalue, -3 as position, 1 as\noccurrence UNION ALL\nSELECT 'banana' as value, 'ann' as subvalue, 1 as position, 1 as\noccurrence UNION ALL\nSELECT 'helloooo' as value, 'oo' as 
subvalue, 1 as position, 1 as\noccurrence UNION ALL\nSELECT 'helloooo' as value, 'oo' as subvalue, 1 as position, 2 as\noccurrence\n)\nSELECT value, subvalue, position, occurrence, INSTR(value,\nsubvalue, position, occurrence) AS instr\nFROM example;\n\n/*--------------+--------------+----------+------------+-------*\n | value | subvalue | position | occurrence | instr |\n +--------------+--------------+----------+------------+-------+\n | banana | an | 1 | 1 | 2 |\n | banana | an | 1 | 2 | 4 |\n | banana | an | 1 | 3 | 0 |\n | banana | an | 3 | 1 | 4 |\n | banana | an | -1 | 1 | 4 |\n | banana | an | -3 | 1 | 4 |\n | banana | ann | 1 | 1 | 0 |\n | helloooo | oo | 1 | 1 | 5 |\n | helloooo | oo | 1 | 2 | 6 |\n *--------------+--------------+----------+------------+-------*/\n```\n\n\n"
},
{
"name": "INT64",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nINT64(json_expr)\n```\n\n **Description** \n\nConverts a JSON number to a SQL`INT64`value.\n\nArguments:\n\n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '999'\n ```\n \n If the JSON value is not a number, or the JSON number is not in the SQL` INT64`domain, an error is produced. If the expression is SQL` NULL`, the\nfunction returns SQL` NULL`.\n \n \n\n **Return type** \n\n`INT64`\n\n **Examples** \n\n```\nSELECT INT64(JSON '2005') AS flight_number;\n\n/*---------------*\n | flight_number |\n +---------------+\n | 2005 |\n *---------------*/\n```\n\n```\nSELECT INT64(JSON_QUERY(JSON '{\"gate\": \"A4\", \"flight_number\": 2005}', \"$.flight_number\")) AS flight_number;\n\n/*---------------*\n | flight_number |\n +---------------+\n | 2005 |\n *---------------*/\n```\n\n```\nSELECT INT64(JSON '10.0') AS score;\n\n/*-------*\n | score |\n +-------+\n | 10 |\n *-------*/\n```\n\nThe following examples show how invalid requests are handled:\n\n```\n-- An error is thrown if JSON is not a number or cannot be converted to a 64-bit integer.\nSELECT INT64(JSON '10.1') AS result; -- Throws an error\nSELECT INT64(JSON '\"strawberry\"') AS result; -- Throws an error\nSELECT INT64(JSON 'null') AS result; -- Throws an error\nSELECT SAFE.INT64(JSON '\"strawberry\"') AS result; -- Returns a SQL NULL\n```\n\n\n"
},
{
"name": "IS_INF",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nIS_INF(X)\n```\n\n **Description** \n\nReturns`TRUE`if the value is positive or negative infinity.\n\n| X | IS_INF(X) |\n| --- | --- |\n| `+inf` | `TRUE` |\n| `-inf` | `TRUE` |\n| 25 | `FALSE` |\n\n\n\n"
},
{
"name": "IS_NAN",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nIS_NAN(X)\n```\n\n **Description** \n\nReturns`TRUE`if the value is a`NaN`value.\n\n| X | IS_NAN(X) |\n| --- | --- |\n| `NaN` | `TRUE` |\n| 25 | `FALSE` |\n\n\n\n"
},
{
"name": "JSON_ARRAY",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nJSON_ARRAY([value][, ...])\n```\n\n **Description** \n\nCreates a JSON array from zero or more SQL values.\n\nArguments:\n\n- ` value`: A[JSON encoding-supported](#json_encodings)value to add\nto a JSON array.\n\n **Return type** \n\n`JSON`\n\n **Examples** \n\nYou can create an empty JSON array. For example:\n\n```\nSELECT JSON_ARRAY() AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | [] |\n *-----------*/\n```\n\nThe following query creates a JSON array with one value in it:\n\n```\nSELECT JSON_ARRAY(10) AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | [10] |\n *-----------*/\n```\n\nYou can create a JSON array with an empty JSON array in it. For example:\n\n```\nSELECT JSON_ARRAY([]) AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | [[]] |\n *-----------*/\n```\n\n```\nSELECT JSON_ARRAY(10, 'foo', NULL) AS json_data\n\n/*-----------------*\n | json_data |\n +-----------------+\n | [10,\"foo\",null] |\n *-----------------*/\n```\n\n```\nSELECT JSON_ARRAY(STRUCT(10 AS a, 'foo' AS b)) AS json_data\n\n/*----------------------*\n | json_data |\n +----------------------+\n | [{\"a\":10,\"b\":\"foo\"}] |\n *----------------------*/\n```\n\n```\nSELECT JSON_ARRAY(10, ['foo', 'bar'], [20, 30]) AS json_data\n\n/*----------------------------*\n | json_data |\n +----------------------------+\n | [10,[\"foo\",\"bar\"],[20,30]] |\n *----------------------------*/\n```\n\n```\nSELECT JSON_ARRAY(10, [JSON '20', JSON '\"foo\"']) AS json_data\n\n/*-----------------*\n | json_data |\n +-----------------+\n | [10,[20,\"foo\"]] |\n *-----------------*/\n```\n\n\n"
},
{
"name": "JSON_ARRAY_APPEND",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nJSON_ARRAY_APPEND(\n json_expr,\n json_path_value_pair[, ...]\n [, append_each_element=>{ TRUE | FALSE }]\n)\n\njson_path_value_pair:\n json_path, value\n```\n\nAppends JSON data to the end of a JSON array.\n\nArguments:\n\n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '[\"a\", \"b\", \"c\"]'\n ```\n \n \n- ` json_path_value_pair`: A value and the[JSONPath](#JSONPath_format)for\nthat value. This includes:\n \n \n - ` json_path`: Append` value`at this[JSONPath](#JSONPath_format)in` json_expr`.\n \n \n - ` value`: A[JSON encoding-supported](#json_encodings)value to\nappend.\n \n \n- ` append_each_element`: An optional, mandatory named argument.\n \n \n - If` TRUE`(default), and` value`is a SQL array,\nappends each element individually.\n \n \n - If` FALSE,`and` value`is a SQL array, appends\nthe array as one element.\n \n \n\nDetails:\n\n- Path value pairs are evaluated left to right. The JSON produced by\nevaluating one pair becomes the JSON against which the next pair\nis evaluated.\n- The operation is ignored if the path points to a JSON non-array value that\nis not a JSON null.\n- If` json_path`points to a JSON null, the JSON null is replaced by a\nJSON array that contains` value`.\n- If the path exists but has an incompatible type at any given path token,\nthe path value pair operation is ignored.\n- The function applies all path value pair append operations even if an\nindividual path value pair operation is invalid. 
For invalid operations,\nthe operation is ignored and the function continues to process the rest of\nthe path value pairs.\n- If any` json_path`is an invalid[JSONPath](#JSONPath_format), an error is\nproduced.\n- If` json_expr`is SQL` NULL`, the function returns SQL` NULL`.\n- If` append_each_element`is SQL` NULL`, the function returns` json_expr`.\n- If` json_path`is SQL` NULL`, the` json_path_value_pair`operation is\nignored.\n\n **Return type** \n\n`JSON`\n\n **Examples** \n\nIn the following example, path`$`is matched and appends`1`.\n\n```\nSELECT JSON_ARRAY_APPEND(JSON '[\"a\", \"b\", \"c\"]', '$', 1) AS json_data\n\n/*-----------------*\n | json_data |\n +-----------------+\n | [\"a\",\"b\",\"c\",1] |\n *-----------------*/\n```\n\nIn the following example,`append_each_element`defaults to`TRUE`, so`[1, 2]`is appended as individual elements.\n\n```\nSELECT JSON_ARRAY_APPEND(JSON '[\"a\", \"b\", \"c\"]', '$', [1, 2]) AS json_data\n\n/*-------------------*\n | json_data |\n +-------------------+\n | [\"a\",\"b\",\"c\",1,2] |\n *-------------------*/\n```\n\nIn the following example,`append_each_element`is`FALSE`, so`[1, 2]`is appended as one element.\n\n```\nSELECT JSON_ARRAY_APPEND(\n JSON '[\"a\", \"b\", \"c\"]',\n '$', [1, 2],\n append_each_element=>FALSE) AS json_data\n\n/*---------------------*\n | json_data |\n +---------------------+\n | [\"a\",\"b\",\"c\",[1,2]] |\n *---------------------*/\n```\n\nIn the following example,`append_each_element`is`FALSE`, so`[1, 2]`and`[3, 4]`are each appended as one element.\n\n```\nSELECT JSON_ARRAY_APPEND(\n JSON '[\"a\", [\"b\"], \"c\"]',\n '$[1]', [1, 2],\n '$[1][1]', [3, 4],\n append_each_element=>FALSE) AS json_data\n\n/*-----------------------------*\n | json_data |\n +-----------------------------+\n | [\"a\",[\"b\",[1,2,[3,4]]],\"c\"] |\n *-----------------------------*/\n```\n\nIn the following example, the first path`$[1]`appends`[1, 2]`as single\nelements, and then the second path`$[1][1]`is not a valid path 
to an array,\nso the second operation is ignored.\n\n```\nSELECT JSON_ARRAY_APPEND(\n JSON '[\"a\", [\"b\"], \"c\"]',\n '$[1]', [1, 2],\n '$[1][1]', [3, 4]) AS json_data\n\n/*---------------------*\n | json_data |\n +---------------------+\n | [\"a\",[\"b\",1,2],\"c\"] |\n *---------------------*/\n```\n\nIn the following example, path`$.a`is matched and appends`2`.\n\n```\nSELECT JSON_ARRAY_APPEND(JSON '{\"a\": [1]}', '$.a', 2) AS json_data\n\n/*-------------*\n | json_data |\n +-------------+\n | {\"a\":[1,2]} |\n *-------------*/\n```\n\nIn the following example, a value is appended into a JSON null.\n\n```\nSELECT JSON_ARRAY_APPEND(JSON '{\"a\": null}', '$.a', 10)\n\n/*------------*\n | json_data |\n +------------+\n | {\"a\":[10]} |\n *------------*/\n```\n\nIn the following example, path`$.a`is not an array, so the operation is\nignored.\n\n```\nSELECT JSON_ARRAY_APPEND(JSON '{\"a\": 1}', '$.a', 2) AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | {\"a\":1} |\n *-----------*/\n```\n\nIn the following example, path`$.b`does not exist, so the operation is\nignored.\n\n```\nSELECT JSON_ARRAY_APPEND(JSON '{\"a\": 1}', '$.b', 2) AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | {\"a\":1} |\n *-----------*/\n```\n\n\n"
},
{
"name": "JSON_ARRAY_INSERT",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nJSON_ARRAY_INSERT(\n json_expr,\n json_path_value_pair[, ...]\n [, insert_each_element=>{ TRUE | FALSE }]\n)\n\njson_path_value_pair:\n json_path, value\n```\n\nProduces a new JSON value that is created by inserting JSON data into\na JSON array.\n\nArguments:\n\n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '[\"a\", \"b\", \"c\"]'\n ```\n \n \n- ` json_path_value_pair`: A value and the[JSONPath](#JSONPath_format)for\nthat value. This includes:\n \n \n - ` json_path`: Insert` value`at this[JSONPath](#JSONPath_format)in` json_expr`.\n \n \n - ` value`: A[JSON encoding-supported](#json_encodings)value to\ninsert.\n \n \n- ` insert_each_element`: An optional, mandatory named argument.\n \n \n - If` TRUE`(default), and` value`is a SQL array,\ninserts each element individually.\n \n \n - If` FALSE,`and` value`is a SQL array, inserts\nthe array as one element.\n \n \n\nDetails:\n\n- Path value pairs are evaluated left to right. The JSON produced by\nevaluating one pair becomes the JSON against which the next pair\nis evaluated.\n- The operation is ignored if the path points to a JSON non-array value that\nis not a JSON null.\n- If` json_path`points to a JSON null, the JSON null is replaced by a\nJSON array of the appropriate size and padded on the left with JSON nulls.\n- If the path exists but has an incompatible type at any given path token,\nthe path value pair operator is ignored.\n- The function applies all path value pair append operations even if an\nindividual path value pair operation is invalid. 
For invalid operations,\nthe operation is ignored and the function continues to process the rest of\nthe path value pairs.\n- If the array index in` json_path`is larger than the size of the array, the\nfunction extends the length of the array to the index, fills in\nthe array with JSON nulls, then adds` value`at the index.\n- If any` json_path`is an invalid[JSONPath](#JSONPath_format), an error is\nproduced.\n- If` json_expr`is SQL` NULL`, the function returns SQL` NULL`.\n- If` insert_each_element`is SQL` NULL`, the function returns` json_expr`.\n- If` json_path`is SQL` NULL`, the` json_path_value_pair`operation is\nignored.\n\n **Return type** \n\n`JSON`\n\n **Examples** \n\nIn the following example, path`$[1]`is matched and inserts`1`.\n\n```\nSELECT JSON_ARRAY_INSERT(JSON '[\"a\", [\"b\", \"c\"], \"d\"]', '$[1]', 1) AS json_data\n\n/*-----------------------*\n | json_data |\n +-----------------------+\n | [\"a\",1,[\"b\",\"c\"],\"d\"] |\n *-----------------------*/\n```\n\nIn the following example, path`$[1][0]`is matched and inserts`1`.\n\n```\nSELECT JSON_ARRAY_INSERT(JSON '[\"a\", [\"b\", \"c\"], \"d\"]', '$[1][0]', 1) AS json_data\n\n/*-----------------------*\n | json_data |\n +-----------------------+\n | [\"a\",[1,\"b\",\"c\"],\"d\"] |\n *-----------------------*/\n```\n\nIn the following example,`insert_each_element`defaults to`TRUE`, so`[1, 2]`is inserted as individual elements.\n\n```\nSELECT JSON_ARRAY_INSERT(JSON '[\"a\", \"b\", \"c\"]', '$[1]', [1, 2]) AS json_data\n\n/*-------------------*\n | json_data |\n +-------------------+\n | [\"a\",1,2,\"b\",\"c\"] |\n *-------------------*/\n```\n\nIn the following example,`insert_each_element`is`FALSE`, so`[1, 2]`is\ninserted as one element.\n\n```\nSELECT JSON_ARRAY_INSERT(\n JSON '[\"a\", \"b\", \"c\"]',\n '$[1]', [1, 2],\n insert_each_element=>FALSE) AS json_data\n\n/*---------------------*\n | json_data |\n +---------------------+\n | [\"a\",[1,2],\"b\",\"c\"] |\n *---------------------*/\n```\n\nIn 
the following example, path`$[7]`is larger than the length of the\nmatched array, so the array is extended with JSON nulls and`\"e\"`is inserted at\nthe end of the array.\n\n```\nSELECT JSON_ARRAY_INSERT(JSON '[\"a\", \"b\", \"c\", \"d\"]', '$[7]', \"e\") AS json_data\n\n/*--------------------------------------*\n | json_data |\n +--------------------------------------+\n | [\"a\",\"b\",\"c\",\"d\",null,null,null,\"e\"] |\n *--------------------------------------*/\n```\n\nIn the following example, path`$.a`is an object, so the operation is ignored.\n\n```\nSELECT JSON_ARRAY_INSERT(JSON '{\"a\": {}}', '$.a[0]', 2) AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | {\"a\":{}} |\n *-----------*/\n```\n\nIn the following example, path`$`does not specify a valid array position,\nso the operation is ignored.\n\n```\nSELECT JSON_ARRAY_INSERT(JSON '[1, 2]', '$', 3) AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | [1,2] |\n *-----------*/\n```\n\nIn the following example, a value is inserted into a JSON null.\n\n```\nSELECT JSON_ARRAY_INSERT(JSON '{\"a\": null}', '$.a[2]', 10) AS json_data\n\n/*----------------------*\n | json_data |\n +----------------------+\n | {\"a\":[null,null,10]} |\n *----------------------*/\n```\n\nIn the following example, the operation is ignored because you can't insert\ndata into a JSON number.\n\n```\nSELECT JSON_ARRAY_INSERT(JSON '1', '$[0]', 'r1') AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | 1 |\n *-----------*/\n```\n\n\n"
},
{
"name": "JSON_EXTRACT",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nJSON_EXTRACT(json_string_expr, json_path)\n```\n\n```\nJSON_EXTRACT(json_expr, json_path)\n```\n\n **Description** \n\nExtracts a JSON value and converts it to a\nSQL JSON-formatted`STRING`or`JSON`value.\nThis function uses single quotes and brackets to escape invalid[JSONPath](#JSONPath_format)characters in JSON keys. For example:`['a.b']`.\n\nArguments:\n\n- ` json_string_expr`: A JSON-formatted string. For example:\n \n \n ```\n '{\"class\": {\"students\": [{\"name\": \"Jane\"}]}}'\n ```\n \n Extracts a SQL` NULL`when a JSON-formatted string` null`is encountered.\nFor example:\n \n \n ```\n SELECT JSON_EXTRACT(\"null\", \"$\") -- Returns a SQL NULL\n ```\n \n \n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '{\"class\": {\"students\": [{\"name\": \"Jane\"}]}}'\n ```\n \n Extracts a JSON` null`when a JSON` null`is encountered.\n \n \n ```\n SELECT JSON_EXTRACT(JSON 'null', \"$\") -- Returns a JSON 'null'\n ```\n \n \n- ` json_path`: The[JSONPath](#JSONPath_format). 
This identifies the data that\nyou want to obtain from the input.\n \n \n\nThere are differences between the JSON-formatted string and JSON input types.\nFor details, see[Differences between the JSON and JSON-formatted STRING types](#differences_json_and_string).\n\n **Return type** \n\n- ` json_string_expr`: A JSON-formatted` STRING`\n- ` json_expr`:` JSON`\n\n **Examples** \n\nIn the following example, JSON data is extracted and returned as JSON.\n\n```\nSELECT\n JSON_EXTRACT(JSON '{\"class\": {\"students\": [{\"id\": 5}, {\"id\": 12}]}}', '$.class')\n AS json_data;\n\n/*-----------------------------------*\n | json_data |\n +-----------------------------------+\n | {\"students\":[{\"id\":5},{\"id\":12}]} |\n *-----------------------------------*/\n```\n\nIn the following examples, JSON data is extracted and returned as\nJSON-formatted strings.\n\n```\nSELECT JSON_EXTRACT(json_text, '$') AS json_text_string\nFROM UNNEST([\n '{\"class\": {\"students\": [{\"name\": \"Jane\"}]}}',\n '{\"class\": {\"students\": []}}',\n '{\"class\": {\"students\": [{\"name\": \"John\"}, {\"name\": \"Jamie\"}]}}'\n ]) AS json_text;\n\n/*-----------------------------------------------------------*\n | json_text_string |\n +-----------------------------------------------------------+\n | {\"class\":{\"students\":[{\"name\":\"Jane\"}]}} |\n | {\"class\":{\"students\":[]}} |\n | {\"class\":{\"students\":[{\"name\":\"John\"},{\"name\":\"Jamie\"}]}} |\n *-----------------------------------------------------------*/\n```\n\n```\nSELECT JSON_EXTRACT(json_text, '$.class.students[0]') AS first_student\nFROM UNNEST([\n '{\"class\": {\"students\": [{\"name\": \"Jane\"}]}}',\n '{\"class\": {\"students\": []}}',\n '{\"class\": {\"students\": [{\"name\": \"John\"}, {\"name\": \"Jamie\"}]}}'\n ]) AS json_text;\n\n/*-----------------*\n | first_student |\n +-----------------+\n | {\"name\":\"Jane\"} |\n | NULL |\n | {\"name\":\"John\"} |\n *-----------------*/\n```\n\n```\nSELECT 
JSON_EXTRACT(json_text, '$.class.students[1].name') AS second_student\nFROM UNNEST([\n '{\"class\": {\"students\": [{\"name\": \"Jane\"}]}}',\n '{\"class\": {\"students\": []}}',\n '{\"class\": {\"students\": [{\"name\": \"John\"}, {\"name\": null}]}}',\n '{\"class\": {\"students\": [{\"name\": \"John\"}, {\"name\": \"Jamie\"}]}}'\n ]) AS json_text;\n\n/*----------------*\n | second_student |\n +----------------+\n | NULL           |\n | NULL           |\n | NULL           |\n | \"Jamie\"        |\n *----------------*/\n```\n\n```\nSELECT JSON_EXTRACT(json_text, \"$.class['students']\") AS student_names\nFROM UNNEST([\n '{\"class\": {\"students\": [{\"name\": \"Jane\"}]}}',\n '{\"class\": {\"students\": []}}',\n '{\"class\": {\"students\": [{\"name\": \"John\"}, {\"name\": \"Jamie\"}]}}'\n ]) AS json_text;\n\n/*------------------------------------*\n | student_names                      |\n +------------------------------------+\n | [{\"name\":\"Jane\"}]                  |\n | []                                 |\n | [{\"name\":\"John\"},{\"name\":\"Jamie\"}] |\n *------------------------------------*/\n```\n\n```\nSELECT JSON_EXTRACT('{\"a\": null}', \"$.a\"); -- Returns a SQL NULL\nSELECT JSON_EXTRACT('{\"a\": null}', \"$.b\"); -- Returns a SQL NULL\n```\n\n```\nSELECT JSON_EXTRACT(JSON '{\"a\": null}', \"$.a\"); -- Returns a JSON 'null'\nSELECT JSON_EXTRACT(JSON '{\"a\": null}', \"$.b\"); -- Returns a SQL NULL\n```\n\n\n"
},
{
"name": "JSON_EXTRACT_ARRAY",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nJSON_EXTRACT_ARRAY(json_string_expr[, json_path])\n```\n\n```\nJSON_EXTRACT_ARRAY(json_expr[, json_path])\n```\n\n **Description** \n\nExtracts a JSON array and converts it to\na SQL`ARRAY<JSON-formatted STRING>`or`ARRAY<JSON>`value.\nThis function uses single quotes and brackets to escape invalid[JSONPath](#JSONPath_format)characters in JSON keys. For example:`['a.b']`.\n\nArguments:\n\n- ` json_string_expr`: A JSON-formatted string. For example:\n \n \n ```\n '[\"a\", \"b\", {\"key\": \"c\"}]'\n ```\n \n \n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '[\"a\", \"b\", {\"key\": \"c\"}]'\n ```\n \n \n- ` json_path`: The[JSONPath](#JSONPath_format). This identifies the data that\nyou want to obtain from the input. If this optional parameter is not\nprovided, then the JSONPath` $`symbol is applied, which means that all of\nthe data is analyzed.\n \n \n\nThere are differences between the JSON-formatted string and JSON input types.\nFor details, see[Differences between the JSON and JSON-formatted STRING types](#differences_json_and_string).\n\n **Return type** \n\n- ` json_string_expr`:` ARRAY<JSON-formatted STRING>`\n- ` json_expr`:` ARRAY<JSON>`\n\n **Examples** \n\nThis extracts items in JSON to an array of`JSON`values:\n\n```\nSELECT JSON_EXTRACT_ARRAY(\n JSON '{\"fruits\":[\"apples\",\"oranges\",\"grapes\"]}','$.fruits'\n ) AS json_array;\n\n/*---------------------------------*\n | json_array |\n +---------------------------------+\n | [\"apples\", \"oranges\", \"grapes\"] |\n *---------------------------------*/\n```\n\nThis extracts the items in a JSON-formatted string to a string array:\n\n```\nSELECT JSON_EXTRACT_ARRAY('[1,2,3]') AS string_array;\n\n/*--------------*\n | string_array |\n +--------------+\n | [1, 2, 3] |\n *--------------*/\n```\n\nThis extracts a string array and converts it to an integer array:\n\n```\nSELECT ARRAY(\n SELECT CAST(integer_element AS INT64)\n FROM UNNEST(\n 
JSON_EXTRACT_ARRAY('[1,2,3]','$')\n ) AS integer_element\n) AS integer_array;\n\n/*---------------*\n | integer_array |\n +---------------+\n | [1, 2, 3] |\n *---------------*/\n```\n\nThis extracts string values in a JSON-formatted string to an array:\n\n```\n-- Doesn't strip the double quotes\nSELECT JSON_EXTRACT_ARRAY('[\"apples\", \"oranges\", \"grapes\"]', '$') AS string_array;\n\n/*---------------------------------*\n | string_array |\n +---------------------------------+\n | [\"apples\", \"oranges\", \"grapes\"] |\n *---------------------------------*/\n\n-- Strips the double quotes\nSELECT ARRAY(\n SELECT JSON_EXTRACT_SCALAR(string_element, '$')\n FROM UNNEST(JSON_EXTRACT_ARRAY('[\"apples\",\"oranges\",\"grapes\"]','$')) AS string_element\n) AS string_array;\n\n/*---------------------------*\n | string_array |\n +---------------------------+\n | [apples, oranges, grapes] |\n *---------------------------*/\n```\n\nThis extracts only the items in the`fruit`property to an array:\n\n```\nSELECT JSON_EXTRACT_ARRAY(\n '{\"fruit\": [{\"apples\": 5, \"oranges\": 10}, {\"apples\": 2, \"oranges\": 4}], \"vegetables\": [{\"lettuce\": 7, \"kale\": 8}]}',\n '$.fruit'\n) AS string_array;\n\n/*-------------------------------------------------------*\n | string_array |\n +-------------------------------------------------------+\n | [{\"apples\":5,\"oranges\":10}, {\"apples\":2,\"oranges\":4}] |\n *-------------------------------------------------------*/\n```\n\nThese are equivalent:\n\n```\nSELECT JSON_EXTRACT_ARRAY('{\"fruits\": [\"apples\", \"oranges\", \"grapes\"]}', '$[fruits]') AS string_array;\n\nSELECT JSON_EXTRACT_ARRAY('{\"fruits\": [\"apples\", \"oranges\", \"grapes\"]}', '$.fruits') AS string_array;\n\n-- The queries above produce the following result:\n/*---------------------------------*\n | string_array |\n +---------------------------------+\n | [\"apples\", \"oranges\", \"grapes\"] |\n *---------------------------------*/\n```\n\nIn cases where a JSON key 
uses invalid JSONPath characters, you can escape those\ncharacters using single quotes and brackets,`[' ']`. For example:\n\n```\nSELECT JSON_EXTRACT_ARRAY('{\"a.b\": {\"c\": [\"world\"]}}', \"$['a.b'].c\") AS hello;\n\n/*-----------*\n | hello |\n +-----------+\n | [\"world\"] |\n *-----------*/\n```\n\nThe following examples explore how invalid requests and empty arrays are\nhandled:\n\n- If a JSONPath is invalid, an error is thrown.\n- If a JSON-formatted string is invalid, the output is NULL.\n- It is okay to have empty arrays in the JSON-formatted string.\n\n```\n-- An error is thrown if you provide an invalid JSONPath.\nSELECT JSON_EXTRACT_ARRAY('[\"foo\", \"bar\", \"baz\"]', 'INVALID_JSONPath') AS result;\n\n-- If the JSONPath does not refer to an array, then NULL is returned.\nSELECT JSON_EXTRACT_ARRAY('{\"a\": \"foo\"}', '$.a') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n\n-- If a key that does not exist is specified, then the result is NULL.\nSELECT JSON_EXTRACT_ARRAY('{\"a\": \"foo\"}', '$.b') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n\n-- Empty arrays in JSON-formatted strings are supported.\nSELECT JSON_EXTRACT_ARRAY('{\"a\": \"foo\", \"b\": []}', '$.b') AS result;\n\n/*--------*\n | result |\n +--------+\n | [] |\n *--------*/\n```\n\n\n"
},
{
"name": "JSON_EXTRACT_SCALAR",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nJSON_EXTRACT_SCALAR(json_string_expr[, json_path])\n```\n\n```\nJSON_EXTRACT_SCALAR(json_expr[, json_path])\n```\n\n **Description** \n\nExtracts a JSON scalar value and converts it to a SQL`STRING`value.\nIn addition, this function:\n\n- Removes the outermost quotes and unescapes the return values.\n- Returns a SQL` NULL`if a non-scalar value is selected.\n- Uses single quotes and brackets to escape invalid[JSONPath](#JSONPath_format)characters in JSON keys. For example:` ['a.b']`.\n\nArguments:\n\n- ` json_string_expr`: A JSON-formatted string. For example:\n \n \n ```\n '{\"name\": \"Jane\", \"age\": \"6\"}'\n ```\n \n \n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '{\"name\": \"Jane\", \"age\": \"6\"}'\n ```\n \n \n- ` json_path`: The[JSONPath](#JSONPath_format). This identifies the data that\nyou want to obtain from the input. If this optional parameter is not\nprovided, then the JSONPath` $`symbol is applied, which means that all of\nthe data is analyzed.\n \n If` json_path`returns a JSON` null`or a non-scalar value (in other words,\nif` json_path`refers to an object or an array), then a SQL` NULL`is\nreturned.\n \n \n\nThere are differences between the JSON-formatted string and JSON input types.\nFor details, see[Differences between the JSON and JSON-formatted STRING types](#differences_json_and_string).\n\n **Return type** \n\n`STRING`\n\n **Examples** \n\nIn the following example,`age`is extracted.\n\n```\nSELECT JSON_EXTRACT_SCALAR(JSON '{\"name\": \"Jakob\", \"age\": \"6\" }', '$.age') AS scalar_age;\n\n/*------------*\n | scalar_age |\n +------------+\n | 6 |\n *------------*/\n```\n\nThe following example compares how results are returned for the`JSON_EXTRACT`and`JSON_EXTRACT_SCALAR`functions.\n\n```\nSELECT JSON_EXTRACT('{\"name\": \"Jakob\", \"age\": \"6\" }', '$.name') AS json_name,\n JSON_EXTRACT_SCALAR('{\"name\": \"Jakob\", \"age\": \"6\" }', '$.name') AS scalar_name,\n JSON_EXTRACT('{\"name\": \"Jakob\", 
\"age\": \"6\" }', '$.age') AS json_age,\n JSON_EXTRACT_SCALAR('{\"name\": \"Jakob\", \"age\": \"6\" }', '$.age') AS scalar_age;\n\n/*-----------+-------------+----------+------------*\n | json_name | scalar_name | json_age | scalar_age |\n +-----------+-------------+----------+------------+\n | \"Jakob\" | Jakob | \"6\" | 6 |\n *-----------+-------------+----------+------------*/\n```\n\n```\nSELECT JSON_EXTRACT('{\"fruits\": [\"apple\", \"banana\"]}', '$.fruits') AS json_extract,\n JSON_EXTRACT_SCALAR('{\"fruits\": [\"apple\", \"banana\"]}', '$.fruits') AS json_extract_scalar;\n\n/*--------------------+---------------------*\n | json_extract | json_extract_scalar |\n +--------------------+---------------------+\n | [\"apple\",\"banana\"] | NULL |\n *--------------------+---------------------*/\n```\n\nIn cases where a JSON key uses invalid JSONPath characters, you can escape those\ncharacters using single quotes and brackets,`[' ']`. For example:\n\n```\nSELECT JSON_EXTRACT_SCALAR('{\"a.b\": {\"c\": \"world\"}}', \"$['a.b'].c\") AS hello;\n\n/*-------*\n | hello |\n +-------+\n | world |\n *-------*/\n```\n\n\n"
},
{
"name": "JSON_EXTRACT_STRING_ARRAY",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nJSON_EXTRACT_STRING_ARRAY(json_string_expr[, json_path])\n```\n\n```\nJSON_EXTRACT_STRING_ARRAY(json_expr[, json_path])\n```\n\n **Description** \n\nExtracts a JSON array of scalar values and converts it to a SQL`ARRAY<STRING>`value. In addition, this function:\n\n- Removes the outermost quotes and unescapes the values.\n- Returns a SQL` NULL`if the selected value is not an array or\nnot an array containing only scalar values.\n- Uses single quotes and brackets to escape invalid[JSONPath](#JSONPath_format)characters in JSON keys. For example:` ['a.b']`.\n\nArguments:\n\n- ` json_string_expr`: A JSON-formatted string. For example:\n \n \n ```\n '[\"apples\", \"oranges\", \"grapes\"]'\n ```\n \n \n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '[\"apples\", \"oranges\", \"grapes\"]'\n ```\n \n \n- ` json_path`: The[JSONPath](#JSONPath_format). This identifies the data that\nyou want to obtain from the input. If this optional parameter is not\nprovided, then the JSONPath` $`symbol is applied, which means that all of\nthe data is analyzed.\n \n \n\nThere are differences between the JSON-formatted string and JSON input types.\nFor details, see[Differences between the JSON and JSON-formatted STRING types](#differences_json_and_string).\n\nCaveats:\n\n- A JSON` null`in the input array produces a SQL` NULL`as the output for that\nJSON` null`. 
If the output contains a` NULL`array element, an error is\nproduced because the final output cannot be an array with` NULL`values.\n- If a JSONPath matches an array that contains scalar objects and a JSON` null`,\nthen the output of the function must be transformed because the final output\ncannot be an array with` NULL`values.\n\n **Return type** \n\n`ARRAY<STRING>`\n\n **Examples** \n\nThis extracts items in JSON to a string array:\n\n```\nSELECT JSON_EXTRACT_STRING_ARRAY(\n JSON '{\"fruits\": [\"apples\", \"oranges\", \"grapes\"]}', '$.fruits'\n ) AS string_array;\n\n/*---------------------------*\n | string_array |\n +---------------------------+\n | [apples, oranges, grapes] |\n *---------------------------*/\n```\n\nThe following example compares how results are returned for the`JSON_EXTRACT_ARRAY`and`JSON_EXTRACT_STRING_ARRAY`functions.\n\n```\nSELECT JSON_EXTRACT_ARRAY('[\"apples\", \"oranges\"]') AS json_array,\nJSON_EXTRACT_STRING_ARRAY('[\"apples\", \"oranges\"]') AS string_array;\n\n/*-----------------------+-------------------*\n | json_array | string_array |\n +-----------------------+-------------------+\n | [\"apples\", \"oranges\"] | [apples, oranges] |\n *-----------------------+-------------------*/\n```\n\nThis extracts the items in a JSON-formatted string to a string array:\n\n```\n-- Strips the double quotes\nSELECT JSON_EXTRACT_STRING_ARRAY('[\"foo\", \"bar\", \"baz\"]', '$') AS string_array;\n\n/*-----------------*\n | string_array |\n +-----------------+\n | [foo, bar, baz] |\n *-----------------*/\n```\n\nThis extracts a string array and converts it to an integer array:\n\n```\nSELECT ARRAY(\n SELECT CAST(integer_element AS INT64)\n FROM UNNEST(\n JSON_EXTRACT_STRING_ARRAY('[1, 2, 3]', '$')\n ) AS integer_element\n) AS integer_array;\n\n/*---------------*\n | integer_array |\n +---------------+\n | [1, 2, 3] |\n *---------------*/\n```\n\nThese are equivalent:\n\n```\nSELECT JSON_EXTRACT_STRING_ARRAY('{\"fruits\": [\"apples\", \"oranges\", 
\"grapes\"]}', '$[fruits]') AS string_array;\n\nSELECT JSON_EXTRACT_STRING_ARRAY('{\"fruits\": [\"apples\", \"oranges\", \"grapes\"]}', '$.fruits') AS string_array;\n\n-- The queries above produce the following result:\n/*---------------------------*\n | string_array |\n +---------------------------+\n | [apples, oranges, grapes] |\n *---------------------------*/\n```\n\nIn cases where a JSON key uses invalid JSONPath characters, you can escape those\ncharacters using single quotes and brackets:`[' ']`. For example:\n\n```\nSELECT JSON_EXTRACT_STRING_ARRAY('{\"a.b\": {\"c\": [\"world\"]}}', \"$['a.b'].c\") AS hello;\n\n/*---------*\n | hello |\n +---------+\n | [world] |\n *---------*/\n```\n\nThe following examples explore how invalid requests and empty arrays are\nhandled:\n\n```\n-- An error is thrown if you provide an invalid JSONPath.\nSELECT JSON_EXTRACT_STRING_ARRAY('[\"foo\", \"bar\", \"baz\"]', 'INVALID_JSONPath') AS result;\n\n-- If the JSON formatted string is invalid, then NULL is returned.\nSELECT JSON_EXTRACT_STRING_ARRAY('}}', '$') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n\n-- If the JSON document is NULL, then NULL is returned.\nSELECT JSON_EXTRACT_STRING_ARRAY(NULL, '$') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n\n-- If a JSONPath does not match anything, then the output is NULL.\nSELECT JSON_EXTRACT_STRING_ARRAY('{\"a\": [\"foo\", \"bar\", \"baz\"]}', '$.b') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n\n-- If a JSONPath matches an object that is not an array, then the output is NULL.\nSELECT JSON_EXTRACT_STRING_ARRAY('{\"a\": \"foo\"}', '$') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n\n-- If a JSONPath matches an array of non-scalar objects, then the output is NULL.\nSELECT JSON_EXTRACT_STRING_ARRAY('{\"a\": [{\"b\": \"foo\", \"c\": 1}, {\"b\": \"bar\", \"c\":2}], \"d\": \"baz\"}', '$.a') AS 
result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n\n-- If a JSONPath matches an array of mixed scalar and non-scalar objects, then the output is NULL.\nSELECT JSON_EXTRACT_STRING_ARRAY('{\"a\": [10, {\"b\": 20}]}', '$.a') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n\n-- If a JSONPath matches an empty JSON array, then the output is an empty array instead of NULL.\nSELECT JSON_EXTRACT_STRING_ARRAY('{\"a\": \"foo\", \"b\": []}', '$.b') AS result;\n\n/*--------*\n | result |\n +--------+\n | [] |\n *--------*/\n\n-- The following query produces an error because the final output cannot be an\n-- array with NULLs.\nSELECT JSON_EXTRACT_STRING_ARRAY('[\"world\", 1, null]') AS result;\n```\n\n\n"
},
{
"name": "JSON_OBJECT",
"arguments": [],
"category": "JSON",
"description_markdown": "- [Signature 1](#json_object_signature1):` JSON_OBJECT([json_key, json_value][, ...])`\n- [Signature 2](#json_object_signature2):` JSON_OBJECT(json_key_array, json_value_array)`\n\n\n<span id=\"json_object_signature1\">\n#### Signature 1\n\n</span>\n```\nJSON_OBJECT([json_key, json_value][, ...])\n```\n\n **Description** \n\nCreates a JSON object, using key-value pairs.\n\nArguments:\n\n- ` json_key`: A` STRING`value that represents a key.\n- ` json_value`: A[JSON encoding-supported](#json_encodings)value.\n\nDetails:\n\n- If two keys are passed in with the same name, only the first key-value pair\nis preserved.\n- The order of key-value pairs is not preserved.\n- If` json_key`is` NULL`, an error is produced.\n\n **Return type** \n\n`JSON`\n\n **Examples** \n\nYou can create an empty JSON object by passing in no JSON keys and values.\nFor example:\n\n```\nSELECT JSON_OBJECT() AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | {} |\n *-----------*/\n```\n\nYou can create a JSON object by passing in key-value pairs. 
For example:\n\n```\nSELECT JSON_OBJECT('foo', 10, 'bar', TRUE) AS json_data\n\n/*-----------------------*\n | json_data |\n +-----------------------+\n | {\"bar\":true,\"foo\":10} |\n *-----------------------*/\n```\n\n```\nSELECT JSON_OBJECT('foo', 10, 'bar', ['a', 'b']) AS json_data\n\n/*----------------------------*\n | json_data |\n +----------------------------+\n | {\"bar\":[\"a\",\"b\"],\"foo\":10} |\n *----------------------------*/\n```\n\n```\nSELECT JSON_OBJECT('a', NULL, 'b', JSON 'null') AS json_data\n\n/*---------------------*\n | json_data |\n +---------------------+\n | {\"a\":null,\"b\":null} |\n *---------------------*/\n```\n\n```\nSELECT JSON_OBJECT('a', 10, 'a', 'foo') AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | {\"a\":10} |\n *-----------*/\n```\n\n```\nWITH Items AS (SELECT 'hello' AS key, 'world' AS value)\nSELECT JSON_OBJECT(key, value) AS json_data FROM Items\n\n/*-------------------*\n | json_data |\n +-------------------+\n | {\"hello\":\"world\"} |\n *-------------------*/\n```\n\nAn error is produced if a SQL`NULL`is passed in for a JSON key.\n\n```\n-- Error: A key cannot be NULL.\nSELECT JSON_OBJECT(NULL, 1) AS json_data\n```\n\nAn error is produced if the number of JSON keys doesn't match the number of\nJSON values:\n\n```\n-- Error: No matching signature for function JSON_OBJECT for argument types:\n-- STRING, INT64, STRING\nSELECT JSON_OBJECT('a', 1, 'b') AS json_data\n```\n\n\n<span id=\"json_object_signature2\">\n#### Signature 2\n\n</span>\n```\nJSON_OBJECT(json_key_array, json_value_array)\n```\n\nCreates a JSON object, using an array of keys and values.\n\nArguments:\n\n- ` json_key_array`: An array of zero or more` STRING`keys.\n- ` json_value_array`: An array of zero or more[JSON encoding-supported](#json_encodings)values.\n\nDetails:\n\n- If two keys are passed in with the same name, only the first key-value pair\nis preserved.\n- The order of key-value pairs is not preserved.\n- The number of keys must match 
the number of values, otherwise an error is\nproduced.\n- If any argument is` NULL`, an error is produced.\n- If a key in` json_key_array`is` NULL`, an error is produced.\n\n **Return type** \n\n`JSON`\n\n **Examples** \n\nYou can create an empty JSON object by passing in an empty array of\nkeys and values. For example:\n\n```\nSELECT JSON_OBJECT(CAST([] AS ARRAY<STRING>), []) AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | {} |\n *-----------*/\n```\n\nYou can create a JSON object by passing in an array of keys and an array of\nvalues. For example:\n\n```\nSELECT JSON_OBJECT(['a', 'b'], [10, NULL]) AS json_data\n\n/*-------------------*\n | json_data |\n +-------------------+\n | {\"a\":10,\"b\":null} |\n *-------------------*/\n```\n\n```\nSELECT JSON_OBJECT(['a', 'b'], [JSON '10', JSON '\"foo\"']) AS json_data\n\n/*--------------------*\n | json_data |\n +--------------------+\n | {\"a\":10,\"b\":\"foo\"} |\n *--------------------*/\n```\n\n```\nSELECT\n JSON_OBJECT(\n ['a', 'b'],\n [STRUCT(10 AS id, 'Red' AS color), STRUCT(20 AS id, 'Blue' AS color)])\n AS json_data\n\n/*------------------------------------------------------------*\n | json_data |\n +------------------------------------------------------------+\n | {\"a\":{\"color\":\"Red\",\"id\":10},\"b\":{\"color\":\"Blue\",\"id\":20}} |\n *------------------------------------------------------------*/\n```\n\n```\nSELECT\n JSON_OBJECT(\n ['a', 'b'],\n [TO_JSON(10), TO_JSON(['foo', 'bar'])])\n AS json_data\n\n/*----------------------------*\n | json_data |\n +----------------------------+\n | {\"a\":10,\"b\":[\"foo\",\"bar\"]} |\n *----------------------------*/\n```\n\nThe following query groups by`id`and then creates an array of keys and\nvalues from the rows with the same`id`:\n\n```\nWITH\n Fruits AS (\n SELECT 0 AS id, 'color' AS json_key, 'red' AS json_value UNION ALL\n SELECT 0, 'fruit', 'apple' UNION ALL\n SELECT 1, 'fruit', 'banana' UNION ALL\n SELECT 1, 'ripe', 'true'\n )\nSELECT 
JSON_OBJECT(ARRAY_AGG(json_key), ARRAY_AGG(json_value)) AS json_data\nFROM Fruits\nGROUP BY id\n\n/*----------------------------------*\n | json_data |\n +----------------------------------+\n | {\"color\":\"red\",\"fruit\":\"apple\"} |\n | {\"fruit\":\"banana\",\"ripe\":\"true\"} |\n *----------------------------------*/\n```\n\nAn error is produced if the sizes of the JSON keys and values arrays don't\nmatch:\n\n```\n-- Error: The number of keys and values must match.\nSELECT JSON_OBJECT(['a', 'b'], [10]) AS json_data\n```\n\nAn error is produced if the array of JSON keys or JSON values is a SQL`NULL`.\n\n```\n-- Error: The keys array cannot be NULL.\nSELECT JSON_OBJECT(CAST(NULL AS ARRAY<STRING>), [10, 20]) AS json_data\n```\n\n```\n-- Error: The values array cannot be NULL.\nSELECT JSON_OBJECT(['a', 'b'], CAST(NULL AS ARRAY<INT64>)) AS json_data\n```\n\n\n"
},
{
"name": "JSON_QUERY",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nJSON_QUERY(json_string_expr, json_path)\n```\n\n```\nJSON_QUERY(json_expr, json_path)\n```\n\n **Description** \n\nExtracts a JSON value and converts it to a SQL\nJSON-formatted`STRING`or`JSON`value.\nThis function uses double quotes to escape invalid[JSONPath](#JSONPath_format)characters in JSON keys. For example:`\"a.b\"`.\n\nArguments:\n\n- ` json_string_expr`: A JSON-formatted string. For example:\n \n \n ```\n '{\"class\": {\"students\": [{\"name\": \"Jane\"}]}}'\n ```\n \n Extracts a SQL` NULL`when a JSON-formatted string` null`is encountered.\nFor example:\n \n \n ```\n SELECT JSON_QUERY(\"null\", \"$\") -- Returns a SQL NULL\n ```\n \n \n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '{\"class\": {\"students\": [{\"name\": \"Jane\"}]}}'\n ```\n \n Extracts a JSON` null`when a JSON` null`is encountered.\n \n \n ```\n SELECT JSON_QUERY(JSON 'null', \"$\") -- Returns a JSON 'null'\n ```\n \n \n- ` json_path`: The[JSONPath](#JSONPath_format). This identifies the data that\nyou want to obtain from the input.\n \n \n\nThere are differences between the JSON-formatted string and JSON input types.\nFor details, see[Differences between the JSON and JSON-formatted STRING types](#differences_json_and_string).\n\n **Return type** \n\n- ` json_string_expr`: A JSON-formatted` STRING`\n- ` json_expr`:` JSON`\n\n **Examples** \n\nIn the following example, JSON data is extracted and returned as JSON.\n\n```\nSELECT\n JSON_QUERY(JSON '{\"class\": {\"students\": [{\"id\": 5}, {\"id\": 12}]}}', '$.class')\n AS json_data;\n\n/*-----------------------------------*\n | json_data |\n +-----------------------------------+\n | {\"students\":[{\"id\":5},{\"id\":12}]} |\n *-----------------------------------*/\n```\n\nIn the following examples, JSON data is extracted and returned as\nJSON-formatted strings.\n\n```\nSELECT JSON_QUERY(json_text, '$') AS json_text_string\nFROM UNNEST([\n '{\"class\": {\"students\": [{\"name\": \"Jane\"}]}}',\n 
'{\"class\": {\"students\": []}}',\n '{\"class\": {\"students\": [{\"name\": \"John\"}, {\"name\": \"Jamie\"}]}}'\n ]) AS json_text;\n\n/*-----------------------------------------------------------*\n | json_text_string |\n +-----------------------------------------------------------+\n | {\"class\":{\"students\":[{\"name\":\"Jane\"}]}} |\n | {\"class\":{\"students\":[]}} |\n | {\"class\":{\"students\":[{\"name\":\"John\"},{\"name\":\"Jamie\"}]}} |\n *-----------------------------------------------------------*/\n```\n\n```\nSELECT JSON_QUERY(json_text, '$.class.students[0]') AS first_student\nFROM UNNEST([\n '{\"class\": {\"students\": [{\"name\": \"Jane\"}]}}',\n '{\"class\": {\"students\": []}}',\n '{\"class\": {\"students\": [{\"name\": \"John\"}, {\"name\": \"Jamie\"}]}}'\n ]) AS json_text;\n\n/*-----------------*\n | first_student |\n +-----------------+\n | {\"name\":\"Jane\"} |\n | NULL |\n | {\"name\":\"John\"} |\n *-----------------*/\n```\n\n```\nSELECT JSON_QUERY(json_text, '$.class.students[1].name') AS second_student_name\nFROM UNNEST([\n '{\"class\": {\"students\": [{\"name\": \"Jane\"}]}}',\n '{\"class\": {\"students\": []}}',\n '{\"class\": {\"students\": [{\"name\": \"John\"}, {\"name\": null}]}}',\n '{\"class\": {\"students\": [{\"name\": \"John\"}, {\"name\": \"Jamie\"}]}}'\n ]) AS json_text;\n\n/*----------------*\n | second_student |\n +----------------+\n | NULL |\n | NULL |\n | NULL |\n | \"Jamie\" |\n *----------------*/\n```\n\n```\nSELECT JSON_QUERY(json_text, '$.class.\"students\"') AS student_names\nFROM UNNEST([\n '{\"class\": {\"students\": [{\"name\": \"Jane\"}]}}',\n '{\"class\": {\"students\": []}}',\n '{\"class\": {\"students\": [{\"name\": \"John\"}, {\"name\": \"Jamie\"}]}}'\n ]) AS json_text;\n\n/*------------------------------------*\n | student_names |\n +------------------------------------+\n | [{\"name\":\"Jane\"}] |\n | [] |\n | [{\"name\":\"John\"},{\"name\":\"Jamie\"}] |\n 
*------------------------------------*/\n```\n\n```\nSELECT JSON_QUERY('{\"a\": null}', \"$.a\"); -- Returns a SQL NULL\nSELECT JSON_QUERY('{\"a\": null}', \"$.b\"); -- Returns a SQL NULL\n```\n\n```\nSELECT JSON_QUERY(JSON '{\"a\": null}', \"$.a\"); -- Returns a JSON 'null'\nSELECT JSON_QUERY(JSON '{\"a\": null}', \"$.b\"); -- Returns a SQL NULL\n```\n\n\n"
},
{
"name": "JSON_QUERY_ARRAY",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nJSON_QUERY_ARRAY(json_string_expr[, json_path])\n```\n\n```\nJSON_QUERY_ARRAY(json_expr[, json_path])\n```\n\n **Description** \n\nExtracts a JSON array and converts it to\na SQL`ARRAY<JSON-formatted STRING>`or`ARRAY<JSON>`value.\nIn addition, this function uses double quotes to escape invalid[JSONPath](#JSONPath_format)characters in JSON keys. For example:`\"a.b\"`.\n\nArguments:\n\n- ` json_string_expr`: A JSON-formatted string. For example:\n \n \n ```\n '[\"a\", \"b\", {\"key\": \"c\"}]'\n ```\n \n \n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '[\"a\", \"b\", {\"key\": \"c\"}]'\n ```\n \n \n- ` json_path`: The[JSONPath](#JSONPath_format). This identifies the data that\nyou want to obtain from the input. If this optional parameter is not\nprovided, then the JSONPath` $`symbol is applied, which means that all of\nthe data is analyzed.\n \n \n\nThere are differences between the JSON-formatted string and JSON input types.\nFor details, see[Differences between the JSON and JSON-formatted STRING types](#differences_json_and_string).\n\n **Return type** \n\n- ` json_string_expr`:` ARRAY<JSON-formatted STRING>`\n- ` json_expr`:` ARRAY<JSON>`\n\n **Examples** \n\nThis extracts items in JSON to an array of`JSON`values:\n\n```\nSELECT JSON_QUERY_ARRAY(\n JSON '{\"fruits\": [\"apples\", \"oranges\", \"grapes\"]}', '$.fruits'\n ) AS json_array;\n\n/*---------------------------------*\n | json_array |\n +---------------------------------+\n | [\"apples\", \"oranges\", \"grapes\"] |\n *---------------------------------*/\n```\n\nThis extracts the items in a JSON-formatted string to a string array:\n\n```\nSELECT JSON_QUERY_ARRAY('[1, 2, 3]') AS string_array;\n\n/*--------------*\n | string_array |\n +--------------+\n | [1, 2, 3] |\n *--------------*/\n```\n\nThis extracts a string array and converts it to an integer array:\n\n```\nSELECT ARRAY(\n SELECT CAST(integer_element AS INT64)\n FROM UNNEST(\n JSON_QUERY_ARRAY('[1, 2, 3]','$')\n 
) AS integer_element\n) AS integer_array;\n\n/*---------------*\n | integer_array |\n +---------------+\n | [1, 2, 3] |\n *---------------*/\n```\n\nThis extracts string values in a JSON-formatted string to an array:\n\n```\n-- Doesn't strip the double quotes\nSELECT JSON_QUERY_ARRAY('[\"apples\", \"oranges\", \"grapes\"]', '$') AS string_array;\n\n/*---------------------------------*\n | string_array |\n +---------------------------------+\n | [\"apples\", \"oranges\", \"grapes\"] |\n *---------------------------------*/\n\n-- Strips the double quotes\nSELECT ARRAY(\n SELECT JSON_VALUE(string_element, '$')\n FROM UNNEST(JSON_QUERY_ARRAY('[\"apples\", \"oranges\", \"grapes\"]', '$')) AS string_element\n) AS string_array;\n\n/*---------------------------*\n | string_array |\n +---------------------------+\n | [apples, oranges, grapes] |\n *---------------------------*/\n```\n\nThis extracts only the items in the`fruit`property to an array:\n\n```\nSELECT JSON_QUERY_ARRAY(\n '{\"fruit\": [{\"apples\": 5, \"oranges\": 10}, {\"apples\": 2, \"oranges\": 4}], \"vegetables\": [{\"lettuce\": 7, \"kale\": 8}]}',\n '$.fruit'\n) AS string_array;\n\n/*-------------------------------------------------------*\n | string_array |\n +-------------------------------------------------------+\n | [{\"apples\":5,\"oranges\":10}, {\"apples\":2,\"oranges\":4}] |\n *-------------------------------------------------------*/\n```\n\nThese are equivalent:\n\n```\nSELECT JSON_QUERY_ARRAY('{\"fruits\": [\"apples\", \"oranges\", \"grapes\"]}', '$.fruits') AS string_array;\n\nSELECT JSON_QUERY_ARRAY('{\"fruits\": [\"apples\", \"oranges\", \"grapes\"]}', '$.\"fruits\"') AS string_array;\n\n-- The queries above produce the following result:\n/*---------------------------------*\n | string_array |\n +---------------------------------+\n | [\"apples\", \"oranges\", \"grapes\"] |\n *---------------------------------*/\n```\n\nIn cases where a JSON key uses invalid JSONPath characters, you can escape 
those\ncharacters using double quotes:`\" \"`. For example:\n\n```\nSELECT JSON_QUERY_ARRAY('{\"a.b\": {\"c\": [\"world\"]}}', '$.\"a.b\".c') AS hello;\n\n/*-----------*\n | hello |\n +-----------+\n | [\"world\"] |\n *-----------*/\n```\n\nThe following examples show how invalid requests and empty arrays are handled:\n\n```\n-- An error is returned if you provide an invalid JSONPath.\nSELECT JSON_QUERY_ARRAY('[\"foo\", \"bar\", \"baz\"]', 'INVALID_JSONPath') AS result;\n\n-- If the JSONPath does not refer to an array, then NULL is returned.\nSELECT JSON_QUERY_ARRAY('{\"a\": \"foo\"}', '$.a') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n\n-- If a key that does not exist is specified, then the result is NULL.\nSELECT JSON_QUERY_ARRAY('{\"a\": \"foo\"}', '$.b') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n\n-- Empty arrays in JSON-formatted strings are supported.\nSELECT JSON_QUERY_ARRAY('{\"a\": \"foo\", \"b\": []}', '$.b') AS result;\n\n/*--------*\n | result |\n +--------+\n | [] |\n *--------*/\n```\n\n\n"
},
{
"name": "JSON_REMOVE",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nJSON_REMOVE(json_expr, json_path[, ...])\n```\n\nProduces a new SQL`JSON`value with the specified JSON data removed.\n\nArguments:\n\n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '{\"class\": {\"students\": [{\"name\": \"Jane\"}]}}'\n ```\n \n \n- ` json_path`: Remove data at this[JSONPath](#JSONPath_format)in` json_expr`.\n \n \n\nDetails:\n\n- Paths are evaluated left to right. The JSON produced by evaluating the\nfirst path is the JSON for the next path.\n- The operation ignores non-existent paths and continues processing the rest\nof the paths.\n- For each path, the entire matched JSON subtree is deleted.\n- If the path matches a JSON object key, this function deletes the\nkey-value pair.\n- If the path matches an array element, this function deletes the specific\nelement from the matched array.\n- If removing the path results in an empty JSON object or empty JSON array,\nthe empty structure is preserved.\n- If` json_path`is` $`or an invalid[JSONPath](#JSONPath_format), an error is\nproduced.\n- If` json_path`is SQL` NULL`, the path operation is ignored.\n\n **Return type** \n\n`JSON`\n\n **Examples** \n\nIn the following example, the path`$[1]`is matched and removes`[\"b\", \"c\"]`.\n\n```\nSELECT JSON_REMOVE(JSON '[\"a\", [\"b\", \"c\"], \"d\"]', '$[1]') AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | [\"a\",\"d\"] |\n *-----------*/\n```\n\nYou can use the field access operator to pass JSON data into this function.\nFor example:\n\n```\nWITH T AS (SELECT JSON '{\"a\": {\"b\": 10, \"c\": 20}}' AS data)\nSELECT JSON_REMOVE(data.a, '$.b') AS json_data FROM T\n\n/*-----------*\n | json_data |\n +-----------+\n | {\"c\":20} |\n *-----------*/\n```\n\nIn the following example, the first path`$[1]`is matched and removes`[\"b\", \"c\"]`. 
Then, the second path`$[1]`is matched and removes`\"d\"`.\n\n```\nSELECT JSON_REMOVE(JSON '[\"a\", [\"b\", \"c\"], \"d\"]', '$[1]', '$[1]') AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | [\"a\"] |\n *-----------*/\n```\n\nThe structure of an empty array is preserved when all elements are deleted\nfrom it. For example:\n\n```\nSELECT JSON_REMOVE(JSON '[\"a\", [\"b\", \"c\"], \"d\"]', '$[1]', '$[1]', '$[0]') AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | [] |\n *-----------*/\n```\n\nIn the following example, the path`$.a.b.c`is matched and removes the`\"c\":\"d\"`key-value pair from the JSON object.\n\n```\nSELECT JSON_REMOVE(JSON '{\"a\": {\"b\": {\"c\": \"d\"}}}', '$.a.b.c') AS json_data\n\n/*----------------*\n | json_data |\n +----------------+\n | {\"a\":{\"b\":{}}} |\n *----------------*/\n```\n\nIn the following example, the path`$.a.b`is matched and removes the`\"b\": {\"c\":\"d\"}`key-value pair from the JSON object.\n\n```\nSELECT JSON_REMOVE(JSON '{\"a\": {\"b\": {\"c\": \"d\"}}}', '$.a.b') AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | {\"a\":{}} |\n *-----------*/\n```\n\nIn the following example, the path`$.b`doesn't exist, so the operation makes\nno changes.\n\n```\nSELECT JSON_REMOVE(JSON '{\"a\": 1}', '$.b') AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | {\"a\":1} |\n *-----------*/\n```\n\nIn the following example, paths`$.a.b`and`$.b`don't exist, so those\noperations are ignored, but the others are processed.\n\n```\nSELECT JSON_REMOVE(JSON '{\"a\": [1, 2, 3]}', '$.a[0]', '$.a.b', '$.b', '$.a[0]') AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | {\"a\":[3]} |\n *-----------*/\n```\n\nIf you pass in`$`as the path, an error is produced. 
For example:\n\n```\n-- Error: The JSONPath cannot be '$'\nSELECT JSON_REMOVE(JSON '{}', '$') AS json_data\n```\n\nIn the following example, the operation is ignored because you can't remove\ndata from a JSON null.\n\n```\nSELECT JSON_REMOVE(JSON 'null', '$.a.b') AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | null |\n *-----------*/\n```\n\n\n"
},
{
"name": "JSON_SET",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nJSON_SET(\n json_expr,\n json_path_value_pair[, ...]\n [, create_if_missing=> { TRUE | FALSE }]\n)\n\njson_path_value_pair:\n json_path, value\n```\n\nProduces a new SQL`JSON`value with the specified JSON data inserted\nor replaced.\n\nArguments:\n\n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '{\"class\": {\"students\": [{\"name\": \"Jane\"}]}}'\n ```\n \n \n- ` json_path_value_pair`: A value and the[JSONPath](#JSONPath_format)for\nthat value. This includes:\n \n \n - ` json_path`: Insert or replace` value`at this[JSONPath](#JSONPath_format)in` json_expr`.\n \n \n - ` value`: A[JSON encoding-supported](#json_encodings)value to\ninsert.\n \n \n- ` create_if_missing`: An optional, mandatory named argument.\n \n \n - If TRUE (default), replaces or inserts data if the path does not exist.\n \n \n - If FALSE, only *existing* JSONPath values are replaced. If the path\ndoesn't exist, the set operation is ignored.\n \n \n\nDetails:\n\n- Path value pairs are evaluated left to right. The JSON produced by\nevaluating one pair becomes the JSON against which the next pair\nis evaluated.\n- If a matched path has an existing value, it overwrites the existing data\nwith` value`.\n- If` create_if_missing`is` TRUE`:\n \n \n - If a path doesn't exist, the remainder of the path is recursively\n created.\n - If the matched path prefix points to a JSON null, the remainder of the\n path is recursively created, and` value`is inserted.\n - If a path token points to a JSON array and the specified index is *larger* than the size of the array, pads the JSON array with JSON\n nulls, recursively creates the remainder of the path at the specified\n index, and inserts the path value pair.\n- This function applies all path value pair set operations even if an\nindividual path value pair operation is invalid. 
For invalid operations,\nthe operation is ignored and the function continues to process the rest\nof the path value pairs.\n \n \n- If the path exists but has an incompatible type at any given path\ntoken, no update happens for that specific path value pair.\n \n \n- If any` json_path`is an invalid[JSONPath](#JSONPath_format), an error is\nproduced.\n \n \n- If` json_expr`is SQL` NULL`, the function returns SQL` NULL`.\n \n \n- If` json_path`is SQL` NULL`, the` json_path_value_pair`operation is\nignored.\n \n \n- If` create_if_missing`is SQL` NULL`, the set operation is ignored.\n \n \n\n **Return type** \n\n`JSON`\n\n **Examples** \n\nIn the following example, the path`$`matches the entire`JSON`value\nand replaces it with`{\"b\": 2, \"c\": 3}`.\n\n```\nSELECT JSON_SET(JSON '{\"a\": 1}', '$', JSON '{\"b\": 2, \"c\": 3}') AS json_data\n\n/*---------------*\n | json_data |\n +---------------+\n | {\"b\":2,\"c\":3} |\n *---------------*/\n```\n\nIn the following example,`create_if_missing`is`FALSE`and the path`$.b`doesn't exist, so the set operation is ignored.\n\n```\nSELECT JSON_SET(\n JSON '{\"a\": 1}',\n \"$.b\", 999,\n create_if_missing => false) AS json_data\n\n/*------------*\n | json_data |\n +------------+\n | '{\"a\": 1}' |\n *------------*/\n```\n\nIn the following example,`create_if_missing`is`FALSE`and the path`$.a`exists, so the value is replaced.\n\n```\nSELECT JSON_SET(\n JSON '{\"a\": 1}',\n \"$.a\", 999,\n create_if_missing => false) AS json_data\n\n/*--------------*\n | json_data |\n +--------------+\n | '{\"a\": 999}' |\n *--------------*/\n```\n\nIn the following example, the path`$.a`is matched, but`$.a.b`does not\nexist, so the new path and the value are inserted.\n\n```\nSELECT JSON_SET(JSON '{\"a\": {}}', '$.a.b', 100) AS json_data\n\n/*-----------------*\n | json_data |\n +-----------------+\n | {\"a\":{\"b\":100}} |\n *-----------------*/\n```\n\nIn the following example, the path prefix`$`points to a JSON null, so the\nremainder of the path 
is created for the value`100`.\n\n```\nSELECT JSON_SET(JSON 'null', '$.a.b', 100) AS json_data\n\n/*-----------------*\n | json_data |\n +-----------------+\n | {\"a\":{\"b\":100}} |\n *-----------------*/\n```\n\nIn the following example, the path`$.a.c`implies that the value at`$.a`is\na JSON object but it's not. This part of the operation is ignored, but the other\nparts of the operation are completed successfully.\n\n```\nSELECT JSON_SET(\n JSON '{\"a\": 1}',\n '$.b', 2,\n '$.a.c', 100,\n '$.d', 3) AS json_data\n\n/*---------------------*\n | json_data |\n +---------------------+\n | {\"a\":1,\"b\":2,\"d\":3} |\n *---------------------*/\n```\n\nIn the following example, the path`$.a[2]`implies that the value for`$.a`is\nan array, but it's not, so the operation is ignored for that value.\n\n```\nSELECT JSON_SET(\n JSON '{\"a\": 1}',\n '$.a[2]', 100,\n '$.b', 2) AS json_data\n\n/*---------------*\n | json_data |\n +---------------+\n | {\"a\":1,\"b\":2} |\n *---------------*/\n```\n\nIn the following example, the path`$[1]`is matched and replaces the\narray element value with`foo`.\n\n```\nSELECT JSON_SET(JSON '[\"a\", [\"b\", \"c\"], \"d\"]', '$[1]', \"foo\") AS json_data\n\n/*-----------------*\n | json_data |\n +-----------------+\n | [\"a\",\"foo\",\"d\"] |\n *-----------------*/\n```\n\nIn the following example, the path`$[1][0]`is matched and replaces the\narray element value with`foo`.\n\n```\nSELECT JSON_SET(JSON '[\"a\", [\"b\", \"c\"], \"d\"]', '$[1][0]', \"foo\") AS json_data\n\n/*-----------------------*\n | json_data |\n +-----------------------+\n | [\"a\",[\"foo\",\"c\"],\"d\"] |\n *-----------------------*/\n```\n\nIn the following example, the path prefix`$`points to a JSON null, so the\nremainder of the path is created. 
The resulting array is padded with\nJSON nulls and appended with`foo`.\n\n```\nSELECT JSON_SET(JSON 'null', '$[0][3]', \"foo\") AS json_data\n\n/*--------------------------*\n | json_data |\n +--------------------------+\n | [[null,null,null,\"foo\"]] |\n *--------------------------*/\n```\n\nIn the following example, the path`$[1]`is matched, the matched array is\nextended since the index in`$[1][4]`is larger than the length of the existing\narray, and then`foo`is inserted in the array.\n\n```\nSELECT JSON_SET(JSON '[\"a\", [\"b\", \"c\"], \"d\"]', '$[1][4]', \"foo\") AS json_data\n\n/*-------------------------------------*\n | json_data |\n +-------------------------------------+\n | [\"a\",[\"b\",\"c\",null,null,\"foo\"],\"d\"] |\n *-------------------------------------*/\n```\n\nIn the following example, the path`$[1][0][0]`implies that the value of`$[1][0]`is an array, but it is not, so the operation is ignored.\n\n```\nSELECT JSON_SET(JSON '[\"a\", [\"b\", \"c\"], \"d\"]', '$[1][0][0]', \"foo\") AS json_data\n\n/*---------------------*\n | json_data |\n +---------------------+\n | [\"a\",[\"b\",\"c\"],\"d\"] |\n *---------------------*/\n```\n\nIn the following example, the path`$[1][2]`is larger than the length of\nthe matched array. The array length is extended and the remainder of the path\nis recursively created. 
The operation continues to the path`$[1][2][1]`and inserts`foo`.\n\n```\nSELECT JSON_SET(JSON '[\"a\", [\"b\", \"c\"], \"d\"]', '$[1][2][1]', \"foo\") AS json_data\n\n/*----------------------------------*\n | json_data |\n +----------------------------------+\n | [\"a\",[\"b\",\"c\",[null,\"foo\"]],\"d\"] |\n *----------------------------------*/\n```\n\nIn the following example, because the`JSON`object is empty, key`b`is\ninserted, and the remainder of the path is recursively created.\n\n```\nSELECT JSON_SET(JSON '{}', '$.b[2].d', 100) AS json_data\n\n/*-----------------------------*\n | json_data |\n +-----------------------------+\n | {\"b\":[null,null,{\"d\":100}]} |\n *-----------------------------*/\n```\n\nIn the following example, multiple values are set.\n\n```\nSELECT JSON_SET(\n JSON '{\"a\": 1, \"b\": {\"c\":3}, \"d\": [4]}',\n '$.a', 'v1',\n '$.b.e', 'v2',\n '$.d[2]', 'v3') AS json_data\n\n/*---------------------------------------------------*\n | json_data |\n +---------------------------------------------------+\n | {\"a\":\"v1\",\"b\":{\"c\":3,\"e\":\"v2\"},\"d\":[4,null,\"v3\"]} |\n *---------------------------------------------------*/\n```\n\n\n"
},
{
"name": "JSON_STRIP_NULLS",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nJSON_STRIP_NULLS(\n json_expr\n [, json_path]\n [, include_arrays => { TRUE | FALSE }]\n [, remove_empty => { TRUE | FALSE }]\n)\n```\n\nRecursively removes JSON nulls from JSON objects and JSON arrays.\n\nArguments:\n\n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '{\"a\": null, \"b\": \"c\"}'\n ```\n \n \n- ` json_path`: Remove JSON nulls at this[JSONPath](#JSONPath_format)for` json_expr`.\n \n \n- ` include_arrays`: An optional, mandatory named argument that is either` TRUE`(default) or` FALSE`. If` TRUE`or omitted, the function removes\n JSON nulls from JSON arrays. If` FALSE`, does not.\n \n \n- ` remove_empty`: An optional, mandatory named argument that is either` TRUE`or` FALSE`(default). If` TRUE`, the function removes empty\n JSON objects after JSON nulls are removed. If` FALSE`or omitted, does not.\n \n If` remove_empty`is` TRUE`and` include_arrays`is` TRUE`or omitted,\nthe function additionally removes empty JSON arrays.\n \n \n\nDetails:\n\n- If a value is a JSON null, the associated key-value pair is removed.\n- If` remove_empty`is set to` TRUE`, the function recursively removes empty\ncontainers after JSON nulls are removed.\n- If the function generates JSON with nothing in it, the function returns a\nJSON null.\n- If` json_path`is an invalid[JSONPath](#JSONPath_format), an error is\nproduced.\n- If` json_expr`is SQL` NULL`, the function returns SQL` NULL`.\n- If` json_path`,` include_arrays`, or` remove_empty`is SQL` NULL`, the\nfunction returns` json_expr`.\n\n **Return type** \n\n`JSON`\n\n **Examples** \n\nIn the following example, all JSON nulls are removed.\n\n```\nSELECT JSON_STRIP_NULLS(JSON '{\"a\": null, \"b\": \"c\"}') AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | {\"b\":\"c\"} |\n *-----------*/\n```\n\nIn the following example, all JSON nulls are removed from a JSON array.\n\n```\nSELECT JSON_STRIP_NULLS(JSON '[1, null, 2, null]') AS json_data\n\n/*-----------*\n | json_data |\n 
 +-----------+\n | [1,2] |\n *-----------*/\n```\n\nIn the following example,`include_arrays`is set to`FALSE`so that JSON nulls\nare not removed from JSON arrays.\n\n```\nSELECT JSON_STRIP_NULLS(JSON '[1, null, 2, null]', include_arrays=>FALSE) AS json_data\n\n/*-----------------*\n | json_data |\n +-----------------+\n | [1,null,2,null] |\n *-----------------*/\n```\n\nIn the following example,`remove_empty`is omitted and defaults to`FALSE`, and the empty structures are retained.\n\n```\nSELECT JSON_STRIP_NULLS(JSON '[1, null, 2, null, [null]]') AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | [1,2,[]] |\n *-----------*/\n```\n\nIn the following example,`remove_empty`is set to`TRUE`, and the\nempty structures are removed.\n\n```\nSELECT JSON_STRIP_NULLS(\n JSON '[1, null, 2, null, [null]]',\n remove_empty=>TRUE) AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | [1,2] |\n *-----------*/\n```\n\nIn the following examples,`remove_empty`is set to`TRUE`, and the\nempty structures are removed. 
Because no JSON data is left, the function\nreturns a JSON null.\n\n```\nSELECT JSON_STRIP_NULLS(JSON '{\"a\": null}', remove_empty=>TRUE) AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | null |\n *-----------*/\n```\n\n```\nSELECT JSON_STRIP_NULLS(JSON '{\"a\": [null]}', remove_empty=>TRUE) AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | null |\n *-----------*/\n```\n\nIn the following example, empty structures are removed for JSON objects,\nbut not JSON arrays.\n\n```\nSELECT JSON_STRIP_NULLS(\n JSON '{\"a\": {\"b\": {\"c\": null}}, \"d\": [null], \"e\": [], \"f\": 1}',\n include_arrays=>FALSE,\n remove_empty=>TRUE) AS json_data\n\n/*---------------------------*\n | json_data |\n +---------------------------+\n | {\"d\":[null],\"e\":[],\"f\":1} |\n *---------------------------*/\n```\n\nIn the following example, empty structures are removed for both JSON objects\nand JSON arrays.\n\n```\nSELECT JSON_STRIP_NULLS(\n JSON '{\"a\": {\"b\": {\"c\": null}}, \"d\": [null], \"e\": [], \"f\": 1}',\n remove_empty=>TRUE) AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | {\"f\":1} |\n *-----------*/\n```\n\nIn the following example, because no JSON data is left, the function returns a\nJSON null.\n\n```\nSELECT JSON_STRIP_NULLS(JSON 'null') AS json_data\n\n/*-----------*\n | json_data |\n +-----------+\n | null |\n *-----------*/\n```\n\n\n"
},
{
"name": "JSON_TYPE",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nJSON_TYPE(json_expr)\n```\n\n **Description** \n\nGets the JSON type of the outermost JSON value and converts the name of\nthis type to a SQL`STRING`value. The names of these JSON types can be\nreturned:`object`,`array`,`string`,`number`,`boolean`,`null`\n\nArguments:\n\n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '{\"name\": \"sky\", \"color\": \"blue\"}'\n ```\n \n If this expression is SQL` NULL`, the function returns SQL` NULL`. If the\nextracted JSON value is not a valid JSON type, an error is produced.\n \n \n\n **Return type** \n\n`STRING`\n\n **Examples** \n\n```\nSELECT json_val, JSON_TYPE(json_val) AS type\nFROM\n UNNEST(\n [\n JSON '\"apple\"',\n JSON '10',\n JSON '3.14',\n JSON 'null',\n JSON '{\"city\": \"New York\", \"State\": \"NY\"}',\n JSON '[\"apple\", \"banana\"]',\n JSON 'false'\n ]\n ) AS json_val;\n\n/*----------------------------------+---------*\n | json_val | type |\n +----------------------------------+---------+\n | \"apple\" | string |\n | 10 | number |\n | 3.14 | number |\n | null | null |\n | {\"State\":\"NY\",\"city\":\"New York\"} | object |\n | [\"apple\",\"banana\"] | array |\n | false | boolean |\n *----------------------------------+---------*/\n```\n\n\n"
},
{
"name": "JSON_VALUE",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nJSON_VALUE(json_string_expr[, json_path])\n```\n\n```\nJSON_VALUE(json_expr[, json_path])\n```\n\n **Description** \n\nExtracts a JSON scalar value and converts it to a SQL`STRING`value.\nIn addition, this function:\n\n- Removes the outermost quotes and unescapes the values.\n- Returns a SQL` NULL`if a non-scalar value is selected.\n- Uses double quotes to escape invalid[JSONPath](#JSONPath_format)characters\nin JSON keys. For example:` \"a.b\"`.\n\nArguments:\n\n- ` json_string_expr`: A JSON-formatted string. For example:\n \n \n ```\n '{\"name\": \"Jakob\", \"age\": \"6\"}'\n ```\n \n \n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '{\"name\": \"Jane\", \"age\": \"6\"}'\n ```\n \n \n- ` json_path`: The[JSONPath](#JSONPath_format). This identifies the data that\nyou want to obtain from the input. If this optional parameter is not\nprovided, then the JSONPath` $`symbol is applied, which means that all of\nthe data is analyzed.\n \n If` json_path`returns a JSON` null`or a non-scalar value (in other words,\nif` json_path`refers to an object or an array), then a SQL` NULL`is\nreturned.\n \n \n\nThere are differences between the JSON-formatted string and JSON input types.\nFor details, see[Differences between the JSON and JSON-formatted STRING types](#differences_json_and_string).\n\n **Return type** \n\n`STRING`\n\n **Examples** \n\nIn the following example, JSON data is extracted and returned as a scalar value.\n\n```\nSELECT JSON_VALUE(JSON '{\"name\": \"Jakob\", \"age\": \"6\" }', '$.age') AS scalar_age;\n\n/*------------*\n | scalar_age |\n +------------+\n | 6 |\n *------------*/\n```\n\nThe following example compares how results are returned for the`JSON_QUERY`and`JSON_VALUE`functions.\n\n```\nSELECT JSON_QUERY('{\"name\": \"Jakob\", \"age\": \"6\"}', '$.name') AS json_name,\n JSON_VALUE('{\"name\": \"Jakob\", \"age\": \"6\"}', '$.name') AS scalar_name,\n JSON_QUERY('{\"name\": \"Jakob\", \"age\": \"6\"}', '$.age') AS 
json_age,\n JSON_VALUE('{\"name\": \"Jakob\", \"age\": \"6\"}', '$.age') AS scalar_age;\n\n/*-----------+-------------+----------+------------*\n | json_name | scalar_name | json_age | scalar_age |\n +-----------+-------------+----------+------------+\n | \"Jakob\" | Jakob | \"6\" | 6 |\n *-----------+-------------+----------+------------*/\n```\n\n```\nSELECT JSON_QUERY('{\"fruits\": [\"apple\", \"banana\"]}', '$.fruits') AS json_query,\n JSON_VALUE('{\"fruits\": [\"apple\", \"banana\"]}', '$.fruits') AS json_value;\n\n/*--------------------+------------*\n | json_query | json_value |\n +--------------------+------------+\n | [\"apple\",\"banana\"] | NULL |\n *--------------------+------------*/\n```\n\nIn cases where a JSON key uses invalid JSONPath characters, you can escape those\ncharacters using double quotes. For example:\n\n```\nSELECT JSON_VALUE('{\"a.b\": {\"c\": \"world\"}}', '$.\"a.b\".c') AS hello;\n\n/*-------*\n | hello |\n +-------+\n | world |\n *-------*/\n```\n\n\n"
},
{
"name": "JSON_VALUE_ARRAY",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nJSON_VALUE_ARRAY(json_string_expr[, json_path])\n```\n\n```\nJSON_VALUE_ARRAY(json_expr[, json_path])\n```\n\n **Description** \n\nExtracts a JSON array of scalar values and converts it to a SQL`ARRAY<STRING>`value.\nIn addition, this function:\n\n- Removes the outermost quotes and unescapes the values.\n- Returns a SQL` NULL`if the selected value is not an array or\nnot an array containing only scalar values.\n- Uses double quotes to escape invalid[JSONPath](#JSONPath_format)characters\nin JSON keys. For example:` \"a.b\"`.\n\nArguments:\n\n- ` json_string_expr`: A JSON-formatted string. For example:\n \n \n ```\n '[\"apples\", \"oranges\", \"grapes\"]'\n ```\n \n \n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '[\"apples\", \"oranges\", \"grapes\"]'\n ```\n \n \n- ` json_path`: The[JSONPath](#JSONPath_format). This identifies the data that\nyou want to obtain from the input. If this optional parameter is not\nprovided, then the JSONPath` $`symbol is applied, which means that all of\nthe data is analyzed.\n \n \n\nThere are differences between the JSON-formatted string and JSON input types.\nFor details, see[Differences between the JSON and JSON-formatted STRING types](#differences_json_and_string).\n\nCaveats:\n\n- A JSON` null`in the input array produces a SQL` NULL`as the output for\nJSON` null`. 
If the output contains a` NULL`array element, an error is\nproduced because the final output cannot be an array with` NULL`values.\n- If a JSONPath matches an array that contains scalar objects and a JSON` null`,\nthen the output of the function must be transformed because the final output\ncannot be an array with` NULL`values.\n\n **Return type** \n\n`ARRAY<STRING>`\n\n **Examples** \n\nThis extracts items in JSON to a string array:\n\n```\nSELECT JSON_VALUE_ARRAY(\n JSON '{\"fruits\": [\"apples\", \"oranges\", \"grapes\"]}', '$.fruits'\n ) AS string_array;\n\n/*---------------------------*\n | string_array |\n +---------------------------+\n | [apples, oranges, grapes] |\n *---------------------------*/\n```\n\nThe following example compares how results are returned for the`JSON_QUERY_ARRAY`and`JSON_VALUE_ARRAY`functions.\n\n```\nSELECT JSON_QUERY_ARRAY('[\"apples\", \"oranges\"]') AS json_array,\n JSON_VALUE_ARRAY('[\"apples\", \"oranges\"]') AS string_array;\n\n/*-----------------------+-------------------*\n | json_array | string_array |\n +-----------------------+-------------------+\n | [\"apples\", \"oranges\"] | [apples, oranges] |\n *-----------------------+-------------------*/\n```\n\nThis extracts the items in a JSON-formatted string to a string array:\n\n```\n-- Strips the double quotes\nSELECT JSON_VALUE_ARRAY('[\"foo\", \"bar\", \"baz\"]', '$') AS string_array;\n\n/*-----------------*\n | string_array |\n +-----------------+\n | [foo, bar, baz] |\n *-----------------*/\n```\n\nThis extracts a string array and converts it to an integer array:\n\n```\nSELECT ARRAY(\n SELECT CAST(integer_element AS INT64)\n FROM UNNEST(\n JSON_VALUE_ARRAY('[1, 2, 3]', '$')\n ) AS integer_element\n) AS integer_array;\n\n/*---------------*\n | integer_array |\n +---------------+\n | [1, 2, 3] |\n *---------------*/\n```\n\nThese are equivalent:\n\n```\nSELECT JSON_VALUE_ARRAY('{\"fruits\": [\"apples\", \"oranges\", \"grapes\"]}', '$.fruits') AS string_array;\nSELECT 
JSON_VALUE_ARRAY('{\"fruits\": [\"apples\", \"oranges\", \"grapes\"]}', '$.\"fruits\"') AS string_array;\n\n-- The queries above produce the following result:\n/*---------------------------*\n | string_array |\n +---------------------------+\n | [apples, oranges, grapes] |\n *---------------------------*/\n```\n\nIn cases where a JSON key uses invalid JSONPath characters, you can escape those\ncharacters using double quotes:`\" \"`. For example:\n\n```\nSELECT JSON_VALUE_ARRAY('{\"a.b\": {\"c\": [\"world\"]}}', '$.\"a.b\".c') AS hello;\n\n/*---------*\n | hello |\n +---------+\n | [world] |\n *---------*/\n```\n\nThe following examples explore how invalid requests and empty arrays are\nhandled:\n\n```\n-- An error is thrown if you provide an invalid JSONPath.\nSELECT JSON_VALUE_ARRAY('[\"foo\", \"bar\", \"baz\"]', 'INVALID_JSONPath') AS result;\n\n-- If the JSON-formatted string is invalid, then NULL is returned.\nSELECT JSON_VALUE_ARRAY('}}', '$') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n\n-- If the JSON document is NULL, then NULL is returned.\nSELECT JSON_VALUE_ARRAY(NULL, '$') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n\n-- If a JSONPath does not match anything, then the output is NULL.\nSELECT JSON_VALUE_ARRAY('{\"a\": [\"foo\", \"bar\", \"baz\"]}', '$.b') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n\n-- If a JSONPath matches an object that is not an array, then the output is NULL.\nSELECT JSON_VALUE_ARRAY('{\"a\": \"foo\"}', '$') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n\n-- If a JSONPath matches an array of non-scalar objects, then the output is NULL.\nSELECT JSON_VALUE_ARRAY('{\"a\": [{\"b\": \"foo\", \"c\": 1}, {\"b\": \"bar\", \"c\": 2}], \"d\": \"baz\"}', '$.a') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n\n-- If a JSONPath matches an array of mixed scalar and non-scalar 
objects,\n-- then the output is NULL.\nSELECT JSON_VALUE_ARRAY('{\"a\": [10, {\"b\": 20}]}', '$.a') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n\n-- If a JSONPath matches an empty JSON array, then the output is an empty array instead of NULL.\nSELECT JSON_VALUE_ARRAY('{\"a\": \"foo\", \"b\": []}', '$.b') AS result;\n\n/*--------*\n | result |\n +--------+\n | [] |\n *--------*/\n\n-- The following query produces an error because the final output cannot be an\n-- array with NULLs.\nSELECT JSON_VALUE_ARRAY('[\"world\", 1, null]') AS result;\n```\n\n\n"
},
{
"name": "JUSTIFY_DAYS",
"arguments": [],
"category": "Interval",
"description_markdown": "```\nJUSTIFY_DAYS(interval_expression)\n```\n\n **Description** \n\nNormalizes the day part of the interval to the range from -29 to 29 by\nincrementing/decrementing the month or year part of the interval.\n\n **Return Data Type** \n\n`INTERVAL`\n\n **Example** \n\n```\nSELECT\n JUSTIFY_DAYS(INTERVAL 29 DAY) AS i1,\n JUSTIFY_DAYS(INTERVAL -30 DAY) AS i2,\n JUSTIFY_DAYS(INTERVAL 31 DAY) AS i3,\n JUSTIFY_DAYS(INTERVAL -65 DAY) AS i4,\n JUSTIFY_DAYS(INTERVAL 370 DAY) AS i5\n\n/*--------------+--------------+-------------+---------------+--------------*\n | i1 | i2 | i3 | i4 | i5 |\n +--------------+--------------+-------------+---------------+--------------+\n | 0-0 29 0:0:0 | -0-1 0 0:0:0 | 0-1 1 0:0:0 | -0-2 -5 0:0:0 | 1-0 10 0:0:0 |\n *--------------+--------------+-------------+---------------+--------------*/\n```\n\n\n"
},
{
"name": "JUSTIFY_HOURS",
"arguments": [],
"category": "Interval",
"description_markdown": "```\nJUSTIFY_HOURS(interval_expression)\n```\n\n **Description** \n\nNormalizes the time part of the interval to the range from -23:59:59.999999 to\n23:59:59.999999 by incrementing/decrementing the day part of the interval.\n\n **Return Data Type** \n\n`INTERVAL`\n\n **Example** \n\n```\nSELECT\n JUSTIFY_HOURS(INTERVAL 23 HOUR) AS i1,\n JUSTIFY_HOURS(INTERVAL -24 HOUR) AS i2,\n JUSTIFY_HOURS(INTERVAL 47 HOUR) AS i3,\n JUSTIFY_HOURS(INTERVAL -12345 MINUTE) AS i4\n\n/*--------------+--------------+--------------+-----------------*\n | i1 | i2 | i3 | i4 |\n +--------------+--------------+--------------+-----------------+\n | 0-0 0 23:0:0 | 0-0 -1 0:0:0 | 0-0 1 23:0:0 | 0-0 -8 -13:45:0 |\n *--------------+--------------+--------------+-----------------*/\n```\n\n\n"
},
{
"name": "JUSTIFY_INTERVAL",
"arguments": [],
"category": "Interval",
"description_markdown": "```\nJUSTIFY_INTERVAL(interval_expression)\n```\n\n **Description** \n\nNormalizes the days and time parts of the interval.\n\n **Return Data Type** \n\n`INTERVAL`\n\n **Example** \n\n```\nSELECT JUSTIFY_INTERVAL(INTERVAL '29 49:00:00' DAY TO SECOND) AS i\n\n/*-------------*\n | i |\n +-------------+\n | 0-1 1 1:0:0 |\n *-------------*/\n```\n\n\n"
},
{
"name": "KEYS.ADD_KEY_FROM_RAW_BYTES",
"arguments": [],
"category": "AEAD_encryption",
"description_markdown": "```\nKEYS.ADD_KEY_FROM_RAW_BYTES(keyset, key_type, raw_key_bytes)\n```\n\n **Description** \n\nReturns a serialized keyset as`BYTES`with the\naddition of a key to`keyset`based on`key_type`and`raw_key_bytes`.\n\nThe primary cryptographic key remains the same as in`keyset`. The expected\nlength of`raw_key_bytes`depends on the value of`key_type`. The following are\nsupported`key_types`:\n\n- ` 'AES_CBC_PKCS'`: Creates a key for AES decryption using cipher block chaining\nand PKCS padding.` raw_key_bytes`is expected to be a raw key` BYTES`value of length 16, 24, or 32; these\nlengths have sizes of 128, 192, and 256 bits, respectively. GoogleSQL\nAEAD functions do not support keys of these types for encryption; instead,\nprefer` 'AEAD_AES_GCM_256'`or` 'AES_GCM'`keys.\n- ` 'AES_GCM'`: Creates a key for AES decryption or encryption using[Galois/Counter Mode](https://en.wikipedia.org/wiki/Galois/Counter_Mode).` raw_key_bytes`must be a raw key` BYTES`value of length 16 or 32; these lengths have sizes of 128 and 256 bits,\nrespectively. When keys of this type are inputs to` AEAD.ENCRYPT`, the output\nciphertext does not have a Tink-specific prefix indicating which key was\nused as input.\n\n **Return Data Type** \n\n`BYTES`\n\n **Example** \n\nThe following query creates a table of customer IDs along with raw key bytes,\ncalled`CustomerRawKeys`, and a table of unique IDs, called`CustomerIds`. It\ncreates a new`'AEAD_AES_GCM_256'`keyset for each`customer_id`; then it adds a\nnew key to each keyset, using the`raw_key_bytes`value corresponding to that`customer_id`. 
The output is a table where each row contains a`customer_id`and\na keyset in`BYTES`, which contains the raw key added\nusing KEYS.ADD_KEY_FROM_RAW_BYTES.\n\n```\nWITH CustomerRawKeys AS (\n SELECT 1 AS customer_id, b'0123456789012345' AS raw_key_bytes UNION ALL\n SELECT 2, b'9876543210543210' UNION ALL\n SELECT 3, b'0123012301230123'\n), CustomerIds AS (\n SELECT 1 AS customer_id UNION ALL\n SELECT 2 UNION ALL\n SELECT 3\n)\nSELECT\n ci.customer_id,\n KEYS.ADD_KEY_FROM_RAW_BYTES(\n KEYS.NEW_KEYSET('AEAD_AES_GCM_256'),\n 'AES_CBC_PKCS',\n (SELECT raw_key_bytes FROM CustomerRawKeys AS crk\n WHERE crk.customer_id = ci.customer_id)\n ) AS keyset\nFROM CustomerIds AS ci;\n```\n\nThe output keysets each contain two things: the primary cryptographic key\ncreated using`KEYS.NEW_KEYSET('AEAD_AES_GCM_256')`, and the raw key added using`KEYS.ADD_KEY_FROM_RAW_BYTES`. If a keyset in the output is used with`AEAD.ENCRYPT`, GoogleSQL uses the primary cryptographic key created\nusing`KEYS.NEW_KEYSET('AEAD_AES_GCM_256')`to encrypt the input plaintext. If\nthe keyset is used with`AEAD.DECRYPT_STRING`or`AEAD.DECRYPT_BYTES`,\nGoogleSQL returns the resulting plaintext if either key succeeds in\ndecrypting the ciphertext.\n\n\n\n"
},
{
"name": "KEYS.KEYSET_CHAIN",
"arguments": [],
"category": "AEAD_encryption",
"description_markdown": "```\nKEYS.KEYSET_CHAIN(kms_resource_name, first_level_keyset)\n```\n\n **Description** \n\nCan be used in place of the`keyset`argument to the AEAD\nand deterministic\nencryption functions to pass a[Tink](https://github.com/google/tink/blob/master/docs/KEY-MANAGEMENT.md)keyset that is encrypted\nwith a[Cloud KMS key](/bigquery/docs/aead-encryption-concepts#cloud_kms_protection). This function lets you use\nother AEAD functions without including plaintext keys in a query.\n\nThis function takes the following arguments:\n\n- ` kms_resource_name`: A` STRING`literal that contains the resource path to\nthe Cloud KMS key that's used to decrypt` first_level_keyset`.\nThis key must reside in the same Cloud region where this function is executed.\nA Cloud KMS key looks like this:\n \n \n ```\n gcp-kms://projects/my-project/locations/us/keyRings/my-key-ring/cryptoKeys/my-crypto-key\n ```\n \n \n- ` first_level_keyset`: A` BYTES`literal that represents a[keyset](/bigquery/docs/aead-encryption-concepts#keysets)or[wrapped keyset](/bigquery/docs/aead-encryption-concepts#wrapped_keysets).\n \n \n\n **Return Data Type** \n\n`STRUCT`\n\n **Example** \n\nThis example creates a table of example data, then shows how to encrypt that\ndata using a wrapped (encrypted) keyset. Finally it shows how to query the\nencrypted version of the data.\n\nThe following statement creates a table`RawCustomerData`containing a column of\ncustomer ids and a column of favorite animals.\n\n```\nCREATE TABLE aead.RawCustomerData AS\nSELECT\n 1 AS customer_id,\n b'jaguar' AS favorite_animal\nUNION ALL\nSELECT\n 2 AS customer_id,\n b'zebra' AS favorite_animal\nUNION ALL\nSELECT\n 3 AS customer_id,\n b'zebra' AS favorite_animal;\n```\n\nThe following statement creates a table`EncryptedCustomerData`containing a\ncolumn of unique IDs and a column of ciphertext. 
The statement encrypts the\nplaintext`favorite_animal`using the first_level_keyset provided.\n\n```\nDECLARE kms_resource_name STRING;\nDECLARE first_level_keyset BYTES;\nSET kms_resource_name = 'gcp-kms://projects/my-project/locations/us/keyRings/my-key-ring/cryptoKeys/my-crypto-key';\nSET first_level_keyset = b'\\012\\044\\000\\107\\275\\360\\176\\264\\206\\332\\235\\215\\304...';\n\nCREATE TABLE aead.EncryptedCustomerData AS\nSELECT\n customer_id,\n AEAD.ENCRYPT(\n KEYS.KEYSET_CHAIN(kms_resource_name, first_level_keyset),\n favorite_animal,\n CAST(CAST(customer_id AS STRING) AS BYTES)\n ) AS encrypted_animal\nFROM\n aead.RawCustomerData;\n```\n\nThe following query uses the first_level_keyset to decrypt data in the`EncryptedCustomerData`table.\n\n```\nDECLARE kms_resource_name STRING;\nDECLARE first_level_keyset BYTES;\nSET kms_resource_name = 'gcp-kms://projects/my-project/locations/us/keyRings/my-key-ring/cryptoKeys/my-crypto-key';\nSET first_level_keyset = b'\\012\\044\\000\\107\\275\\360\\176\\264\\206\\332\\235\\215\\304...';\n\nSELECT\n customer_id,\n AEAD.DECRYPT_BYTES(\n KEYS.KEYSET_CHAIN(kms_resource_name, first_level_keyset),\n encrypted_animal,\n CAST(CAST(customer_id AS STRING) AS BYTES)\n ) AS favorite_animal\nFROM\n aead.EncryptedCustomerData;\n```\n\nThe previous two steps also work with the`DETERMINISTIC_ENCRYPT`and`DETERMINISTIC_DECRYPT_BYTES`functions. The wrapped keyset must be created\nusing the`DETERMINISTIC_AEAD_AES_SIV_CMAC_256`type.\n\nThe following statement creates a table`EncryptedCustomerData`containing a\ncolumn of unique IDs and a column of ciphertext. The statement encrypts the\nplaintext`favorite_animal`using the first_level_keyset provided. 
You can see\nthat the ciphertext for`favorite_animal`is the same for customers 2 and 3\nsince their plaintext`favorite_animal`is the same.\n\n```\nDECLARE kms_resource_name STRING;\nDECLARE first_level_keyset BYTES;\nSET kms_resource_name = 'gcp-kms://projects/my-project/locations/us/keyRings/my-key-ring/cryptoKeys/my-crypto-key';\nSET first_level_keyset = b'\\012\\044\\000\\107\\275\\360\\176\\264\\206\\332\\235\\215\\304...';\n\nCREATE TABLE daead.EncryptedCustomerData AS\nSELECT\n customer_id,\n DETERMINISTIC_ENCRYPT(\n KEYS.KEYSET_CHAIN(kms_resource_name, first_level_keyset),\n favorite_animal,\n CAST(CAST(customer_id AS STRING) AS BYTES)\n ) AS encrypted_animal\nFROM\n daead.RawCustomerData;\n```\n\nThe following query uses the first_level_keyset to decrypt data in the`EncryptedCustomerData`table.\n\n```\nDECLARE kms_resource_name STRING;\nDECLARE first_level_keyset BYTES;\nSET kms_resource_name = 'gcp-kms://projects/my-project/locations/us/keyRings/my-key-ring/cryptoKeys/my-crypto-key';\nSET first_level_keyset = b'\\012\\044\\000\\107\\275\\360\\176\\264\\206\\332\\235\\215\\304...';\n\nSELECT\n customer_id,\n DETERMINISTIC_DECRYPT_BYTES(\n KEYS.KEYSET_CHAIN(kms_resource_name, first_level_keyset),\n encrypted_animal,\n CAST(CAST(customer_id AS STRING) AS BYTES)\n ) AS favorite_animal\nFROM daead.EncryptedCustomerData;\n```\n\n\n"
},
{
"name": "KEYS.KEYSET_FROM_JSON",
"arguments": [],
"category": "AEAD_encryption",
"description_markdown": "```\nKEYS.KEYSET_FROM_JSON(json_keyset)\n```\n\n **Description** \n\nReturns the input`json_keyset``STRING`as\nserialized`BYTES`, which is a valid input for other`KEYS`and`AEAD`functions. The JSON`STRING`must\nbe compatible with the definition of the[google.crypto.tink.Keyset](https://github.com/google/tink/blob/master/proto/tink.proto)protocol buffer message: the JSON keyset should be a JSON object containing\nobjects and name-value pairs corresponding to those in the \"keyset\" message in\nthe google.crypto.tink.Keyset definition. You can convert the output serialized`BYTES`representation back to a JSON`STRING`using`KEYS.KEYSET_TO_JSON`.\n\n **Return Data Type** \n\n`BYTES`\n\n **Example** \n\n`KEYS.KEYSET_FROM_JSON`takes JSON-formatted`STRING`values like the following:\n\n```\n{\n \"key\":[\n {\n \"keyData\":{\n \"keyMaterialType\":\"SYMMETRIC\",\n \"typeUrl\":\"type.googleapis.com/google.crypto.tink.AesGcmKey\",\n \"value\":\"GiD80Z8kL6AP3iSNHhqseZGAIvq7TVQzClT7FQy8YwK3OQ==\"\n },\n \"keyId\":3101427138,\n \"outputPrefixType\":\"TINK\",\n \"status\":\"ENABLED\"\n }\n ],\n \"primaryKeyId\":3101427138\n}\n```\n\nThe following query creates a new keyset from a JSON-formatted`STRING``json_keyset`:\n\n```\nSELECT KEYS.KEYSET_FROM_JSON(json_keyset);\n```\n\nThis returns the`json_keyset`serialized as`BYTES`, like the following:\n\n```\n\\x08\\x9d\\x8e\\x85\\x82\\x09\\x12d\\x0aX\\x0a0\ntype.googleapis.com/google.crypto.tink.AesGcmKey\\x12\\\"\\x1a qX\\xe4IG\\x87\\x1f\\xde\n\\xe3)+e\\x98\\x0a\\x1c}\\xfe\\x88<\\x12\\xeb\\xc1t\\xb8\\x83\\x1a\\xcd\\xa8\\x97\\x84g\\x18\\x01\n\\x10\\x01\\x18\\x9d\\x8e\\x85\\x82\\x09 \\x01\n```\n\n\n"
},
{
"name": "KEYS.KEYSET_LENGTH",
"arguments": [],
"category": "AEAD_encryption",
"description_markdown": "```\nKEYS.KEYSET_LENGTH(keyset)\n```\n\n **Description** \n\nReturns the number of keys in the provided keyset.\n\n **Return Data Type** \n\n`INT64`\n\n **Example** \n\nThis example references a JSON-formatted STRING\ncalled`json_keyset`that contains two keys:\n\n```\n{\n \"primaryKeyId\":1354994251,\n \"key\":[\n {\n \"keyData\":{\n \"keyMaterialType\":\"SYMMETRIC\",\n \"typeUrl\":\"type.googleapis.com/google.crypto.tink.AesGcmKey\",\n \"value\":\"GiD9sxQRgFj4aYN78vaIlxInjZkG/uvyWSY9a8GN+ELV2Q==\"\n },\n \"keyId\":1354994251,\n \"outputPrefixType\":\"TINK\",\n \"status\":\"ENABLED\"\n }\n ],\n \"key\":[\n {\n \"keyData\":{\n \"keyMaterialType\":\"SYMMETRIC\",\n \"typeUrl\":\"type.googleapis.com/google.crypto.tink.AesGcmKey\",\n \"value\":\"PRn76sxQRgFj4aYN00vaIlxInjZkG/uvyWSY9a2bLRm\"\n },\n \"keyId\":852264701,\n \"outputPrefixType\":\"TINK\",\n \"status\":\"DISABLED\"\n }\n ]\n}\n```\n\nThe following query converts`json_keyset`to a keyset and then returns\nthe number of keys in the keyset:\n\n```\nSELECT KEYS.KEYSET_LENGTH(KEYS.KEYSET_FROM_JSON(json_keyset)) as key_count;\n\n/*-----------*\n | key_count |\n +-----------+\n | 2 |\n *-----------*/\n```\n\n\n"
},
{
"name": "KEYS.KEYSET_TO_JSON",
"arguments": [],
"category": "AEAD_encryption",
"description_markdown": "```\nKEYS.KEYSET_TO_JSON(keyset)\n```\n\n **Description** \n\nReturns a JSON`STRING`representation of the input`keyset`. The returned JSON`STRING`is compatible\nwith the definition of the[google.crypto.tink.Keyset](https://github.com/google/tink/blob/master/proto/tink.proto)protocol buffer message. You can convert the JSON`STRING`representation back to`BYTES`using`KEYS.KEYSET_FROM_JSON`.\n\n **Return Data Type** \n\n`STRING`\n\n **Example** \n\nThe following query returns a new`'AEAD_AES_GCM_256'`keyset as a\nJSON-formatted`STRING`.\n\n```\nSELECT KEYS.KEYSET_TO_JSON(KEYS.NEW_KEYSET('AEAD_AES_GCM_256'));\n```\n\nThe result is a`STRING`like the following.\n\n```\n{\n \"key\":[\n {\n \"keyData\":{\n \"keyMaterialType\":\"SYMMETRIC\",\n \"typeUrl\":\"type.googleapis.com/google.crypto.tink.AesGcmKey\",\n \"value\":\"GiD80Z8kL6AP3iSNHhqseZGAIvq7TVQzClT7FQy8YwK3OQ==\"\n },\n \"keyId\":3101427138,\n \"outputPrefixType\":\"TINK\",\n \"status\":\"ENABLED\"\n }\n ],\n \"primaryKeyId\":3101427138\n}\n```\n\n\n"
},
{
"name": "KEYS.NEW_KEYSET",
"arguments": [],
"category": "AEAD_encryption",
"description_markdown": "```\nKEYS.NEW_KEYSET(key_type)\n```\n\n **Description** \n\nReturns a serialized keyset containing a new key based on`key_type`. The\nreturned keyset is a serialized`BYTES`representation of[google.crypto.tink.Keyset](https://github.com/google/tink/blob/master/proto/tink.proto)that contains a primary cryptographic key and no additional keys. You can use\nthe keyset with the`AEAD.ENCRYPT`,`AEAD.DECRYPT_BYTES`, and`AEAD.DECRYPT_STRING`functions for encryption and decryption, as well as with\nthe`KEYS`group of key- and keyset-related functions.\n\n`key_type`is a`STRING`literal representation of the type of key to create.`key_type`cannot be`NULL`.`key_type`can be:\n\n- ` AEAD_AES_GCM_256`: Creates a 256-bit key with the pseudo-random number\ngenerator provided by[boringSSL](https://boringssl.googlesource.com/boringssl/). The key uses AES-GCM for\nencryption and decryption operations.\n- ` DETERMINISTIC_AEAD_AES_SIV_CMAC_256`:\nCreates a 512-bit` AES-SIV-CMAC`key, which contains a 256-bit` AES-CTR`key\nand 256-bit` AES-CMAC`key. The` AES-SIV-CMAC`key is created with the\npseudo-random number generator provided by[boringSSL](https://boringssl.googlesource.com/boringssl/). The key\nuses AES-SIV for encryption and decryption operations.\n\n **Return Data Type** \n\n`BYTES`\n\n **Example** \n\nThe following query creates a keyset for each row in`CustomerIds`, which can\nsubsequently be used to encrypt data. Each keyset contains a single encryption\nkey with randomly-generated key data. Each row in the output contains a`customer_id`and an`'AEAD_AES_GCM_256'`key in`BYTES`.\n\n```\nSELECT customer_id, KEYS.NEW_KEYSET('AEAD_AES_GCM_256') AS keyset\nFROM (\n SELECT 1 AS customer_id UNION ALL\n SELECT 2 UNION ALL\n SELECT 3\n) AS CustomerIds;\n```\n\n\n"
},
{
"name": "KEYS.NEW_WRAPPED_KEYSET",
"arguments": [],
"category": "AEAD_encryption",
"description_markdown": "```\nKEYS.NEW_WRAPPED_KEYSET(kms_resource_name, key_type)\n```\n\n **Description** \n\nCreates a new keyset and encrypts it with a[Cloud KMS key](/bigquery/docs/aead-encryption-concepts#cloud_kms_protection).\nReturns the[wrapped keyset](/bigquery/docs/aead-encryption-concepts#wrapped_keysets)as a`BYTES`representation of[google.crypto.tink.Keyset](https://github.com/google/tink/blob/master/proto/tink.proto)that contains a primary cryptographic key and no additional keys.\n\nThis function takes the following arguments:\n\n- ` kms_resource_name`: A` STRING`literal representation of the\nCloud KMS key.` kms_resource_name`cannot be` NULL`. The\nCloud KMS key must reside in the same Cloud region where this\nfunction is executed. A Cloud KMS key looks like this:\n \n \n ```\n gcp-kms://projects/my-project/locations/us/keyRings/my-key-ring/cryptoKeys/my-crypto-key\n ```\n \n \n- ` key_type`: A` STRING`literal representation of the keyset type.` key_type`cannot be` NULL`but can be one of the following values:\n \n \n - ` AEAD_AES_GCM_256`: Creates a 256-bit key with the pseudo-random number\ngenerator provided by[boringSSL](https://boringssl.googlesource.com/boringssl/). The key uses AES-GCM for\nencryption and decryption operations.\n \n \n - ` DETERMINISTIC_AEAD_AES_SIV_CMAC_256`:\nCreates a 512-bit` AES-SIV-CMAC`key, which contains a 256-bit` AES-CTR`key\nand 256-bit` AES-CMAC`key. The` AES-SIV-CMAC`key is created with the\npseudo-random number generator provided by[boringSSL](https://boringssl.googlesource.com/boringssl/). 
The key\nuses AES-SIV for encryption and decryption operations.\n \n \n\n **Return Data Type** \n\n`BYTES`\n\n **Example** \n\nPut the following variables above each example query that you run:\n\n```\nDECLARE kms_resource_name STRING;\nSET kms_resource_name = 'gcp-kms://projects/my-project/locations/us/keyRings/my-key-ring/cryptoKeys/my-crypto-key';\n```\n\nThe following query creates a wrapped keyset, which contains the ciphertext\nproduced by encrypting a[Tink](https://github.com/google/tink/blob/master/proto/tink.proto)keyset\nwith the specified Cloud KMS key. If you run the query multiple times,\nit generates multiple wrapped keysets, and each wrapped keyset is unique to\neach query that is run.\n\n```\nSELECT KEYS.NEW_WRAPPED_KEYSET(kms_resource_name, 'AEAD_AES_GCM_256');\n```\n\nMultiple calls to this function with the same arguments in one query\nreturn the same value. For example, the following query only creates one\nwrapped keyset and returns it for each row in a table called`my_table`.\n\n```\nSELECT\n *,\n KEYS.NEW_WRAPPED_KEYSET(kms_resource_name, 'AEAD_AES_GCM_256')\nFROM my_table\n```\n\n\n"
},
{
"name": "KEYS.REWRAP_KEYSET",
"arguments": [],
"category": "AEAD_encryption",
"description_markdown": "```\nKEYS.REWRAP_KEYSET(source_kms_resource_name, target_kms_resource_name, wrapped_keyset)\n```\n\n **Description** \n\nRe-encrypts a[wrapped keyset](/bigquery/docs/aead-encryption-concepts#wrapped_keysets)with a new[Cloud KMS key](/bigquery/docs/aead-encryption-concepts#cloud_kms_protection). Returns the wrapped keyset as a`BYTES`representation of[google.crypto.tink.Keyset](https://github.com/google/tink/blob/master/proto/tink.proto)that contains a primary cryptographic key and no additional keys.\n\nWhen this function is used, a wrapped keyset is decrypted by`source_kms_resource_name`and then re-encrypted by`target_kms_resource_name`.\nDuring this process, the decrypted keyset is never visible to customers.\n\nThis function takes the following arguments:\n\n- ` source_kms_resource_name`: A` STRING`literal representation of the\nCloud KMS key you want to replace. This key must reside in the same\nCloud region where this function is executed. A Cloud KMS key looks\nlike this:\n \n \n ```\n gcp-kms://projects/my-project/locations/us/keyRings/my-key-ring/cryptoKeys/my-crypto-key\n ```\n \n \n- ` target_kms_resource_name`: A` STRING`literal representation of the\nnew Cloud KMS key that you want to use.\n \n \n- ` wrapped_keyset`: A` BYTES`literal representation of the\nkeyset that you want to re-encrypt.\n \n \n\n **Return Data Type** \n\n`BYTES`\n\n **Example** \n\nPut the following variables above each example query that you run:\n\n```\nDECLARE source_kms_resource_name STRING;\nDECLARE target_kms_resource_name STRING;\nDECLARE wrapped_keyset BYTES;\nSET source_kms_resource_name = 'gcp-kms://projects/my-project/locations/us/keyRings/my-key-ring/cryptoKeys/my-crypto-key';\nSET target_kms_resource_name = 'gcp-kms://projects/my-project/locations/another-location/keyRings/my-key-ring/cryptoKeys/my-other-crypto-key';\nSET wrapped_keyset = b'\\012\\044\\000\\107\\275\\360\\176\\264\\206\\332\\235\\215\\304...';\n```\n\nThe following query rewraps 
a wrapped keyset. If you run the query multiple\ntimes, it generates multiple wrapped keysets, and each wrapped keyset is unique\nto the query that produced it.\n\n```\nSELECT KEYS.REWRAP_KEYSET(source_kms_resource_name, target_kms_resource_name, wrapped_keyset);\n```\n\nMultiple calls to this function with the same arguments in one query\nreturn the same value. For example, the following query creates only one\nwrapped keyset and returns it for each row in a table called `my_table`.\n\n```\nSELECT\n *,\n KEYS.REWRAP_KEYSET(source_kms_resource_name, target_kms_resource_name, wrapped_keyset)\nFROM my_table\n```\n\n\n"
},
{
"name": "KEYS.ROTATE_KEYSET",
"arguments": [],
"category": "AEAD_encryption",
"description_markdown": "```\nKEYS.ROTATE_KEYSET(keyset, key_type)\n```\n\n **Description** \n\nAdds a new key to `keyset` based on `key_type`. This new key becomes the primary\ncryptographic key of the new keyset. Returns the new keyset serialized as `BYTES`.\n\nThe old primary cryptographic key from the input `keyset` remains an additional\nkey in the returned keyset.\n\nThe new `key_type` must match the key type of existing keys in the `keyset`.\n\n **Return Data Type** \n\n`BYTES`\n\n **Example** \n\nThe following statement creates a table containing a column of unique `customer_id` values and `'AEAD_AES_GCM_256'` keysets. Then, it creates a new\nprimary cryptographic key within each keyset in the source table using `KEYS.ROTATE_KEYSET`. Each row in the output contains a `customer_id` and an `'AEAD_AES_GCM_256'` keyset in `BYTES`.\n\n```\nWITH ExistingKeysets AS (\n  SELECT 1 AS customer_id, KEYS.NEW_KEYSET('AEAD_AES_GCM_256') AS keyset\n  UNION ALL\n  SELECT 2, KEYS.NEW_KEYSET('AEAD_AES_GCM_256') UNION ALL\n  SELECT 3, KEYS.NEW_KEYSET('AEAD_AES_GCM_256')\n)\nSELECT customer_id, KEYS.ROTATE_KEYSET(keyset, 'AEAD_AES_GCM_256') AS keyset\nFROM ExistingKeysets;\n```\n\n\n"
},
{
"name": "KEYS.ROTATE_WRAPPED_KEYSET",
"arguments": [],
"category": "AEAD_encryption",
"description_markdown": "```\nKEYS.ROTATE_WRAPPED_KEYSET(kms_resource_name, wrapped_keyset, key_type)\n```\n\n **Description** \n\nTakes an existing[wrapped keyset](/bigquery/docs/aead-encryption-concepts#wrapped_keysets)and returns a rotated and\nrewrapped keyset. The returned wrapped keyset is a`BYTES`representation of[google.crypto.tink.Keyset](https://github.com/google/tink/blob/master/proto/tink.proto).\n\nWhen this function is used, the wrapped keyset is decrypted,\nthe new key is added, and then the keyset is re-encrypted. The primary\ncryptographic key from the input`wrapped_keyset`remains as an\nadditional key in the returned keyset. During this rotation process,\nthe decrypted keyset is never visible to customers.\n\nThis function takes the following arguments:\n\n- ` kms_resource_name`: A` STRING`literal representation of the[Cloud KMS key](/bigquery/docs/aead-encryption-concepts#cloud_kms_protection)that was used to wrap the\nwrapped keyset. The Cloud KMS key must reside in the same Cloud\nregion where this function is executed. A Cloud KMS key looks like\nthis:\n \n \n ```\n gcp-kms://projects/my-project/locations/us/keyRings/my-key-ring/cryptoKeys/my-crypto-key\n ```\n \n \n- ` wrapped_keyset`: A` BYTES`literal representation of the\nexisting keyset that you want to work with.\n \n \n- ` key_type`: A` STRING`literal representation of the keyset type. This must\nmatch the key type of existing keys in` wrapped_keyset`.\n \n \n\n **Return Data Type** \n\n`BYTES`\n\n **Example** \n\nPut the following variables above each example query that you run:\n\n```\nDECLARE kms_resource_name STRING;\nDECLARE wrapped_keyset BYTES;\nSET kms_resource_name = 'gcp-kms://projects/my-project/locations/us/keyRings/my-key-ring/cryptoKeys/my-crypto-key';\nSET wrapped_keyset = b'\\012\\044\\000\\107\\275\\360\\176\\264\\206\\332\\235\\215\\304...';\n```\n\nThe following query rotates a wrapped keyset. 
If you run the query multiple\ntimes, it generates multiple wrapped keysets, and each wrapped keyset is unique\nto the query that produced it.\n\n```\nSELECT KEYS.ROTATE_WRAPPED_KEYSET(kms_resource_name, wrapped_keyset, 'AEAD_AES_GCM_256');\n```\n\nMultiple calls to this function with the same arguments in one query\nreturn the same value. For example, the following query creates only one\nwrapped keyset and returns it for each row in a table called `my_table`.\n\n```\nSELECT\n *,\n KEYS.ROTATE_WRAPPED_KEYSET(kms_resource_name, wrapped_keyset, 'AEAD_AES_GCM_256')\nFROM my_table\n```\n\n\n<span id=\"aggregate_functions\">\n## Aggregate functions\n\n</span>\nGoogleSQL for BigQuery supports the following general aggregate functions.\nTo learn about the syntax for aggregate function calls, see [Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\n\n\n"
},
{
"name": "LAG",
"arguments": [],
"category": "Navigation",
"description_markdown": "```\nLAG (value_expression[, offset [, default_expression]])\nOVER over_clause\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n ORDER BY expression [ { ASC | DESC } ] [, ...]\n```\n\n **Description** \n\nReturns the value of the`value_expression`on a preceding row. Changing the`offset`value changes which preceding row is returned; the default value is`1`, indicating the previous row in the window frame. An error occurs if`offset`is NULL or a negative value.\n\nThe optional`default_expression`is used if there isn't a row in the window\nframe at the specified offset. This expression must be a constant expression and\nits type must be implicitly coercible to the type of`value_expression`. If left\nunspecified,`default_expression`defaults to NULL.\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Supported Argument Types** \n\n- ` value_expression`can be any data type that can be returned from an\nexpression.\n- ` offset`must be a non-negative integer literal or parameter.\n- ` default_expression`must be compatible with the value expression type.\n\n **Return Data Type** \n\nSame type as`value_expression`.\n\n **Examples** \n\nThe following example illustrates a basic use of the`LAG`function.\n\n```\nWITH finishers AS\n (SELECT 'Sophia Liu' as name,\n TIMESTAMP '2016-10-18 2:51:45' as finish_time,\n 'F30-34' as division\n UNION ALL SELECT 'Lisa Stelzner', TIMESTAMP '2016-10-18 2:54:11', 'F35-39'\n UNION ALL SELECT 'Nikki Leith', TIMESTAMP '2016-10-18 2:59:01', 'F30-34'\n UNION ALL SELECT 'Lauren Matthews', TIMESTAMP '2016-10-18 3:01:17', 'F35-39'\n UNION ALL SELECT 'Desiree Berry', TIMESTAMP '2016-10-18 3:05:42', 'F35-39'\n UNION ALL SELECT 'Suzy Slane', TIMESTAMP '2016-10-18 3:06:24', 'F35-39'\n UNION ALL SELECT 'Jen Edwards', 
TIMESTAMP '2016-10-18 3:06:36', 'F30-34'\n UNION ALL SELECT 'Meghan Lederer', TIMESTAMP '2016-10-18 3:07:41', 'F30-34'\n UNION ALL SELECT 'Carly Forte', TIMESTAMP '2016-10-18 3:08:58', 'F25-29'\n UNION ALL SELECT 'Lauren Reasoner', TIMESTAMP '2016-10-18 3:10:14', 'F30-34')\nSELECT name,\n finish_time,\n division,\n LAG(name)\n OVER (PARTITION BY division ORDER BY finish_time ASC) AS preceding_runner\nFROM finishers;\n\n/*-----------------+-------------+----------+------------------*\n | name | finish_time | division | preceding_runner |\n +-----------------+-------------+----------+------------------+\n | Carly Forte | 03:08:58 | F25-29 | NULL |\n | Sophia Liu | 02:51:45 | F30-34 | NULL |\n | Nikki Leith | 02:59:01 | F30-34 | Sophia Liu |\n | Jen Edwards | 03:06:36 | F30-34 | Nikki Leith |\n | Meghan Lederer | 03:07:41 | F30-34 | Jen Edwards |\n | Lauren Reasoner | 03:10:14 | F30-34 | Meghan Lederer |\n | Lisa Stelzner | 02:54:11 | F35-39 | NULL |\n | Lauren Matthews | 03:01:17 | F35-39 | Lisa Stelzner |\n | Desiree Berry | 03:05:42 | F35-39 | Lauren Matthews |\n | Suzy Slane | 03:06:24 | F35-39 | Desiree Berry |\n *-----------------+-------------+----------+------------------*/\n```\n\nThis next example uses the optional`offset`parameter.\n\n```\nWITH finishers AS\n (SELECT 'Sophia Liu' as name,\n TIMESTAMP '2016-10-18 2:51:45' as finish_time,\n 'F30-34' as division\n UNION ALL SELECT 'Lisa Stelzner', TIMESTAMP '2016-10-18 2:54:11', 'F35-39'\n UNION ALL SELECT 'Nikki Leith', TIMESTAMP '2016-10-18 2:59:01', 'F30-34'\n UNION ALL SELECT 'Lauren Matthews', TIMESTAMP '2016-10-18 3:01:17', 'F35-39'\n UNION ALL SELECT 'Desiree Berry', TIMESTAMP '2016-10-18 3:05:42', 'F35-39'\n UNION ALL SELECT 'Suzy Slane', TIMESTAMP '2016-10-18 3:06:24', 'F35-39'\n UNION ALL SELECT 'Jen Edwards', TIMESTAMP '2016-10-18 3:06:36', 'F30-34'\n UNION ALL SELECT 'Meghan Lederer', TIMESTAMP '2016-10-18 3:07:41', 'F30-34'\n UNION ALL SELECT 'Carly Forte', TIMESTAMP '2016-10-18 3:08:58', 
'F25-29'\n UNION ALL SELECT 'Lauren Reasoner', TIMESTAMP '2016-10-18 3:10:14', 'F30-34')\nSELECT name,\n finish_time,\n division,\n LAG(name, 2)\n OVER (PARTITION BY division ORDER BY finish_time ASC) AS two_runners_ahead\nFROM finishers;\n\n/*-----------------+-------------+----------+-------------------*\n | name | finish_time | division | two_runners_ahead |\n +-----------------+-------------+----------+-------------------+\n | Carly Forte | 03:08:58 | F25-29 | NULL |\n | Sophia Liu | 02:51:45 | F30-34 | NULL |\n | Nikki Leith | 02:59:01 | F30-34 | NULL |\n | Jen Edwards | 03:06:36 | F30-34 | Sophia Liu |\n | Meghan Lederer | 03:07:41 | F30-34 | Nikki Leith |\n | Lauren Reasoner | 03:10:14 | F30-34 | Jen Edwards |\n | Lisa Stelzner | 02:54:11 | F35-39 | NULL |\n | Lauren Matthews | 03:01:17 | F35-39 | NULL |\n | Desiree Berry | 03:05:42 | F35-39 | Lisa Stelzner |\n | Suzy Slane | 03:06:24 | F35-39 | Lauren Matthews |\n *-----------------+-------------+----------+-------------------*/\n```\n\nThe following example replaces NULL values with a default value.\n\n```\nWITH finishers AS\n (SELECT 'Sophia Liu' as name,\n TIMESTAMP '2016-10-18 2:51:45' as finish_time,\n 'F30-34' as division\n UNION ALL SELECT 'Lisa Stelzner', TIMESTAMP '2016-10-18 2:54:11', 'F35-39'\n UNION ALL SELECT 'Nikki Leith', TIMESTAMP '2016-10-18 2:59:01', 'F30-34'\n UNION ALL SELECT 'Lauren Matthews', TIMESTAMP '2016-10-18 3:01:17', 'F35-39'\n UNION ALL SELECT 'Desiree Berry', TIMESTAMP '2016-10-18 3:05:42', 'F35-39'\n UNION ALL SELECT 'Suzy Slane', TIMESTAMP '2016-10-18 3:06:24', 'F35-39'\n UNION ALL SELECT 'Jen Edwards', TIMESTAMP '2016-10-18 3:06:36', 'F30-34'\n UNION ALL SELECT 'Meghan Lederer', TIMESTAMP '2016-10-18 3:07:41', 'F30-34'\n UNION ALL SELECT 'Carly Forte', TIMESTAMP '2016-10-18 3:08:58', 'F25-29'\n UNION ALL SELECT 'Lauren Reasoner', TIMESTAMP '2016-10-18 3:10:14', 'F30-34')\nSELECT name,\n finish_time,\n division,\n LAG(name, 2, 'Nobody')\n OVER (PARTITION BY division ORDER BY 
finish_time ASC) AS two_runners_ahead\nFROM finishers;\n\n/*-----------------+-------------+----------+-------------------*\n | name | finish_time | division | two_runners_ahead |\n +-----------------+-------------+----------+-------------------+\n | Carly Forte | 03:08:58 | F25-29 | Nobody |\n | Sophia Liu | 02:51:45 | F30-34 | Nobody |\n | Nikki Leith | 02:59:01 | F30-34 | Nobody |\n | Jen Edwards | 03:06:36 | F30-34 | Sophia Liu |\n | Meghan Lederer | 03:07:41 | F30-34 | Nikki Leith |\n | Lauren Reasoner | 03:10:14 | F30-34 | Jen Edwards |\n | Lisa Stelzner | 02:54:11 | F35-39 | Nobody |\n | Lauren Matthews | 03:01:17 | F35-39 | Nobody |\n | Desiree Berry | 03:05:42 | F35-39 | Lisa Stelzner |\n | Suzy Slane | 03:06:24 | F35-39 | Lauren Matthews |\n *-----------------+-------------+----------+-------------------*/\n```\n\n\n"
},
{
"name": "LAST_DAY",
"arguments": [],
"category": "Date",
"description_markdown": "```\nLAST_DAY(date_expression[, date_part])\n```\n\n **Description** \n\nReturns the last day from a date expression. This is commonly used to return\nthe last day of the month.\n\nYou can optionally specify the date part for which the last day is returned.\nIf this parameter is not used, the default value is `MONTH`. `LAST_DAY` supports the following values for `date_part`:\n\n- `YEAR`\n- `QUARTER`\n- `MONTH`\n- `WEEK`. Equivalent to 7 `DAY`s.\n- `WEEK(<WEEKDAY>)`. `<WEEKDAY>` represents the starting day of the week.\nValid values are `SUNDAY`, `MONDAY`, `TUESDAY`, `WEDNESDAY`, `THURSDAY`, `FRIDAY`, and `SATURDAY`.\n- `ISOWEEK`. Uses [ISO 8601](https://en.wikipedia.org/wiki/ISO_week_date) week boundaries. ISO weeks begin\non Monday.\n- `ISOYEAR`. Uses the [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) week-numbering year boundary.\nThe ISO year boundary is the Monday of the first week whose Thursday belongs\nto the corresponding Gregorian calendar year.\n\n **Return Data Type** \n\n`DATE`\n\n **Example** \n\nThese both return the last day of the month:\n\n```\nSELECT LAST_DAY(DATE '2008-11-25', MONTH) AS last_day\n\n/*------------*\n | last_day |\n +------------+\n | 2008-11-30 |\n *------------*/\n```\n\n```\nSELECT LAST_DAY(DATE '2008-11-25') AS last_day\n\n/*------------*\n | last_day |\n +------------+\n | 2008-11-30 |\n *------------*/\n```\n\nThis returns the last day of the year:\n\n```\nSELECT LAST_DAY(DATE '2008-11-25', YEAR) AS last_day\n\n/*------------*\n | last_day |\n +------------+\n | 2008-12-31 |\n *------------*/\n```\n\nThis returns the last day of the week for a week that starts on a Sunday:\n\n```\nSELECT LAST_DAY(DATE '2008-11-10', WEEK(SUNDAY)) AS last_day\n\n/*------------*\n | last_day |\n +------------+\n | 2008-11-15 |\n *------------*/\n```\n\nThis returns the last day of the week for a week that starts on a Monday:\n\n```\nSELECT LAST_DAY(DATE '2008-11-10', WEEK(MONDAY)) AS last_day\n\n/*------------*\n | 
last_day |\n +------------+\n | 2008-11-16 |\n *------------*/\n```\n\n\n"
},
{
"name": "LAST_VALUE",
"arguments": [],
"category": "Navigation",
"description_markdown": "```\nLAST_VALUE (value_expression [{RESPECT | IGNORE} NULLS])\nOVER over_clause\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n ORDER BY expression [ { ASC | DESC } ] [, ...]\n [ window_frame_clause ]\n```\n\n **Description** \n\nReturns the value of the`value_expression`for the last row in the current\nwindow frame.\n\nThis function includes`NULL`values in the calculation unless`IGNORE NULLS`is\npresent. If`IGNORE NULLS`is present, the function excludes`NULL`values from\nthe calculation.\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Supported Argument Types** \n\n`value_expression`can be any data type that an expression can return.\n\n **Return Data Type** \n\nSame type as`value_expression`.\n\n **Examples** \n\nThe following example computes the slowest time for each division.\n\n```\nWITH finishers AS\n (SELECT 'Sophia Liu' as name,\n TIMESTAMP '2016-10-18 2:51:45' as finish_time,\n 'F30-34' as division\n UNION ALL SELECT 'Lisa Stelzner', TIMESTAMP '2016-10-18 2:54:11', 'F35-39'\n UNION ALL SELECT 'Nikki Leith', TIMESTAMP '2016-10-18 2:59:01', 'F30-34'\n UNION ALL SELECT 'Lauren Matthews', TIMESTAMP '2016-10-18 3:01:17', 'F35-39'\n UNION ALL SELECT 'Desiree Berry', TIMESTAMP '2016-10-18 3:05:42', 'F35-39'\n UNION ALL SELECT 'Suzy Slane', TIMESTAMP '2016-10-18 3:06:24', 'F35-39'\n UNION ALL SELECT 'Jen Edwards', TIMESTAMP '2016-10-18 3:06:36', 'F30-34'\n UNION ALL SELECT 'Meghan Lederer', TIMESTAMP '2016-10-18 3:07:41', 'F30-34'\n UNION ALL SELECT 'Carly Forte', TIMESTAMP '2016-10-18 3:08:58', 'F25-29'\n UNION ALL SELECT 'Lauren Reasoner', TIMESTAMP '2016-10-18 3:10:14', 'F30-34')\nSELECT name,\n FORMAT_TIMESTAMP('%X', finish_time) AS finish_time,\n division,\n FORMAT_TIMESTAMP('%X', slowest_time) AS slowest_time,\n 
TIMESTAMP_DIFF(slowest_time, finish_time, SECOND) AS delta_in_seconds\nFROM (\n SELECT name,\n finish_time,\n division,\n LAST_VALUE(finish_time)\n OVER (PARTITION BY division ORDER BY finish_time ASC\n ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS slowest_time\n FROM finishers);\n\n/*-----------------+-------------+----------+--------------+------------------*\n | name | finish_time | division | slowest_time | delta_in_seconds |\n +-----------------+-------------+----------+--------------+------------------+\n | Carly Forte | 03:08:58 | F25-29 | 03:08:58 | 0 |\n | Sophia Liu | 02:51:45 | F30-34 | 03:10:14 | 1109 |\n | Nikki Leith | 02:59:01 | F30-34 | 03:10:14 | 673 |\n | Jen Edwards | 03:06:36 | F30-34 | 03:10:14 | 218 |\n | Meghan Lederer | 03:07:41 | F30-34 | 03:10:14 | 153 |\n | Lauren Reasoner | 03:10:14 | F30-34 | 03:10:14 | 0 |\n | Lisa Stelzner | 02:54:11 | F35-39 | 03:06:24 | 733 |\n | Lauren Matthews | 03:01:17 | F35-39 | 03:06:24 | 307 |\n | Desiree Berry | 03:05:42 | F35-39 | 03:06:24 | 42 |\n | Suzy Slane | 03:06:24 | F35-39 | 03:06:24 | 0 |\n *-----------------+-------------+----------+--------------+------------------*/\n```\n\n\n"
},
{
"name": "LAX_BOOL",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nLAX_BOOL(json_expr)\n```\n\n **Description** \n\nAttempts to convert a JSON value to a SQL`BOOL`value.\n\nArguments:\n\n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON 'true'\n ```\n \n \n\nDetails:\n\n- If` json_expr`is SQL` NULL`, the function returns SQL` NULL`.\n- See the conversion rules in the next section for additional` NULL`handling.\n\n **Conversion rules** \n\n| From JSON type | To SQL`BOOL` |\n| --- | --- |\n| boolean | If the JSON boolean is`true`, returns`TRUE`.\n Otherwise, returns`FALSE`. |\n| string | If the JSON string is`'true'`, returns`TRUE`.\n If the JSON string is`'false'`, returns`FALSE`.\n If the JSON string is any other value or has whitespace in it,\n returns`NULL`.\n This conversion is case-insensitive. |\n| number | If the JSON number is a representation of`0`,\n returns`FALSE`. Otherwise, returns`TRUE`. |\n| other type or null | `NULL` |\n\n **Return type** \n\n`BOOL`\n\n **Examples** \n\nExample with input that is a JSON boolean:\n\n```\nSELECT LAX_BOOL(JSON 'true') AS result;\n\n/*--------*\n | result |\n +--------+\n | true |\n *--------*/\n```\n\nExamples with inputs that are JSON strings:\n\n```\nSELECT LAX_BOOL(JSON '\"true\"') AS result;\n\n/*--------*\n | result |\n +--------+\n | TRUE |\n *--------*/\n```\n\n```\nSELECT LAX_BOOL(JSON '\"true \"') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n```\n\n```\nSELECT LAX_BOOL(JSON '\"foo\"') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n```\n\nExamples with inputs that are JSON numbers:\n\n```\nSELECT LAX_BOOL(JSON '10') AS result;\n\n/*--------*\n | result |\n +--------+\n | TRUE |\n *--------*/\n```\n\n```\nSELECT LAX_BOOL(JSON '0') AS result;\n\n/*--------*\n | result |\n +--------+\n | FALSE |\n *--------*/\n```\n\n```\nSELECT LAX_BOOL(JSON '0.0') AS result;\n\n/*--------*\n | result |\n +--------+\n | FALSE |\n *--------*/\n```\n\n```\nSELECT LAX_BOOL(JSON '-1.1') AS 
result;\n\n/*--------*\n | result |\n +--------+\n | TRUE |\n *--------*/\n```\n\n\n"
},
{
"name": "LAX_FLOAT64",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nLAX_FLOAT64(json_expr)\n```\n\n **Description** \n\nAttempts to convert a JSON value to a\nSQL`FLOAT64`value.\n\nArguments:\n\n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '9.8'\n ```\n \n \n\nDetails:\n\n- If` json_expr`is SQL` NULL`, the function returns SQL` NULL`.\n- See the conversion rules in the next section for additional` NULL`handling.\n\n **Conversion rules** \n\n| From JSON type | To SQL`FLOAT64` |\n| --- | --- |\n| boolean | `NULL` |\n| string | If the JSON string represents a JSON number, parses it as\n a`BIGNUMERIC`value, and then safe casts the result as a`FLOAT64`value.\n If the JSON string can't be converted, returns`NULL`. |\n| number | Casts the JSON number as a`FLOAT64`value.\n Large JSON numbers are rounded. |\n| other type or null | `NULL` |\n\n **Return type** \n\n`FLOAT64`\n\n **Examples** \n\nExamples with inputs that are JSON numbers:\n\n```\nSELECT LAX_FLOAT64(JSON '9.8') AS result;\n\n/*--------*\n | result |\n +--------+\n | 9.8 |\n *--------*/\n```\n\n```\nSELECT LAX_FLOAT64(JSON '9') AS result;\n\n/*--------*\n | result |\n +--------+\n | 9.0 |\n *--------*/\n```\n\n```\nSELECT LAX_FLOAT64(JSON '9007199254740993') AS result;\n\n/*--------------------*\n | result |\n +--------------------+\n | 9007199254740992.0 |\n *--------------------*/\n```\n\n```\nSELECT LAX_FLOAT64(JSON '1e100') AS result;\n\n/*--------*\n | result |\n +--------+\n | 1e+100 |\n *--------*/\n```\n\nExamples with inputs that are JSON booleans:\n\n```\nSELECT LAX_FLOAT64(JSON 'true') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n```\n\n```\nSELECT LAX_FLOAT64(JSON 'false') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n```\n\nExamples with inputs that are JSON strings:\n\n```\nSELECT LAX_FLOAT64(JSON '\"10\"') AS result;\n\n/*--------*\n | result |\n +--------+\n | 10.0 |\n *--------*/\n```\n\n```\nSELECT LAX_FLOAT64(JSON '\"1.1\"') AS result;\n\n/*--------*\n | 
result |\n +--------+\n | 1.1 |\n *--------*/\n```\n\n```\nSELECT LAX_FLOAT64(JSON '\"1.1e2\"') AS result;\n\n/*--------*\n | result |\n +--------+\n | 110.0 |\n *--------*/\n```\n\n```\nSELECT LAX_FLOAT64(JSON '\"9007199254740993\"') AS result;\n\n/*--------------------*\n | result |\n +--------------------+\n | 9007199254740992.0 |\n *--------------------*/\n```\n\n```\nSELECT LAX_FLOAT64(JSON '\"+1.5\"') AS result;\n\n/*--------*\n | result |\n +--------+\n | 1.5 |\n *--------*/\n```\n\n```\nSELECT LAX_FLOAT64(JSON '\"NaN\"') AS result;\n\n/*--------*\n | result |\n +--------+\n | NaN |\n *--------*/\n```\n\n```\nSELECT LAX_FLOAT64(JSON '\"Inf\"') AS result;\n\n/*----------*\n | result |\n +----------+\n | Infinity |\n *----------*/\n```\n\n```\nSELECT LAX_FLOAT64(JSON '\"-InfiNiTY\"') AS result;\n\n/*-----------*\n | result |\n +-----------+\n | -Infinity |\n *-----------*/\n```\n\n```\nSELECT LAX_FLOAT64(JSON '\"foo\"') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n```\n\n\n"
},
{
"name": "LAX_INT64",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nLAX_INT64(json_expr)\n```\n\n **Description** \n\nAttempts to convert a JSON value to a SQL`INT64`value.\n\nArguments:\n\n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '999'\n ```\n \n \n\nDetails:\n\n- If` json_expr`is SQL` NULL`, the function returns SQL` NULL`.\n- See the conversion rules in the next section for additional` NULL`handling.\n\n **Conversion rules** \n\n| From JSON type | To SQL`INT64` |\n| --- | --- |\n| boolean | If the JSON boolean is`true`, returns`1`.\n If`false`, returns`0`. |\n| string | If the JSON string represents a JSON number, parses it as\n a`BIGNUMERIC`value, and then safe casts the results as an`INT64`value.\n If the JSON string can't be converted, returns`NULL`. |\n| number | Casts the JSON number as an`INT64`value.\n If the JSON number can't be converted, returns`NULL`. |\n| other type or null | `NULL` |\n\n **Return type** \n\n`INT64`\n\n **Examples** \n\nExamples with inputs that are JSON numbers:\n\n```\nSELECT LAX_INT64(JSON '10') AS result;\n\n/*--------*\n | result |\n +--------+\n | 10 |\n *--------*/\n```\n\n```\nSELECT LAX_INT64(JSON '10.0') AS result;\n\n/*--------*\n | result |\n +--------+\n | 10 |\n *--------*/\n```\n\n```\nSELECT LAX_INT64(JSON '1.1') AS result;\n\n/*--------*\n | result |\n +--------+\n | 1 |\n *--------*/\n```\n\n```\nSELECT LAX_INT64(JSON '3.5') AS result;\n\n/*--------*\n | result |\n +--------+\n | 4 |\n *--------*/\n```\n\n```\nSELECT LAX_INT64(JSON '1.1e2') AS result;\n\n/*--------*\n | result |\n +--------+\n | 110 |\n *--------*/\n```\n\n```\nSELECT LAX_INT64(JSON '1e100') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n```\n\nExamples with inputs that are JSON booleans:\n\n```\nSELECT LAX_INT64(JSON 'true') AS result;\n\n/*--------*\n | result |\n +--------+\n | 1 |\n *--------*/\n```\n\n```\nSELECT LAX_INT64(JSON 'false') AS result;\n\n/*--------*\n | result |\n +--------+\n | 0 |\n *--------*/\n```\n\nExamples with inputs 
that are JSON strings:\n\n```\nSELECT LAX_INT64(JSON '\"10\"') AS result;\n\n/*--------*\n | result |\n +--------+\n | 10 |\n *--------*/\n```\n\n```\nSELECT LAX_INT64(JSON '\"1.1\"') AS result;\n\n/*--------*\n | result |\n +--------+\n | 1 |\n *--------*/\n```\n\n```\nSELECT LAX_INT64(JSON '\"1.1e2\"') AS result;\n\n/*--------*\n | result |\n +--------+\n | 110 |\n *--------*/\n```\n\n```\nSELECT LAX_INT64(JSON '\"+1.5\"') AS result;\n\n/*--------*\n | result |\n +--------+\n | 2 |\n *--------*/\n```\n\n```\nSELECT LAX_INT64(JSON '\"1e100\"') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n```\n\n```\nSELECT LAX_INT64(JSON '\"foo\"') AS result;\n\n/*--------*\n | result |\n +--------+\n | NULL |\n *--------*/\n```\n\n\n"
},
{
"name": "LAX_STRING",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nLAX_STRING(json_expr)\n```\n\n **Description** \n\nAttempts to convert a JSON value to a SQL`STRING`value.\n\nArguments:\n\n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '\"name\"'\n ```\n \n \n\nDetails:\n\n- If` json_expr`is SQL` NULL`, the function returns SQL` NULL`.\n- See the conversion rules in the next section for additional` NULL`handling.\n\n **Conversion rules** \n\n| From JSON type | To SQL`STRING` |\n| --- | --- |\n| boolean | If the JSON boolean is`true`, returns`'true'`.\n If`false`, returns`'false'`. |\n| string | Returns the JSON string as a`STRING`value. |\n| number | Returns the JSON number as a`STRING`value. |\n| other type or null | `NULL` |\n\n **Return type** \n\n`STRING`\n\n **Examples** \n\nExamples with inputs that are JSON strings:\n\n```\nSELECT LAX_STRING(JSON '\"purple\"') AS result;\n\n/*--------*\n | result |\n +--------+\n | purple |\n *--------*/\n```\n\n```\nSELECT LAX_STRING(JSON '\"10\"') AS result;\n\n/*--------*\n | result |\n +--------+\n | 10 |\n *--------*/\n```\n\nExamples with inputs that are JSON booleans:\n\n```\nSELECT LAX_STRING(JSON 'true') AS result;\n\n/*--------*\n | result |\n +--------+\n | true |\n *--------*/\n```\n\n```\nSELECT LAX_STRING(JSON 'false') AS result;\n\n/*--------*\n | result |\n +--------+\n | false |\n *--------*/\n```\n\nExamples with inputs that are JSON numbers:\n\n```\nSELECT LAX_STRING(JSON '10.0') AS result;\n\n/*--------*\n | result |\n +--------+\n | 10 |\n *--------*/\n```\n\n```\nSELECT LAX_STRING(JSON '10') AS result;\n\n/*--------*\n | result |\n +--------+\n | 10 |\n *--------*/\n```\n\n```\nSELECT LAX_STRING(JSON '1e100') AS result;\n\n/*--------*\n | result |\n +--------+\n | 1e+100 |\n *--------*/\n```\n\n\n"
},
{
"name": "LEAD",
"arguments": [],
"category": "Navigation",
"description_markdown": "```\nLEAD (value_expression[, offset [, default_expression]])\nOVER over_clause\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n ORDER BY expression [ { ASC | DESC } ] [, ...]\n```\n\n **Description** \n\nReturns the value of the`value_expression`on a subsequent row. Changing the`offset`value changes which subsequent row is returned; the default value is`1`, indicating the next row in the window frame. An error occurs if`offset`is\nNULL or a negative value.\n\nThe optional`default_expression`is used if there isn't a row in the window\nframe at the specified offset. This expression must be a constant expression and\nits type must be implicitly coercible to the type of`value_expression`. If left\nunspecified,`default_expression`defaults to NULL.\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Supported Argument Types** \n\n- ` value_expression`can be any data type that can be returned from an\nexpression.\n- ` offset`must be a non-negative integer literal or parameter.\n- ` default_expression`must be compatible with the value expression type.\n\n **Return Data Type** \n\nSame type as`value_expression`.\n\n **Examples** \n\nThe following example illustrates a basic use of the`LEAD`function.\n\n```\nWITH finishers AS\n (SELECT 'Sophia Liu' as name,\n TIMESTAMP '2016-10-18 2:51:45' as finish_time,\n 'F30-34' as division\n UNION ALL SELECT 'Lisa Stelzner', TIMESTAMP '2016-10-18 2:54:11', 'F35-39'\n UNION ALL SELECT 'Nikki Leith', TIMESTAMP '2016-10-18 2:59:01', 'F30-34'\n UNION ALL SELECT 'Lauren Matthews', TIMESTAMP '2016-10-18 3:01:17', 'F35-39'\n UNION ALL SELECT 'Desiree Berry', TIMESTAMP '2016-10-18 3:05:42', 'F35-39'\n UNION ALL SELECT 'Suzy Slane', TIMESTAMP '2016-10-18 3:06:24', 'F35-39'\n UNION ALL SELECT 'Jen Edwards', 
TIMESTAMP '2016-10-18 3:06:36', 'F30-34'\n UNION ALL SELECT 'Meghan Lederer', TIMESTAMP '2016-10-18 3:07:41', 'F30-34'\n UNION ALL SELECT 'Carly Forte', TIMESTAMP '2016-10-18 3:08:58', 'F25-29'\n UNION ALL SELECT 'Lauren Reasoner', TIMESTAMP '2016-10-18 3:10:14', 'F30-34')\nSELECT name,\n finish_time,\n division,\n LEAD(name)\n OVER (PARTITION BY division ORDER BY finish_time ASC) AS followed_by\nFROM finishers;\n\n/*-----------------+-------------+----------+-----------------*\n | name | finish_time | division | followed_by |\n +-----------------+-------------+----------+-----------------+\n | Carly Forte | 03:08:58 | F25-29 | NULL |\n | Sophia Liu | 02:51:45 | F30-34 | Nikki Leith |\n | Nikki Leith | 02:59:01 | F30-34 | Jen Edwards |\n | Jen Edwards | 03:06:36 | F30-34 | Meghan Lederer |\n | Meghan Lederer | 03:07:41 | F30-34 | Lauren Reasoner |\n | Lauren Reasoner | 03:10:14 | F30-34 | NULL |\n | Lisa Stelzner | 02:54:11 | F35-39 | Lauren Matthews |\n | Lauren Matthews | 03:01:17 | F35-39 | Desiree Berry |\n | Desiree Berry | 03:05:42 | F35-39 | Suzy Slane |\n | Suzy Slane | 03:06:24 | F35-39 | NULL |\n *-----------------+-------------+----------+-----------------*/\n```\n\nThis next example uses the optional`offset`parameter.\n\n```\nWITH finishers AS\n (SELECT 'Sophia Liu' as name,\n TIMESTAMP '2016-10-18 2:51:45' as finish_time,\n 'F30-34' as division\n UNION ALL SELECT 'Lisa Stelzner', TIMESTAMP '2016-10-18 2:54:11', 'F35-39'\n UNION ALL SELECT 'Nikki Leith', TIMESTAMP '2016-10-18 2:59:01', 'F30-34'\n UNION ALL SELECT 'Lauren Matthews', TIMESTAMP '2016-10-18 3:01:17', 'F35-39'\n UNION ALL SELECT 'Desiree Berry', TIMESTAMP '2016-10-18 3:05:42', 'F35-39'\n UNION ALL SELECT 'Suzy Slane', TIMESTAMP '2016-10-18 3:06:24', 'F35-39'\n UNION ALL SELECT 'Jen Edwards', TIMESTAMP '2016-10-18 3:06:36', 'F30-34'\n UNION ALL SELECT 'Meghan Lederer', TIMESTAMP '2016-10-18 3:07:41', 'F30-34'\n UNION ALL SELECT 'Carly Forte', TIMESTAMP '2016-10-18 3:08:58', 'F25-29'\n UNION 
ALL SELECT 'Lauren Reasoner', TIMESTAMP '2016-10-18 3:10:14', 'F30-34')\nSELECT name,\n finish_time,\n division,\n LEAD(name, 2)\n OVER (PARTITION BY division ORDER BY finish_time ASC) AS two_runners_back\nFROM finishers;\n\n/*-----------------+-------------+----------+------------------*\n | name | finish_time | division | two_runners_back |\n +-----------------+-------------+----------+------------------+\n | Carly Forte | 03:08:58 | F25-29 | NULL |\n | Sophia Liu | 02:51:45 | F30-34 | Jen Edwards |\n | Nikki Leith | 02:59:01 | F30-34 | Meghan Lederer |\n | Jen Edwards | 03:06:36 | F30-34 | Lauren Reasoner |\n | Meghan Lederer | 03:07:41 | F30-34 | NULL |\n | Lauren Reasoner | 03:10:14 | F30-34 | NULL |\n | Lisa Stelzner | 02:54:11 | F35-39 | Desiree Berry |\n | Lauren Matthews | 03:01:17 | F35-39 | Suzy Slane |\n | Desiree Berry | 03:05:42 | F35-39 | NULL |\n | Suzy Slane | 03:06:24 | F35-39 | NULL |\n *-----------------+-------------+----------+------------------*/\n```\n\nThe following example replaces NULL values with a default value.\n\n```\nWITH finishers AS\n (SELECT 'Sophia Liu' as name,\n TIMESTAMP '2016-10-18 2:51:45' as finish_time,\n 'F30-34' as division\n UNION ALL SELECT 'Lisa Stelzner', TIMESTAMP '2016-10-18 2:54:11', 'F35-39'\n UNION ALL SELECT 'Nikki Leith', TIMESTAMP '2016-10-18 2:59:01', 'F30-34'\n UNION ALL SELECT 'Lauren Matthews', TIMESTAMP '2016-10-18 3:01:17', 'F35-39'\n UNION ALL SELECT 'Desiree Berry', TIMESTAMP '2016-10-18 3:05:42', 'F35-39'\n UNION ALL SELECT 'Suzy Slane', TIMESTAMP '2016-10-18 3:06:24', 'F35-39'\n UNION ALL SELECT 'Jen Edwards', TIMESTAMP '2016-10-18 3:06:36', 'F30-34'\n UNION ALL SELECT 'Meghan Lederer', TIMESTAMP '2016-10-18 3:07:41', 'F30-34'\n UNION ALL SELECT 'Carly Forte', TIMESTAMP '2016-10-18 3:08:58', 'F25-29'\n UNION ALL SELECT 'Lauren Reasoner', TIMESTAMP '2016-10-18 3:10:14', 'F30-34')\nSELECT name,\n finish_time,\n division,\n LEAD(name, 2, 'Nobody')\n OVER (PARTITION BY division ORDER BY finish_time ASC) 
AS two_runners_back\nFROM finishers;\n\n/*-----------------+-------------+----------+------------------*\n | name | finish_time | division | two_runners_back |\n +-----------------+-------------+----------+------------------+\n | Carly Forte | 03:08:58 | F25-29 | Nobody |\n | Sophia Liu | 02:51:45 | F30-34 | Jen Edwards |\n | Nikki Leith | 02:59:01 | F30-34 | Meghan Lederer |\n | Jen Edwards | 03:06:36 | F30-34 | Lauren Reasoner |\n | Meghan Lederer | 03:07:41 | F30-34 | Nobody |\n | Lauren Reasoner | 03:10:14 | F30-34 | Nobody |\n | Lisa Stelzner | 02:54:11 | F35-39 | Desiree Berry |\n | Lauren Matthews | 03:01:17 | F35-39 | Suzy Slane |\n | Desiree Berry | 03:05:42 | F35-39 | Nobody |\n | Suzy Slane | 03:06:24 | F35-39 | Nobody |\n *-----------------+-------------+----------+------------------*/\n```\n\n\n"
},
{
"name": "LEAST",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nLEAST(X1,...,XN)\n```\n\n **Description** \n\nReturns the least value among`X1,...,XN`. If any argument is`NULL`, returns`NULL`. Otherwise, in the case of floating-point arguments, if any argument is`NaN`, returns`NaN`. In all other cases, returns the value among`X1,...,XN`that has the least value according to the ordering used by the`ORDER BY`clause. The arguments`X1, ..., XN`must be coercible to a common supertype, and\nthe supertype must support ordering.\n\n| X1,...,XN | LEAST(X1,...,XN) |\n| --- | --- |\n| 3,5,1 | 1 |\n\nThis function supports specifying[collation](/bigquery/docs/reference/standard-sql/collation-concepts#collate_about).\n\n **Return Data Types** \n\nData type of the input values.\n\n\n\n"
},
{
"name": "LEFT",
"arguments": [],
"category": "String",
"description_markdown": "```\nLEFT(value, length)\n```\n\n **Description** \n\nReturns a`STRING`or`BYTES`value that consists of the specified\nnumber of leftmost characters or bytes from`value`. The`length`is an`INT64`that specifies the length of the returned\nvalue. If`value`is of type`BYTES`,`length`is the number of leftmost bytes\nto return. If`value`is`STRING`,`length`is the number of leftmost characters\nto return.\n\nIf`length`is 0, an empty`STRING`or`BYTES`value will be\nreturned. If`length`is negative, an error will be returned. If`length`exceeds the number of characters or bytes from`value`, the original`value`will be returned.\n\n **Return type** \n\n`STRING`or`BYTES`\n\n **Examples** \n\n```\nWITH examples AS\n(SELECT 'apple' as example\nUNION ALL\nSELECT 'banana' as example\nUNION ALL\nSELECT 'абвгд' as example\n)\nSELECT example, LEFT(example, 3) AS left_example\nFROM examples;\n\n/*---------+--------------*\n | example | left_example |\n +---------+--------------+\n | apple | app |\n | banana | ban |\n | абвгд | абв |\n *---------+--------------*/\n```\n\n```\nWITH examples AS\n(SELECT b'apple' as example\nUNION ALL\nSELECT b'banana' as example\nUNION ALL\nSELECT b'\\xab\\xcd\\xef\\xaa\\xbb' as example\n)\nSELECT example, LEFT(example, 3) AS left_example\nFROM examples;\n\n-- Note that the result of LEFT is of type BYTES, displayed as a base64-encoded string.\n/*----------+--------------*\n | example | left_example |\n +----------+--------------+\n | YXBwbGU= | YXBw |\n | YmFuYW5h | YmFu |\n | q83vqrs= | q83v |\n *----------+--------------*/\n```\n\n\n"
},
{
"name": "LENGTH",
"arguments": [],
"category": "String",
"description_markdown": "```\nLENGTH(value)\n```\n\n **Description** \n\nReturns the length of the`STRING`or`BYTES`value. The returned\nvalue is in characters for`STRING`arguments and in bytes for the`BYTES`argument.\n\n **Return type** \n\n`INT64`\n\n **Examples** \n\n```\nWITH example AS\n (SELECT 'абвгд' AS characters)\n\nSELECT\n characters,\n LENGTH(characters) AS string_example,\n LENGTH(CAST(characters AS BYTES)) AS bytes_example\nFROM example;\n\n/*------------+----------------+---------------*\n | characters | string_example | bytes_example |\n +------------+----------------+---------------+\n | абвгд | 5 | 10 |\n *------------+----------------+---------------*/\n```\n\n\n"
},
{
"name": "LN",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nLN(X)\n```\n\n **Description** \n\nComputes the natural logarithm of X. Generates an error if X is less than or\nequal to zero.\n\n| X | LN(X) |\n| --- | --- |\n| 1.0 | 0.0 |\n| `+inf` | `+inf` |\n| `X < 0` | Error |\n\n **Return Data Type** \n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| --- | --- | --- | --- | --- |\n| OUTPUT | `FLOAT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n\n\n\n"
},
{
"name": "LOG",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nLOG(X [, Y])\n```\n\n **Description** \n\nIf only X is present,`LOG`is a synonym of`LN`. If Y is also present,`LOG`computes the logarithm of X to base Y.\n\n| X | Y | LOG(X, Y) |\n| --- | --- | --- |\n| 100.0 | 10.0 | 2.0 |\n| `-inf` | Any value | `NaN` |\n| Any value | `+inf` | `NaN` |\n| `+inf` | 0.0 < Y < 1.0 | `-inf` |\n| `+inf` | Y > 1.0 | `+inf` |\n| X <= 0 | Any value | Error |\n| Any value | Y <= 0 | Error |\n| Any value | 1.0 | Error |\n\n **Return Data Type** \n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| --- | --- | --- | --- | --- |\n| `INT64` | `FLOAT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `NUMERIC` | `NUMERIC` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `BIGNUMERIC` | `BIGNUMERIC` | `BIGNUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `FLOAT64` | `FLOAT64` | `FLOAT64` | `FLOAT64` | `FLOAT64` |\n\n\n\n"
},
{
"name": "LOG10",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nLOG10(X)\n```\n\n **Description** \n\nSimilar to`LOG`, but computes logarithm to base 10.\n\n| X | LOG10(X) |\n| --- | --- |\n| 100.0 | 2.0 |\n| `-inf` | `NaN` |\n| `+inf` | `+inf` |\n| X <= 0 | Error |\n\n **Return Data Type** \n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| --- | --- | --- | --- | --- |\n| OUTPUT | `FLOAT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n\n\n\n"
},
{
"name": "LOGICAL_AND",
"arguments": [],
"category": "Aggregate",
"description_markdown": "```\nLOGICAL_AND(\n expression\n)\n```\n\n **Description** \n\nReturns the logical AND of all non-`NULL`expressions. Returns`NULL`if there\nare zero input rows or`expression`evaluates to`NULL`for all rows.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nThis function can be used with the[AGGREGATION_THRESHOLD clause](/bigquery/docs/reference/standard-sql/query-syntax#agg_threshold_clause).\n\n **Supported Argument Types** \n\n`BOOL`\n\n **Return Data Types** \n\n`BOOL`\n\n **Examples** \n\n`LOGICAL_AND`returns`FALSE`because not all of the values in the array are\nless than 3.\n\n```\nSELECT LOGICAL_AND(x < 3) AS logical_and FROM UNNEST([1, 2, 4]) AS x;\n\n/*-------------*\n | logical_and |\n +-------------+\n | FALSE |\n *-------------*/\n```\n\n\n"
},
{
"name": "LOGICAL_OR",
"arguments": [],
"category": "Aggregate",
"description_markdown": "```\nLOGICAL_OR(\n expression\n)\n```\n\n **Description** \n\nReturns the logical OR of all non-`NULL`expressions. Returns`NULL`if there\nare zero input rows or`expression`evaluates to`NULL`for all rows.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nThis function can be used with the[AGGREGATION_THRESHOLD clause](/bigquery/docs/reference/standard-sql/query-syntax#agg_threshold_clause).\n\n **Supported Argument Types** \n\n`BOOL`\n\n **Return Data Types** \n\n`BOOL`\n\n **Examples** \n\n`LOGICAL_OR`returns`TRUE`because at least one of the values in the array is\nless than 3.\n\n```\nSELECT LOGICAL_OR(x < 3) AS logical_or FROM UNNEST([1, 2, 4]) AS x;\n\n/*------------*\n | logical_or |\n +------------+\n | TRUE |\n *------------*/\n```\n\n\n"
},
{
"name": "LOWER",
"arguments": [],
"category": "String",
"description_markdown": "```\nLOWER(value)\n```\n\n **Description** \n\nFor`STRING`arguments, returns the original string with all alphabetic\ncharacters in lowercase. Mapping between lowercase and uppercase is done\naccording to the[Unicode Character Database](http://unicode.org/ucd/)without taking into account language-specific mappings.\n\nFor`BYTES`arguments, the argument is treated as ASCII text, with all bytes\ngreater than 127 left intact.\n\n **Return type** \n\n`STRING`or`BYTES`\n\n **Examples** \n\n```\nWITH items AS\n (SELECT\n 'FOO' as item\n UNION ALL\n SELECT\n 'BAR' as item\n UNION ALL\n SELECT\n 'BAZ' as item)\n\nSELECT\n LOWER(item) AS example\nFROM items;\n\n/*---------*\n | example |\n +---------+\n | foo |\n | bar |\n | baz |\n *---------*/\n```\n\n\n"
},
{
"name": "LPAD",
"arguments": [],
"category": "String",
"description_markdown": "```\nLPAD(original_value, return_length[, pattern])\n```\n\n **Description** \n\nReturns a`STRING`or`BYTES`value that consists of`original_value`prepended\nwith`pattern`. The`return_length`is an`INT64`that\nspecifies the length of the returned value. If`original_value`is of type`BYTES`,`return_length`is the number of bytes. If`original_value`is\nof type`STRING`,`return_length`is the number of characters.\n\nThe default value of`pattern`is a blank space.\n\nBoth`original_value`and`pattern`must be the same data type.\n\nIf`return_length`is less than or equal to the`original_value`length, this\nfunction returns the`original_value`value, truncated to the value of`return_length`. For example,`LPAD('hello world', 7);`returns`'hello w'`.\n\nIf`original_value`,`return_length`, or`pattern`is`NULL`, this function\nreturns`NULL`.\n\nThis function returns an error if:\n\n- ` return_length`is negative\n- ` pattern`is empty\n\n **Return type** \n\n`STRING`or`BYTES`\n\n **Examples** \n\n```\nSELECT t, len, FORMAT('%T', LPAD(t, len)) AS LPAD FROM UNNEST([\n STRUCT('abc' AS t, 5 AS len),\n ('abc', 2),\n ('例子', 4)\n]);\n\n/*------+-----+----------*\n | t | len | LPAD |\n |------|-----|----------|\n | abc | 5 | \" abc\" |\n | abc | 2 | \"ab\" |\n | 例子 | 4 | \" 例子\" |\n *------+-----+----------*/\n```\n\n```\nSELECT t, len, pattern, FORMAT('%T', LPAD(t, len, pattern)) AS LPAD FROM UNNEST([\n STRUCT('abc' AS t, 8 AS len, 'def' AS pattern),\n ('abc', 5, '-'),\n ('例子', 5, '中文')\n]);\n\n/*------+-----+---------+--------------*\n | t | len | pattern | LPAD |\n |------|-----|---------|--------------|\n | abc | 8 | def | \"defdeabc\" |\n | abc | 5 | - | \"--abc\" |\n | 例子 | 5 | 中文 | \"中文中例子\" |\n *------+-----+---------+--------------*/\n```\n\n```\nSELECT FORMAT('%T', t) AS t, len, FORMAT('%T', LPAD(t, len)) AS LPAD FROM UNNEST([\n STRUCT(b'abc' AS t, 5 AS len),\n (b'abc', 2),\n (b'\\xab\\xcd\\xef', 4)\n]);\n\n/*-----------------+-----+------------------*\n | t | 
len | LPAD |\n |-----------------|-----|------------------|\n | b\"abc\" | 5 | b\" abc\" |\n | b\"abc\" | 2 | b\"ab\" |\n | b\"\\xab\\xcd\\xef\" | 4 | b\" \\xab\\xcd\\xef\" |\n *-----------------+-----+------------------*/\n```\n\n```\nSELECT\n FORMAT('%T', t) AS t,\n len,\n FORMAT('%T', pattern) AS pattern,\n FORMAT('%T', LPAD(t, len, pattern)) AS LPAD\nFROM UNNEST([\n STRUCT(b'abc' AS t, 8 AS len, b'def' AS pattern),\n (b'abc', 5, b'-'),\n (b'\\xab\\xcd\\xef', 5, b'\\x00')\n]);\n\n/*-----------------+-----+---------+-------------------------*\n | t | len | pattern | LPAD |\n |-----------------|-----|---------|-------------------------|\n | b\"abc\" | 8 | b\"def\" | b\"defdeabc\" |\n | b\"abc\" | 5 | b\"-\" | b\"--abc\" |\n | b\"\\xab\\xcd\\xef\" | 5 | b\"\\x00\" | b\"\\x00\\x00\\xab\\xcd\\xef\" |\n *-----------------+-----+---------+-------------------------*/\n```\n\n\n"
},
{
"name": "LTRIM",
"arguments": [],
"category": "String",
"description_markdown": "```\nLTRIM(value1[, value2])\n```\n\n **Description** \n\nIdentical to[TRIM](#trim), but only removes leading characters.\n\n **Return type** \n\n`STRING`or`BYTES`\n\n **Examples** \n\n```\nWITH items AS\n (SELECT ' apple ' as item\n UNION ALL\n SELECT ' banana ' as item\n UNION ALL\n SELECT ' orange ' as item)\n\nSELECT\n CONCAT('#', LTRIM(item), '#') as example\nFROM items;\n\n/*-------------*\n | example |\n +-------------+\n | #apple # |\n | #banana # |\n | #orange # |\n *-------------*/\n```\n\n```\nWITH items AS\n (SELECT '***apple***' as item\n UNION ALL\n SELECT '***banana***' as item\n UNION ALL\n SELECT '***orange***' as item)\n\nSELECT\n LTRIM(item, '*') as example\nFROM items;\n\n/*-----------*\n | example |\n +-----------+\n | apple*** |\n | banana*** |\n | orange*** |\n *-----------*/\n```\n\n```\nWITH items AS\n (SELECT 'xxxapplexxx' as item\n UNION ALL\n SELECT 'yyybananayyy' as item\n UNION ALL\n SELECT 'zzzorangezzz' as item\n UNION ALL\n SELECT 'xyzpearxyz' as item)\n\nSELECT\n LTRIM(item, 'xyz') as example\nFROM items;\n\n/*-----------*\n | example |\n +-----------+\n | applexxx |\n | bananayyy |\n | orangezzz |\n | pearxyz |\n *-----------*/\n```\n\n\n"
},
{
"name": "MAKE_INTERVAL",
"arguments": [],
"category": "Interval",
"description_markdown": "```\nMAKE_INTERVAL([year][, month][, day][, hour][, minute][, second])\n```\n\n **Description** \n\nConstructs an[INTERVAL](/bigquery/docs/reference/standard-sql/data-types#interval_type)object using`INT64`values\nrepresenting the year, month, day, hour, minute, and second. All arguments are\noptional,`0`by default, and can be[named arguments](/bigquery/docs/reference/standard-sql/functions-reference#named_arguments).\n\n **Return Data Type** \n\n`INTERVAL`\n\n **Example** \n\n```\nSELECT\n MAKE_INTERVAL(1, 6, 15) AS i1,\n MAKE_INTERVAL(hour => 10, second => 20) AS i2,\n MAKE_INTERVAL(1, minute => 5, day => 2) AS i3\n\n/*--------------+---------------+-------------*\n | i1 | i2 | i3 |\n +--------------+---------------+-------------+\n | 1-6 15 0:0:0 | 0-0 0 10:0:20 | 1-0 2 0:5:0 |\n *--------------+---------------+-------------*/\n```\n\n\n<span id=\"json_functions\">\n## JSON functions\n\n</span>\nGoogleSQL for BigQuery supports the following functions, which can retrieve and\ntransform JSON data.\n\n\n\n"
},
{
"name": "MAX",
"arguments": [],
"category": "Aggregate",
"description_markdown": "```\nMAX(\n expression\n)\n[ OVER over_clause ]\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n [ window_frame_clause ]\n```\n\n **Description** \n\nReturns the maximum non-`NULL`value in an aggregated group.\n\nCaveats:\n\n- If the aggregated group is empty or the argument is` NULL`for all rows in\nthe group, returns` NULL`.\n- If the argument is` NaN`for any row in the group, returns` NaN`.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\nThis function supports specifying[collation](/bigquery/docs/reference/standard-sql/collation-concepts#collate_about).\n\n **Supported Argument Types** \n\nAny[orderable data type](/bigquery/docs/reference/standard-sql/data-types#data_type_properties)except for`ARRAY`.\n\n **Return Data Types** \n\nThe data type of the input values.\n\n **Examples** \n\n```\nSELECT MAX(x) AS max\nFROM UNNEST([8, 37, 55, 4]) AS x;\n\n/*-----*\n | max |\n +-----+\n | 55 |\n *-----*/\n```\n\n```\nSELECT x, MAX(x) OVER (PARTITION BY MOD(x, 2)) AS max\nFROM UNNEST([8, NULL, 37, 55, NULL, 4]) AS x;\n\n/*------+------*\n | x | max |\n +------+------+\n | NULL | NULL |\n | NULL | NULL |\n | 8 | 8 |\n | 4 | 8 |\n | 37 | 55 |\n | 55 | 55 |\n *------+------*/\n```\n\n\n"
},
{
"name": "MAX_BY",
"arguments": [],
"category": "Aggregate",
"description_markdown": "```\nMAX_BY(\n x, y\n)\n```\n\n **Description** \n\nSynonym for[ANY_VALUE(x HAVING MAX y)](#any_value).\n\n **Return Data Types** \n\nMatches the input`x`data type.\n\n **Examples** \n\n```\nWITH fruits AS (\n SELECT \"apple\" fruit, 3.55 price UNION ALL\n SELECT \"banana\" fruit, 2.10 price UNION ALL\n SELECT \"pear\" fruit, 4.30 price\n)\nSELECT MAX_BY(fruit, price) as fruit\nFROM fruits;\n\n/*-------*\n | fruit |\n +-------+\n | pear |\n *-------*/\n```\n\n\n"
},
{
"name": "MD5",
"arguments": [],
"category": "Hash",
"description_markdown": "```\nMD5(input)\n```\n\n **Description** \n\nComputes the hash of the input using the[MD5 algorithm](https://en.wikipedia.org/wiki/MD5). The input can either be`STRING`or`BYTES`. The string version treats the input as an array of bytes.\n\nThis function returns 16 bytes.\n\n **Warning:** MD5 is no longer considered secure.\nFor increased security use another hashing function. **Return type** \n\n`BYTES`\n\n **Example** \n\n```\nSELECT MD5(\"Hello World\") as md5;\n\n-- Note that the result of MD5 is of type BYTES, displayed as a base64-encoded string.\n/*--------------------------*\n | md5 |\n +--------------------------+\n | sQqNsWTgdUEFt6mb5y4/5Q== |\n *--------------------------*/\n```\n\n\n"
},
{
"name": "MIN",
"arguments": [],
"category": "Aggregate",
"description_markdown": "```\nMIN(\n expression\n)\n[ OVER over_clause ]\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n [ window_frame_clause ]\n```\n\n **Description** \n\nReturns the minimum non-`NULL`value in an aggregated group.\n\nCaveats:\n\n- If the aggregated group is empty or the argument is` NULL`for all rows in\nthe group, returns` NULL`.\n- If the argument is` NaN`for any row in the group, returns` NaN`.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\nThis function supports specifying[collation](/bigquery/docs/reference/standard-sql/collation-concepts#collate_about).\n\n **Supported Argument Types** \n\nAny[orderable data type](/bigquery/docs/reference/standard-sql/data-types#data_type_properties)except for`ARRAY`.\n\n **Return Data Types** \n\nThe data type of the input values.\n\n **Examples** \n\n```\nSELECT MIN(x) AS min\nFROM UNNEST([8, 37, 4, 55]) AS x;\n\n/*-----*\n | min |\n +-----+\n | 4 |\n *-----*/\n```\n\n```\nSELECT x, MIN(x) OVER (PARTITION BY MOD(x, 2)) AS min\nFROM UNNEST([8, NULL, 37, 4, NULL, 55]) AS x;\n\n/*------+------*\n | x | min |\n +------+------+\n | NULL | NULL |\n | NULL | NULL |\n | 8 | 4 |\n | 4 | 4 |\n | 37 | 37 |\n | 55 | 37 |\n *------+------*/\n```\n\n\n"
},
{
"name": "MIN_BY",
"arguments": [],
"category": "Aggregate",
"description_markdown": "```\nMIN_BY(\n x, y\n)\n```\n\n **Description** \n\nSynonym for[ANY_VALUE(x HAVING MIN y)](#any_value).\n\n **Return Data Types** \n\nMatches the input`x`data type.\n\n **Examples** \n\n```\nWITH fruits AS (\n SELECT \"apple\" fruit, 3.55 price UNION ALL\n SELECT \"banana\" fruit, 2.10 price UNION ALL\n SELECT \"pear\" fruit, 4.30 price\n)\nSELECT MIN_BY(fruit, price) as fruit\nFROM fruits;\n\n/*--------*\n | fruit |\n +--------+\n | banana |\n *--------*/\n```\n\n\n"
},
{
"name": "MOD",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nMOD(X, Y)\n```\n\n **Description** \n\nModulo function: returns the remainder of the division of X by Y. Returned\nvalue has the same sign as X. An error is generated if Y is 0.\n\n| X | Y | MOD(X, Y) |\n| --- | --- | --- |\n| 25 | 12 | 1 |\n| 25 | 0 | Error |\n\n **Return Data Type** \n\nThe return data type is determined by the argument types with the following\ntable.\n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` |\n| --- | --- | --- | --- |\n| `INT64` | `INT64` | `NUMERIC` | `BIGNUMERIC` |\n| `NUMERIC` | `NUMERIC` | `NUMERIC` | `BIGNUMERIC` |\n| `BIGNUMERIC` | `BIGNUMERIC` | `BIGNUMERIC` | `BIGNUMERIC` |\n\n\n\n"
},
{
"name": "NET.HOST",
"arguments": [],
"category": "Net",
"description_markdown": "```\nNET.HOST(url)\n```\n\n **Description** \n\nTakes a URL as a`STRING`value and returns the host. For best results, URL\nvalues should comply with the format as defined by[RFC 3986](https://tools.ietf.org/html/rfc3986#appendix-A). If the URL value does not comply\nwith RFC 3986 formatting, this function makes a best effort to parse the input\nand return a relevant result. If the function cannot parse the input, it\nreturns`NULL`.\n\n **Note:** The function does not perform any normalization. **Return Data Type** \n\n`STRING`\n\n **Example** \n\n```\nSELECT\n FORMAT(\"%T\", input) AS input,\n description,\n FORMAT(\"%T\", NET.HOST(input)) AS host,\n FORMAT(\"%T\", NET.PUBLIC_SUFFIX(input)) AS suffix,\n FORMAT(\"%T\", NET.REG_DOMAIN(input)) AS domain\nFROM (\n SELECT \"\" AS input, \"invalid input\" AS description\n UNION ALL SELECT \"http://abc.xyz\", \"standard URL\"\n UNION ALL SELECT \"//user:password@a.b:80/path?query\",\n \"standard URL with relative scheme, port, path and query, but no public suffix\"\n UNION ALL SELECT \"https://[::1]:80\", \"standard URL with IPv6 host\"\n UNION ALL SELECT \"http://例子.卷筒纸.中国\", \"standard URL with internationalized domain name\"\n UNION ALL SELECT \" www.Example.Co.UK \",\n \"non-standard URL with spaces, upper case letters, and without scheme\"\n UNION ALL SELECT \"mailto:?to=&subject=&body=\", \"URI rather than URL--unsupported\"\n);\n```\n\n| input | description | host | suffix | domain |\n| --- | --- | --- | --- | --- |\n| \"\" | invalid input | NULL | NULL | NULL |\n| \"http://abc.xyz\" | standard URL | \"abc.xyz\" | \"xyz\" | \"abc.xyz\" |\n| \"//user:password@a.b:80/path?query\" | standard URL with relative scheme, port, path and query, but no public suffix | \"a.b\" | NULL | NULL |\n| \"https://[::1]:80\" | standard URL with IPv6 host | \"[::1]\" | NULL | NULL |\n| \"http://例子.卷筒纸.中国\" | standard URL with internationalized domain name | \"例子.卷筒纸.中国\" | \"中国\" | \"卷筒纸.中国\" |\n| \" 
www.Example.Co.UK \" | non-standard URL with spaces, upper case letters, and without scheme | \"www.Example.Co.UK\" | \"Co.UK\" | \"Example.Co.UK\" |\n| \"mailto:?to=&subject=&body=\" | URI rather than URL--unsupported | \"mailto\" | NULL | NULL |\n\n\n\n"
},
{
"name": "NET.IPV4_FROM_INT64",
"arguments": [],
"category": "Net",
"description_markdown": "```\nNET.IPV4_FROM_INT64(integer_value)\n```\n\n **Description** \n\nConverts an IPv4 address from integer format to binary (BYTES) format in network\nbyte order. In the integer input, the least significant bit of the IP address is\nstored in the least significant bit of the integer, regardless of host or client\narchitecture. For example,`1`means`0.0.0.1`, and`0x1FF`means`0.0.1.255`.\n\nThis function checks that either all the most significant 32 bits are 0, or all\nthe most significant 33 bits are 1 (sign-extended from a 32-bit integer).\nIn other words, the input should be in the range`[-0x80000000, 0xFFFFFFFF]`;\notherwise, this function throws an error.\n\nThis function does not support IPv6.\n\n **Return Data Type** \n\nBYTES\n\n **Example** \n\n```\nSELECT x, x_hex, FORMAT(\"%T\", NET.IPV4_FROM_INT64(x)) AS ipv4_from_int64\nFROM (\n SELECT CAST(x_hex AS INT64) x, x_hex\n FROM UNNEST([\"0x0\", \"0xABCDEF\", \"0xFFFFFFFF\", \"-0x1\", \"-0x2\"]) AS x_hex\n);\n\n/*-----------------------------------------------*\n | x | x_hex | ipv4_from_int64 |\n +-----------------------------------------------+\n | 0 | 0x0 | b\"\\x00\\x00\\x00\\x00\" |\n | 11259375 | 0xABCDEF | b\"\\x00\\xab\\xcd\\xef\" |\n | 4294967295 | 0xFFFFFFFF | b\"\\xff\\xff\\xff\\xff\" |\n | -1 | -0x1 | b\"\\xff\\xff\\xff\\xff\" |\n | -2 | -0x2 | b\"\\xff\\xff\\xff\\xfe\" |\n *-----------------------------------------------*/\n```\n\n\n"
},
{
"name": "NET.IPV4_TO_INT64",
"arguments": [],
"category": "Net",
"description_markdown": "```\nNET.IPV4_TO_INT64(addr_bin)\n```\n\n **Description** \n\nConverts an IPv4 address from binary (BYTES) format in network byte order to\ninteger format. In the integer output, the least significant bit of the IP\naddress is stored in the least significant bit of the integer, regardless of\nhost or client architecture. For example,`1`means`0.0.0.1`, and`0x1FF`means`0.0.1.255`. The output is in the range`[0, 0xFFFFFFFF]`.\n\nIf the input length is not 4, this function throws an error.\n\nThis function does not support IPv6.\n\n **Return Data Type** \n\nINT64\n\n **Example** \n\n```\nSELECT\n FORMAT(\"%T\", x) AS addr_bin,\n FORMAT(\"0x%X\", NET.IPV4_TO_INT64(x)) AS ipv4_to_int64\nFROM\nUNNEST([b\"\\x00\\x00\\x00\\x00\", b\"\\x00\\xab\\xcd\\xef\", b\"\\xff\\xff\\xff\\xff\"]) AS x;\n\n/*-------------------------------------*\n | addr_bin | ipv4_to_int64 |\n +-------------------------------------+\n | b\"\\x00\\x00\\x00\\x00\" | 0x0 |\n | b\"\\x00\\xab\\xcd\\xef\" | 0xABCDEF |\n | b\"\\xff\\xff\\xff\\xff\" | 0xFFFFFFFF |\n *-------------------------------------*/\n```\n\n\n"
},
{
"name": "NET.IP_FROM_STRING",
"arguments": [],
"category": "Net",
"description_markdown": "```\nNET.IP_FROM_STRING(addr_str)\n```\n\n **Description** \n\nConverts an IPv4 or IPv6 address from text (STRING) format to binary (BYTES)\nformat in network byte order.\n\nThis function supports the following formats for`addr_str`:\n\n- IPv4: Dotted-quad format. For example,` 10.1.2.3`.\n- IPv6: Colon-separated format. For example,` 1234:5678:90ab:cdef:1234:5678:90ab:cdef`. For more examples, see the[IP Version 6 Addressing Architecture](http://www.ietf.org/rfc/rfc2373.txt).\n\nThis function does not support[CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing), such as`10.1.2.3/32`.\n\nIf this function receives a`NULL`input, it returns`NULL`. If the input is\nconsidered invalid, an`OUT_OF_RANGE`error occurs.\n\n **Return Data Type** \n\nBYTES\n\n **Example** \n\n```\nSELECT\n addr_str, FORMAT(\"%T\", NET.IP_FROM_STRING(addr_str)) AS ip_from_string\nFROM UNNEST([\n '48.49.50.51',\n '::1',\n '3031:3233:3435:3637:3839:4041:4243:4445',\n '::ffff:192.0.2.128'\n]) AS addr_str;\n\n/*---------------------------------------------------------------------------------------------------------------*\n | addr_str | ip_from_string |\n +---------------------------------------------------------------------------------------------------------------+\n | 48.49.50.51 | b\"0123\" |\n | ::1 | b\"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\" |\n | 3031:3233:3435:3637:3839:4041:4243:4445 | b\"0123456789@ABCDE\" |\n | ::ffff:192.0.2.128 | b\"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xff\\xff\\xc0\\x00\\x02\\x80\" |\n *---------------------------------------------------------------------------------------------------------------*/\n```\n\n\n"
},
{
"name": "NET.IP_NET_MASK",
"arguments": [],
"category": "Net",
"description_markdown": "```\nNET.IP_NET_MASK(num_output_bytes, prefix_length)\n```\n\n **Description** \n\nReturns a network mask: a byte sequence with length equal to`num_output_bytes`,\nwhere the first`prefix_length`bits are set to 1 and the other bits are set to\n0.`num_output_bytes`and`prefix_length`are INT64.\nThis function throws an error if`num_output_bytes`is not 4 (for IPv4) or 16\n(for IPv6). It also throws an error if`prefix_length`is negative or greater\nthan`8 * num_output_bytes`.\n\n **Return Data Type** \n\nBYTES\n\n **Example** \n\n```\nSELECT x, y, FORMAT(\"%T\", NET.IP_NET_MASK(x, y)) AS ip_net_mask\nFROM UNNEST([\n STRUCT(4 as x, 0 as y),\n (4, 20),\n (4, 32),\n (16, 0),\n (16, 1),\n (16, 128)\n]);\n\n/*--------------------------------------------------------------------------------*\n | x | y | ip_net_mask |\n +--------------------------------------------------------------------------------+\n | 4 | 0 | b\"\\x00\\x00\\x00\\x00\" |\n | 4 | 20 | b\"\\xff\\xff\\xf0\\x00\" |\n | 4 | 32 | b\"\\xff\\xff\\xff\\xff\" |\n | 16 | 0 | b\"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\" |\n | 16 | 1 | b\"\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\" |\n | 16 | 128 | b\"\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\" |\n *--------------------------------------------------------------------------------*/\n```\n\n\n"
},
{
"name": "NET.IP_TO_STRING",
"arguments": [],
"category": "Net",
"description_markdown": "```\nNET.IP_TO_STRING(addr_bin)\n```\n\n **Description** Converts an IPv4 or IPv6 address from binary (BYTES) format in network byte\norder to text (STRING) format.\n\nIf the input is 4 bytes, this function returns an IPv4 address as a STRING. If\nthe input is 16 bytes, it returns an IPv6 address as a STRING.\n\nIf this function receives a`NULL`input, it returns`NULL`. If the input has\na length different from 4 or 16, an`OUT_OF_RANGE`error occurs.\n\n **Return Data Type** \n\nSTRING\n\n **Example** \n\n```\nSELECT FORMAT(\"%T\", x) AS addr_bin, NET.IP_TO_STRING(x) AS ip_to_string\nFROM UNNEST([\n b\"0123\",\n b\"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\",\n b\"0123456789@ABCDE\",\n b\"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xff\\xff\\xc0\\x00\\x02\\x80\"\n]) AS x;\n\n/*---------------------------------------------------------------------------------------------------------------*\n | addr_bin | ip_to_string |\n +---------------------------------------------------------------------------------------------------------------+\n | b\"0123\" | 48.49.50.51 |\n | b\"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\" | ::1 |\n | b\"0123456789@ABCDE\" | 3031:3233:3435:3637:3839:4041:4243:4445 |\n | b\"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xff\\xff\\xc0\\x00\\x02\\x80\" | ::ffff:192.0.2.128 |\n *---------------------------------------------------------------------------------------------------------------*/\n```\n\n\n"
},
{
"name": "NET.IP_TRUNC",
"arguments": [],
"category": "Net",
"description_markdown": "```\nNET.IP_TRUNC(addr_bin, prefix_length)\n```\n\n **Description** Takes`addr_bin`, an IPv4 or IPv6 address in binary (BYTES) format in network\nbyte order, and returns a subnet address in the same format. The result has the\nsame length as`addr_bin`, where the first`prefix_length`bits are equal to\nthose in`addr_bin`and the remaining bits are 0.\n\nThis function throws an error if`LENGTH(addr_bin)`is not 4 or 16, or if`prefix_len`is negative or greater than`LENGTH(addr_bin) * 8`.\n\n **Return Data Type** \n\nBYTES\n\n **Example** \n\n```\nSELECT\n FORMAT(\"%T\", x) as addr_bin, prefix_length,\n FORMAT(\"%T\", NET.IP_TRUNC(x, prefix_length)) AS ip_trunc\nFROM UNNEST([\n STRUCT(b\"\\xAA\\xBB\\xCC\\xDD\" as x, 0 as prefix_length),\n (b\"\\xAA\\xBB\\xCC\\xDD\", 11), (b\"\\xAA\\xBB\\xCC\\xDD\", 12),\n (b\"\\xAA\\xBB\\xCC\\xDD\", 24), (b\"\\xAA\\xBB\\xCC\\xDD\", 32),\n (b'0123456789@ABCDE', 80)\n]);\n\n/*-----------------------------------------------------------------------------*\n | addr_bin | prefix_length | ip_trunc |\n +-----------------------------------------------------------------------------+\n | b\"\\xaa\\xbb\\xcc\\xdd\" | 0 | b\"\\x00\\x00\\x00\\x00\" |\n | b\"\\xaa\\xbb\\xcc\\xdd\" | 11 | b\"\\xaa\\xa0\\x00\\x00\" |\n | b\"\\xaa\\xbb\\xcc\\xdd\" | 12 | b\"\\xaa\\xb0\\x00\\x00\" |\n | b\"\\xaa\\xbb\\xcc\\xdd\" | 24 | b\"\\xaa\\xbb\\xcc\\x00\" |\n | b\"\\xaa\\xbb\\xcc\\xdd\" | 32 | b\"\\xaa\\xbb\\xcc\\xdd\" |\n | b\"0123456789@ABCDE\" | 80 | b\"0123456789\\x00\\x00\\x00\\x00\\x00\\x00\" |\n *-----------------------------------------------------------------------------*/\n```\n\n\n"
},
{
"name": "NET.PUBLIC_SUFFIX",
"arguments": [],
"category": "Net",
"description_markdown": "```\nNET.PUBLIC_SUFFIX(url)\n```\n\n **Description** \n\nTakes a URL as a`STRING`value and returns the public suffix (such as`com`,`org`, or`net`). A public suffix is an ICANN domain registered at[publicsuffix.org](https://publicsuffix.org/list/). For best results, URL values\nshould comply with the format as defined by[RFC 3986](https://tools.ietf.org/html/rfc3986#appendix-A). If the URL value does not comply\nwith RFC 3986 formatting, this function makes a best effort to parse the input\nand return a relevant result.\n\nThis function returns`NULL`if any of the following is true:\n\n- It cannot parse the host from the input;\n- The parsed host contains adjacent dots in the middle\n(not leading or trailing);\n- The parsed host does not contain any public suffix.\n\nBefore looking up the public suffix, this function temporarily normalizes the\nhost by converting uppercase English letters to lowercase and encoding all\nnon-ASCII characters with[Punycode](https://en.wikipedia.org/wiki/Punycode).\nThe function then returns the public suffix as part of the original host instead\nof the normalized host.\n\n **Note:** The function does not perform[Unicode normalization](https://en.wikipedia.org/wiki/Unicode_equivalence). **Note:** The public suffix data at[publicsuffix.org](https://publicsuffix.org/list/)also contains\nprivate domains. This function ignores the private domains. **Note:** The public suffix data may change over time. Consequently, input that\nproduces a`NULL`result now may produce a non-`NULL`value in the future. 
**Return Data Type** \n\n`STRING`\n\n **Example** \n\n```\nSELECT\n FORMAT(\"%T\", input) AS input,\n description,\n FORMAT(\"%T\", NET.HOST(input)) AS host,\n FORMAT(\"%T\", NET.PUBLIC_SUFFIX(input)) AS suffix,\n FORMAT(\"%T\", NET.REG_DOMAIN(input)) AS domain\nFROM (\n SELECT \"\" AS input, \"invalid input\" AS description\n UNION ALL SELECT \"http://abc.xyz\", \"standard URL\"\n UNION ALL SELECT \"//user:password@a.b:80/path?query\",\n \"standard URL with relative scheme, port, path and query, but no public suffix\"\n UNION ALL SELECT \"https://[::1]:80\", \"standard URL with IPv6 host\"\n UNION ALL SELECT \"http://例子.卷筒纸.中国\", \"standard URL with internationalized domain name\"\n UNION ALL SELECT \" www.Example.Co.UK \",\n \"non-standard URL with spaces, upper case letters, and without scheme\"\n UNION ALL SELECT \"mailto:?to=&subject=&body=\", \"URI rather than URL--unsupported\"\n);\n```\n\n| input | description | host | suffix | domain |\n| --- | --- | --- | --- | --- |\n| \"\" | invalid input | NULL | NULL | NULL |\n| \"http://abc.xyz\" | standard URL | \"abc.xyz\" | \"xyz\" | \"abc.xyz\" |\n| \"//user:password@a.b:80/path?query\" | standard URL with relative scheme, port, path and query, but no public suffix | \"a.b\" | NULL | NULL |\n| \"https://[::1]:80\" | standard URL with IPv6 host | \"[::1]\" | NULL | NULL |\n| \"http://例子.卷筒纸.中国\" | standard URL with internationalized domain name | \"例子.卷筒纸.中国\" | \"中国\" | \"卷筒纸.中国\" |\n| \" www.Example.Co.UK \" | non-standard URL with spaces, upper case letters, and without scheme | \"www.Example.Co.UK\" | \"Co.UK\" | \"Example.Co.UK |\n| \"mailto:?to=&subject=&body=\" | URI rather than URL--unsupported | \"mailto\" | NULL | NULL |\n\n\n\n"
},
{
"name": "NET.REG_DOMAIN",
"arguments": [],
"category": "Net",
"description_markdown": "```\nNET.REG_DOMAIN(url)\n```\n\n **Description** \n\nTakes a URL as a string and returns the registered or registrable domain (the[public suffix](#netpublic_suffix)plus one preceding label), as a\nstring. For best results, URL values should comply with the format as defined by[RFC 3986](https://tools.ietf.org/html/rfc3986#appendix-A). If the URL value does not comply\nwith RFC 3986 formatting, this function makes a best effort to parse the input\nand return a relevant result.\n\nThis function returns`NULL`if any of the following is true:\n\n- It cannot parse the host from the input;\n- The parsed host contains adjacent dots in the middle\n(not leading or trailing);\n- The parsed host does not contain any public suffix;\n- The parsed host contains only a public suffix without any preceding label.\n\nBefore looking up the public suffix, this function temporarily normalizes the\nhost by converting uppercase English letters to lowercase and encoding all\nnon-ASCII characters with[Punycode](https://en.wikipedia.org/wiki/Punycode). The function then\nreturns the registered or registerable domain as part of the original host\ninstead of the normalized host.\n\n **Note:** The function does not perform[Unicode normalization](https://en.wikipedia.org/wiki/Unicode_equivalence). **Note:** The public suffix data at[publicsuffix.org](https://publicsuffix.org/list/)also contains\nprivate domains. This function does not treat a private domain as a public\nsuffix. For example, if`us.com`is a private domain in the public suffix data,`NET.REG_DOMAIN(\"foo.us.com\")`returns`us.com`(the public suffix`com`plus\nthe preceding label`us`) rather than`foo.us.com`(the private domain`us.com`plus the preceding label`foo`). **Note:** The public suffix data may change over time.\nConsequently, input that produces a`NULL`result now may produce a non-`NULL`value in the future. 
**Return Data Type** \n\n`STRING`\n\n **Example** \n\n```\nSELECT\n FORMAT(\"%T\", input) AS input,\n description,\n FORMAT(\"%T\", NET.HOST(input)) AS host,\n FORMAT(\"%T\", NET.PUBLIC_SUFFIX(input)) AS suffix,\n FORMAT(\"%T\", NET.REG_DOMAIN(input)) AS domain\nFROM (\n SELECT \"\" AS input, \"invalid input\" AS description\n UNION ALL SELECT \"http://abc.xyz\", \"standard URL\"\n UNION ALL SELECT \"//user:password@a.b:80/path?query\",\n \"standard URL with relative scheme, port, path and query, but no public suffix\"\n UNION ALL SELECT \"https://[::1]:80\", \"standard URL with IPv6 host\"\n UNION ALL SELECT \"http://例子.卷筒纸.中国\", \"standard URL with internationalized domain name\"\n UNION ALL SELECT \" www.Example.Co.UK \",\n \"non-standard URL with spaces, upper case letters, and without scheme\"\n UNION ALL SELECT \"mailto:?to=&subject=&body=\", \"URI rather than URL--unsupported\"\n);\n```\n\n| input | description | host | suffix | domain |\n| --- | --- | --- | --- | --- |\n| \"\" | invalid input | NULL | NULL | NULL |\n| \"http://abc.xyz\" | standard URL | \"abc.xyz\" | \"xyz\" | \"abc.xyz\" |\n| \"//user:password@a.b:80/path?query\" | standard URL with relative scheme, port, path and query, but no public suffix | \"a.b\" | NULL | NULL |\n| \"https://[::1]:80\" | standard URL with IPv6 host | \"[::1]\" | NULL | NULL |\n| \"http://例子.卷筒纸.中国\" | standard URL with internationalized domain name | \"例子.卷筒纸.中国\" | \"中国\" | \"卷筒纸.中国\" |\n| \" www.Example.Co.UK \" | non-standard URL with spaces, upper case letters, and without scheme | \"www.Example.Co.UK\" | \"Co.UK\" | \"Example.Co.UK\" |\n| \"mailto:?to=&subject=&body=\" | URI rather than URL--unsupported | \"mailto\" | NULL | NULL |\n\n\n\n"
},
{
"name": "NET.SAFE_IP_FROM_STRING",
"arguments": [],
"category": "Net",
"description_markdown": "```\nNET.SAFE_IP_FROM_STRING(addr_str)\n```\n\n **Description** \n\nSimilar to[NET.IP_FROM_STRING](#netip_from_string), but returns`NULL`instead of throwing an error if the input is invalid.\n\n **Return Data Type** \n\nBYTES\n\n **Example** \n\n```\nSELECT\n addr_str,\n FORMAT(\"%T\", NET.SAFE_IP_FROM_STRING(addr_str)) AS safe_ip_from_string\nFROM UNNEST([\n '48.49.50.51',\n '::1',\n '3031:3233:3435:3637:3839:4041:4243:4445',\n '::ffff:192.0.2.128',\n '48.49.50.51/32',\n '48.49.50',\n '::wxyz'\n]) AS addr_str;\n\n/*---------------------------------------------------------------------------------------------------------------*\n | addr_str | safe_ip_from_string |\n +---------------------------------------------------------------------------------------------------------------+\n | 48.49.50.51 | b\"0123\" |\n | ::1 | b\"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\" |\n | 3031:3233:3435:3637:3839:4041:4243:4445 | b\"0123456789@ABCDE\" |\n | ::ffff:192.0.2.128 | b\"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xff\\xff\\xc0\\x00\\x02\\x80\" |\n | 48.49.50.51/32 | NULL |\n | 48.49.50 | NULL |\n | ::wxyz | NULL |\n *---------------------------------------------------------------------------------------------------------------*/\n```\n\n\n<span id=\"numbering_functions\">\n## Numbering functions\n\n</span>\nGoogleSQL for BigQuery supports numbering functions.\nNumbering functions are a subset of window functions. To create a\nwindow function call and learn about the syntax for window functions,\nsee[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\nNumbering functions assign integer values to each row based on their position\nwithin the specified window. The`OVER`clause syntax varies across\nnumbering functions.\n\n\n\n"
},
{
"name": "NORMALIZE",
"arguments": [],
"category": "String",
"description_markdown": "```\nNORMALIZE(value[, normalization_mode])\n```\n\n **Description** \n\nTakes a string value and returns it as a normalized string. If you do not\nprovide a normalization mode,`NFC`is used.\n\n[Normalization](https://en.wikipedia.org/wiki/Unicode_equivalence#Normalization)is used to ensure that\ntwo strings are equivalent. Normalization is often used in situations in which\ntwo strings render the same on the screen but have different Unicode code\npoints.\n\n`NORMALIZE`supports four optional normalization modes:\n\n| Value | Name | Description |\n| --- | --- | --- |\n| `NFC` | Normalization Form Canonical Composition | Decomposes and recomposes characters by canonical equivalence. |\n| `NFKC` | Normalization Form Compatibility Composition | Decomposes characters by compatibility, then recomposes them by canonical equivalence. |\n| `NFD` | Normalization Form Canonical Decomposition | Decomposes characters by canonical equivalence, and multiple combining characters are arranged in a specific order. |\n| `NFKD` | Normalization Form Compatibility Decomposition | Decomposes characters by compatibility, and multiple combining characters are arranged in a specific order. 
|\n\n **Return type** \n\n`STRING`\n\n **Examples** \n\n```\nSELECT a, b, a = b as normalized\nFROM (SELECT NORMALIZE('\\u00ea') as a, NORMALIZE('\\u0065\\u0302') as b);\n\n/*---+---+------------*\n | a | b | normalized |\n +---+---+------------+\n | ê | ê | true |\n *---+---+------------*/\n```\n\nThe following example normalizes different space characters.\n\n```\nWITH EquivalentNames AS (\n SELECT name\n FROM UNNEST([\n 'Jane\\u2004Doe',\n 'John\\u2004Smith',\n 'Jane\\u2005Doe',\n 'Jane\\u2006Doe',\n 'John Smith']) AS name\n)\nSELECT\n NORMALIZE(name, NFKC) AS normalized_name,\n COUNT(*) AS name_count\nFROM EquivalentNames\nGROUP BY 1;\n\n/*-----------------+------------*\n | normalized_name | name_count |\n +-----------------+------------+\n | John Smith | 2 |\n | Jane Doe | 3 |\n *-----------------+------------*/\n```\n\n\n"
},
{
"name": "NORMALIZE_AND_CASEFOLD",
"arguments": [],
"category": "String",
"description_markdown": "```\nNORMALIZE_AND_CASEFOLD(value[, normalization_mode])\n```\n\n **Description** \n\nTakes a string value and returns it as a normalized string. If you do not\nprovide a normalization mode,`NFC`is used.\n\n[Normalization](https://en.wikipedia.org/wiki/Unicode_equivalence#Normalization)is used to ensure that\ntwo strings are equivalent. Normalization is often used in situations in which\ntwo strings render the same on the screen but have different Unicode code\npoints.\n\n[Case folding](https://en.wikipedia.org/wiki/Letter_case#Case_folding)is used for the caseless\ncomparison of strings. If you need to compare strings and case should not be\nconsidered, use`NORMALIZE_AND_CASEFOLD`, otherwise use[NORMALIZE](#normalize).\n\n`NORMALIZE_AND_CASEFOLD`supports four optional normalization modes:\n\n| Value | Name | Description |\n| --- | --- | --- |\n| `NFC` | Normalization Form Canonical Composition | Decomposes and recomposes characters by canonical equivalence. |\n| `NFKC` | Normalization Form Compatibility Composition | Decomposes characters by compatibility, then recomposes them by canonical equivalence. |\n| `NFD` | Normalization Form Canonical Decomposition | Decomposes characters by canonical equivalence, and multiple combining characters are arranged in a specific order. |\n| `NFKD` | Normalization Form Compatibility Decomposition | Decomposes characters by compatibility, and multiple combining characters are arranged in a specific order. 
|\n\n **Return type** \n\n`STRING`\n\n **Examples** \n\n```\nSELECT\n a, b,\n NORMALIZE(a) = NORMALIZE(b) as normalized,\n NORMALIZE_AND_CASEFOLD(a) = NORMALIZE_AND_CASEFOLD(b) as normalized_with_case_folding\nFROM (SELECT 'The red barn' AS a, 'The Red Barn' AS b);\n\n/*--------------+--------------+------------+------------------------------*\n | a | b | normalized | normalized_with_case_folding |\n +--------------+--------------+------------+------------------------------+\n | The red barn | The Red Barn | false | true |\n *--------------+--------------+------------+------------------------------*/\n```\n\n```\nWITH Strings AS (\n SELECT '\\u2168' AS a, 'IX' AS b UNION ALL\n SELECT '\\u0041\\u030A', '\\u00C5'\n)\nSELECT a, b,\n NORMALIZE_AND_CASEFOLD(a, NFD)=NORMALIZE_AND_CASEFOLD(b, NFD) AS nfd,\n NORMALIZE_AND_CASEFOLD(a, NFC)=NORMALIZE_AND_CASEFOLD(b, NFC) AS nfc,\n NORMALIZE_AND_CASEFOLD(a, NFKD)=NORMALIZE_AND_CASEFOLD(b, NFKD) AS nkfd,\n NORMALIZE_AND_CASEFOLD(a, NFKC)=NORMALIZE_AND_CASEFOLD(b, NFKC) AS nkfc\nFROM Strings;\n\n/*---+----+-------+-------+------+------*\n | a | b | nfd | nfc | nkfd | nkfc |\n +---+----+-------+-------+------+------+\n | Ⅸ | IX | false | false | true | true |\n | Å | Å | true | true | true | true |\n *---+----+-------+-------+------+------*/\n```\n\n\n"
},
{
"name": "NTH_VALUE",
"arguments": [],
"category": "Navigation",
"description_markdown": "```\nNTH_VALUE (value_expression, constant_integer_expression [{RESPECT | IGNORE} NULLS])\nOVER over_clause\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n ORDER BY expression [ { ASC | DESC } ] [, ...]\n [ window_frame_clause ]\n```\n\n **Description** \n\nReturns the value of`value_expression`at the Nth row of the current window\nframe, where Nth is defined by`constant_integer_expression`. Returns NULL if\nthere is no such row.\n\nThis function includes`NULL`values in the calculation unless`IGNORE NULLS`is\npresent. If`IGNORE NULLS`is present, the function excludes`NULL`values from\nthe calculation.\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Supported Argument Types** \n\n- ` value_expression`can be any data type that can be returned from an\nexpression.\n- ` constant_integer_expression`can be any constant expression that returns an\ninteger.\n\n **Return Data Type** \n\nSame type as`value_expression`.\n\n **Examples** \n\n```\nWITH finishers AS\n (SELECT 'Sophia Liu' as name,\n TIMESTAMP '2016-10-18 2:51:45' as finish_time,\n 'F30-34' as division\n UNION ALL SELECT 'Lisa Stelzner', TIMESTAMP '2016-10-18 2:54:11', 'F35-39'\n UNION ALL SELECT 'Nikki Leith', TIMESTAMP '2016-10-18 2:59:01', 'F30-34'\n UNION ALL SELECT 'Lauren Matthews', TIMESTAMP '2016-10-18 3:01:17', 'F35-39'\n UNION ALL SELECT 'Desiree Berry', TIMESTAMP '2016-10-18 3:05:42', 'F35-39'\n UNION ALL SELECT 'Suzy Slane', TIMESTAMP '2016-10-18 3:06:24', 'F35-39'\n UNION ALL SELECT 'Jen Edwards', TIMESTAMP '2016-10-18 3:06:36', 'F30-34'\n UNION ALL SELECT 'Meghan Lederer', TIMESTAMP '2016-10-18 3:07:41', 'F30-34'\n UNION ALL SELECT 'Carly Forte', TIMESTAMP '2016-10-18 3:08:58', 'F25-29'\n UNION ALL SELECT 'Lauren Reasoner', TIMESTAMP '2016-10-18 3:10:14', 
'F30-34')\nSELECT name,\n FORMAT_TIMESTAMP('%X', finish_time) AS finish_time,\n division,\n FORMAT_TIMESTAMP('%X', fastest_time) AS fastest_time,\n FORMAT_TIMESTAMP('%X', second_fastest) AS second_fastest\nFROM (\n SELECT name,\n finish_time,\n division,finishers,\n FIRST_VALUE(finish_time)\n OVER w1 AS fastest_time,\n NTH_VALUE(finish_time, 2)\n OVER w1 as second_fastest\n FROM finishers\n WINDOW w1 AS (\n PARTITION BY division ORDER BY finish_time ASC\n ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING));\n\n/*-----------------+-------------+----------+--------------+----------------*\n | name | finish_time | division | fastest_time | second_fastest |\n +-----------------+-------------+----------+--------------+----------------+\n | Carly Forte | 03:08:58 | F25-29 | 03:08:58 | NULL |\n | Sophia Liu | 02:51:45 | F30-34 | 02:51:45 | 02:59:01 |\n | Nikki Leith | 02:59:01 | F30-34 | 02:51:45 | 02:59:01 |\n | Jen Edwards | 03:06:36 | F30-34 | 02:51:45 | 02:59:01 |\n | Meghan Lederer | 03:07:41 | F30-34 | 02:51:45 | 02:59:01 |\n | Lauren Reasoner | 03:10:14 | F30-34 | 02:51:45 | 02:59:01 |\n | Lisa Stelzner | 02:54:11 | F35-39 | 02:54:11 | 03:01:17 |\n | Lauren Matthews | 03:01:17 | F35-39 | 02:54:11 | 03:01:17 |\n | Desiree Berry | 03:05:42 | F35-39 | 02:54:11 | 03:01:17 |\n | Suzy Slane | 03:06:24 | F35-39 | 02:54:11 | 03:01:17 |\n *-----------------+-------------+----------+--------------+----------------*/\n```\n\n\n"
},
{
"name": "NTILE",
"arguments": [],
"category": "Numbering",
"description_markdown": "```\nNTILE(constant_integer_expression)\nOVER over_clause\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n ORDER BY expression [ { ASC | DESC } ] [, ...]\n```\n\n **Description** \n\nThis function divides the rows into`constant_integer_expression`buckets based on row ordering and returns the 1-based bucket number that is\nassigned to each row. The number of rows in the buckets can differ by at most 1.\nThe remainder values (the remainder of number of rows divided by buckets) are\ndistributed one for each bucket, starting with bucket 1. If`constant_integer_expression`evaluates to NULL, 0 or negative, an\nerror is provided.\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Return Type** \n\n`INT64`\n\n **Example** \n\n```\nWITH finishers AS\n (SELECT 'Sophia Liu' as name,\n TIMESTAMP '2016-10-18 2:51:45' as finish_time,\n 'F30-34' as division\n UNION ALL SELECT 'Lisa Stelzner', TIMESTAMP '2016-10-18 2:54:11', 'F35-39'\n UNION ALL SELECT 'Nikki Leith', TIMESTAMP '2016-10-18 2:59:01', 'F30-34'\n UNION ALL SELECT 'Lauren Matthews', TIMESTAMP '2016-10-18 3:01:17', 'F35-39'\n UNION ALL SELECT 'Desiree Berry', TIMESTAMP '2016-10-18 3:05:42', 'F35-39'\n UNION ALL SELECT 'Suzy Slane', TIMESTAMP '2016-10-18 3:06:24', 'F35-39'\n UNION ALL SELECT 'Jen Edwards', TIMESTAMP '2016-10-18 3:06:36', 'F30-34'\n UNION ALL SELECT 'Meghan Lederer', TIMESTAMP '2016-10-18 2:59:01', 'F30-34')\nSELECT name,\n finish_time,\n division,\n NTILE(3) OVER (PARTITION BY division ORDER BY finish_time ASC) AS finish_rank\nFROM finishers;\n\n/*-----------------+------------------------+----------+-------------*\n | name | finish_time | division | finish_rank |\n +-----------------+------------------------+----------+-------------+\n | Sophia Liu | 2016-10-18 
09:51:45+00 | F30-34 | 1 |\n | Meghan Lederer | 2016-10-18 09:59:01+00 | F30-34 | 1 |\n | Nikki Leith | 2016-10-18 09:59:01+00 | F30-34 | 2 |\n | Jen Edwards | 2016-10-18 10:06:36+00 | F30-34 | 3 |\n | Lisa Stelzner | 2016-10-18 09:54:11+00 | F35-39 | 1 |\n | Lauren Matthews | 2016-10-18 10:01:17+00 | F35-39 | 1 |\n | Desiree Berry | 2016-10-18 10:05:42+00 | F35-39 | 2 |\n | Suzy Slane | 2016-10-18 10:06:24+00 | F35-39 | 3 |\n *-----------------+------------------------+----------+-------------*/\n```\n\n\n"
},
{
"name": "OCTET_LENGTH",
"arguments": [],
"category": "String",
"description_markdown": "```\nOCTET_LENGTH(value)\n```\n\nAlias for[BYTE_LENGTH](#byte_length).\n\n\n\n"
},
{
"name": "PARSE_BIGNUMERIC",
"arguments": [],
"category": "Conversion",
"description_markdown": "```\nPARSE_BIGNUMERIC(string_expression)\n```\n\n **Description** \n\nConverts a`STRING`to a`BIGNUMERIC`value.\n\nThe numeric literal contained in the string must not exceed the[maximum precision or range](/bigquery/docs/reference/standard-sql/data-types#decimal_types)of the`BIGNUMERIC`type, or an\nerror occurs. If the number of digits after the decimal point exceeds 38, then\nthe resulting`BIGNUMERIC`value rounds[half away from zero](https://en.wikipedia.org/wiki/Rounding#Round_half_away_from_zero)to have 38 digits after the\ndecimal point.\n\n```\n-- This example shows how a string with a decimal point is parsed.\nSELECT PARSE_BIGNUMERIC(\"123.45\") AS parsed;\n\n/*--------*\n | parsed |\n +--------+\n | 123.45 |\n *--------*/\n\n-- This example shows how a string with an exponent is parsed.\nSELECT PARSE_BIGNUMERIC(\"123.456E37\") AS parsed;\n\n/*-----------------------------------------*\n | parsed |\n +-----------------------------------------+\n | 123400000000000000000000000000000000000 |\n *-----------------------------------------*/\n\n-- This example shows the rounding when digits after the decimal point exceeds 38.\nSELECT PARSE_BIGNUMERIC(\"1.123456789012345678901234567890123456789\") as parsed;\n\n/*------------------------------------------*\n | parsed |\n +------------------------------------------+\n | 1.12345678901234567890123456789012345679 |\n *------------------------------------------*/\n```\n\nThis function is similar to using the[CAST AS BIGNUMERIC](#cast_bignumeric)function except that the`PARSE_BIGNUMERIC`function only accepts string inputs\nand allows the following in the string:\n\n- Spaces between the sign (+/-) and the number\n- Signs (+/-) after the number\n\nRules for valid input strings:\n\n| Rule | Example Input | Output |\n| --- | --- | --- |\n| The string can only contain digits, commas, decimal points and signs. | \"- 12,34567,89.0\" | -123456789 |\n| Whitespaces are allowed anywhere except between digits. 
| \" - 12.345 \" | -12.345 |\n| Only digits and commas are allowed before the decimal point. | \" 12,345,678\" | 12345678 |\n| Only digits are allowed after the decimal point. | \"1.234 \" | 1.234 |\n| Use`E`or`e`for exponents. After the`e`, digits and a leading sign indicator are allowed. | \" 123.45e-1\" | 12.345 |\n| If the integer part is not empty, then it must contain at least one\n digit. | \" 0,.12 -\" | -0.12 |\n| If the string contains a decimal point, then it must contain at least\n one digit. | \" .1\" | 0.1 |\n| The string cannot contain more than one sign. | \" 0.5 +\" | 0.5 |\n\n **Return Data Type** \n\n`BIGNUMERIC`\n\n **Examples** \n\nThis example shows an input with spaces before, after, and between the\nsign and the number:\n\n```\nSELECT PARSE_BIGNUMERIC(\" - 12.34 \") as parsed;\n\n/*--------*\n | parsed |\n +--------+\n | -12.34 |\n *--------*/\n```\n\nThis example shows an input with an exponent as well as the sign after the\nnumber:\n\n```\nSELECT PARSE_BIGNUMERIC(\"12.34e-1-\") as parsed;\n\n/*--------*\n | parsed |\n +--------+\n | -1.234 |\n *--------*/\n```\n\nThis example shows an input with multiple commas in the integer part of the\nnumber:\n\n```\nSELECT PARSE_BIGNUMERIC(\" 1,2,,3,.45 + \") as parsed;\n\n/*--------*\n | parsed |\n +--------+\n | 123.45 |\n *--------*/\n```\n\nThis example shows an input with a decimal point and no digits in the whole\nnumber part:\n\n```\nSELECT PARSE_BIGNUMERIC(\".1234 \") as parsed;\n\n/*--------*\n | parsed |\n +--------+\n | 0.1234 |\n *--------*/\n```\n\n **Examples of invalid inputs** \n\nThis example is invalid because the whole number part contains no digits:\n\n```\nSELECT PARSE_BIGNUMERIC(\",,,.1234 \") as parsed;\n```\n\nThis example is invalid because there are whitespaces between digits:\n\n```\nSELECT PARSE_BIGNUMERIC(\"1 23.4 5 \") as parsed;\n```\n\nThis example is invalid because the number is empty except for an exponent:\n\n```\nSELECT PARSE_BIGNUMERIC(\" e1 \") as 
parsed;\n```\n\nThis example is invalid because the string contains multiple signs:\n\n```\nSELECT PARSE_BIGNUMERIC(\" - 12.3 - \") as parsed;\n```\n\nThis example is invalid because the value of the number falls outside the range\nof`BIGNUMERIC`:\n\n```\nSELECT PARSE_BIGNUMERIC(\"12.34E100 \") as parsed;\n```\n\nThis example is invalid because the string contains invalid characters:\n\n```\nSELECT PARSE_BIGNUMERIC(\"$12.34\") as parsed;\n```\n\n\n"
},
{
"name": "PARSE_DATE",
"arguments": [],
"category": "Date",
"description_markdown": "```\nPARSE_DATE(format_string, date_string)\n```\n\n **Description** \n\nConverts a[string representation of date](#format_date)to a`DATE`object.\n\n`format_string`contains the[format elements](/bigquery/docs/reference/standard-sql/format-elements#format_elements_date_time)that define how`date_string`is formatted. Each element in`date_string`must have a corresponding element in`format_string`. The\nlocation of each element in`format_string`must match the location of\neach element in`date_string`.\n\n```\n-- This works because elements on both sides match.\nSELECT PARSE_DATE('%A %b %e %Y', 'Thursday Dec 25 2008');\n\n-- This produces an error because the year element is in different locations.\nSELECT PARSE_DATE('%Y %A %b %e', 'Thursday Dec 25 2008');\n\n-- This produces an error because one of the year elements is missing.\nSELECT PARSE_DATE('%A %b %e', 'Thursday Dec 25 2008');\n\n-- This works because %F can find all matching elements in date_string.\nSELECT PARSE_DATE('%F', '2000-12-30');\n```\n\nWhen using`PARSE_DATE`, keep the following in mind:\n\n- **Unspecified fields.** Any unspecified field is initialized from` 1970-01-01`.\n- **Case insensitivity.** Names, such as` Monday`,` February`, and so on, are\ncase insensitive.\n- **Whitespace.** One or more consecutive white spaces in the format string\nmatches zero or more consecutive white spaces in the date string. 
In\naddition, leading and trailing white spaces in the date string are always\nallowed -- even if they are not in the format string.\n- **Format precedence.** When two (or more) format elements have overlapping\ninformation (for example both` %F`and` %Y`affect the year), the last one\ngenerally overrides any earlier ones.\n\n **Return Data Type** \n\nDATE\n\n **Examples** \n\nThis example converts a`MM/DD/YY`formatted string to a`DATE`object:\n\n```\nSELECT PARSE_DATE('%x', '12/25/08') AS parsed;\n\n/*------------*\n | parsed |\n +------------+\n | 2008-12-25 |\n *------------*/\n```\n\nThis example converts a`YYYYMMDD`formatted string to a`DATE`object:\n\n```\nSELECT PARSE_DATE('%Y%m%d', '20081225') AS parsed;\n\n/*------------*\n | parsed |\n +------------+\n | 2008-12-25 |\n *------------*/\n```\n\n\n"
},
{
"name": "PARSE_DATETIME",
"arguments": [],
"category": "Datetime",
"description_markdown": "```\nPARSE_DATETIME(format_string, datetime_string)\n```\n\n **Description** \n\nConverts a[string representation of a datetime](#format_datetime)to a`DATETIME`object.\n\n`format_string`contains the[format elements](/bigquery/docs/reference/standard-sql/format-elements#format_elements_date_time)that define how`datetime_string`is formatted. Each element in`datetime_string`must have a corresponding element in`format_string`. The\nlocation of each element in`format_string`must match the location of\neach element in`datetime_string`.\n\n```\n-- This works because elements on both sides match.\nSELECT PARSE_DATETIME(\"%a %b %e %I:%M:%S %Y\", \"Thu Dec 25 07:30:00 2008\");\n\n-- This produces an error because the year element is in different locations.\nSELECT PARSE_DATETIME(\"%a %b %e %Y %I:%M:%S\", \"Thu Dec 25 07:30:00 2008\");\n\n-- This produces an error because one of the year elements is missing.\nSELECT PARSE_DATETIME(\"%a %b %e %I:%M:%S\", \"Thu Dec 25 07:30:00 2008\");\n\n-- This works because %c can find all matching elements in datetime_string.\nSELECT PARSE_DATETIME(\"%c\", \"Thu Dec 25 07:30:00 2008\");\n```\n\nThe format string fully supports most format elements, except for`%P`.\n\n`PARSE_DATETIME`parses`string`according to the following rules:\n\n- **Unspecified fields.** Any unspecified field is initialized from` 1970-01-01 00:00:00.0`. For example, if the year is unspecified then it\ndefaults to` 1970`.\n- **Case insensitivity.** Names, such as` Monday`and` February`,\nare case insensitive.\n- **Whitespace.** One or more consecutive white spaces in the format string\nmatches zero or more consecutive white spaces in the` DATETIME`string. Leading and trailing\nwhite spaces in the` DATETIME`string are always\nallowed, even if they are not in the format string.\n- **Format precedence.** When two or more format elements have overlapping\ninformation, the last one generally overrides any earlier ones, with some\nexceptions. 
For example, both` %F`and` %Y`affect the year, so the earlier\nelement overrides the later. See the descriptions\nof` %s`,` %C`, and` %y`in[Supported Format Elements For DATETIME](/bigquery/docs/reference/standard-sql/format-elements#format_elements_date_time).\n- **Format divergence.** ` %p`can be used with` am`,` AM`,` pm`, and` PM`.\n\n **Return Data Type** \n\n`DATETIME`\n\n **Examples** \n\nThe following examples parse a`STRING`literal as a`DATETIME`.\n\n```\nSELECT PARSE_DATETIME('%Y-%m-%d %H:%M:%S', '1998-10-18 13:45:55') AS datetime;\n\n/*---------------------*\n | datetime |\n +---------------------+\n | 1998-10-18T13:45:55 |\n *---------------------*/\n```\n\n```\nSELECT PARSE_DATETIME('%m/%d/%Y %I:%M:%S %p', '8/30/2018 2:23:38 pm') AS datetime;\n\n/*---------------------*\n | datetime |\n +---------------------+\n | 2018-08-30T14:23:38 |\n *---------------------*/\n```\n\nThe following example parses a`STRING`literal\ncontaining a date in a natural language format as a`DATETIME`.\n\n```\nSELECT PARSE_DATETIME('%A, %B %e, %Y','Wednesday, December 19, 2018')\n AS datetime;\n\n/*---------------------*\n | datetime |\n +---------------------+\n | 2018-12-19T00:00:00 |\n *---------------------*/\n```\n\n\n<span id=\"debugging_functions\">\n## Debugging functions\n\n</span>\nGoogleSQL for BigQuery supports the following debugging functions.\n\n\n\n"
},
{
"name": "PARSE_JSON",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nPARSE_JSON(json_string_expr[, wide_number_mode=>{ 'exact' | 'round' }])\n```\n\n **Description** \n\nConverts a JSON-formatted`STRING`value to a`JSON`value.\n\nArguments:\n\n- ` json_string_expr`: A JSON-formatted string. For example:\n \n \n ```\n '{\"class\": {\"students\": [{\"name\": \"Jane\"}]}}'\n ```\n \n \n- ` wide_number_mode`: Optional argument that must be specified by name. It determines how to\nhandle numbers that cannot be stored in a` JSON`value without loss of\nprecision. If used,` wide_number_mode`must include one of these values:\n \n \n - ` exact`(default): Only accept numbers that can be stored without loss\nof precision. If a number that cannot be stored without loss of\nprecision is encountered, the function throws an error.\n - ` round`: If a number that cannot be stored without loss of precision is\nencountered, attempt to round it to a number that can be stored without\nloss of precision. If the number cannot be rounded, the function throws\nan error.\n \n If a number appears in a JSON object or array, the` wide_number_mode`argument is applied to the number in the object or array.\n \n \n\nNumbers from the following domains can be stored in JSON without loss of\nprecision:\n\n- 64-bit signed/unsigned integers, such as` INT64`\n- ` FLOAT64`\n\n **Return type** \n\n`JSON`\n\n **Examples** \n\nIn the following example, a JSON-formatted string is converted to`JSON`.\n\n```\nSELECT PARSE_JSON('{\"coordinates\": [10, 20], \"id\": 1}') AS json_data;\n\n/*--------------------------------*\n | json_data                      |\n +--------------------------------+\n | {\"coordinates\":[10,20],\"id\":1} |\n *--------------------------------*/\n```\n\nThe following queries fail because:\n\n- The number that was passed in cannot be stored without loss of precision.\n- ` wide_number_mode=>'exact'`is used implicitly in the first query and\nexplicitly in the second query.\n\n```\nSELECT PARSE_JSON('{\"id\": 922337203685477580701}') AS json_data; -- fails\nSELECT PARSE_JSON('{\"id\": 922337203685477580701}', wide_number_mode=>'exact') AS json_data; -- fails\n```\n\nThe following query rounds the number to a number that can be stored in JSON.\n\n```\nSELECT PARSE_JSON('{\"id\": 922337203685477580701}', wide_number_mode=>'round') AS json_data;\n\n/*------------------------------*\n | json_data                    |\n +------------------------------+\n | {\"id\":9.223372036854776e+20} |\n *------------------------------*/\n```\n\n\n"
},
{
"name": "PARSE_NUMERIC",
"arguments": [],
"category": "Conversion",
"description_markdown": "```\nPARSE_NUMERIC(string_expression)\n```\n\n **Description** \n\nConverts a`STRING`to a`NUMERIC`value.\n\nThe numeric literal contained in the string must not exceed the[maximum precision or range](/bigquery/docs/reference/standard-sql/data-types#decimal_types)of the`NUMERIC`type, or an error\noccurs. If the number of digits after the decimal point exceeds nine, then the\nresulting`NUMERIC`value rounds[half away from zero](https://en.wikipedia.org/wiki/Rounding#Round_half_away_from_zero)to have nine digits after the\ndecimal point.\n\n```\n-- This example shows how a string with a decimal point is parsed.\nSELECT PARSE_NUMERIC(\"123.45\") AS parsed;\n\n/*--------*\n | parsed |\n +--------+\n | 123.45 |\n *--------*/\n\n-- This example shows how a string with an exponent is parsed.\nSELECT PARSE_NUMERIC(\"12.34E27\") as parsed;\n\n/*-------------------------------*\n | parsed |\n +-------------------------------+\n | 12340000000000000000000000000 |\n *-------------------------------*/\n\n-- This example shows the rounding when digits after the decimal point exceeds 9.\nSELECT PARSE_NUMERIC(\"1.0123456789\") as parsed;\n\n/*-------------*\n | parsed |\n +-------------+\n | 1.012345679 |\n *-------------*/\n```\n\nThis function is similar to using the[CAST AS NUMERIC](#cast_numeric)function\nexcept that the`PARSE_NUMERIC`function only accepts string inputs and allows\nthe following in the string:\n\n- Spaces between the sign (+/-) and the number\n- Signs (+/-) after the number\n\nRules for valid input strings:\n\n| Rule | Example Input | Output |\n| --- | --- | --- |\n| The string can only contain digits, commas, decimal points and signs. | \"- 12,34567,89.0\" | -123456789 |\n| Whitespaces are allowed anywhere except between digits. | \" - 12.345 \" | -12.345 |\n| Only digits and commas are allowed before the decimal point. | \" 12,345,678\" | 12345678 |\n| Only digits are allowed after the decimal point. 
| \"1.234 \" | 1.234 |\n| Use`E`or`e`for exponents. After\n the`e`,\n digits and a leading sign indicator are allowed. | \" 123.45e-1\" | 12.345 |\n| If the integer part is not empty, then it must contain at least one\n digit. | \" 0,.12 -\" | -0.12 |\n| If the string contains a decimal point, then it must contain at least\n one digit. | \" .1\" | 0.1 |\n| The string cannot contain more than one sign. | \" 0.5 +\" | 0.5 |\n\n **Return Data Type** \n\n`NUMERIC`\n\n **Examples** \n\nThis example shows an input with spaces before, after, and between the\nsign and the number:\n\n```\nSELECT PARSE_NUMERIC(\" - 12.34 \") as parsed;\n\n/*--------*\n | parsed |\n +--------+\n | -12.34 |\n *--------*/\n```\n\nThis example shows an input with an exponent as well as the sign after the\nnumber:\n\n```\nSELECT PARSE_NUMERIC(\"12.34e-1-\") as parsed;\n\n/*--------*\n | parsed |\n +--------+\n | -1.234 |\n *--------*/\n```\n\nThis example shows an input with multiple commas in the integer part of the\nnumber:\n\n```\nSELECT PARSE_NUMERIC(\" 1,2,,3,.45 + \") as parsed;\n\n/*--------*\n | parsed |\n +--------+\n | 123.45 |\n *--------*/\n```\n\nThis example shows an input with a decimal point and no digits in the whole\nnumber part:\n\n```\nSELECT PARSE_NUMERIC(\".1234 \") as parsed;\n\n/*--------*\n | parsed |\n +--------+\n | 0.1234 |\n *--------*/\n```\n\n **Examples of invalid inputs** \n\nThis example is invalid because the whole number part contains no digits:\n\n```\nSELECT PARSE_NUMERIC(\",,,.1234 \") as parsed;\n```\n\nThis example is invalid because there are whitespaces between digits:\n\n```\nSELECT PARSE_NUMERIC(\"1 23.4 5 \") as parsed;\n```\n\nThis example is invalid because the number is empty except for an exponent:\n\n```\nSELECT PARSE_NUMERIC(\" e1 \") as parsed;\n```\n\nThis example is invalid because the string contains multiple signs:\n\n```\nSELECT PARSE_NUMERIC(\" - 12.3 - \") as parsed;\n```\n\nThis example is invalid because the value of the number falls 
outside the range\nof`NUMERIC`:\n\n```\nSELECT PARSE_NUMERIC(\"12.34E100 \") as parsed;\n```\n\nThis example is invalid because the string contains invalid characters:\n\n```\nSELECT PARSE_NUMERIC(\"$12.34\") as parsed;\n```\n\n\n"
},
{
"name": "PARSE_TIME",
"arguments": [],
"category": "Time",
"description_markdown": "```\nPARSE_TIME(format_string, time_string)\n```\n\n **Description** \n\nConverts a[string representation of time](#format_time)to a`TIME`object.\n\n`format_string`contains the[format elements](/bigquery/docs/reference/standard-sql/format-elements#format_elements_date_time)that define how`time_string`is formatted. Each element in`time_string`must have a corresponding element in`format_string`. The\nlocation of each element in`format_string`must match the location of\neach element in`time_string`.\n\n```\n-- This works because elements on both sides match.\nSELECT PARSE_TIME(\"%I:%M:%S\", \"07:30:00\");\n\n-- This produces an error because the seconds element is in different locations.\nSELECT PARSE_TIME(\"%S:%I:%M\", \"07:30:00\");\n\n-- This produces an error because one of the seconds elements is missing.\nSELECT PARSE_TIME(\"%I:%M\", \"07:30:00\");\n\n-- This works because %T can find all matching elements in time_string.\nSELECT PARSE_TIME(\"%T\", \"07:30:00\");\n```\n\nThe format string fully supports most format elements except for`%P`.\n\nWhen using`PARSE_TIME`, keep the following in mind:\n\n- **Unspecified fields.** Any unspecified field is initialized from` 00:00:00.0`. For instance, if` seconds`is unspecified then it\ndefaults to` 00`, and so on.\n- **Whitespace.** One or more consecutive white spaces in the format string\nmatches zero or more consecutive white spaces in the` TIME`string. 
In\naddition, leading and trailing white spaces in the` TIME`string are always\nallowed, even if they are not in the format string.\n- **Format precedence.** When two (or more) format elements have overlapping\ninformation, the last one generally overrides any earlier ones.\n- **Format divergence.** ` %p`can be used with` am`,` AM`,` pm`, and` PM`.\n\n **Return Data Type** \n\n`TIME`\n\n **Example** \n\n```\nSELECT PARSE_TIME(\"%H\", \"15\") as parsed_time;\n\n/*-------------*\n | parsed_time |\n +-------------+\n | 15:00:00 |\n *-------------*/\n```\n\n```\nSELECT PARSE_TIME('%I:%M:%S %p', '2:23:38 pm') AS parsed_time;\n\n/*-------------*\n | parsed_time |\n +-------------+\n | 14:23:38 |\n *-------------*/\n```\n\n\n"
},
{
"name": "PARSE_TIMESTAMP",
"arguments": [],
"category": "Timestamp",
"description_markdown": "```\nPARSE_TIMESTAMP(format_string, timestamp_string[, time_zone])\n```\n\n **Description** \n\nConverts a[string representation of a timestamp](#format_timestamp)to a`TIMESTAMP`object.\n\n`format_string`contains the[format elements](/bigquery/docs/reference/standard-sql/format-elements#format_elements_date_time)that define how`timestamp_string`is formatted. Each element in`timestamp_string`must have a corresponding element in`format_string`. The\nlocation of each element in`format_string`must match the location of\neach element in`timestamp_string`.\n\n```\n-- This works because elements on both sides match.\nSELECT PARSE_TIMESTAMP(\"%a %b %e %I:%M:%S %Y\", \"Thu Dec 25 07:30:00 2008\");\n\n-- This produces an error because the year element is in different locations.\nSELECT PARSE_TIMESTAMP(\"%a %b %e %Y %I:%M:%S\", \"Thu Dec 25 07:30:00 2008\");\n\n-- This produces an error because one of the year elements is missing.\nSELECT PARSE_TIMESTAMP(\"%a %b %e %I:%M:%S\", \"Thu Dec 25 07:30:00 2008\");\n\n-- This works because %c can find all matching elements in timestamp_string.\nSELECT PARSE_TIMESTAMP(\"%c\", \"Thu Dec 25 07:30:00 2008\");\n```\n\nThe format string fully supports most format elements, except for`%P`.\n\nWhen using`PARSE_TIMESTAMP`, keep the following in mind:\n\n- **Unspecified fields.** Any unspecified field is initialized from` 1970-01-01 00:00:00.0`. This initialization value uses the time zone specified by the\nfunction's time zone argument, if present. If not, the initialization value\nuses the default time zone, UTC. For instance, if the year\nis unspecified then it defaults to` 1970`, and so on.\n- **Case insensitivity.** Names, such as` Monday`,` February`, and so on, are\ncase insensitive.\n- **Whitespace.** One or more consecutive white spaces in the format string\nmatches zero or more consecutive white spaces in the timestamp string. 
In\naddition, leading and trailing white spaces in the timestamp string are always\nallowed, even if they are not in the format string.\n- **Format precedence.** When two (or more) format elements have overlapping\ninformation (for example both` %F`and` %Y`affect the year), the last one\ngenerally overrides any earlier ones, with some exceptions (see the\ndescriptions of` %s`,` %C`, and` %y`).\n- **Format divergence.** ` %p`can be used with` am`,` AM`,` pm`, and` PM`.\n\n **Return Data Type** \n\n`TIMESTAMP`\n\n **Example** \n\n```\nSELECT PARSE_TIMESTAMP(\"%c\", \"Thu Dec 25 07:30:00 2008\") AS parsed;\n\n-- Display of results may differ, depending upon the environment and time zone where this query was executed.\n/*-------------------------*\n | parsed |\n +-------------------------+\n | 2008-12-25 07:30:00 UTC |\n *-------------------------*/\n```\n\n\n"
},
{
"name": "PERCENTILE_CONT",
"arguments": [],
"category": "Navigation",
"description_markdown": "```\nPERCENTILE_CONT (value_expression, percentile [{RESPECT | IGNORE} NULLS])\nOVER over_clause\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n```\n\n **Description** \n\nComputes the specified percentile value for the value_expression, with linear\ninterpolation.\n\nThis function ignores NULL\nvalues if`RESPECT NULLS`is absent. If`RESPECT NULLS`is present:\n\n- Interpolation between two` NULL`values returns` NULL`.\n- Interpolation between a` NULL`value and a non-` NULL`value returns the\nnon-` NULL`value.\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n`PERCENTILE_CONT`can be used with differential privacy. To learn more, see[Differentially private aggregate functions](#aggregate-dp-functions).\n\n **Supported Argument Types** \n\n- ` value_expression`and` percentile`must have one of the following types:\n - ` NUMERIC`\n - ` BIGNUMERIC`\n - ` FLOAT64`\n- ` percentile`must be a literal in the range` [0, 1]`.\n\n **Return Data Type** \n\nThe return data type is determined by the argument types with the following\ntable.\n\n| INPUT | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| --- | --- | --- | --- |\n| `NUMERIC` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `BIGNUMERIC` | `BIGNUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `FLOAT64` | `FLOAT64` | `FLOAT64` | `FLOAT64` |\n\n **Examples** \n\nThe following example computes the value for some percentiles from a column of\nvalues while ignoring nulls.\n\n```\nSELECT\n PERCENTILE_CONT(x, 0) OVER() AS min,\n PERCENTILE_CONT(x, 0.01) OVER() AS percentile1,\n PERCENTILE_CONT(x, 0.5) OVER() AS median,\n PERCENTILE_CONT(x, 0.9) OVER() AS percentile90,\n PERCENTILE_CONT(x, 1) OVER() AS max\nFROM UNNEST([0, 3, NULL, 1, 2]) AS x LIMIT 1;\n\n /*-----+-------------+--------+--------------+-----*\n | min | 
percentile1 | median | percentile90 | max |\n +-----+-------------+--------+--------------+-----+\n | 0 | 0.03 | 1.5 | 2.7 | 3 |\n *-----+-------------+--------+--------------+-----*/\n```\n\nThe following example computes the value for some percentiles from a column of\nvalues while respecting nulls.\n\n```\nSELECT\n PERCENTILE_CONT(x, 0 RESPECT NULLS) OVER() AS min,\n PERCENTILE_CONT(x, 0.01 RESPECT NULLS) OVER() AS percentile1,\n PERCENTILE_CONT(x, 0.5 RESPECT NULLS) OVER() AS median,\n PERCENTILE_CONT(x, 0.9 RESPECT NULLS) OVER() AS percentile90,\n PERCENTILE_CONT(x, 1 RESPECT NULLS) OVER() AS max\nFROM UNNEST([0, 3, NULL, 1, 2]) AS x LIMIT 1;\n\n/*------+-------------+--------+--------------+-----*\n | min | percentile1 | median | percentile90 | max |\n +------+-------------+--------+--------------+-----+\n | NULL | 0 | 1 | 2.6 | 3 |\n *------+-------------+--------+--------------+-----*/\n```\n\n\n"
},
{
"name": "PERCENTILE_DISC",
"arguments": [],
"category": "Navigation",
"description_markdown": "```\nPERCENTILE_DISC (value_expression, percentile [{RESPECT | IGNORE} NULLS])\nOVER over_clause\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n```\n\n **Description** \n\nComputes the specified percentile value for a discrete`value_expression`. The\nreturned value is the first sorted value of`value_expression`with cumulative\ndistribution greater than or equal to the given`percentile`value.\n\nThis function ignores`NULL`values unless`RESPECT NULLS`is present.\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Supported Argument Types** \n\n- ` value_expression`can be any orderable type.\n- ` percentile`must be a literal in the range` [0, 1]`, with one of the\nfollowing types:\n - ` NUMERIC`\n - ` BIGNUMERIC`\n - ` FLOAT64`\n\n **Return Data Type** \n\nSame type as`value_expression`.\n\n **Examples** \n\nThe following example computes the value for some percentiles from a column of\nvalues while ignoring nulls.\n\n```\nSELECT\n x,\n PERCENTILE_DISC(x, 0) OVER() AS min,\n PERCENTILE_DISC(x, 0.5) OVER() AS median,\n PERCENTILE_DISC(x, 1) OVER() AS max\nFROM UNNEST(['c', NULL, 'b', 'a']) AS x;\n\n/*------+-----+--------+-----*\n | x | min | median | max |\n +------+-----+--------+-----+\n | c | a | b | c |\n | NULL | a | b | c |\n | b | a | b | c |\n | a | a | b | c |\n *------+-----+--------+-----*/\n```\n\nThe following example computes the value for some percentiles from a column of\nvalues while respecting nulls.\n\n```\nSELECT\n x,\n PERCENTILE_DISC(x, 0 RESPECT NULLS) OVER() AS min,\n PERCENTILE_DISC(x, 0.5 RESPECT NULLS) OVER() AS median,\n PERCENTILE_DISC(x, 1 RESPECT NULLS) OVER() AS max\nFROM UNNEST(['c', NULL, 'b', 'a']) AS x;\n\n/*------+------+--------+-----*\n | x | min | median | max |\n 
+------+------+--------+-----+\n | c    | NULL | a      | c   |\n | NULL | NULL | a      | c   |\n | b    | NULL | a      | c   |\n | a    | NULL | a      | c   |\n *------+------+--------+-----*/\n```\n\n\n"
},
{
"name": "PERCENT_RANK",
"arguments": [],
"category": "Numbering",
"description_markdown": "```\nPERCENT_RANK()\nOVER over_clause\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n ORDER BY expression [ { ASC | DESC } ] [, ...]\n```\n\n **Description** \n\nReturns the percentile rank of a row defined as (RK-1)/(NR-1), where RK is\nthe`RANK`of the row and NR is the number of rows in the partition.\nReturns 0 if NR=1.\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Return Type** \n\n`FLOAT64`\n\n **Example** \n\n```\nWITH finishers AS\n (SELECT 'Sophia Liu' as name,\n TIMESTAMP '2016-10-18 2:51:45' as finish_time,\n 'F30-34' as division\n UNION ALL SELECT 'Lisa Stelzner', TIMESTAMP '2016-10-18 2:54:11', 'F35-39'\n UNION ALL SELECT 'Nikki Leith', TIMESTAMP '2016-10-18 2:59:01', 'F30-34'\n UNION ALL SELECT 'Lauren Matthews', TIMESTAMP '2016-10-18 3:01:17', 'F35-39'\n UNION ALL SELECT 'Desiree Berry', TIMESTAMP '2016-10-18 3:05:42', 'F35-39'\n UNION ALL SELECT 'Suzy Slane', TIMESTAMP '2016-10-18 3:06:24', 'F35-39'\n UNION ALL SELECT 'Jen Edwards', TIMESTAMP '2016-10-18 3:06:36', 'F30-34'\n UNION ALL SELECT 'Meghan Lederer', TIMESTAMP '2016-10-18 2:59:01', 'F30-34')\nSELECT name,\n finish_time,\n division,\n PERCENT_RANK() OVER (PARTITION BY division ORDER BY finish_time ASC) AS finish_rank\nFROM finishers;\n\n/*-----------------+------------------------+----------+---------------------*\n | name            | finish_time            | division | finish_rank         |\n +-----------------+------------------------+----------+---------------------+\n | Sophia Liu      | 2016-10-18 09:51:45+00 | F30-34   | 0                   |\n | Meghan Lederer  | 2016-10-18 09:59:01+00 | F30-34   | 0.33333333333333331 |\n | Nikki Leith     | 2016-10-18 09:59:01+00 | F30-34   | 0.33333333333333331 |\n | Jen Edwards     | 2016-10-18 10:06:36+00 | F30-34   | 1                   |\n | Lisa Stelzner   | 2016-10-18 09:54:11+00 | F35-39   | 0                   |\n | Lauren Matthews | 2016-10-18 10:01:17+00 | F35-39   | 0.33333333333333331 |\n | Desiree Berry   | 2016-10-18 10:05:42+00 | F35-39   | 0.66666666666666663 |\n | Suzy Slane      | 2016-10-18 10:06:24+00 | F35-39   | 1                   |\n *-----------------+------------------------+----------+---------------------*/\n```\n\n\n"
},
{
"name": "POW",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nPOW(X, Y)\n```\n\n **Description** \n\nReturns the value of X raised to the power of Y. If the result underflows and is\nnot representable, then the function returns a value of zero.\n\n| X | Y | POW(X, Y) |\n| --- | --- | --- |\n| 2.0 | 3.0 | 8.0 |\n| 1.0 | Any value including`NaN` | 1.0 |\n| Any value including`NaN` | 0 | 1.0 |\n| -1.0 | `+inf` | 1.0 |\n| -1.0 | `-inf` | 1.0 |\n| ABS(X) < 1 | `-inf` | `+inf` |\n| ABS(X) > 1 | `-inf` | 0.0 |\n| ABS(X) < 1 | `+inf` | 0.0 |\n| ABS(X) > 1 | `+inf` | `+inf` |\n| `-inf` | Y < 0 | 0.0 |\n| `-inf` | Y > 0 | `-inf`if Y is an odd integer,`+inf`otherwise |\n| `+inf` | Y < 0 | 0 |\n| `+inf` | Y > 0 | `+inf` |\n| Finite value < 0 | Non-integer | Error |\n| 0 | Finite value < 0 | Error |\n\n **Return Data Type** \n\nThe return data type is determined by the argument types with the following\ntable.\n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| --- | --- | --- | --- | --- |\n| `INT64` | `FLOAT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `NUMERIC` | `NUMERIC` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `BIGNUMERIC` | `BIGNUMERIC` | `BIGNUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `FLOAT64` | `FLOAT64` | `FLOAT64` | `FLOAT64` | `FLOAT64` |\n\n\n\n"
},
{
"name": "POWER",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nPOWER(X, Y)\n```\n\n **Description** \n\nSynonym of[POW(X, Y)](#pow).\n\n\n\n"
},
{
"name": "RAND",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nRAND()\n```\n\n **Description** \n\nGenerates a pseudo-random value of type`FLOAT64`in\nthe range [0, 1), inclusive of 0 and exclusive of 1.\n\n\n\n"
},
{
"name": "RANGE",
"arguments": [],
"category": "Range",
"description_markdown": " **Preview** \n\nThis product or feature is subject to the \"Pre-GA Offerings Terms\"\n in the General Service Terms section of the[Service Specific Terms](/terms/service-terms).\n Pre-GA products and features are available \"as is\" and might have\n limited support. For more information, see the[launch stage descriptions](/products#product-launch-stages).\n\n **Note:** To provide feedback or request support for this feature, send an email to[bigquery-time-series-preview-support@google.com](mailto:bigquery-time-series-preview-support@google.com).```\nRANGE(lower_bound, upper_bound)\n```\n\n **Description** \n\nConstructs a range of[DATE](/bigquery/docs/reference/standard-sql/data-types#date_type),[DATETIME](/bigquery/docs/reference/standard-sql/data-types#datetime_type), or[TIMESTAMP](/bigquery/docs/reference/standard-sql/data-types#timestamp_type)values.\n\n **Definitions** \n\n- ` lower_bound`: The range starts from this value. This can be a` DATE`,` DATETIME`, or` TIMESTAMP`value. If this value is` NULL`, the range\ndoesn't include a lower bound.\n- ` upper_bound`: The range ends before this value. This can be a` DATE`,` DATETIME`, or` TIMESTAMP`value. 
If this value is` NULL`, the range\ndoesn't include an upper bound.\n\n **Details** \n\n`lower_bound`and`upper_bound`must be of the same data type.\n\nProduces an error if`lower_bound`is greater than or equal to`upper_bound`.\nTo return`NULL`instead, add the`SAFE.`prefix to the function name.\n\n **Return type** \n\n`RANGE<T>`, where`T`is the same data type as the input.\n\n **Examples** \n\nThe following query constructs a date range:\n\n```\nSELECT RANGE(DATE '2022-12-01', DATE '2022-12-31') AS results;\n\n/*--------------------------+\n | results |\n +--------------------------+\n | [2022-12-01, 2022-12-31) |\n +--------------------------*/\n```\n\nThe following query constructs a datetime range:\n\n```\nSELECT RANGE(DATETIME '2022-10-01 14:53:27',\n DATETIME '2022-10-01 16:00:00') AS results;\n\n/*---------------------------------------------+\n | results |\n +---------------------------------------------+\n | [2022-10-01T14:53:27, 2022-10-01T16:00:00) |\n +---------------------------------------------*/\n```\n\nThe following query constructs a timestamp range:\n\n```\nSELECT RANGE(TIMESTAMP '2022-10-01 14:53:27 America/Los_Angeles',\n TIMESTAMP '2022-10-01 16:00:00 America/Los_Angeles') AS results;\n\n-- Results depend upon where this query was executed.\n/*------------------------------------------------------------------+\n | results |\n +------------------------------------------------------------------+\n | [2022-10-01 21:53:27.000000 UTC, 2022-10-01 23:00:00.000000 UTC) |\n +------------------------------------------------------------------*/\n```\n\nThe following query constructs a date range with no lower bound:\n\n```\nSELECT RANGE(NULL, DATE '2022-12-31') AS results;\n\n/*-------------------------+\n | results |\n +-------------------------+\n | [UNBOUNDED, 2022-12-31) |\n +-------------------------*/\n```\n\nThe following query constructs a date range with no upper bound:\n\n```\nSELECT RANGE(DATE '2022-10-01', NULL) AS 
results;\n\n/*--------------------------+\n | results |\n +--------------------------+\n | [2022-10-01, UNBOUNDED) |\n +--------------------------*/\n```\n\n\n"
},
{
"name": "RANGE_BUCKET",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nRANGE_BUCKET(point, boundaries_array)\n```\n\n **Description** \n\n`RANGE_BUCKET`scans through a sorted array and returns the 0-based position\nof the point's upper bound. This can be useful if you need to group your data to\nbuild partitions, histograms, business-defined rules, and more.\n\n`RANGE_BUCKET`follows these rules:\n\n- If the point exists in the array, returns the index of the next larger value.\n \n \n ```\n RANGE_BUCKET(20, [0, 10, 20, 30, 40]) -- 3 is return value\n RANGE_BUCKET(20, [0, 10, 20, 20, 40, 40]) -- 4 is return value\n ```\n \n \n- If the point does not exist in the array, but it falls between two values,\nreturns the index of the larger value.\n \n \n ```\n RANGE_BUCKET(25, [0, 10, 20, 30, 40]) -- 3 is return value\n ```\n \n \n- If the point is smaller than the first value in the array, returns 0.\n \n \n ```\n RANGE_BUCKET(-10, [5, 10, 20, 30, 40]) -- 0 is return value\n ```\n \n \n- If the point is greater than or equal to the last value in the array,\nreturns the length of the array.\n \n \n ```\n RANGE_BUCKET(80, [0, 10, 20, 30, 40]) -- 5 is return value\n ```\n \n \n- If the array is empty, returns 0.\n \n \n ```\n RANGE_BUCKET(80, []) -- 0 is return value\n ```\n \n \n- If the point is` NULL`or` NaN`, returns` NULL`.\n \n \n ```\n RANGE_BUCKET(NULL, [0, 10, 20, 30, 40]) -- NULL is return value\n ```\n \n \n- The data type for the point and array must be compatible.\n \n \n ```\n RANGE_BUCKET('a', ['a', 'b', 'c', 'd']) -- 1 is return value\n RANGE_BUCKET(1.2, [1, 1.2, 1.4, 1.6]) -- 2 is return value\n RANGE_BUCKET(1.2, [1, 2, 4, 6]) -- execution failure\n ```\n \n \n\nExecution failure occurs when:\n\n- The array has a` NaN`or` NULL`value in it.\n \n \n ```\n RANGE_BUCKET(80, [NULL, 10, 20, 30, 40]) -- execution failure\n ```\n \n \n- The array is not sorted in ascending order.\n \n \n ```\n RANGE_BUCKET(30, [10, 30, 20, 40, 50]) -- execution failure\n ```\n \n \n\n **Parameters** \n\n- ` point`: A generic value.\n- ` boundaries_array`: A generic array of values.\n\n **Note:** The data type for`point`and the element type of`boundaries_array`must be equivalent. The data type must be[comparable](/bigquery/docs/reference/standard-sql/data-types#data_type_properties).\n\n **Return Value** \n\n`INT64`\n\n **Examples** \n\nIn a table called`students`, check to see how many records would\nexist in each`age_group`bucket, based on a student's age:\n\n- age_group 0 (age < 10)\n- age_group 1 (age >= 10, age < 20)\n- age_group 2 (age >= 20, age < 30)\n- age_group 3 (age >= 30)\n\n```\nWITH students AS\n(\n SELECT 9 AS age UNION ALL\n SELECT 20 AS age UNION ALL\n SELECT 25 AS age UNION ALL\n SELECT 31 AS age UNION ALL\n SELECT 32 AS age UNION ALL\n SELECT 33 AS age\n)\nSELECT RANGE_BUCKET(age, [10, 20, 30]) AS age_group, COUNT(*) AS count\nFROM students\nGROUP BY 1\n\n/*--------------+-------*\n | age_group    | count |\n +--------------+-------+\n | 0            | 1     |\n | 2            | 2     |\n | 3            | 3     |\n *--------------+-------*/\n```\n\n\n"
},
{
"name": "RANGE_CONTAINS",
"arguments": [],
"category": "Range",
"description_markdown": " **Preview** \n\nThis product or feature is subject to the \"Pre-GA Offerings Terms\"\n in the General Service Terms section of the[Service Specific Terms](/terms/service-terms).\n Pre-GA products and features are available \"as is\" and might have\n limited support. For more information, see the[launch stage descriptions](/products#product-launch-stages).\n\n **Note:** To provide feedback or request support for this feature, send an email to[bigquery-time-series-preview-support@google.com](mailto:bigquery-time-series-preview-support@google.com).- [Signature 1](#signature_1): Checks if every value in one range is\nin another range.\n- [Signature 2](#signature_2): Checks if a value is in a range.\n\n\n<span id=\"signature_1_3\">\n#### Signature 1\n\n</span>\n```\nRANGE_CONTAINS(outer_range, inner_range)\n```\n\n **Description** \n\nChecks if the inner range is in the outer range.\n\n **Definitions** \n\n- ` outer_range`: The` RANGE<T>`value to search within.\n- ` inner_range`: The` RANGE<T>`value to search for in` outer_range`.\n\n **Details** \n\nReturns`TRUE`if`inner_range`exists in`outer_range`.\nOtherwise, returns`FALSE`.\n\n`T`must be of the same type for all inputs.\n\n **Return type** \n\n`BOOL`\n\n **Examples** \n\nIn the following query, the inner range is in the outer range:\n\n```\nSELECT RANGE_CONTAINS(\n RANGE<DATE> '[2022-01-01, 2023-01-01)',\n RANGE<DATE> '[2022-04-01, 2022-07-01)') AS results;\n\n/*---------+\n | results |\n +---------+\n | TRUE |\n +---------*/\n```\n\nIn the following query, the inner range is not in the outer range:\n\n```\nSELECT RANGE_CONTAINS(\n RANGE<DATE> '[2022-01-01, 2023-01-01)',\n RANGE<DATE> '[2023-01-01, 2023-04-01)') AS results;\n\n/*---------+\n | results |\n +---------+\n | FALSE |\n +---------*/\n```\n\n\n<span id=\"signature_2_3\">\n#### Signature 2\n\n</span>\n```\nRANGE_CONTAINS(range_to_search, value_to_find)\n```\n\n **Description** \n\nChecks if a value is in a range.\n\n 
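The behavior of this signature can be sketched outside of SQL. The following Python model is a hypothetical illustration (not part of GoogleSQL): it treats a range as a half-open interval whose bounds are None when UNBOUNDED:

```python
# Hypothetical model of RANGE_CONTAINS(range_to_search, value_to_find).
# A range is a pair (lower, upper) of comparable values; None marks an
# UNBOUNDED side. The interval is half-open: lower included, upper excluded.
def range_contains(lower, upper, value):
    if value is None:
        return None  # a NULL input value yields NULL
    at_or_above_lower = lower is None or value >= lower
    below_upper = upper is None or value < upper
    return at_or_above_lower and below_upper
```

For example, range_contains(1, 10, 10) is False because the upper bound is excluded, while range_contains(1, None, 10) is True because the upper side is unbounded.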
**Definitions** \n\n- ` range_to_search`: The` RANGE<T>`value to search within.\n- ` value_to_find`: The value to search for in` range_to_search`.\n\n **Details** \n\nReturns`TRUE`if`value_to_find`exists in`range_to_search`.\nOtherwise, returns`FALSE`.\n\nThe data type for`value_to_find`must be the same data type as`T`in`range_to_search`.\n\n **Return type** \n\n`BOOL`\n\n **Examples** \n\nIn the following query, the value`2022-04-01`is found in the range`[2022-01-01, 2023-01-01)`:\n\n```\nSELECT RANGE_CONTAINS(\n RANGE<DATE> '[2022-01-01, 2023-01-01)',\n DATE '2022-04-01') AS results;\n\n/*---------+\n | results |\n +---------+\n | TRUE |\n +---------*/\n```\n\nIn the following query, the value`2023-04-01`is not found in the range`[2022-01-01, 2023-01-01)`:\n\n```\nSELECT RANGE_CONTAINS(\n RANGE<DATE> '[2022-01-01, 2023-01-01)',\n DATE '2023-04-01') AS results;\n\n/*---------+\n | results |\n +---------+\n | FALSE |\n +---------*/\n```\n\n\n"
},
{
"name": "RANGE_END",
"arguments": [],
"category": "Range",
"description_markdown": " **Preview** \n\nThis product or feature is subject to the \"Pre-GA Offerings Terms\"\n in the General Service Terms section of the[Service Specific Terms](/terms/service-terms).\n Pre-GA products and features are available \"as is\" and might have\n limited support. For more information, see the[launch stage descriptions](/products#product-launch-stages).\n\n **Note:** To provide feedback or request support for this feature, send an email to[bigquery-time-series-preview-support@google.com](mailto:bigquery-time-series-preview-support@google.com).```\nRANGE_END(range_to_check)\n```\n\n **Description** \n\nGets the upper bound of a range.\n\n **Definitions** \n\n- ` range_to_check`: The` RANGE<T>`value.\n\n **Details** \n\nReturns`NULL`if the upper bound in`range_to_check`is`UNBOUNDED`.\n\nReturns`NULL`if`range_to_check`is`NULL`.\n\n **Return type** \n\n`T`in`range_to_check`\n\n **Examples** \n\nIn the following query, the upper bound of the range is retrieved:\n\n```\nSELECT RANGE_END(RANGE<DATE> '[2022-12-01, 2022-12-31)') AS results;\n\n/*------------+\n | results |\n +------------+\n | 2022-12-31 |\n +------------*/\n```\n\nIn the following query, the upper bound of the range is unbounded, so`NULL`is returned:\n\n```\nSELECT RANGE_END(RANGE<DATE> '[2022-12-01, UNBOUNDED)') AS results;\n\n/*------------+\n | results |\n +------------+\n | NULL |\n +------------*/\n```\n\n\n"
},
{
"name": "RANGE_INTERSECT",
"arguments": [],
"category": "Range",
"description_markdown": " **Preview** \n\nThis product or feature is subject to the \"Pre-GA Offerings Terms\"\n in the General Service Terms section of the[Service Specific Terms](/terms/service-terms).\n Pre-GA products and features are available \"as is\" and might have\n limited support. For more information, see the[launch stage descriptions](/products#product-launch-stages).\n\n **Note:** To provide feedback or request support for this feature, send an email to[bigquery-time-series-preview-support@google.com](mailto:bigquery-time-series-preview-support@google.com).```\nRANGE_INTERSECT(range_a, range_b)\n```\n\n **Description** \n\nGets a segment of two ranges that intersect.\n\n **Definitions** \n\n- ` range_a`: The first` RANGE<T>`value.\n- ` range_b`: The second` RANGE<T>`value.\n\n **Details** \n\nReturns`NULL`if any input is`NULL`.\n\nProduces an error if`range_a`and`range_b`don't overlap. To return`NULL`instead, add the`SAFE.`prefix to the function name.\n\n`T`must be of the same type for all inputs.\n\n **Return type** \n\n`RANGE<T>`\n\n **Examples** \n\n```\nSELECT RANGE_INTERSECT(\n RANGE<DATE> '[2022-02-01, 2022-09-01)',\n RANGE<DATE> '[2021-06-15, 2022-04-15)') AS results;\n\n/*--------------------------+\n | results |\n +--------------------------+\n | [2022-02-01, 2022-04-15) |\n +--------------------------*/\n```\n\n```\nSELECT RANGE_INTERSECT(\n RANGE<DATE> '[2022-02-01, UNBOUNDED)',\n RANGE<DATE> '[2021-06-15, 2022-04-15)') AS results;\n\n/*--------------------------+\n | results |\n +--------------------------+\n | [2022-02-01, 2022-04-15) |\n +--------------------------*/\n```\n\n```\nSELECT RANGE_INTERSECT(\n RANGE<DATE> '[2022-02-01, UNBOUNDED)',\n RANGE<DATE> '[2021-06-15, UNBOUNDED)') AS results;\n\n/*-------------------------+\n | results |\n +-------------------------+\n | [2022-02-01, UNBOUNDED) |\n +-------------------------*/\n```\n\n\n"
},
{
"name": "RANGE_OVERLAPS",
"arguments": [],
"category": "Range",
"description_markdown": " **Preview** \n\nThis product or feature is subject to the \"Pre-GA Offerings Terms\"\n in the General Service Terms section of the[Service Specific Terms](/terms/service-terms).\n Pre-GA products and features are available \"as is\" and might have\n limited support. For more information, see the[launch stage descriptions](/products#product-launch-stages).\n\n **Note:** To provide feedback or request support for this feature, send an email to[bigquery-time-series-preview-support@google.com](mailto:bigquery-time-series-preview-support@google.com).```\nRANGE_OVERLAPS(range_a, range_b)\n```\n\n **Description** \n\nChecks if two ranges overlap.\n\n **Definitions** \n\n- ` range_a`: The first` RANGE<T>`value.\n- ` range_b`: The second` RANGE<T>`value.\n\n **Details** \n\nReturns`TRUE`if a part of`range_a`intersects with`range_b`, otherwise\nreturns`FALSE`.\n\n`T`must be of the same type for all inputs.\n\nTo get the part of the range that overlaps, use the[RANGE_INTERSECT](#range_intersect)function.\n\n **Return type** \n\n`BOOL`\n\n **Examples** \n\nIn the following query, the first and second ranges overlap between`2022-02-01`and`2022-04-15`:\n\n```\nSELECT RANGE_OVERLAPS(\n RANGE<DATE> '[2022-02-01, 2022-09-01)',\n RANGE<DATE> '[2021-06-15, 2022-04-15)') AS results;\n\n/*---------+\n | results |\n +---------+\n | TRUE |\n +---------*/\n```\n\nIn the following query, the first and second ranges don't overlap:\n\n```\nSELECT RANGE_OVERLAPS(\n RANGE<DATE> '[2020-02-01, 2020-09-01)',\n RANGE<DATE> '[2021-06-15, 2022-04-15)') AS results;\n\n/*---------+\n | results |\n +---------+\n | FALSE |\n +---------*/\n```\n\nIn the following query, the first and second ranges overlap between`2022-02-01`and`UNBOUNDED`:\n\n```\nSELECT RANGE_OVERLAPS(\n RANGE<DATE> '[2022-02-01, UNBOUNDED)',\n RANGE<DATE> '[2021-06-15, UNBOUNDED)') AS results;\n\n/*---------+\n | results |\n +---------+\n | TRUE |\n +---------*/\n```\n\n\n"
},
{
"name": "RANGE_SESSIONIZE",
"arguments": [],
"category": "Range",
"description_markdown": " **Preview** \n\nThis product or feature is subject to the \"Pre-GA Offerings Terms\"\n in the General Service Terms section of the[Service Specific Terms](/terms/service-terms).\n Pre-GA products and features are available \"as is\" and might have\n limited support. For more information, see the[launch stage descriptions](/products#product-launch-stages).\n\n **Note:** To provide feedback or request support for this feature, send an email to[bigquery-time-series-preview-support@google.com](mailto:bigquery-time-series-preview-support@google.com).```\nRANGE_SESSIONIZE(\n TABLE table_name,\n range_column,\n partitioning_columns\n)\n```\n\n```\nRANGE_SESSIONIZE(\n TABLE table_name,\n range_column,\n partitioning_columns,\n sessionize_option\n)\n```\n\n **Description** \n\nProduces a table of sessionized ranges.\n\n **Definitions** \n\n- ` table_name`: A table expression that represents the name of the table to\nconstruct. This can represent any relation with` range_column`.\n- ` range_column`: A` STRING`literal that indicates which` RANGE`column\nin a table contains the data to sessionize.\n- ` partitioning_columns`: An` ARRAY<STRING>`literal that indicates which\ncolumns should partition the data before the data is sessionized.\n- ` sessionize_option`: A` STRING`value that describes how order-adjacent\nranges are sessionized. Your choices are as follows:\n \n \n - ` MEETS`(default): Ranges that meet or overlap are sessionized.\n \n \n - ` OVERLAPS`: Only a range that is overlapped by another range is\nsessionized.\n \n If this argument is not provided,` MEETS`is used by default.\n \n \n\n **Details** \n\nThis function produces a table that includes all columns in the\ninput table and an additional`RANGE`column called`session_range`, which indicates the start and end of a session. 
The\nstart and end of each session is determined by the`sessionize_option`argument.\n\n **Return type** \n\n`TABLE`\n\n **Examples** \n\nThe examples in this section reference the following table called`my_sessionized_range_table`in a dataset called`mydataset`:\n\n```\nINSERT mydataset.my_sessionized_range_table (emp_id, dept_id, duration)\nVALUES(10, 1000, RANGE<DATE> '[2010-01-10, 2010-03-10)'),\n (10, 2000, RANGE<DATE> '[2010-03-10, 2010-07-15)'),\n (10, 2000, RANGE<DATE> '[2010-06-15, 2010-08-18)'),\n (20, 2000, RANGE<DATE> '[2010-03-10, 2010-07-20)'),\n (20, 1000, RANGE<DATE> '[2020-05-10, 2020-09-20)');\n\nSELECT * FROM mydataset.my_sessionized_range_table ORDER BY emp_id;\n\n/*--------+---------+--------------------------+\n | emp_id | dept_id | duration |\n +--------+---------+--------------------------+\n | 10 | 1000 | [2010-01-10, 2010-03-10) |\n | 10 | 2000 | [2010-03-10, 2010-07-15) |\n | 10 | 2000 | [2010-06-15, 2010-08-18) |\n | 20 | 2000 | [2010-03-10, 2010-07-20) |\n | 20 | 1000 | [2020-05-10, 2020-09-20) |\n +--------+---------+--------------------------*/\n```\n\nIn the following query, a table of sessionized data is produced for`my_sessionized_range_table`, and only ranges that meet or overlap are\nsessionized:\n\n```\nSELECT\n emp_id, duration, session_range\nFROM\n RANGE_SESSIONIZE(\n TABLE mydataset.my_sessionized_range_table,\n 'duration',\n ['emp_id'])\nORDER BY emp_id;\n\n/*--------+--------------------------+--------------------------+\n | emp_id | duration | session_range |\n +--------+--------------------------+--------------------------+\n | 10 | [2010-01-10, 2010-03-10) | [2010-01-10, 2010-08-18) |\n | 10 | [2010-03-10, 2010-07-15) | [2010-01-10, 2010-08-18) |\n | 10 | [2010-06-15, 2010-08-18) | [2010-01-10, 2010-08-18) |\n | 20 | [2010-03-10, 2010-07-20) | [2010-03-10, 2010-07-20) |\n | 20 | [2020-05-10, 2020-09-20) | [2020-05-10, 2020-09-20) |\n +--------+-----------------------------------------------------*/\n```\n\nIn the 
following query, a table of sessionized data is produced for`my_sessionized_range_table`, and only a range that is overlapped by another\nrange is sessionized:\n\n```\nSELECT\n emp_id, duration, session_range\nFROM\n RANGE_SESSIONIZE(\n TABLE mydataset.my_sessionized_range_table,\n 'duration',\n ['emp_id'],\n 'OVERLAPS')\nORDER BY emp_id;\n\n/*--------+--------------------------+--------------------------+\n | emp_id | duration | session_range |\n +--------+--------------------------+--------------------------+\n | 10 | [2010-03-10, 2010-07-15) | [2010-03-10, 2010-08-18) |\n | 10 | [2010-06-15, 2010-08-18) | [2010-03-10, 2010-08-18) |\n | 10 | [2010-01-10, 2010-03-10) | [2010-01-10, 2010-03-10) |\n | 20 | [2020-05-10, 2020-09-20) | [2020-05-10, 2020-09-20) |\n | 20 | [2010-03-10, 2010-07-20) | [2010-03-10, 2010-07-20) |\n +--------+-----------------------------------------------------*/\n```\n\nIf you need to normalize sessionized data, you can use a query similar to the\nfollowing:\n\n```\nSELECT emp_id, session_range AS normalized FROM (\n SELECT emp_id, session_range\n FROM RANGE_SESSIONIZE(\n TABLE mydataset.my_sessionized_range_table,\n 'duration',\n ['emp_id'],\n 'MEETS')\n)\nGROUP BY emp_id, normalized;\n\n/*--------+--------------------------+\n | emp_id | normalized |\n +--------+--------------------------+\n | 20 | [2010-03-10, 2010-07-20) |\n | 10 | [2010-01-10, 2010-08-18) |\n | 20 | [2020-05-10, 2020-09-20) |\n +--------+--------------------------*/\n```\n\n\n"
},
{
"name": "RANGE_START",
"arguments": [],
"category": "Range",
"description_markdown": " **Preview** \n\nThis product or feature is subject to the \"Pre-GA Offerings Terms\"\n in the General Service Terms section of the[Service Specific Terms](/terms/service-terms).\n Pre-GA products and features are available \"as is\" and might have\n limited support. For more information, see the[launch stage descriptions](/products#product-launch-stages).\n\n **Note:** To provide feedback or request support for this feature, send an email to[bigquery-time-series-preview-support@google.com](mailto:bigquery-time-series-preview-support@google.com).```\nRANGE_START(range_to_check)\n```\n\n **Description** \n\nGets the lower bound of a range.\n\n **Definitions** \n\n- ` range_to_check`: The` RANGE<T>`value.\n\n **Details** \n\nReturns`NULL`if the lower bound of`range_value`is`UNBOUNDED`.\n\nReturns`NULL`if`range_to_check`is`NULL`.\n\n **Return type** \n\n`T`in`range_value`\n\n **Examples** \n\nIn the following query, the lower bound of the range is retrieved:\n\n```\nSELECT RANGE_START(RANGE<DATE> '[2022-12-01, 2022-12-31)') AS results;\n\n/*------------+\n | results |\n +------------+\n | 2022-12-01 |\n +------------*/\n```\n\nIn the following query, the lower bound of the range is unbounded, so`NULL`is returned:\n\n```\nSELECT RANGE_START(RANGE<DATE> '[UNBOUNDED, 2022-12-31)') AS results;\n\n/*------------+\n | results |\n +------------+\n | NULL |\n +------------*/\n```\n\n\n<span id=\"search_functions\">\n## Search functions\n\n</span>\nGoogleSQL for BigQuery supports the following search functions.\n\n\n\n"
},
{
"name": "RANK",
"arguments": [],
"category": "Numbering",
"description_markdown": "```\nRANK()\nOVER over_clause\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n ORDER BY expression [ { ASC | DESC } ] [, ...]\n```\n\n **Description** \n\nReturns the ordinal (1-based) rank of each row within the ordered partition.\nAll peer rows receive the same rank value. The next row or set of peer rows\nreceives a rank value which increments by the number of peers with the previous\nrank value, instead of`DENSE_RANK`, which always increments by 1.\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Return Type** \n\n`INT64`\n\n **Examples** \n\n```\nWITH Numbers AS\n (SELECT 1 as x\n UNION ALL SELECT 2\n UNION ALL SELECT 2\n UNION ALL SELECT 5\n UNION ALL SELECT 8\n UNION ALL SELECT 10\n UNION ALL SELECT 10\n)\nSELECT x,\n RANK() OVER (ORDER BY x ASC) AS rank\nFROM Numbers\n\n/*-------------------------*\n | x | rank |\n +-------------------------+\n | 1 | 1 |\n | 2 | 2 |\n | 2 | 2 |\n | 5 | 4 |\n | 8 | 5 |\n | 10 | 6 |\n | 10 | 6 |\n *-------------------------*/\n```\n\n```\nWITH finishers AS\n (SELECT 'Sophia Liu' as name,\n TIMESTAMP '2016-10-18 2:51:45' as finish_time,\n 'F30-34' as division\n UNION ALL SELECT 'Lisa Stelzner', TIMESTAMP '2016-10-18 2:54:11', 'F35-39'\n UNION ALL SELECT 'Nikki Leith', TIMESTAMP '2016-10-18 2:59:01', 'F30-34'\n UNION ALL SELECT 'Lauren Matthews', TIMESTAMP '2016-10-18 3:01:17', 'F35-39'\n UNION ALL SELECT 'Desiree Berry', TIMESTAMP '2016-10-18 3:05:42', 'F35-39'\n UNION ALL SELECT 'Suzy Slane', TIMESTAMP '2016-10-18 3:06:24', 'F35-39'\n UNION ALL SELECT 'Jen Edwards', TIMESTAMP '2016-10-18 3:06:36', 'F30-34'\n UNION ALL SELECT 'Meghan Lederer', TIMESTAMP '2016-10-18 2:59:01', 'F30-34')\nSELECT name,\n finish_time,\n division,\n RANK() OVER (PARTITION BY division ORDER BY finish_time ASC) AS 
finish_rank\nFROM finishers;\n\n/*-----------------+------------------------+----------+-------------*\n | name | finish_time | division | finish_rank |\n +-----------------+------------------------+----------+-------------+\n | Sophia Liu | 2016-10-18 09:51:45+00 | F30-34 | 1 |\n | Meghan Lederer | 2016-10-18 09:59:01+00 | F30-34 | 2 |\n | Nikki Leith | 2016-10-18 09:59:01+00 | F30-34 | 2 |\n | Jen Edwards | 2016-10-18 10:06:36+00 | F30-34 | 4 |\n | Lisa Stelzner | 2016-10-18 09:54:11+00 | F35-39 | 1 |\n | Lauren Matthews | 2016-10-18 10:01:17+00 | F35-39 | 2 |\n | Desiree Berry | 2016-10-18 10:05:42+00 | F35-39 | 3 |\n | Suzy Slane | 2016-10-18 10:06:24+00 | F35-39 | 4 |\n *-----------------+------------------------+----------+-------------*/\n```\n\n\n"
},
{
"name": "REGEXP_CONTAINS",
"arguments": [],
"category": "String",
"description_markdown": "```\nREGEXP_CONTAINS(value, regexp)\n```\n\n **Description** \n\nReturns`TRUE`if`value`is a partial match for the regular expression,`regexp`.\n\nIf the`regexp`argument is invalid, the function returns an error.\n\nYou can search for a full match by using`^`(beginning of text) and`$`(end of\ntext). Due to regular expression operator precedence, it is good practice to use\nparentheses around everything between`^`and`$`.\n\n **Note:** GoogleSQL provides regular expression support using the[re2](https://github.com/google/re2/wiki/Syntax)library; see that documentation for its\nregular expression syntax. **Return type** \n\n`BOOL`\n\n **Examples** \n\n```\nSELECT\n email,\n REGEXP_CONTAINS(email, r'@[a-zA-Z0-9-]+\\.[a-zA-Z0-9-.]+') AS is_valid\nFROM\n (SELECT\n ['foo@example.com', 'bar@example.org', 'www.example.net']\n AS addresses),\n UNNEST(addresses) AS email;\n\n/*-----------------+----------*\n | email | is_valid |\n +-----------------+----------+\n | foo@example.com | true |\n | bar@example.org | true |\n | www.example.net | false |\n *-----------------+----------*/\n\n-- Performs a full match, using ^ and $. 
Due to regular expression operator\n-- precedence, it is good practice to use parentheses around everything between ^\n-- and $.\nSELECT\n email,\n REGEXP_CONTAINS(email, r'^([\\w.+-]+@foo\\.com|[\\w.+-]+@bar\\.org)$')\n AS valid_email_address,\n REGEXP_CONTAINS(email, r'^[\\w.+-]+@foo\\.com|[\\w.+-]+@bar\\.org$')\n AS without_parentheses\nFROM\n (SELECT\n ['a@foo.com', 'a@foo.computer', 'b@bar.org', '!b@bar.org', 'c@buz.net']\n AS addresses),\n UNNEST(addresses) AS email;\n\n/*----------------+---------------------+---------------------*\n | email | valid_email_address | without_parentheses |\n +----------------+---------------------+---------------------+\n | a@foo.com | true | true |\n | a@foo.computer | false | true |\n | b@bar.org | true | true |\n | !b@bar.org | false | true |\n | c@buz.net | false | false |\n *----------------+---------------------+---------------------*/\n```\n\n\n"
},
{
"name": "REGEXP_EXTRACT",
"arguments": [],
"category": "String",
"description_markdown": "```\nREGEXP_EXTRACT(value, regexp[, position[, occurrence]])\n```\n\n **Description** \n\nReturns the substring in`value`that matches the[re2 regular expression](https://github.com/google/re2/wiki/Syntax),`regexp`.\nReturns`NULL`if there is no match.\n\nIf the regular expression contains a capturing group (`(...)`), and there is a\nmatch for that capturing group, that match is returned. If there\nare multiple matches for a capturing group, the first match is returned.\n\nIf`position`is specified, the search starts at this\nposition in`value`, otherwise it starts at the beginning of`value`. The`position`must be a positive integer and cannot be 0. If`position`is greater\nthan the length of`value`,`NULL`is returned.\n\nIf`occurrence`is specified, the search returns a specific occurrence of the`regexp`in`value`, otherwise returns the first match. If`occurrence`is\ngreater than the number of matches found,`NULL`is returned. For`occurrence`> 1, the function searches for additional occurrences beginning\nwith the character following the previous occurrence.\n\nReturns an error if:\n\n- The regular expression is invalid\n- The regular expression has more than one capturing group\n- The` position`is not a positive integer\n- The` occurrence`is not a positive integer\n\n **Return type** \n\n`STRING`or`BYTES`\n\n **Examples** \n\n```\nWITH email_addresses AS\n (SELECT 'foo@example.com' as email\n UNION ALL\n SELECT 'bar@example.org' as email\n UNION ALL\n SELECT 'baz@example.net' as email)\n\nSELECT\n REGEXP_EXTRACT(email, r'^[a-zA-Z0-9_.+-]+')\n AS user_name\nFROM email_addresses;\n\n/*-----------*\n | user_name |\n +-----------+\n | foo |\n | bar |\n | baz |\n *-----------*/\n```\n\n```\nWITH email_addresses AS\n (SELECT 'foo@example.com' as email\n UNION ALL\n SELECT 'bar@example.org' as email\n UNION ALL\n SELECT 'baz@example.net' as email)\n\nSELECT\n REGEXP_EXTRACT(email, r'^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\\.([a-zA-Z0-9-.]+$)')\n AS 
top_level_domain\nFROM email_addresses;\n\n/*------------------*\n | top_level_domain |\n +------------------+\n | com |\n | org |\n | net |\n *------------------*/\n```\n\n```\nWITH\n characters AS (\n SELECT 'ab' AS value, '.b' AS regex UNION ALL\n SELECT 'ab' AS value, '(.)b' AS regex UNION ALL\n SELECT 'xyztb' AS value, '(.)+b' AS regex UNION ALL\n SELECT 'ab' AS value, '(z)?b' AS regex\n )\nSELECT value, regex, REGEXP_EXTRACT(value, regex) AS result FROM characters;\n\n/*-------+---------+----------*\n | value | regex | result |\n +-------+---------+----------+\n | ab | .b | ab |\n | ab | (.)b | a |\n | xyztb | (.)+b | t |\n | ab | (z)?b | NULL |\n *-------+---------+----------*/\n```\n\n```\nWITH example AS\n(SELECT 'Hello Helloo and Hellooo' AS value, 'H?ello+' AS regex, 1 as position,\n1 AS occurrence UNION ALL\nSELECT 'Hello Helloo and Hellooo', 'H?ello+', 1, 2 UNION ALL\nSELECT 'Hello Helloo and Hellooo', 'H?ello+', 1, 3 UNION ALL\nSELECT 'Hello Helloo and Hellooo', 'H?ello+', 1, 4 UNION ALL\nSELECT 'Hello Helloo and Hellooo', 'H?ello+', 2, 1 UNION ALL\nSELECT 'Hello Helloo and Hellooo', 'H?ello+', 3, 1 UNION ALL\nSELECT 'Hello Helloo and Hellooo', 'H?ello+', 3, 2 UNION ALL\nSELECT 'Hello Helloo and Hellooo', 'H?ello+', 3, 3 UNION ALL\nSELECT 'Hello Helloo and Hellooo', 'H?ello+', 20, 1 UNION ALL\nSELECT 'cats&dogs&rabbits' ,'\\\\w+&', 1, 2 UNION ALL\nSELECT 'cats&dogs&rabbits', '\\\\w+&', 2, 3\n)\nSELECT value, regex, position, occurrence, REGEXP_EXTRACT(value, regex,\nposition, occurrence) AS regexp_value FROM example;\n\n/*--------------------------+---------+----------+------------+--------------*\n | value | regex | position | occurrence | regexp_value |\n +--------------------------+---------+----------+------------+--------------+\n | Hello Helloo and Hellooo | H?ello+ | 1 | 1 | Hello |\n | Hello Helloo and Hellooo | H?ello+ | 1 | 2 | Helloo |\n | Hello Helloo and Hellooo | H?ello+ | 1 | 3 | Hellooo |\n | Hello Helloo and Hellooo | H?ello+ | 1 | 4 
| NULL |\n | Hello Helloo and Hellooo | H?ello+ | 2 | 1 | ello |\n | Hello Helloo and Hellooo | H?ello+ | 3 | 1 | Helloo |\n | Hello Helloo and Hellooo | H?ello+ | 3 | 2 | Hellooo |\n | Hello Helloo and Hellooo | H?ello+ | 3 | 3 | NULL |\n | Hello Helloo and Hellooo | H?ello+ | 20 | 1 | NULL |\n | cats&dogs&rabbits | \\w+& | 1 | 2 | dogs& |\n | cats&dogs&rabbits | \\w+& | 2 | 3 | NULL |\n *--------------------------+---------+----------+------------+--------------*/\n```\n\n\n"
},
{
"name": "REGEXP_EXTRACT_ALL",
"arguments": [],
"category": "String",
"description_markdown": "```\nREGEXP_EXTRACT_ALL(value, regexp)\n```\n\n **Description** \n\nReturns an array of all substrings of`value`that match the[re2 regular expression](https://github.com/google/re2/wiki/Syntax),`regexp`. Returns an empty array\nif there is no match.\n\nIf the regular expression contains a capturing group (`(...)`), and there is a\nmatch for that capturing group, that match is added to the results.\n\nThe`REGEXP_EXTRACT_ALL`function only returns non-overlapping matches. For\nexample, using this function to extract`ana`from`banana`returns only one\nsubstring, not two.\n\nReturns an error if:\n\n- The regular expression is invalid\n- The regular expression has more than one capturing group\n\n **Return type** \n\n`ARRAY<STRING>`or`ARRAY<BYTES>`\n\n **Examples** \n\n```\nWITH code_markdown AS\n (SELECT 'Try `function(x)` or `function(y)`' as code)\n\nSELECT\n REGEXP_EXTRACT_ALL(code, '`(.+?)`') AS example\nFROM code_markdown;\n\n/*----------------------------*\n | example |\n +----------------------------+\n | [function(x), function(y)] |\n *----------------------------*/\n```\n\n\n"
},
{
"name": "REGEXP_INSTR",
"arguments": [],
"category": "String",
"description_markdown": "```\nREGEXP_INSTR(source_value, regexp [, position[, occurrence, [occurrence_position]]])\n```\n\n **Description** \n\nReturns the lowest 1-based position of a regular expression,`regexp`, in`source_value`.`source_value`and`regexp`must be the same type, either`STRING`or`BYTES`.\n\nIf`position`is specified, the search starts at this position in`source_value`, otherwise it starts at`1`, which is the beginning of`source_value`.`position`is of type`INT64`and must be positive.\n\nIf`occurrence`is specified, the search returns the position of a specific\ninstance of`regexp`in`source_value`. If not specified,`occurrence`defaults\nto`1`and returns the position of the first occurrence. For`occurrence`> 1,\nthe function searches for the next, non-overlapping occurrence.`occurrence`is of type`INT64`and must be positive.\n\nYou can optionally use`occurrence_position`to specify where a position\nin relation to an`occurrence`starts. Your choices are:\n\n- ` 0`: Returns the start position of` occurrence`.\n- ` 1`: Returns the end position of` occurrence`+` 1`. 
If the\nend of the occurrence is at the end of` source_value`,` LENGTH(source_value) + 1`is returned.\n\nReturns`0`if:\n\n- No match is found.\n- ` occurrence`is greater than the number of matches found.\n- ` position`is greater than the length of` source_value`.\n- The regular expression is empty.\n\nReturns`NULL`if:\n\n- ` position`is` NULL`.\n- ` occurrence`is` NULL`.\n\nReturns an error if:\n\n- ` position`is` 0`or negative.\n- ` occurrence`is` 0`or negative.\n- ` occurrence_position`is neither` 0`nor` 1`.\n- The regular expression is invalid.\n- The regular expression has more than one capturing group.\n\n **Return type** \n\n`INT64`\n\n **Examples** \n\n```\nWITH example AS (\n SELECT 'ab@cd-ef' AS source_value, '@[^-]*' AS regexp UNION ALL\n SELECT 'ab@d-ef', '@[^-]*' UNION ALL\n SELECT 'abc@cd-ef', '@[^-]*' UNION ALL\n SELECT 'abc-ef', '@[^-]*')\nSELECT source_value, regexp, REGEXP_INSTR(source_value, regexp) AS instr\nFROM example;\n\n/*--------------+--------+-------*\n | source_value | regexp | instr |\n +--------------+--------+-------+\n | ab@cd-ef | @[^-]* | 3 |\n | ab@d-ef | @[^-]* | 3 |\n | abc@cd-ef | @[^-]* | 4 |\n | abc-ef | @[^-]* | 0 |\n *--------------+--------+-------*/\n```\n\n```\nWITH example AS (\n SELECT 'a@cd-ef b@cd-ef' AS source_value, '@[^-]*' AS regexp, 1 AS position UNION ALL\n SELECT 'a@cd-ef b@cd-ef', '@[^-]*', 2 UNION ALL\n SELECT 'a@cd-ef b@cd-ef', '@[^-]*', 3 UNION ALL\n SELECT 'a@cd-ef b@cd-ef', '@[^-]*', 4)\nSELECT\n source_value, regexp, position,\n REGEXP_INSTR(source_value, regexp, position) AS instr\nFROM example;\n\n/*-----------------+--------+----------+-------*\n | source_value | regexp | position | instr |\n +-----------------+--------+----------+-------+\n | a@cd-ef b@cd-ef | @[^-]* | 1 | 2 |\n | a@cd-ef b@cd-ef | @[^-]* | 2 | 2 |\n | a@cd-ef b@cd-ef | @[^-]* | 3 | 10 |\n | a@cd-ef b@cd-ef | @[^-]* | 4 | 10 |\n *-----------------+--------+----------+-------*/\n```\n\n```\nWITH example AS (\n SELECT 'a@cd-ef 
b@cd-ef c@cd-ef' AS source_value,\n '@[^-]*' AS regexp, 1 AS position, 1 AS occurrence UNION ALL\n SELECT 'a@cd-ef b@cd-ef c@cd-ef', '@[^-]*', 1, 2 UNION ALL\n SELECT 'a@cd-ef b@cd-ef c@cd-ef', '@[^-]*', 1, 3)\nSELECT\n source_value, regexp, position, occurrence,\n REGEXP_INSTR(source_value, regexp, position, occurrence) AS instr\nFROM example;\n\n/*-------------------------+--------+----------+------------+-------*\n | source_value | regexp | position | occurrence | instr |\n +-------------------------+--------+----------+------------+-------+\n | a@cd-ef b@cd-ef c@cd-ef | @[^-]* | 1 | 1 | 2 |\n | a@cd-ef b@cd-ef c@cd-ef | @[^-]* | 1 | 2 | 10 |\n | a@cd-ef b@cd-ef c@cd-ef | @[^-]* | 1 | 3 | 18 |\n *-------------------------+--------+----------+------------+-------*/\n```\n\n```\nWITH example AS (\n SELECT 'a@cd-ef' AS source_value, '@[^-]*' AS regexp,\n 1 AS position, 1 AS occurrence, 0 AS o_position UNION ALL\n SELECT 'a@cd-ef', '@[^-]*', 1, 1, 1)\nSELECT\n source_value, regexp, position, occurrence, o_position,\n REGEXP_INSTR(source_value, regexp, position, occurrence, o_position) AS instr\nFROM example;\n\n/*--------------+--------+----------+------------+------------+-------*\n | source_value | regexp | position | occurrence | o_position | instr |\n +--------------+--------+----------+------------+------------+-------+\n | a@cd-ef | @[^-]* | 1 | 1 | 0 | 2 |\n | a@cd-ef | @[^-]* | 1 | 1 | 1 | 5 |\n *--------------+--------+----------+------------+------------+-------*/\n```\n\n\n"
},
{
"name": "REGEXP_REPLACE",
"arguments": [],
"category": "String",
"description_markdown": "```\nREGEXP_REPLACE(value, regexp, replacement)\n```\n\n **Description** \n\nReturns a`STRING`where all substrings of`value`that\nmatch regular expression`regexp`are replaced with`replacement`.\n\nYou can use backslashed-escaped digits (\\1 to \\9) within the`replacement`argument to insert text matching the corresponding parenthesized group in the`regexp`pattern. Use \\0 to refer to the entire matching text.\n\nTo add a backslash in your regular expression, you must first escape it. For\nexample,`SELECT REGEXP_REPLACE('abc', 'b(.)', 'X\\\\1');`returns`aXc`. You can\nalso use[raw strings](/bigquery/docs/reference/standard-sql/lexical#string_and_bytes_literals)to remove one layer of\nescaping, for example`SELECT REGEXP_REPLACE('abc', 'b(.)', r'X\\1');`.\n\nThe`REGEXP_REPLACE`function only replaces non-overlapping matches. For\nexample, replacing`ana`within`banana`results in only one replacement, not\ntwo.\n\nIf the`regexp`argument is not a valid regular expression, this function\nreturns an error.\n\n **Note:** GoogleSQL provides regular expression support using the[re2](https://github.com/google/re2/wiki/Syntax)library; see that documentation for its\nregular expression syntax. **Return type** \n\n`STRING`or`BYTES`\n\n **Examples** \n\n```\nWITH markdown AS\n (SELECT '# Heading' as heading\n UNION ALL\n SELECT '# Another heading' as heading)\n\nSELECT\n REGEXP_REPLACE(heading, r'^# ([a-zA-Z0-9\\s]+$)', '<h1>\\\\1</h1>')\n AS html\nFROM markdown;\n\n/*--------------------------*\n | html |\n +--------------------------+\n | <h1>Heading</h1> |\n | <h1>Another heading</h1> |\n *--------------------------*/\n```\n\n\n"
},
{
"name": "REGEXP_SUBSTR",
"arguments": [],
"category": "String",
"description_markdown": "```\nREGEXP_SUBSTR(value, regexp[, position[, occurrence]])\n```\n\n **Description** \n\nSynonym for[REGEXP_EXTRACT](#regexp_extract).\n\n **Return type** \n\n`STRING`or`BYTES`\n\n **Examples** \n\n```\nWITH example AS\n(SELECT 'Hello World Helloo' AS value, 'H?ello+' AS regex, 1 AS position, 1 AS\noccurrence\n)\nSELECT value, regex, position, occurrence, REGEXP_SUBSTR(value, regex,\nposition, occurrence) AS regexp_value FROM example;\n\n/*--------------------+---------+----------+------------+--------------*\n | value | regex | position | occurrence | regexp_value |\n +--------------------+---------+----------+------------+--------------+\n | Hello World Helloo | H?ello+ | 1 | 1 | Hello |\n *--------------------+---------+----------+------------+--------------*/\n```\n\n\n"
},
{
"name": "REPEAT",
"arguments": [],
"category": "String",
"description_markdown": "```\nREPEAT(original_value, repetitions)\n```\n\n **Description** \n\nReturns a`STRING`or`BYTES`value that consists of`original_value`, repeated.\nThe`repetitions`parameter specifies the number of times to repeat`original_value`. Returns`NULL`if either`original_value`or`repetitions`are`NULL`.\n\nThis function returns an error if the`repetitions`value is negative.\n\n **Return type** \n\n`STRING`or`BYTES`\n\n **Examples** \n\n```\nSELECT t, n, REPEAT(t, n) AS REPEAT FROM UNNEST([\n STRUCT('abc' AS t, 3 AS n),\n ('例子', 2),\n ('abc', null),\n (null, 3)\n]);\n\n/*------+------+-----------*\n | t | n | REPEAT |\n |------|------|-----------|\n | abc | 3 | abcabcabc |\n | 例子 | 2 | 例子例子 |\n | abc | NULL | NULL |\n | NULL | 3 | NULL |\n *------+------+-----------*/\n```\n\n\n"
},
{
"name": "REPLACE",
"arguments": [],
"category": "String",
"description_markdown": "```\nREPLACE(original_value, from_pattern, to_pattern)\n```\n\n **Description** \n\nReplaces all occurrences of`from_pattern`with`to_pattern`in`original_value`. If`from_pattern`is empty, no replacement is made.\n\nThis function supports specifying[collation](/bigquery/docs/reference/standard-sql/collation-concepts#collate_about).\n\n **Return type** \n\n`STRING`or`BYTES`\n\n **Examples** \n\n```\nWITH desserts AS\n (SELECT 'apple pie' as dessert\n UNION ALL\n SELECT 'blackberry pie' as dessert\n UNION ALL\n SELECT 'cherry pie' as dessert)\n\nSELECT\n REPLACE (dessert, 'pie', 'cobbler') as example\nFROM desserts;\n\n/*--------------------*\n | example |\n +--------------------+\n | apple cobbler |\n | blackberry cobbler |\n | cherry cobbler |\n *--------------------*/\n```\n\n\n"
},
{
"name": "REVERSE",
"arguments": [],
"category": "String",
"description_markdown": "```\nREVERSE(value)\n```\n\n **Description** \n\nReturns the reverse of the input`STRING`or`BYTES`.\n\n **Return type** \n\n`STRING`or`BYTES`\n\n **Examples** \n\n```\nWITH example AS (\n SELECT 'foo' AS sample_string, b'bar' AS sample_bytes UNION ALL\n SELECT 'абвгд' AS sample_string, b'123' AS sample_bytes\n)\nSELECT\n sample_string,\n REVERSE(sample_string) AS reverse_string,\n sample_bytes,\n REVERSE(sample_bytes) AS reverse_bytes\nFROM example;\n\n/*---------------+----------------+--------------+---------------*\n | sample_string | reverse_string | sample_bytes | reverse_bytes |\n +---------------+----------------+--------------+---------------+\n | foo | oof | bar | rab |\n | абвгд | дгвба | 123 | 321 |\n *---------------+----------------+--------------+---------------*/\n```\n\n\n"
},
{
"name": "RIGHT",
"arguments": [],
"category": "String",
"description_markdown": "```\nRIGHT(value, length)\n```\n\n **Description** \n\nReturns a`STRING`or`BYTES`value that consists of the specified\nnumber of rightmost characters or bytes from`value`. The`length`is an`INT64`that specifies the length of the returned\nvalue. If`value`is`BYTES`,`length`is the number of rightmost bytes to\nreturn. If`value`is`STRING`,`length`is the number of rightmost characters\nto return.\n\nIf`length`is 0, an empty`STRING`or`BYTES`value will be\nreturned. If`length`is negative, an error will be returned. If`length`exceeds the number of characters or bytes from`value`, the original`value`will be returned.\n\n **Return type** \n\n`STRING`or`BYTES`\n\n **Examples** \n\n```\nWITH examples AS\n(SELECT 'apple' as example\nUNION ALL\nSELECT 'banana' as example\nUNION ALL\nSELECT 'абвгд' as example\n)\nSELECT example, RIGHT(example, 3) AS right_example\nFROM examples;\n\n/*---------+---------------*\n | example | right_example |\n +---------+---------------+\n | apple | ple |\n | banana | ana |\n | абвгд | вгд |\n *---------+---------------*/\n```\n\n```\nWITH examples AS\n(SELECT b'apple' as example\nUNION ALL\nSELECT b'banana' as example\nUNION ALL\nSELECT b'\\xab\\xcd\\xef\\xaa\\xbb' as example\n)\nSELECT example, RIGHT(example, 3) AS right_example\nFROM examples;\n\n-- Note that the result of RIGHT is of type BYTES, displayed as a base64-encoded string.\n/*----------+---------------*\n | example | right_example |\n +----------+---------------+\n | YXBwbGU= | cGxl |\n | YmFuYW5h | YW5h |\n | q83vqrs= | 76q7 |\n *----------+---------------*/\n```\n\n\n"
},
{
"name": "ROUND",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nROUND(X [, N [, rounding_mode]])\n```\n\n **Description** \n\nIf only X is present, rounds X to the nearest integer. If N is present,\nrounds X to N decimal places after the decimal point. If N is negative,\nrounds off digits to the left of the decimal point. Rounds halfway cases\naway from zero. Generates an error if overflow occurs.\n\nIf X is a`NUMERIC`or`BIGNUMERIC`type, then you can\nexplicitly set`rounding_mode`to one of the following:\n\n- [\"ROUND_HALF_AWAY_FROM_ZERO\"](https://en.wikipedia.org/wiki/Rounding#Rounding_half_away_from_zero): (Default) Rounds\nhalfway cases away from zero.\n- [\"ROUND_HALF_EVEN\"](https://en.wikipedia.org/wiki/Rounding#Rounding_half_to_even): Rounds halfway cases\ntowards the nearest even digit.\n\nIf you set the`rounding_mode`and X is not a`NUMERIC`or`BIGNUMERIC`type,\nthen the function generates an error.\n\n| Expression | Return Value |\n| --- | --- |\n| `ROUND(2.0)` | 2.0 |\n| `ROUND(2.3)` | 2.0 |\n| `ROUND(2.8)` | 3.0 |\n| `ROUND(2.5)` | 3.0 |\n| `ROUND(-2.3)` | -2.0 |\n| `ROUND(-2.8)` | -3.0 |\n| `ROUND(-2.5)` | -3.0 |\n| `ROUND(0)` | 0 |\n| `ROUND(+inf)` | `+inf` |\n| `ROUND(-inf)` | `-inf` |\n| `ROUND(NaN)` | `NaN` |\n| `ROUND(123.7, -1)` | 120.0 |\n| `ROUND(1.235, 2)` | 1.24 |\n| `ROUND(NUMERIC \"2.25\", 1, \"ROUND_HALF_EVEN\")` | 2.2 |\n| `ROUND(NUMERIC \"2.35\", 1, \"ROUND_HALF_EVEN\")` | 2.4 |\n| `ROUND(NUMERIC \"2.251\", 1, \"ROUND_HALF_EVEN\")` | 2.3 |\n| `ROUND(NUMERIC \"-2.5\", 0, \"ROUND_HALF_EVEN\")` | -2 |\n| `ROUND(NUMERIC \"2.5\", 0, \"ROUND_HALF_AWAY_FROM_ZERO\")` | 3 |\n| `ROUND(NUMERIC \"-2.5\", 0, \"ROUND_HALF_AWAY_FROM_ZERO\")` | -3 |\n\n **Return Data Type** \n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| --- | --- | --- | --- | --- |\n| OUTPUT | `FLOAT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n\n\n\n"
},
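The two rounding modes differ only on halfway cases. A minimal sketch of the contrast, using values taken from the table above (`half_even` assumes a `NUMERIC` input, since `rounding_mode` is only valid for `NUMERIC` and `BIGNUMERIC`; the column aliases are illustrative):

```sql
-- Default ROUND_HALF_AWAY_FROM_ZERO vs. explicit ROUND_HALF_EVEN.
-- 2.5 is exactly halfway: the default rounds away from zero (3.0),
-- while ROUND_HALF_EVEN rounds to the even neighbor (2).
SELECT
  ROUND(2.5) AS default_mode,                              -- 3.0
  ROUND(NUMERIC "2.5", 0, "ROUND_HALF_EVEN") AS half_even  -- 2
```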
{
"name": "ROW_NUMBER",
"arguments": [],
"category": "Numbering",
"description_markdown": "```\nROW_NUMBER()\nOVER over_clause\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n```\n\n **Description** \n\nDoes not require the`ORDER BY`clause. Returns the sequential\nrow ordinal (1-based) of each row for each ordered partition. If the`ORDER BY`clause is unspecified then the result is\nnon-deterministic.\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Return Type** \n\n`INT64`\n\n **Examples** \n\n```\nWITH Numbers AS\n (SELECT 1 as x\n UNION ALL SELECT 2\n UNION ALL SELECT 2\n UNION ALL SELECT 5\n UNION ALL SELECT 8\n UNION ALL SELECT 10\n UNION ALL SELECT 10\n)\nSELECT x,\n ROW_NUMBER() OVER (ORDER BY x) AS row_num\nFROM Numbers\n\n/*-------------------------*\n | x | row_num |\n +-------------------------+\n | 1 | 1 |\n | 2 | 2 |\n | 2 | 3 |\n | 5 | 4 |\n | 8 | 5 |\n | 10 | 6 |\n | 10 | 7 |\n *-------------------------*/\n```\n\n```\nWITH finishers AS\n (SELECT 'Sophia Liu' as name,\n TIMESTAMP '2016-10-18 2:51:45' as finish_time,\n 'F30-34' as division\n UNION ALL SELECT 'Lisa Stelzner', TIMESTAMP '2016-10-18 2:54:11', 'F35-39'\n UNION ALL SELECT 'Nikki Leith', TIMESTAMP '2016-10-18 2:59:01', 'F30-34'\n UNION ALL SELECT 'Lauren Matthews', TIMESTAMP '2016-10-18 3:01:17', 'F35-39'\n UNION ALL SELECT 'Desiree Berry', TIMESTAMP '2016-10-18 3:05:42', 'F35-39'\n UNION ALL SELECT 'Suzy Slane', TIMESTAMP '2016-10-18 3:06:24', 'F35-39'\n UNION ALL SELECT 'Jen Edwards', TIMESTAMP '2016-10-18 3:06:36', 'F30-34'\n UNION ALL SELECT 'Meghan Lederer', TIMESTAMP '2016-10-18 2:59:01', 'F30-34')\nSELECT name,\n finish_time,\n division,\n ROW_NUMBER() OVER (PARTITION BY division ORDER BY finish_time ASC) AS finish_rank\nFROM 
finishers;\n\n/*-----------------+------------------------+----------+-------------*\n | name            | finish_time            | division | finish_rank |\n +-----------------+------------------------+----------+-------------+\n | Sophia Liu      | 2016-10-18 09:51:45+00 | F30-34   | 1           |\n | Meghan Lederer  | 2016-10-18 09:59:01+00 | F30-34   | 2           |\n | Nikki Leith     | 2016-10-18 09:59:01+00 | F30-34   | 3           |\n | Jen Edwards     | 2016-10-18 10:06:36+00 | F30-34   | 4           |\n | Lisa Stelzner   | 2016-10-18 09:54:11+00 | F35-39   | 1           |\n | Lauren Matthews | 2016-10-18 10:01:17+00 | F35-39   | 2           |\n | Desiree Berry   | 2016-10-18 10:05:42+00 | F35-39   | 3           |\n | Suzy Slane      | 2016-10-18 10:06:24+00 | F35-39   | 4           |\n *-----------------+------------------------+----------+-------------*/\n```\n\n\n"
},
{
"name": "RPAD",
"arguments": [],
"category": "String",
"description_markdown": "```\nRPAD(original_value, return_length[, pattern])\n```\n\n **Description** \n\nReturns a`STRING`or`BYTES`value that consists of`original_value`appended\nwith`pattern`. The`return_length`parameter is an`INT64`that specifies the length of the\nreturned value. If`original_value`is`BYTES`,`return_length`is the number of bytes. If`original_value`is`STRING`,`return_length`is the number of characters.\n\nThe default value of`pattern`is a blank space.\n\nBoth`original_value`and`pattern`must be the same data type.\n\nIf`return_length`is less than or equal to the`original_value`length, this\nfunction returns the`original_value`value, truncated to the value of`return_length`. For example,`RPAD('hello world', 7);`returns`'hello w'`.\n\nIf`original_value`,`return_length`, or`pattern`is`NULL`, this function\nreturns`NULL`.\n\nThis function returns an error if:\n\n- ` return_length`is negative\n- ` pattern`is empty\n\n **Return type** \n\n`STRING`or`BYTES`\n\n **Examples** \n\n```\nSELECT t, len, FORMAT('%T', RPAD(t, len)) AS RPAD FROM UNNEST([\n STRUCT('abc' AS t, 5 AS len),\n ('abc', 2),\n ('例子', 4)\n]);\n\n/*------+-----+----------*\n | t | len | RPAD |\n +------+-----+----------+\n | abc | 5 | \"abc \" |\n | abc | 2 | \"ab\" |\n | 例子 | 4 | \"例子 \" |\n *------+-----+----------*/\n```\n\n```\nSELECT t, len, pattern, FORMAT('%T', RPAD(t, len, pattern)) AS RPAD FROM UNNEST([\n STRUCT('abc' AS t, 8 AS len, 'def' AS pattern),\n ('abc', 5, '-'),\n ('例子', 5, '中文')\n]);\n\n/*------+-----+---------+--------------*\n | t | len | pattern | RPAD |\n +------+-----+---------+--------------+\n | abc | 8 | def | \"abcdefde\" |\n | abc | 5 | - | \"abc--\" |\n | 例子 | 5 | 中文 | \"例子中文中\" |\n *------+-----+---------+--------------*/\n```\n\n```\nSELECT FORMAT('%T', t) AS t, len, FORMAT('%T', RPAD(t, len)) AS RPAD FROM UNNEST([\n STRUCT(b'abc' AS t, 5 AS len),\n (b'abc', 2),\n (b'\\xab\\xcd\\xef', 4)\n]);\n\n/*-----------------+-----+------------------*\n | t | len | RPAD 
|\n +-----------------+-----+------------------+\n | b\"abc\" | 5 | b\"abc \" |\n | b\"abc\" | 2 | b\"ab\" |\n | b\"\\xab\\xcd\\xef\" | 4 | b\"\\xab\\xcd\\xef \" |\n *-----------------+-----+------------------*/\n```\n\n```\nSELECT\n FORMAT('%T', t) AS t,\n len,\n FORMAT('%T', pattern) AS pattern,\n FORMAT('%T', RPAD(t, len, pattern)) AS RPAD\nFROM UNNEST([\n STRUCT(b'abc' AS t, 8 AS len, b'def' AS pattern),\n (b'abc', 5, b'-'),\n (b'\\xab\\xcd\\xef', 5, b'\\x00')\n]);\n\n/*-----------------+-----+---------+-------------------------*\n | t | len | pattern | RPAD |\n +-----------------+-----+---------+-------------------------+\n | b\"abc\" | 8 | b\"def\" | b\"abcdefde\" |\n | b\"abc\" | 5 | b\"-\" | b\"abc--\" |\n | b\"\\xab\\xcd\\xef\" | 5 | b\"\\x00\" | b\"\\xab\\xcd\\xef\\x00\\x00\" |\n *-----------------+-----+---------+-------------------------*/\n```\n\n\n"
},
{
"name": "RTRIM",
"arguments": [],
"category": "String",
"description_markdown": "```\nRTRIM(value1[, value2])\n```\n\n **Description** \n\nIdentical to[TRIM](#trim), but only removes trailing characters.\n\n **Return type** \n\n`STRING`or`BYTES`\n\n **Examples** \n\n```\nWITH items AS\n (SELECT '***apple***' as item\n UNION ALL\n SELECT '***banana***' as item\n UNION ALL\n SELECT '***orange***' as item)\n\nSELECT\n RTRIM(item, '*') as example\nFROM items;\n\n/*-----------*\n | example |\n +-----------+\n | ***apple |\n | ***banana |\n | ***orange |\n *-----------*/\n```\n\n```\nWITH items AS\n (SELECT 'applexxx' as item\n UNION ALL\n SELECT 'bananayyy' as item\n UNION ALL\n SELECT 'orangezzz' as item\n UNION ALL\n SELECT 'pearxyz' as item)\n\nSELECT\n RTRIM(item, 'xyz') as example\nFROM items;\n\n/*---------*\n | example |\n +---------+\n | apple |\n | banana |\n | orange |\n | pear |\n *---------*/\n```\n\n\n"
},
{
"name": "S2_CELLIDFROMPOINT",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nS2_CELLIDFROMPOINT(point_geography[, level => cell_level])\n```\n\n **Description** \n\nReturns the[S2 cell ID](https://s2geometry.io/devguide/s2cell_hierarchy)covering a point`GEOGRAPHY`.\n\n- The optional` INT64`parameter` level`specifies the S2 cell level for the\nreturned cell. Naming this argument is optional.\n\nThis is advanced functionality for interoperability with systems utilizing the[S2 Geometry Library](https://s2geometry.io/).\n\n **Constraints** \n\n- Returns the cell ID as a signed` INT64`bit-equivalent to[unsigned 64-bit integer representation](https://s2geometry.io/devguide/s2cell_hierarchy).\n- Can return negative cell IDs.\n- Valid S2 cell levels are 0 to 30.\n- ` level`defaults to 30 if not explicitly specified.\n- The function only supports a single point GEOGRAPHY. Use the` SAFE`prefix if\nthe input can be multipoint, linestring, polygon, or an empty` GEOGRAPHY`.\n- To compute the covering of a complex` GEOGRAPHY`, use[S2_COVERINGCELLIDS](#s2_coveringcellids).\n\n **Return type** \n\n`INT64`\n\n **Example** \n\n```\nWITH data AS (\n SELECT 1 AS id, ST_GEOGPOINT(-122, 47) AS geo\n UNION ALL\n -- empty geography is not supported\n SELECT 2 AS id, ST_GEOGFROMTEXT('POINT EMPTY') AS geo\n UNION ALL\n -- only points are supported\n SELECT 3 AS id, ST_GEOGFROMTEXT('LINESTRING(1 2, 3 4)') AS geo\n)\nSELECT id,\n SAFE.S2_CELLIDFROMPOINT(geo) cell30,\n SAFE.S2_CELLIDFROMPOINT(geo, level => 10) cell10\nFROM data;\n\n/*----+---------------------+---------------------*\n | id | cell30 | cell10 |\n +----+---------------------+---------------------+\n | 1 | 6093613931972369317 | 6093613287902019584 |\n | 2 | NULL | NULL |\n | 3 | NULL | NULL |\n *----+---------------------+---------------------*/\n```\n\n\n"
},
{
"name": "S2_COVERINGCELLIDS",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nS2_COVERINGCELLIDS(\n geography\n [, min_level => cell_level]\n [, max_level => cell_level]\n [, max_cells => max_cells]\n [, buffer => buffer])\n```\n\n **Description** \n\nReturns an array of[S2 cell IDs](https://s2geometry.io/devguide/s2cell_hierarchy)that cover the input`GEOGRAPHY`. The function returns at most`max_cells`cells. The optional\narguments`min_level`and`max_level`specify minimum and maximum levels for\nreturned S2 cells. The array size is limited by the optional`max_cells`argument. The optional`buffer`argument specifies a buffering factor in\nmeters; the region being covered is expanded from the extent of the\ninput geography by this amount.\n\nThis is advanced functionality for interoperability with systems utilizing the[S2 Geometry Library](https://s2geometry.io/).\n\n **Constraints** \n\n- Returns the cell ID as a signed` INT64`bit-equivalent to[unsigned 64-bit integer representation](https://s2geometry.io/devguide/s2cell_hierarchy).\n- Can return negative cell IDs.\n- Valid S2 cell levels are 0 to 30.\n- ` max_cells`defaults to 8 if not explicitly specified.\n- ` buffer`should be nonnegative. 
It defaults to 0.0 meters if not explicitly\nspecified.\n\n **Return type** \n\n`ARRAY<INT64>`\n\n **Example** \n\n```\nWITH data AS (\n SELECT 1 AS id, ST_GEOGPOINT(-122, 47) AS geo\n UNION ALL\n SELECT 2 AS id, ST_GEOGFROMTEXT('POINT EMPTY') AS geo\n UNION ALL\n SELECT 3 AS id, ST_GEOGFROMTEXT('LINESTRING(-122.12 47.67, -122.19 47.69)') AS geo\n)\nSELECT id, S2_COVERINGCELLIDS(geo, min_level => 12) cells\nFROM data;\n\n/*----+--------------------------------------------------------------------------------------*\n | id | cells |\n +----+--------------------------------------------------------------------------------------+\n | 1 | [6093613931972369317] |\n | 2 | [] |\n | 3 | [6093384954555662336, 6093390709811838976, 6093390735581642752, 6093390740145045504, |\n | | 6093390791416217600, 6093390812891054080, 6093390817187069952, 6093496378892222464] |\n *----+--------------------------------------------------------------------------------------*/\n```\n\n\n"
},
{
"name": "SAFE_ADD",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nSAFE_ADD(X, Y)\n```\n\n **Description** \n\nEquivalent to the addition operator (`+`), but returns`NULL`if overflow occurs.\n\n| X | Y | SAFE_ADD(X, Y) |\n| --- | --- | --- |\n| 5 | 4 | 9 |\n\n **Return Data Type** \n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| --- | --- | --- | --- | --- |\n| `INT64` | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `NUMERIC` | `NUMERIC` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `BIGNUMERIC` | `BIGNUMERIC` | `BIGNUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `FLOAT64` | `FLOAT64` | `FLOAT64` | `FLOAT64` | `FLOAT64` |\n\n\n\n"
},
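A minimal sketch of the overflow behavior described above (the second call assumes `INT64` inputs, whose maximum value is 9223372036854775807; the column aliases are illustrative):

```sql
-- SAFE_ADD returns NULL rather than raising an error on overflow.
SELECT
  SAFE_ADD(5, 4) AS sum_ok,                         -- 9
  SAFE_ADD(9223372036854775807, 1) AS sum_overflow  -- NULL: exceeds INT64 max
```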
{
"name": "SAFE_CAST",
"arguments": [],
"category": "Conversion",
"description_markdown": "```\nSAFE_CAST(expression AS typename [format_clause])\n```\n\n **Description** \n\nWhen using`CAST`, a query can fail if GoogleSQL is unable to perform\nthe cast. For example, the following query generates an error:\n\n```\nSELECT CAST(\"apple\" AS INT64) AS not_a_number;\n```\n\nIf you want to protect your queries from these types of errors, you can use`SAFE_CAST`.`SAFE_CAST`replaces runtime errors with`NULL`s. However, during\nstatic analysis, impossible casts between two non-castable types still produce\nan error because the query is invalid.\n\n```\nSELECT SAFE_CAST(\"apple\" AS INT64) AS not_a_number;\n\n/*--------------*\n | not_a_number |\n +--------------+\n | NULL |\n *--------------*/\n```\n\nSome casts can include a[format clause](/bigquery/docs/reference/standard-sql/format-elements#formatting_syntax), which provides\ninstructions for how to conduct the\ncast. For example, you could\ninstruct a cast to convert a sequence of bytes to a BASE64-encoded string\ninstead of a UTF-8-encoded string.\n\nThe structure of the format clause is unique to each type of cast and more\ninformation is available in the section for that cast.\n\nIf you are casting from bytes to strings, you can also use the\nfunction,[SAFE_CONVERT_BYTES_TO_STRING](#safe_convert_bytes_to_string). Any invalid UTF-8 characters\nare replaced with the unicode replacement character,`U+FFFD`.\n\n\n\n"
},
{
"name": "SAFE_CONVERT_BYTES_TO_STRING",
"arguments": [],
"category": "String",
"description_markdown": "```\nSAFE_CONVERT_BYTES_TO_STRING(value)\n```\n\n **Description** \n\nConverts a sequence of`BYTES`to a`STRING`. Any invalid UTF-8 characters are\nreplaced with the Unicode replacement character,`U+FFFD`.\n\n **Return type** \n\n`STRING`\n\n **Examples** \n\nThe following statement returns the Unicode replacement character, �.\n\n```\nSELECT SAFE_CONVERT_BYTES_TO_STRING(b'\\xc2') as safe_convert;\n```\n\n\n"
},
{
"name": "SAFE_DIVIDE",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nSAFE_DIVIDE(X, Y)\n```\n\n **Description** \n\nEquivalent to the division operator (`X / Y`), but returns`NULL`if an error occurs, such as a division by zero error.\n\n| X | Y | SAFE_DIVIDE(X, Y) |\n| --- | --- | --- |\n| 20 | 4 | 5 |\n| 0 | 20 | `0` |\n| 20 | 0 | `NULL` |\n\n **Return Data Type** \n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| --- | --- | --- | --- | --- |\n| `INT64` | `FLOAT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `NUMERIC` | `NUMERIC` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `BIGNUMERIC` | `BIGNUMERIC` | `BIGNUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `FLOAT64` | `FLOAT64` | `FLOAT64` | `FLOAT64` | `FLOAT64` |\n\n\n\n"
},
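A minimal sketch of the division-by-zero behavior described above (column aliases are illustrative):

```sql
-- SAFE_DIVIDE returns NULL instead of a division-by-zero error.
SELECT
  SAFE_DIVIDE(20, 4) AS ok,          -- 5.0 (INT64 inputs return FLOAT64)
  SAFE_DIVIDE(20, 0) AS div_by_zero  -- NULL instead of an error
```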
{
"name": "SAFE_MULTIPLY",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nSAFE_MULTIPLY(X, Y)\n```\n\n **Description** \n\nEquivalent to the multiplication operator (`*`), but returns`NULL`if overflow occurs.\n\n| X | Y | SAFE_MULTIPLY(X, Y) |\n| --- | --- | --- |\n| 20 | 4 | 80 |\n\n **Return Data Type** \n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| --- | --- | --- | --- | --- |\n| `INT64` | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `NUMERIC` | `NUMERIC` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `BIGNUMERIC` | `BIGNUMERIC` | `BIGNUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `FLOAT64` | `FLOAT64` | `FLOAT64` | `FLOAT64` | `FLOAT64` |\n\n\n\n"
},
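A minimal sketch of the overflow behavior described above (the second call assumes `INT64` inputs, whose maximum value is 9223372036854775807; the column aliases are illustrative):

```sql
-- SAFE_MULTIPLY returns NULL rather than raising an error on overflow.
SELECT
  SAFE_MULTIPLY(20, 4) AS product_ok,                        -- 80
  SAFE_MULTIPLY(9223372036854775807, 2) AS product_overflow  -- NULL
```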
{
"name": "SAFE_NEGATE",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nSAFE_NEGATE(X)\n```\n\n **Description** \n\nEquivalent to the unary minus operator (`-`), but returns`NULL`if overflow occurs.\n\n| X | SAFE_NEGATE(X) |\n| --- | --- |\n| +1 | -1 |\n| -1 | +1 |\n| 0 | 0 |\n\n **Return Data Type** \n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| --- | --- | --- | --- | --- |\n| OUTPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n\n\n\n"
},
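A minimal sketch of the overflow case described above. The only `INT64` value that overflows on negation is the minimum, which is computed here as `-9223372036854775807 - 1` to stay within literal range (column aliases are illustrative):

```sql
-- The INT64 minimum has no positive counterpart, so negating it
-- overflows and SAFE_NEGATE returns NULL instead of an error.
SELECT
  SAFE_NEGATE(1) AS negated,                           -- -1
  SAFE_NEGATE(-9223372036854775807 - 1) AS negate_min  -- NULL
```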
{
"name": "SAFE_SUBTRACT",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nSAFE_SUBTRACT(X, Y)\n```\n\n **Description** \n\nReturns the result of Y subtracted from X.\nEquivalent to the subtraction operator (`-`), but returns`NULL`if overflow occurs.\n\n| X | Y | SAFE_SUBTRACT(X, Y) |\n| --- | --- | --- |\n| 5 | 4 | 1 |\n\n **Return Data Type** \n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| --- | --- | --- | --- | --- |\n| `INT64` | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `NUMERIC` | `NUMERIC` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `BIGNUMERIC` | `BIGNUMERIC` | `BIGNUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| `FLOAT64` | `FLOAT64` | `FLOAT64` | `FLOAT64` | `FLOAT64` |\n\n\n\n"
},
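A minimal sketch of the overflow behavior described above (the second call assumes `INT64` inputs, whose minimum value is -9223372036854775808; the column aliases are illustrative):

```sql
-- SAFE_SUBTRACT returns NULL rather than raising an error on overflow.
SELECT
  SAFE_SUBTRACT(5, 4) AS diff_ok,                          -- 1
  SAFE_SUBTRACT(-9223372036854775807, 2) AS diff_overflow  -- NULL: below INT64 min
```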
{
"name": "SEARCH",
"arguments": [],
"category": "Search",
"description_markdown": "```\nSEARCH(\n data_to_search, search_query\n [, json_scope=>{ 'JSON_VALUES' | 'JSON_KEYS' | 'JSON_KEYS_AND_VALUES' }]\n [, analyzer=>{ 'LOG_ANALYZER' | 'NO_OP_ANALYZER' | 'PATTERN_ANALYZER'}]\n [, analyzer_options=>analyzer_options_values]\n)\n```\n\n **Description** \n\nThe`SEARCH`function checks to see whether a BigQuery table or other\nsearch data contains a set of search terms (tokens). It returns`TRUE`if all\nsearch terms appear in the data, based on the[rules for search_query](#search_query_rules)and text analysis described in the[text analyzer](/bigquery/docs/reference/standard-sql/text-analysis). Otherwise,\nthis function returns`FALSE`.\n\n **Definitions** \n\n<span id=\"data_to_search_arg\"></span>\n\n- ` data_to_search`: The data to search over. The value can be:\n \n \n - Any GoogleSQL data type literal\n - A list of columns\n - A table reference\n - A column of any typeA table reference is evaluated as a` STRUCT`whose fields are the columns of\nthe table.` data_to_search`can be any type, but` SEARCH`will return` FALSE`for all types except those listed here:\n \n \n - ` ARRAY<STRING>`\n - ` ARRAY<STRUCT>`\n - ` JSON`\n - ` STRING`\n - ` STRUCT`You can search for string literals in columns of the preceding types.\nFor additional rules, see[Search data rules](#data_to_search_rules).\n \n \n\n<span id=\"search_query_arg\"></span>\n\n- ` search_query`: A` STRING`literal, or a` STRING`constant expression that\nrepresents the terms of the search query. If` search_query`is` NULL`, an\nerror is returned. If` search_query`produces no search tokens,\nand the text analyzer is` LOG_ANALYZER`or` PATTERN_ANALYZER`, an error is\nproduced.\n- ` json_scope`: Optional mandatory-named argument that\ntakes one of the following values to indicate the scope of JSON data to be\nsearched. It has no effect if` data_to_search`isn't a JSON value or\ndoesn't contain a JSON field.\n \n \n - ` 'JSON_VALUES'`(default): Only the JSON values are searched. 
If` json_scope`isn't provided, this is used by default.\n \n \n - ` 'JSON_KEYS'`: Only the JSON keys are searched.\n \n \n - ` 'JSON_KEYS_AND_VALUES'`: The JSON keys and values are searched.\n \n \n- ` analyzer`: Optional mandatory-named argument that takes\none of the following values to indicate the text analyzer to use:\n \n \n - ` 'LOG_ANALYZER'`(default): Breaks the input into tokens when delimiters\nare encountered and then normalizes the tokens.\nFor more information, see[LOG_ANALYZER](/bigquery/docs/reference/standard-sql/text-analysis#log_analyzer).\n \n \n - ` 'NO_OP_ANALYZER'`: Extracts the text as a single token, but\ndoesn't apply normalization. For more information about this analyzer,\nsee[NO_OP_ANALYZER](/bigquery/docs/reference/standard-sql/text-analysis#no_op_analyzer).\n \n \n - ` 'PATTERN_ANALYZER'`: Breaks the input into tokens that match a\nregular expression. For more information, see[PATTERN_ANALYZER text analyzer](/bigquery/docs/reference/standard-sql/text-analysis#pattern_analyzer).\n \n \n- ` analyzer_options`: Optional mandatory-named argument that takes a list of\ntext analysis rules as a JSON-formatted` STRING`. For more information,\nsee[Text analyzer options](/bigquery/docs/reference/standard-sql/text-analysis#text_analyzer_options).\n \n \n\n **Details** \n\nThe`SEARCH`function is designed to work with[search indexes](/bigquery/docs/search-index)to\noptimize point lookups. Although the`SEARCH`function works for\ntables that aren't indexed, its performance will be greatly improved with a\nsearch index. If both the analyzer and analyzer options match the one used\nto create the index, the search index will be used.\n\n<span id=\"text_analyzer\"></span>\n\n<span id=\"search_query_rules\"></span>\n\n **Rules for`search_query`** \n\nA search query is initially broken down into terms (subqueries) using the\nwhite spaces in the search query. 
Each term is then further broken down into\nzero or more searchable tokens based on the text analyzer. This section contains\nthe rules for how different types of terms are analyzed and evaluated.\n\nRules for backticks in[search_query](#search_query_arg):\n\n- If the` LOG_ANALYZER`text analyzer is used, text enclosed in backticks\nforces an exact match.\n \n For example,` `Hello World` happy days`becomes` Hello World`,` happy`,\nand` days`.\n \n \n- Search terms enclosed in backticks must match exactly in` data_to_search`,\nsubject to the following conditions:\n \n \n - It appears at the start of` data_to_search`or is immediately preceded\nby a delimiter.\n \n \n - It appears at the end of` data_to_search`or is immediately followed by\na delimiter.\n \n For example,` SEARCH('foo.bar', '`foo.`')`returns` FALSE`because the\ntext enclosed in the backticks` foo.`is immediately followed by the\ncharacter` b`in the search data` foo.bar`, rather than by a delimiter or\nthe end of the string. However,` SEARCH('foo..bar', '`foo.`')`returns` TRUE`because` foo.`is immediately followed by the delimiter` .`in the\nsearch data.\n \n \n- The backtick itself can be escaped using a backslash,\nas in` \\`foobar\\``.\n \n \n- The following are reserved words and must be enclosed\nin backticks:\n \n ` AND`,` NOT`,` OR`,` IN`, and` NEAR`\n \n \n\nRules for reserved characters in[search_query](#search_query_arg):\n\n- Text not enclosed in backticks requires the following\nreserved characters to be escaped by a double backslash` \\\\`:\n \n \n - ` [ ] < > ( ) { } | ! ' \" * & ? + / : = - \\ ~ ^`\n \n \n - If the quoted string is preceded by the character` r`or` R`, such as` r\"my\\+string\"`, then it is treated as a raw string and only a single\nbackslash is required to escape the reserved characters. 
For more\ninformation about raw strings and escape\nsequences, see[String and byte literals](/bigquery/docs/reference/standard-sql/lexical#literals).\n \n \n\nRules for phrases in[search_query](#search_query_arg):\n\n **Preview** \n\nThis product or feature is subject to the \"Pre-GA Offerings Terms\"\n in the General Service Terms section of the[Service Specific Terms](/terms/service-terms).\n Pre-GA products and features are available \"as is\" and might have\n limited support. For more information, see the[launch stage descriptions](/products#product-launch-stages).\n\n **Note:** To provide feedback or request support for this feature, send an email to[bq-search-team@google.com](mailto:bq-search-team@google.com).- A phrase is a type of term. If text is enclosed in double quotes and the` analyzer`is` LOG_ANALYZER`,` PATTERN_ANALYZER`, or not set\n(` LOG_ANALYZER`by default), the term represents a phrase.\n- When a phrase is analyzed, a subset of tokens is created for that phrase.\nFor example, from the phrase` \"foo baz.bar\"`, the analyzer called` LOG_ANALYZER`generates the phrase-specific tokens` foo`,` baz`, and` bar`.\n- The order of terms in a phrase matters. A match is only returned if\nthe tokens that were produced for the phrase are next to each other and in\nthe same order as the tokens for[data_to_search](#data_to_search_arg).\n \n For example:\n \n \n ```\n -- FALSE because 'foo' and 'bar' are not next to each other in -- 'foo baz.bar'. SEARCH('foo baz.bar', '\"foo bar\"')\n ```\n \n \n ```\n -- TRUE because 'foo' and 'baz' are next to each other in -- 'foo baz.bar'. 
SEARCH('foo baz.bar', '\"foo baz\"')\n ```\n \n \n- A single quote inside of the phrase is analyzed as a special character.\n \n \n- An escaped double quote (double quote after a backslash) is analyzed\nas a double quote character.\n \n \n\n<span id=\"search_query_to_search_query\"></span>\n\n **How`search_query`is broken into searchable tokens** \n\nThe following table shows how[data_to_search](#data_to_search_arg)is broken into\nsearchable tokens by the`LOG_ANALYZER`text analyzer. All entries are strings.\n\n| data_to_search | searchable tokens |\n| --- | --- |\n| 127.0.0.1 | 127 \n0 \n1 \n127.0.0.1 \n127.0.0 \n127.0 \n0.0 \n0.0.1 \n0.1 |\n| foobar@example.com | foobar \nexample \ncom \nfoobar@example \nexample.com \nfoobar@example.com |\n| The fox. | the \nfox \nThe \nThe fox \nThe fox. \nfox \nfox. |\n\nThe following table shows how`search_query`is broken into searchable tokens\nby the`LOG_ANALYZER`text analyzer. All entries are strings.\n\n| search_query | searchable tokens |\n| --- | --- |\n| 127.0.0.1 | 127 \n0 \n1 \n |\n| `127.0.0.1` | 127.0.0.1 |\n| foobar@example.com | foobar \nexample \ncom |\n| `foobar@example.com` | foobar@example.com |\n\n<span id=\"data_to_search_rules\"></span>\n\n **Rules for`data_to_search`** \n\nGeneral rules for[data_to_search](#data_to_search_arg):\n\n- ` data_to_search`must contain all tokens produced for` search_query`for the function to return` TRUE`.\n- To perform a cross-field search,` data_to_search`must be a` STRUCT`,` ARRAY`, or` JSON`data type.\n- Each` STRING`field in a compound data type is individually\nsearched for terms.\n- If at least one field in` data_to_search`includes all search terms\nproduced by` search_query`,` SEARCH`returns` TRUE`. 
Otherwise it has the\nfollowing behavior:\n \n \n - If at least one` STRING`field is` NULL`,` SEARCH`returns` NULL`.\n \n \n - Otherwise,` SEARCH`returns` FALSE`.\n \n \n\n **Return type** \n\n`BOOL`\n\n **Examples** \n\nThe following queries show how tokens in`search_query`are analyzed\nby a`SEARCH`function call using the default analyzer,`LOG_ANALYZER`:\n\n```\nSELECT\n -- ERROR: `search_query` is NULL.\n SEARCH('foobarexample', NULL) AS a,\n\n -- ERROR: `search_query` contains no tokens.\n SEARCH('foobarexample', '') AS b,\n```\n\n```\nSELECT\n -- TRUE: '-' and ' ' are delimiters.\n SEARCH('foobar-example', 'foobar example') AS a,\n\n -- TRUE: The search query is a constant expression evaluated to 'foobar'.\n SEARCH('foobar-example', CONCAT('foo', 'bar')) AS b,\n\n -- FALSE: The search_query is not split.\n SEARCH('foobar-example', 'foobarexample') AS c,\n\n -- TRUE: The double backslash escapes the ampersand which is a delimiter.\n SEARCH('foobar-example', 'foobar\\\\&example') AS d,\n\n -- TRUE: The single backslash escapes the ampersand in a raw string.\n SEARCH('foobar-example', R'foobar\\&example')AS e,\n\n -- FALSE: The backticks indicate that there must be an exact match for\n -- foobar&example.\n SEARCH('foobar-example', '`foobar&example`') AS f,\n\n -- TRUE: An exact match is found.\n SEARCH('foobar&example', '`foobar&example`') AS g\n\n/*-------+-------+-------+-------+-------+-------+-------*\n | a | b | c | d | e | f | g |\n +-------+-------+-------+-------+-------+-------+-------+\n | true | true | false | true | true | false | true |\n *-------+-------+-------+-------+-------+-------+-------*/\n```\n\n```\nSELECT\n -- TRUE: The order of terms doesn't matter.\n SEARCH('foobar-example', 'example foobar') AS a,\n\n -- TRUE: Tokens are made lower-case.\n SEARCH('foobar-example', 'Foobar Example') AS b,\n\n -- TRUE: An exact match is found.\n SEARCH('foobar-example', '`foobar-example`') AS c,\n\n -- FALSE: Backticks preserve capitalization.\n 
SEARCH('foobar-example', '`Foobar`') AS d,\n\n -- FALSE: Backticks don't have special meaning for search_data and are\n -- not delimiters in the default LOG_ANALYZER.\n SEARCH('`foobar-example`', '`foobar-example`') AS e,\n\n -- TRUE: An exact match is found after the delimiter in search_data.\n SEARCH('foobar@example.com', '`example.com`') AS f,\n\n -- TRUE: An exact match is found between the space delimiters.\n SEARCH('a foobar-example b', '`foobar-example`') AS g;\n\n/*-------+-------+-------+-------+-------+-------+-------*\n | a | b | c | d | e | f | g |\n +-------+-------+-------+-------+-------+-------+-------+\n | true | true | true | false | false | true | true |\n *-------+-------+-------+-------+-------+-------+-------*/\n```\n\n```\nSELECT\n -- FALSE: No single array entry matches all search terms.\n SEARCH(['foobar', 'example'], 'foobar example') AS a,\n\n -- FALSE: The search query is equivalent to foobar\\\\=.\n SEARCH('foobar=', '`foobar\\\\=`') AS b,\n\n -- FALSE: This is equivalent to the previous example.\n SEARCH('foobar=', R'`\\foobar=`') AS c,\n\n -- TRUE: The equals sign is a delimiter in the data and query.\n SEARCH('foobar=', 'foobar\\\\=') AS d,\n\n -- TRUE: This is equivalent to the previous example.\n SEARCH('foobar=', R'foobar\\=') AS e,\n\n -- TRUE: An exact match is found.\n SEARCH('foobar.example', '`foobar`') AS f,\n\n -- FALSE: `foobar.\\` is not analyzed because of backticks; it is not\n -- followed by a delimiter in search_data 'foobar.example'.\n SEARCH('foobar.example', '`foobar.\\`') AS g,\n\n -- TRUE: `foobar.` is not analyzed because of backticks; it is\n -- followed by the delimiter '.' 
in search_data 'foobar..example'.\n SEARCH('foobar..example', '`foobar.`') AS h;\n\n/*-------+-------+-------+-------+-------+-------+-------+-------*\n | a | b | c | d | e | f | g | h |\n +-------+-------+-------+-------+-------+-------+-------+-------+\n | false | false | false | true | true | true | false | true |\n *-------+-------+-------+-------+-------+-------+-------+-------*/\n```\n\nThe following queries show how phrases in`search_query`are analyzed\nby a`SEARCH`function call:\n\n```\nSELECT\n -- TRUE: The phrase `foo bar` is in `foo bar baz`.\n -- The tokens in `data_to_search` are `foo`, `bar`, and `baz`.\n -- The searchable tokens in `query_string` are `foo` and `bar`\n -- and because they appear in that exact order in `data_to_search`,\n -- the function returns TRUE.\n SEARCH(R'foo bar baz', R'\"foo bar\"') AS a,\n\n -- TRUE: Case is ignored.\n -- The tokens in `data_to_search` are `foo`, `bar`, and `baz`.\n -- The searchable tokens in `query_string` are `foo` and `bar`\n -- and because they appear in that exact order in `data_to_search`,\n -- the function return TRUE.\n SEARCH(R'Foo bar baz', R'\"foo Bar\"') AS b,\n\n -- TRUE: Both `-` and `&` are delimiters used during tokenization.\n -- The tokens in `data_to_search` are `foo`, `bar`, and `baz`.\n -- The searchable tokens in `query_string` are `foo` and `bar`\n -- and because they appear in that exact order in `data_to_search`,\n -- the function returns TRUE.\n SEARCH(R'foo-bar baz', R'\"foo&bar\"') AS c,\n\n -- FALSE: Backticks in a phrase are treated as normal characters.\n -- The tokens in `data_to_search` are `foo`, `bar`, and `baz`.\n -- The searchable tokens in `query_string` are:\n -- `foo\n -- bar`\n -- Because these searchable tokens don't appear in `data_to_search`,\n -- the function returns FALSE.\n SEARCH(R'foo bar baz', R'\"`foo bar`\"') AS d,\n\n -- FALSE: `foo bar` is not in `foo else bar`.\n -- The tokens in `data_to_search` are `foo`, `else`, and `bar`.\n -- The searchable tokens 
in `query_string` are `foo` and `bar`.\n -- Even though they appear in `data_to_search`, but because they\n -- do not appear in that exact order (`foo` before `bar`),\n -- the function returns FALSE.\n SEARCH(R'foo else bar', R'\"foo bar\"') AS e,\n\n -- FALSE: `foo baz` is not in `foo bar baz`.\n -- The `search_query` produces two terms. The first term is `bar`, which\n -- matches with the similar token in `data_to_search`. However, the second\n -- term is the phrase \"foo&baz\" with two tokens, `foo` and `baz`. Because\n -- `foo` and `baz` do not appear next to each other in `data_to_search`\n -- (`bar` is in between), the function returns FALSE.\n SEARCH(R'foo-bar-baz', R'bar \"foo&baz\"') AS f;\n\n/*-------+-------+-------+-------+-------+-------*\n | a | b | c | d | e | f |\n +-------+-------+-------+-------+-------+-------+\n | true | true | false | false | false | false |\n *-------+-------+-------+-------+-------+-------*/\n```\n\n```\nSELECT\n -- FALSE: Only double quotes need to be escaped in a phrase.\n -- The tokens in `data_to_search` are `foo`, `bar`, and `baz`.\n -- The searchable tokens in `query_string` are `foo\\` and `bar` and they\n -- must appear in that exact order in `data_to_search`, but don't.\n SEARCH(\n R'foo bar baz',\n R'\"foo\\ bar\"',\n analyzer_options=>'{\"delimiters\": [\" \"]}') AS a,\n\n -- TRUE: `foo bar` is in `foo bar baz` after tokenization with the given\n -- delimiters.\n -- The tokens in `data_to_search` are `foo`, `bar`, and `baz`.\n -- The searchable tokens in `query_string` are `foo` and `bar` and they\n -- must appear in that exact order in `data_to_search`.\n SEARCH(\n R'foo bar baz',\n R'\"foo? 
bar\"',\n analyzer_options=>'{\"delimiters\": [\" \", \"?\"]}') AS b,\n\n -- TRUE: `read book` is in `read book now` after `the` is ignored.\n -- The tokens in `data_to_search` are `read`, `book`, and `now`.\n -- The searchable tokens in `query_string` are `read` and `book` and they\n -- must appear in that exact order in `data_to_search`.\n SEARCH(\n 'read the book now',\n R'\"read the book\"',\n analyzer_options => '{ \"token_filters\": [{\"stop_words\": [\"the\"]}] }') AS c,\n\n -- FALSE: `c d` is not in `a`, `b`, `cd`, `e` or `f` after tokenization with\n -- the given pattern.\n -- The tokens in `data_to_search` are `a`, `b`, `cd`, `e` and `f`.\n -- The searchable tokens in `query_string` are `c` and `d` and they\n -- must appear in that exact order in `data_to_search`. `data_to_search`\n -- contains a `cd` token, but not a `c` or `d` token.\n SEARCH(\n R'abcdef',\n R'\"c d\"',\n analyzer=>'PATTERN_ANALYZER',\n analyzer_options=>'{\"patterns\": [\"(?:cd)|[a-z]\"]}') AS d,\n\n -- TRUE: `ant apple` is in `ant apple avocado` after tokenization with\n -- the given pattern.\n -- The tokens in `data_to_search` are `ant`, `apple`, and `avocado`.\n -- The searchable tokens in `query_string` are `ant` and `apple` and they\n -- must appear in that exact order in `data_to_search`.\n SEARCH(\n R'ant orange apple avocado',\n R'\"ant apple\"',\n analyzer=>'PATTERN_ANALYZER',\n analyzer_options=>'{\"patterns\": [\"a[a-z]\"]}') AS e;\n\n/*-------+-------+-------+-------+-------*\n | a | b | c | d | e |\n +-------+-------+-------+-------+-------+\n | false | true | true | false | true |\n *-------+-------+-------+-------+-------*/\n```\n\nThe following query shows examples of calls to the`SEARCH`function using the`NO_OP_ANALYZER`text analyzer and reasons for various return values:\n\n```\nSELECT\n -- TRUE: exact match\n SEARCH('foobar', 'foobar', analyzer=>'NO_OP_ANALYZER') AS a,\n\n -- FALSE: Backticks are not special characters for `NO_OP_ANALYZER`.\n SEARCH('foobar', 
'\\`foobar\\`', analyzer=>'NO_OP_ANALYZER') AS b,\n\n -- FALSE: The capitalization does not match.\n SEARCH('foobar', 'Foobar', analyzer=>'NO_OP_ANALYZER') AS c,\n\n -- FALSE: There are no delimiters for `NO_OP_ANALYZER`.\n SEARCH('foobar example', 'foobar', analyzer=>'NO_OP_ANALYZER') AS d,\n\n -- TRUE: An exact match is found.\n SEARCH('', '', analyzer=>'NO_OP_ANALYZER') AS e,\n\n -- FALSE: 'foo bar baz' and \"foo bar\" are not considered an exact match.\n SEARCH( R'foo bar baz', R'\"foo bar\"', analyzer=>'NO_OP_ANALYZER') AS f,\n\n -- TRUE: \"foo bar\" and \"foo Bar\" are considered an exact match because\n -- matching is case-insensitive for text inside quotation marks.\n SEARCH( R'\"foo bar\"', R'\"foo Bar\"', analyzer=>'NO_OP_ANALYZER') AS g;\n\n/*-------+-------+-------+-------+-------+-------+-------*\n | a | b | c | d | e | f | g |\n +-------+-------+-------+-------+-------+-------+-------+\n | true | false | false | false | true | false | true |\n *-------+-------+-------+-------+-------+-------+-------*/\n```\n\nConsider the following table called`meals`with columns`breakfast`,`lunch`,\nand`dinner`:\n\n```\n/*-------------------+-------------------------+------------------*\n | breakfast | lunch | dinner |\n +-------------------+-------------------------+------------------+\n | Potato pancakes | Toasted cheese sandwich | Beef soup |\n | Avocado toast | Tomato soup | Chicken soup |\n *-------------------+-------------------------+------------------*/\n```\n\nThe following query shows how to search single columns, multiple columns, and\nwhole tables, using the default[LOG_ANALYZER](/bigquery/docs/reference/standard-sql/text-analysis#log_analyzer)text analyzer\nwith the default analyzer options:\n\n```\nWITH\n meals AS (\n SELECT\n 'Potato pancakes' AS breakfast,\n 'Toasted cheese sandwich' AS lunch,\n 'Beef soup' AS dinner\n UNION ALL\n SELECT\n 'Avocado toast' AS breakfast,\n 'Tomato soup' AS lunch,\n 'Chicken soup' AS dinner\n )\nSELECT\n SEARCH(lunch, 'soup') AS lunch_soup,\n 
SEARCH((breakfast, dinner), 'soup') AS breakfast_or_dinner_soup,\n SEARCH(meals, 'soup') AS anytime_soup\nFROM meals;\n\n/*------------+--------------------------+--------------*\n | lunch_soup | breakfast_or_dinner_soup | anytime_soup |\n +------------+--------------------------+--------------+\n | false | true | true |\n | true | true | true |\n *------------+--------------------------+--------------*/\n```\n\nThe following query shows additional ways to search, using the\ndefault[LOG_ANALYZER](/bigquery/docs/reference/standard-sql/text-analysis#log_analyzer)text analyzer with\ndefault analyzer options:\n\n```\nWITH data AS ( SELECT 'Please use foobar@example.com as your email.' AS email )\nSELECT\n SEARCH(email, 'exam') AS a,\n SEARCH(email, 'foobar') AS b,\n SEARCH(email, 'example.com') AS c,\n SEARCH(email, R'\"please use\"') AS d,\n SEARCH(email, R'\"as email\"') AS e\nFROM data;\n\n/*-------+-------+-------+-------+-------*\n | a | b | c | d | e |\n +-------+-------+-------+-------+-------+\n | false | true | true | true | false |\n *-------+-------+-------+-------+-------*/\n```\n\nThe following query shows additional ways to search, using the\ndefault[LOG_ANALYZER](/bigquery/docs/reference/standard-sql/text-analysis#log_analyzer)text analyzer with custom\nanalyzer options. Terms are only split when a space or`@`symbol is\nencountered.\n\n```\nWITH data AS ( SELECT 'Please use foobar@example.com as your email.' 
AS email )\nSELECT\n SEARCH(email, 'foobar', analyzer_options=>'{\"delimiters\": [\" \", \"@\"]}') AS a,\n SEARCH(email, 'example', analyzer_options=>'{\"delimiters\": [\" \", \"@\"]}') AS b,\n SEARCH(email, 'example.com', analyzer_options=>'{\"delimiters\": [\" \", \"@\"]}') AS c,\n SEARCH(email, 'foobar@example.com', analyzer_options=>'{\"delimiters\": [\" \", \"@\"]}') AS d,\n SEARCH(email, R'use \"foobar example.com\" \"as your\"', analyzer_options=>'{\"delimiters\": [\" \", \"@\"]}') AS e\nFROM data;\n\n/*-------+-------+-------+-------+-------*\n | a | b | c | d | e |\n +-------+-------+-------+-------+-------+\n | true | false | true | true | true |\n *-------+-------+-------+-------+-------*/\n```\n\nThe following query shows how to search, using the[NO_OP_ANALYZER](/bigquery/docs/reference/standard-sql/text-analysis#no_op_analyzer)text analyzer:\n\n```\nWITH meals AS ( SELECT 'Tomato soup' AS lunch )\nSELECT\n SEARCH(lunch, 'Tomato soup', analyzer=>'NO_OP_ANALYZER') AS a,\n SEARCH(lunch, 'soup', analyzer=>'NO_OP_ANALYZER') AS b,\n SEARCH(lunch, 'tomato soup', analyzer=>'NO_OP_ANALYZER') AS c,\n SEARCH(lunch, R'\"Tomato soup\"', analyzer=>'NO_OP_ANALYZER') AS d\nFROM meals;\n\n/*-------+-------+-------+-------*\n | a | b | c | d |\n +-------+-------+-------+-------+\n | true | false | false | false |\n *-------+-------+-------+-------*/\n```\n\nThe following query shows how to use the[PATTERN_ANALYZER](/bigquery/docs/reference/standard-sql/text-analysis#pattern_analyzer)text analyzer with default analyzer options:\n\n```\nWITH data AS ( SELECT 'Please use foobar@example.com as your email.' 
AS email )\nSELECT\n SEARCH(email, 'exam', analyzer=>'PATTERN_ANALYZER') AS a,\n SEARCH(email, 'foobar', analyzer=>'PATTERN_ANALYZER') AS b,\n SEARCH(email, 'example.com', analyzer=>'PATTERN_ANALYZER') AS c,\n SEARCH(email, R'foobar \"EXAMPLE.com as\" email', analyzer=>'PATTERN_ANALYZER') AS d\nFROM data;\n\n/*-------+-------+-------+-------*\n | a | b | c | d |\n +-------+-------+-------+-------+\n | false | true | true | true |\n *-------+-------+-------+-------*/\n```\n\nThe following query shows additional ways to search, using the[PATTERN_ANALYZER](/bigquery/docs/reference/standard-sql/text-analysis#pattern_analyzer)text analyzer with\ncustom analyzer options:\n\n```\nWITH data AS ( SELECT 'Please use foobar@EXAMPLE.com as your email.' AS email )\nSELECT\n SEARCH(email, 'EXAMPLE', analyzer=>'PATTERN_ANALYZER', analyzer_options=>'{\"patterns\": [\"[A-Z]*\"]}') AS a,\n SEARCH(email, 'example', analyzer=>'PATTERN_ANALYZER', analyzer_options=>'{\"patterns\": [\"[a-z]*\"]}') AS b,\n SEARCH(email, 'example.com', analyzer=>'PATTERN_ANALYZER', analyzer_options=>'{\"patterns\": [\"[a-z]*\"]}') AS c,\n SEARCH(email, 'example.com', analyzer=>'PATTERN_ANALYZER', analyzer_options=>'{\"patterns\": [\"[a-zA-Z.]*\"]}') AS d\nFROM data;\n\n/*-------+-------+-------+-------*\n | a | b | c | d |\n +-------+-------+-------+-------+\n | true | false | false | true |\n *-------+-------+-------+-------*/\n```\n\nFor additional examples that include analyzer options,\nsee the[Text analysis](/bigquery/docs/reference/standard-sql/text-analysis)reference guide.\n\nFor helpful analyzer recipes that you can use to enhance\nanalyzer-supported queries, see the[Search with text analyzers](/bigquery/docs/text-analysis-search)user guide.\n\n\n\n"
},
{
"name": "SEC",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nSEC(X)\n```\n\n **Description** \n\nComputes the secant for the angle of`X`, where`X`is specified in radians.`X`can be any data type\nthat[coerces to FLOAT64](/bigquery/docs/reference/standard-sql/conversion_rules#conversion_rules).\n\n| X | SEC(X) |\n| --- | --- |\n| `+inf` | `NaN` |\n| `-inf` | `NaN` |\n| `NaN` | `NaN` |\n| `NULL` | `NULL` |\n\n **Return Data Type** \n\n`FLOAT64`\n\n **Example** \n\n```\nSELECT SEC(100) AS a, SEC(-1) AS b;\n\n/*----------------+---------------*\n | a | b |\n +----------------+---------------+\n | 1.159663822905 | 1.85081571768 |\n *----------------+---------------*/\n```\n\n\n"
},
{
"name": "SECH",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nSECH(X)\n```\n\n **Description** \n\nComputes the hyperbolic secant for the angle of`X`, where`X`is specified\nin radians.`X`can be any data type\nthat[coerces to FLOAT64](/bigquery/docs/reference/standard-sql/conversion_rules#conversion_rules).\nNever produces an error.\n\n| X | SECH(X) |\n| --- | --- |\n| `+inf` | `0` |\n| `-inf` | `0` |\n| `NaN` | `NaN` |\n| `NULL` | `NULL` |\n\n **Return Data Type** \n\n`FLOAT64`\n\n **Example** \n\n```\nSELECT SECH(0.5) AS a, SECH(-2) AS b, SECH(100) AS c;\n\n/*----------------+----------------+---------------------*\n | a | b | c |\n +----------------+----------------+---------------------+\n | 0.88681888397 | 0.265802228834 | 7.4401519520417E-44 |\n *----------------+----------------+---------------------*/\n```\n\n\n"
},
{
"name": "SESSION_USER",
"arguments": [],
"category": "Security",
"description_markdown": "```\nSESSION_USER()\n```\n\n **Description** \n\nFor first-party users, returns the email address of the user that is running the\nquery.\nFor third-party users, returns the[principal identifier](https://cloud.google.com/iam/docs/principal-identifiers)of the user that is running the query.\nFor more information about identities, see[Principals](https://cloud.google.com/docs/authentication#principal).\n\n **Return Data Type** \n\n`STRING`\n\n **Example** \n\n```\nSELECT SESSION_USER() as user;\n\n/*----------------------*\n | user |\n +----------------------+\n | jdoe@example.com |\n *----------------------*/\n```\n\n\n<span id=\"statistical_aggregate_functions\">\n## Statistical aggregate functions\n\n</span>\nGoogleSQL for BigQuery supports statistical aggregate functions.\nTo learn about the syntax for aggregate function calls, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\n\n\n"
},
{
"name": "SHA1",
"arguments": [],
"category": "Hash",
"description_markdown": "```\nSHA1(input)\n```\n\n **Description** \n\nComputes the hash of the input using the[SHA-1 algorithm](https://en.wikipedia.org/wiki/SHA-1). The input can either be`STRING`or`BYTES`. The string version treats the input as an array of bytes.\n\nThis function returns 20 bytes.\n\n **Warning:** SHA1 is no longer considered secure.\nFor increased security, use another hashing function.\n\n **Return type** \n\n`BYTES`\n\n **Example** \n\n```\nSELECT SHA1(\"Hello World\") as sha1;\n\n-- Note that the result of SHA1 is of type BYTES, displayed as a base64-encoded string.\n/*------------------------------*\n | sha1 |\n +------------------------------+\n | Ck1VqNd45QIvq3AZd8XYQLvEhtA= |\n *------------------------------*/\n```\n\n\n"
},
{
"name": "SHA256",
"arguments": [],
"category": "Hash",
"description_markdown": "```\nSHA256(input)\n```\n\n **Description** \n\nComputes the hash of the input using the[SHA-256 algorithm](https://en.wikipedia.org/wiki/SHA-2). The input can either be`STRING`or`BYTES`. The string version treats the input as an array of bytes.\n\nThis function returns 32 bytes.\n\n **Return type** \n\n`BYTES`\n\n **Example** \n\n```\nSELECT SHA256(\"Hello World\") as sha256;\n```\n\n\n"
},
{
"name": "SHA512",
"arguments": [],
"category": "Hash",
"description_markdown": "```\nSHA512(input)\n```\n\n **Description** \n\nComputes the hash of the input using the[SHA-512 algorithm](https://en.wikipedia.org/wiki/SHA-2). The input can either be`STRING`or`BYTES`. The string version treats the input as an array of bytes.\n\nThis function returns 64 bytes.\n\n **Return type** \n\n`BYTES`\n\n **Example** \n\n```\nSELECT SHA512(\"Hello World\") as sha512;\n```\n\n\n<span id=\"hll_functions\">\n## HyperLogLog++ functions\n\n</span>\nThe[HyperLogLog++ algorithm (HLL++)](/bigquery/docs/sketches#sketches_hll)estimates[cardinality](https://en.wikipedia.org/wiki/Cardinality)from[sketches](/bigquery/docs/sketches#sketches_hll).\n\nHLL++ functions are approximate aggregate functions.\nApproximate aggregation typically requires less\nmemory than exact aggregation functions,\nlike[COUNT(DISTINCT)](#count), but also introduces statistical error.\nThis makes HLL++ functions appropriate for large data streams for\nwhich linear memory usage is impractical, as well as for data that is\nalready approximate.\n\nIf you do not need materialized sketches, you can alternatively use an[approximate aggregate function with system-defined precision](#approximate_aggregate_functions),\nsuch as[APPROX_COUNT_DISTINCT](#approx-count-distinct). However,`APPROX_COUNT_DISTINCT`does not allow partial aggregations, re-aggregations,\nand custom precision.\n\nGoogleSQL for BigQuery supports the following HLL++ functions:\n\n\n\n"
},
{
"name": "SIGN",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nSIGN(X)\n```\n\n **Description** \n\nReturns`-1`,`0`, or`+1`for negative, zero and positive arguments\nrespectively. For floating point arguments, this function does not distinguish\nbetween positive and negative zero.\n\n| X | SIGN(X) |\n| --- | --- |\n| 25 | +1 |\n| 0 | 0 |\n| -25 | -1 |\n| NaN | NaN |\n\n **Return Data Type** \n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| --- | --- | --- | --- | --- |\n| OUTPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n\n\n\n"
},
{
"name": "SIN",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nSIN(X)\n```\n\n **Description** \n\nComputes the sine of X where X is specified in radians. Never fails.\n\n| X | SIN(X) |\n| --- | --- |\n| `+inf` | `NaN` |\n| `-inf` | `NaN` |\n| `NaN` | `NaN` |\n\n\n\n"
},
{
"name": "SINH",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nSINH(X)\n```\n\n **Description** \n\nComputes the hyperbolic sine of X where X is specified in radians. Generates\nan error if overflow occurs.\n\n| X | SINH(X) |\n| --- | --- |\n| `+inf` | `+inf` |\n| `-inf` | `-inf` |\n| `NaN` | `NaN` |\n\n\n\n"
},
{
"name": "SOUNDEX",
"arguments": [],
"category": "String",
"description_markdown": "```\nSOUNDEX(value)\n```\n\n **Description** \n\nReturns a`STRING`that represents the[Soundex](https://en.wikipedia.org/wiki/Soundex)code for`value`.\n\nSOUNDEX produces a phonetic representation of a string. It indexes words by\nsound, as pronounced in English. It is typically used to help determine whether\ntwo strings, such as the family names *Levine* and *Lavine* , or the words *to* and *too* , have similar English-language pronunciation.\n\nThe result of SOUNDEX consists of a letter followed by 3 digits. Non-Latin\ncharacters are ignored. If the remaining string is empty after removing\nnon-Latin characters, an empty`STRING`is returned.\n\n **Return type** \n\n`STRING`\n\n **Examples** \n\n```\nWITH example AS (\n SELECT 'Ashcraft' AS value UNION ALL\n SELECT 'Raven' AS value UNION ALL\n SELECT 'Ribbon' AS value UNION ALL\n SELECT 'apple' AS value UNION ALL\n SELECT 'Hello world!' AS value UNION ALL\n SELECT ' H3##!@llo w00orld!' AS value UNION ALL\n SELECT '#1' AS value UNION ALL\n SELECT NULL AS value\n)\nSELECT value, SOUNDEX(value) AS soundex\nFROM example;\n\n/*----------------------+---------*\n | value | soundex |\n +----------------------+---------+\n | Ashcraft | A261 |\n | Raven | R150 |\n | Ribbon | R150 |\n | apple | a140 |\n | Hello world! | H464 |\n | H3##!@llo w00orld! | H464 |\n | #1 | |\n | NULL | NULL |\n *----------------------+---------*/\n```\n\n\n"
},
{
"name": "SPLIT",
"arguments": [],
"category": "String",
"description_markdown": "```\nSPLIT(value[, delimiter])\n```\n\n **Description** \n\nSplits`value`using the`delimiter`argument.\n\nFor`STRING`, the default delimiter is the comma`,`.\n\nFor`BYTES`, you must specify a delimiter.\n\nSplitting on an empty delimiter produces an array of UTF-8 characters for`STRING`values, and an array of`BYTES`for`BYTES`values.\n\nSplitting an empty`STRING`returns an`ARRAY`with a single empty`STRING`.\n\nThis function supports specifying[collation](/bigquery/docs/reference/standard-sql/collation-concepts#collate_about).\n\n **Return type** \n\n`ARRAY<STRING>`or`ARRAY<BYTES>`\n\n **Examples** \n\n```\nWITH letters AS\n (SELECT '' as letter_group\n UNION ALL\n SELECT 'a' as letter_group\n UNION ALL\n SELECT 'b c d' as letter_group)\n\nSELECT SPLIT(letter_group, ' ') as example\nFROM letters;\n\n/*----------------------*\n | example |\n +----------------------+\n | [] |\n | [a] |\n | [b, c, d] |\n *----------------------*/\n```\n\n\n"
},
{
"name": "SQRT",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nSQRT(X)\n```\n\n **Description** \n\nComputes the square root of X. Generates an error if X is less than 0.\n\n| X | SQRT(X) |\n| --- | --- |\n| `25.0` | `5.0` |\n| `+inf` | `+inf` |\n| `X < 0` | Error |\n\n **Return Data Type** \n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| --- | --- | --- | --- | --- |\n| OUTPUT | `FLOAT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n\n\n\n"
},
{
"name": "STARTS_WITH",
"arguments": [],
"category": "String",
"description_markdown": "```\nSTARTS_WITH(value, prefix)\n```\n\n **Description** \n\nTakes two`STRING`or`BYTES`values. Returns`TRUE`if`prefix`is a\nprefix of`value`.\n\nThis function supports specifying[collation](/bigquery/docs/reference/standard-sql/collation-concepts#collate_about).\n\n **Return type** \n\n`BOOL`\n\n **Examples** \n\n```\nWITH items AS\n (SELECT 'foo' as item\n UNION ALL\n SELECT 'bar' as item\n UNION ALL\n SELECT 'baz' as item)\n\nSELECT\n STARTS_WITH(item, 'b') as example\nFROM items;\n\n/*---------*\n | example |\n +---------+\n | False |\n | True |\n | True |\n *---------*/\n```\n\n\n"
},
{
"name": "STDDEV",
"arguments": [],
"category": "Statistical_aggregate",
"description_markdown": "```\nSTDDEV(\n [ DISTINCT ]\n expression\n)\n[ OVER over_clause ]\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n [ window_frame_clause ]\n```\n\n **Description** \n\nAn alias of[STDDEV_SAMP](#stddev_samp).\n\n\n\n"
},
{
"name": "STDDEV_POP",
"arguments": [],
"category": "Statistical_aggregate",
"description_markdown": "```\nSTDDEV_POP(\n [ DISTINCT ]\n expression\n)\n[ OVER over_clause ]\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n [ window_frame_clause ]\n```\n\n **Description** \n\nReturns the population (biased) standard deviation of the values. The return\nresult is between`0`and`+Inf`.\n\nAll numeric types are supported. If the\ninput is`NUMERIC`or`BIGNUMERIC`then the internal aggregation is\nstable with the final output converted to a`FLOAT64`.\nOtherwise the input is converted to a`FLOAT64`before aggregation, resulting in a potentially unstable result.\n\nThis function ignores any`NULL`inputs. If all inputs are ignored, this\nfunction returns`NULL`. If this function receives a single non-`NULL`input,\nit returns`0`.\n\n`NaN`is produced if:\n\n- Any input value is` NaN`\n- Any input value is positive infinity or negative infinity.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nThis function can be used with the[AGGREGATION_THRESHOLD clause](/bigquery/docs/reference/standard-sql/query-syntax#agg_threshold_clause).\n\nIf this function is used with the`OVER`clause, it's part of a\nwindow function call. 
In a window function call,\naggregate function clauses can't be used.\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Return Data Type** \n\n`FLOAT64`\n\n **Examples** \n\n```\nSELECT STDDEV_POP(x) AS results FROM UNNEST([10, 14, 18]) AS x\n\n/*-------------------*\n | results |\n +-------------------+\n | 3.265986323710904 |\n *-------------------*/\n```\n\n```\nSELECT STDDEV_POP(x) AS results FROM UNNEST([10, 14, NULL]) AS x\n\n/*---------*\n | results |\n +---------+\n | 2 |\n *---------*/\n```\n\n```\nSELECT STDDEV_POP(x) AS results FROM UNNEST([10, NULL]) AS x\n\n/*---------*\n | results |\n +---------+\n | 0 |\n *---------*/\n```\n\n```\nSELECT STDDEV_POP(x) AS results FROM UNNEST([NULL]) AS x\n\n/*---------*\n | results |\n +---------+\n | NULL |\n *---------*/\n```\n\n```\nSELECT STDDEV_POP(x) AS results FROM UNNEST([10, 14, CAST('Infinity' as FLOAT64)]) AS x\n\n/*---------*\n | results |\n +---------+\n | NaN |\n *---------*/\n```\n\n\n"
},
{
"name": "STDDEV_SAMP",
"arguments": [],
"category": "Statistical_aggregate",
"description_markdown": "```\nSTDDEV_SAMP(\n [ DISTINCT ]\n expression\n)\n[ OVER over_clause ]\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n [ window_frame_clause ]\n```\n\n **Description** \n\nReturns the sample (unbiased) standard deviation of the values. The return\nresult is between`0`and`+Inf`.\n\nAll numeric types are supported. If the\ninput is`NUMERIC`or`BIGNUMERIC`then the internal aggregation is\nstable with the final output converted to a`FLOAT64`.\nOtherwise the input is converted to a`FLOAT64`before aggregation, resulting in a potentially unstable result.\n\nThis function ignores any`NULL`inputs. If there are fewer than two non-`NULL`inputs, this function returns`NULL`.\n\n`NaN`is produced if:\n\n- Any input value is` NaN`\n- Any input value is positive infinity or negative infinity.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nThis function can be used with the[AGGREGATION_THRESHOLD clause](/bigquery/docs/reference/standard-sql/query-syntax#agg_threshold_clause).\n\nIf this function is used with the`OVER`clause, it's part of a\nwindow function call. 
In a window function call,\naggregate function clauses can't be used.\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Return Data Type** \n\n`FLOAT64`\n\n **Examples** \n\n```\nSELECT STDDEV_SAMP(x) AS results FROM UNNEST([10, 14, 18]) AS x\n\n/*---------*\n | results |\n +---------+\n | 4 |\n *---------*/\n```\n\n```\nSELECT STDDEV_SAMP(x) AS results FROM UNNEST([10, 14, NULL]) AS x\n\n/*--------------------*\n | results |\n +--------------------+\n | 2.8284271247461903 |\n *--------------------*/\n```\n\n```\nSELECT STDDEV_SAMP(x) AS results FROM UNNEST([10, NULL]) AS x\n\n/*---------*\n | results |\n +---------+\n | NULL |\n *---------*/\n```\n\n```\nSELECT STDDEV_SAMP(x) AS results FROM UNNEST([NULL]) AS x\n\n/*---------*\n | results |\n +---------+\n | NULL |\n *---------*/\n```\n\n```\nSELECT STDDEV_SAMP(x) AS results FROM UNNEST([10, 14, CAST('Infinity' as FLOAT64)]) AS x\n\n/*---------*\n | results |\n +---------+\n | NaN |\n *---------*/\n```\n\n\n"
},
{
"name": "STRING",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nSTRING(json_expr)\n```\n\n **Description** \n\nConverts a JSON string to a SQL`STRING`value.\n\nArguments:\n\n- ` json_expr`: JSON. For example:\n \n \n ```\n JSON '\"purple\"'\n ```\n \n If the JSON value is not a string, an error is produced. If the expression\nis SQL` NULL`, the function returns SQL` NULL`.\n \n \n\n **Return type** \n\n`STRING`\n\n **Examples** \n\n```\nSELECT STRING(JSON '\"purple\"') AS color;\n\n/*--------*\n | color |\n +--------+\n | purple |\n *--------*/\n```\n\n```\nSELECT STRING(JSON_QUERY(JSON '{\"name\": \"sky\", \"color\": \"blue\"}', \"$.color\")) AS color;\n\n/*-------*\n | color |\n +-------+\n | blue |\n *-------*/\n```\n\nThe following examples show how invalid requests are handled:\n\n```\n-- An error is thrown if the JSON is not of type string.\nSELECT STRING(JSON '123') AS result; -- Throws an error\nSELECT STRING(JSON 'null') AS result; -- Throws an error\nSELECT SAFE.STRING(JSON '123') AS result; -- Returns a SQL NULL\n```\n\n\n"
},
{
"name": "STRING_AGG",
"arguments": [],
"category": "Aggregate",
"description_markdown": "```\nSTRING_AGG(\n [ DISTINCT ]\n expression [, delimiter]\n [ ORDER BY key [ { ASC | DESC } ] [, ... ] ]\n [ LIMIT n ]\n)\n[ OVER over_clause ]\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n [ window_frame_clause ]\n```\n\n **Description** \n\nReturns a value (either`STRING`or`BYTES`) obtained by concatenating\nnon-`NULL`values. Returns`NULL`if there are zero input rows or`expression`evaluates to`NULL`for all rows.\n\nIf a`delimiter`is specified, concatenated values are separated by that\ndelimiter; otherwise, a comma is used as a delimiter.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nIf this function is used with the`OVER`clause, it's part of a\nwindow function call. 
In a window function call,\naggregate function clauses can't be used.\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Supported Argument Types** \n\nEither`STRING`or`BYTES`.\n\n **Return Data Types** \n\nEither`STRING`or`BYTES`.\n\n **Examples** \n\n```\nSELECT STRING_AGG(fruit) AS string_agg\nFROM UNNEST([\"apple\", NULL, \"pear\", \"banana\", \"pear\"]) AS fruit;\n\n/*------------------------*\n | string_agg |\n +------------------------+\n | apple,pear,banana,pear |\n *------------------------*/\n```\n\n```\nSELECT STRING_AGG(fruit, \" & \") AS string_agg\nFROM UNNEST([\"apple\", \"pear\", \"banana\", \"pear\"]) AS fruit;\n\n/*------------------------------*\n | string_agg |\n +------------------------------+\n | apple & pear & banana & pear |\n *------------------------------*/\n```\n\n```\nSELECT STRING_AGG(DISTINCT fruit, \" & \") AS string_agg\nFROM UNNEST([\"apple\", \"pear\", \"banana\", \"pear\"]) AS fruit;\n\n/*-----------------------*\n | string_agg |\n +-----------------------+\n | apple & pear & banana |\n *-----------------------*/\n```\n\n```\nSELECT STRING_AGG(fruit, \" & \" ORDER BY LENGTH(fruit)) AS string_agg\nFROM UNNEST([\"apple\", \"pear\", \"banana\", \"pear\"]) AS fruit;\n\n/*------------------------------*\n | string_agg |\n +------------------------------+\n | pear & pear & apple & banana |\n *------------------------------*/\n```\n\n```\nSELECT STRING_AGG(fruit, \" & \" LIMIT 2) AS string_agg\nFROM UNNEST([\"apple\", \"pear\", \"banana\", \"pear\"]) AS fruit;\n\n/*--------------*\n | string_agg |\n +--------------+\n | apple & pear |\n *--------------*/\n```\n\n```\nSELECT STRING_AGG(DISTINCT fruit, \" & \" ORDER BY fruit DESC LIMIT 2) AS string_agg\nFROM UNNEST([\"apple\", \"pear\", \"banana\", \"pear\"]) AS fruit;\n\n/*---------------*\n | string_agg |\n +---------------+\n | pear & banana |\n *---------------*/\n```\n\n```\nSELECT\n 
fruit,\n STRING_AGG(fruit, \" & \") OVER (ORDER BY LENGTH(fruit)) AS string_agg\nFROM UNNEST([\"apple\", NULL, \"pear\", \"banana\", \"pear\"]) AS fruit;\n\n/*--------+------------------------------*\n | fruit | string_agg |\n +--------+------------------------------+\n | NULL | NULL |\n | pear | pear & pear |\n | pear | pear & pear |\n | apple | pear & pear & apple |\n | banana | pear & pear & apple & banana |\n *--------+------------------------------*/\n```\n\n\n"
},
{
"name": "STRPOS",
"arguments": [],
"category": "String",
"description_markdown": "```\nSTRPOS(value, subvalue)\n```\n\n **Description** \n\nTakes two`STRING`or`BYTES`values. Returns the 1-based position of the first\noccurrence of`subvalue`inside`value`. Returns`0`if`subvalue`is not found.\n\nThis function supports specifying[collation](/bigquery/docs/reference/standard-sql/collation-concepts#collate_about).\n\n **Return type** \n\n`INT64`\n\n **Examples** \n\n```\nWITH email_addresses AS\n (SELECT\n 'foo@example.com' AS email_address\n UNION ALL\n SELECT\n 'foobar@example.com' AS email_address\n UNION ALL\n SELECT\n 'foobarbaz@example.com' AS email_address\n UNION ALL\n SELECT\n 'quxexample.com' AS email_address)\n\nSELECT\n STRPOS(email_address, '@') AS example\nFROM email_addresses;\n\n/*---------*\n | example |\n +---------+\n | 4 |\n | 7 |\n | 10 |\n | 0 |\n *---------*/\n```\n\n\n"
},
{
"name": "ST_ANGLE",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_ANGLE(point_geography_1, point_geography_2, point_geography_3)\n```\n\n **Description** \n\nTakes three point`GEOGRAPHY`values, which represent two intersecting lines.\nReturns the angle between these lines. Point 2 and point 1 represent the first\nline and point 2 and point 3 represent the second line. The angle between\nthese lines is in radians, in the range`[0, 2pi)`. The angle is measured\nclockwise from the first line to the second line.\n\n`ST_ANGLE`has the following edge cases:\n\n- If points 2 and 3 are the same, returns` NULL`.\n- If points 2 and 1 are the same, returns` NULL`.\n- If points 2 and 3 are exactly antipodal, returns` NULL`.\n- If points 2 and 1 are exactly antipodal, returns` NULL`.\n- If any of the input geographies are not single points or are the empty\ngeography, then throws an error.\n\n **Return type** \n\n`FLOAT64`\n\n **Example** \n\n```\nWITH geos AS (\n SELECT 1 id, ST_GEOGPOINT(1, 0) geo1, ST_GEOGPOINT(0, 0) geo2, ST_GEOGPOINT(0, 1) geo3 UNION ALL\n SELECT 2 id, ST_GEOGPOINT(0, 0), ST_GEOGPOINT(1, 0), ST_GEOGPOINT(0, 1) UNION ALL\n SELECT 3 id, ST_GEOGPOINT(1, 0), ST_GEOGPOINT(0, 0), ST_GEOGPOINT(1, 0) UNION ALL\n SELECT 4 id, ST_GEOGPOINT(1, 0) geo1, ST_GEOGPOINT(0, 0) geo2, ST_GEOGPOINT(0, 0) geo3 UNION ALL\n SELECT 5 id, ST_GEOGPOINT(0, 0), ST_GEOGPOINT(-30, 0), ST_GEOGPOINT(150, 0) UNION ALL\n SELECT 6 id, ST_GEOGPOINT(0, 0), NULL, NULL UNION ALL\n SELECT 7 id, NULL, ST_GEOGPOINT(0, 0), NULL UNION ALL\n SELECT 8 id, NULL, NULL, ST_GEOGPOINT(0, 0))\nSELECT ST_ANGLE(geo1,geo2,geo3) AS angle FROM geos ORDER BY id;\n\n/*---------------------*\n | angle |\n +---------------------+\n | 4.71238898038469 |\n | 0.78547432161873854 |\n | 0 |\n | NULL |\n | NULL |\n | NULL |\n | NULL |\n | NULL |\n *---------------------*/\n```\n\n\n"
},
{
"name": "ST_AREA",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_AREA(geography_expression[, use_spheroid])\n```\n\n **Description** \n\nReturns the area in square meters covered by the polygons in the input`GEOGRAPHY`.\n\nIf`geography_expression`is a point or a line, returns zero. If`geography_expression`is a collection, returns the area of the polygons in the\ncollection; if the collection does not contain polygons, returns zero.\n\nThe optional`use_spheroid`parameter determines how this function measures\ndistance. If`use_spheroid`is`FALSE`, the function measures distance on the\nsurface of a perfect sphere.\n\nThe`use_spheroid`parameter currently only supports\nthe value`FALSE`. The default value of`use_spheroid`is`FALSE`.\n\n **Return type** \n\n`FLOAT64`\n\n\n\n"
},
{
"name": "ST_ASBINARY",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_ASBINARY(geography_expression)\n```\n\n **Description** \n\nReturns the[WKB](https://en.wikipedia.org/wiki/Well-known_text#Well-known_binary)representation of an input`GEOGRAPHY`.\n\nSee[ST_GEOGFROMWKB](#st_geogfromwkb)to construct a`GEOGRAPHY`from WKB.\n\n **Return type** \n\n`BYTES`\n\n\n\n"
},
{
"name": "ST_ASGEOJSON",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_ASGEOJSON(geography_expression)\n```\n\n **Description** \n\nReturns the[RFC 7946](https://tools.ietf.org/html/rfc7946)compliant[GeoJSON](https://en.wikipedia.org/wiki/GeoJSON)representation of the input`GEOGRAPHY`.\n\nA GoogleSQL`GEOGRAPHY`has spherical\ngeodesic edges, whereas a GeoJSON`Geometry`object explicitly has planar edges.\nTo convert between these two types of edges, GoogleSQL adds additional\npoints to the line where necessary so that the resulting sequence of edges\nremains within 10 meters of the original edge.\n\nSee[ST_GEOGFROMGEOJSON](#st_geogfromgeojson)to construct a`GEOGRAPHY`from GeoJSON.\n\n **Return type** \n\n`STRING`\n\n\n\n"
},
{
"name": "ST_ASTEXT",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_ASTEXT(geography_expression)\n```\n\n **Description** \n\nReturns the[WKT](https://en.wikipedia.org/wiki/Well-known_text)representation of an input`GEOGRAPHY`.\n\nSee[ST_GEOGFROMTEXT](#st_geogfromtext)to construct a`GEOGRAPHY`from WKT.\n\n **Return type** \n\n`STRING`\n\n\n\n"
},
{
"name": "ST_AZIMUTH",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_AZIMUTH(point_geography_1, point_geography_2)\n```\n\n **Description** \n\nTakes two point`GEOGRAPHY`values, and returns the azimuth of the line segment\nformed by points 1 and 2. The azimuth is the angle in radians measured between\nthe line from point 1 facing true North to the line segment from point 1 to\npoint 2.\n\nThe positive angle is measured clockwise on the surface of a sphere. For\nexample, the azimuth for a line segment:\n\n- Pointing North is` 0`\n- Pointing East is` PI/2`\n- Pointing South is` PI`\n- Pointing West is` 3PI/2`\n\n`ST_AZIMUTH`has the following edge cases:\n\n- If the two input points are the same, returns` NULL`.\n- If the two input points are exactly antipodal, returns` NULL`.\n- If either of the input geographies are not single points or are the empty\ngeography, throws an error.\n\n **Return type** \n\n`FLOAT64`\n\n **Example** \n\n```\nWITH geos AS (\n SELECT 1 id, ST_GEOGPOINT(1, 0) AS geo1, ST_GEOGPOINT(0, 0) AS geo2 UNION ALL\n SELECT 2, ST_GEOGPOINT(0, 0), ST_GEOGPOINT(1, 0) UNION ALL\n SELECT 3, ST_GEOGPOINT(0, 0), ST_GEOGPOINT(0, 1) UNION ALL\n -- identical\n SELECT 4, ST_GEOGPOINT(0, 0), ST_GEOGPOINT(0, 0) UNION ALL\n -- antipode\n SELECT 5, ST_GEOGPOINT(-30, 0), ST_GEOGPOINT(150, 0) UNION ALL\n -- nulls\n SELECT 6, ST_GEOGPOINT(0, 0), NULL UNION ALL\n SELECT 7, NULL, ST_GEOGPOINT(0, 0))\nSELECT ST_AZIMUTH(geo1, geo2) AS azimuth FROM geos ORDER BY id;\n\n/*--------------------*\n | azimuth |\n +--------------------+\n | 4.71238898038469 |\n | 1.5707963267948966 |\n | 0 |\n | NULL |\n | NULL |\n | NULL |\n | NULL |\n *--------------------*/\n```\n\n\n"
},
{
"name": "ST_BOUNDARY",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_BOUNDARY(geography_expression)\n```\n\n **Description** \n\nReturns a single`GEOGRAPHY`that contains the union\nof the boundaries of each component in the given input`GEOGRAPHY`.\n\nThe boundary of each component of a`GEOGRAPHY`is\ndefined as follows:\n\n- The boundary of a point is empty.\n- The boundary of a linestring consists of the endpoints of the linestring.\n- The boundary of a polygon consists of the linestrings that form the polygon\nshell and each of the polygon's holes.\n\n **Return type** \n\n`GEOGRAPHY`\n\n\n\n"
},
{
"name": "ST_BOUNDINGBOX",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_BOUNDINGBOX(geography_expression)\n```\n\n **Description** \n\nReturns a`STRUCT`that represents the bounding box for the specified geography.\nThe bounding box is the minimal rectangle that encloses the geography. The edges\nof the rectangle follow constant lines of longitude and latitude.\n\nCaveats:\n\n- Returns` NULL`if the input is` NULL`or an empty geography.\n- The bounding box might cross the antimeridian if this allows for a smaller\nrectangle. In this case, the bounding box has one of its longitudinal bounds\noutside of the [-180, 180] range, so that` xmin`is smaller than the eastmost\nvalue` xmax`.\n\n **Return type** \n\n`STRUCT<xmin FLOAT64, ymin FLOAT64, xmax FLOAT64, ymax FLOAT64>`.\n\nBounding box parts:\n\n- ` xmin`: The westmost constant longitude line that bounds the rectangle.\n- ` xmax`: The eastmost constant longitude line that bounds the rectangle.\n- ` ymin`: The minimum constant latitude line that bounds the rectangle.\n- ` ymax`: The maximum constant latitude line that bounds the rectangle.\n\n **Example** \n\n```\nWITH data AS (\n SELECT 1 id, ST_GEOGFROMTEXT('POLYGON((-125 48, -124 46, -117 46, -117 49, -125 48))') g\n UNION ALL\n SELECT 2 id, ST_GEOGFROMTEXT('POLYGON((172 53, -130 55, -141 70, 172 53))') g\n UNION ALL\n SELECT 3 id, ST_GEOGFROMTEXT('POINT EMPTY') g\n UNION ALL\n SELECT 4 id, ST_GEOGFROMTEXT('POLYGON((172 53, -141 70, -130 55, 172 53))', oriented => TRUE)\n)\nSELECT id, ST_BOUNDINGBOX(g) AS box\nFROM data\n\n/*----+------------------------------------------*\n | id | box |\n +----+------------------------------------------+\n | 1 | {xmin:-125, ymin:46, xmax:-117, ymax:49} |\n | 2 | {xmin:172, ymin:53, xmax:230, ymax:70} |\n | 3 | NULL |\n | 4 | {xmin:-180, ymin:-90, xmax:180, ymax:90} |\n *----+------------------------------------------*/\n```\n\nSee[ST_EXTENT](#st_extent)for the aggregate version of`ST_BOUNDINGBOX`.\n\n\n\n"
},
{
"name": "ST_BUFFER",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_BUFFER(\n geography,\n buffer_radius\n [, num_seg_quarter_circle => num_segments]\n [, use_spheroid => boolean_expression]\n [, endcap => endcap_style]\n [, side => line_side])\n```\n\n **Description** \n\nReturns a`GEOGRAPHY`that represents the buffer around the input`GEOGRAPHY`.\nThis function is similar to[ST_BUFFERWITHTOLERANCE](#st_bufferwithtolerance),\nbut you specify the number of segments instead of providing tolerance to\ndetermine how much the resulting geography can deviate from the ideal\nbuffer radius.\n\n- ` geography`: The input` GEOGRAPHY`to encircle with the buffer radius.\n- ` buffer_radius`:` FLOAT64`that represents the radius of the\nbuffer around the input geography. The radius is in meters. Note that\npolygons contract when buffered with a negative` buffer_radius`. Polygon\nshells and holes that are contracted to a point are discarded.\n- ` num_seg_quarter_circle`: (Optional)` FLOAT64`specifies the\nnumber of segments that are used to approximate a quarter circle. The\ndefault value is` 8.0`. Naming this argument is optional.\n- ` endcap`: (Optional)` STRING`allows you to specify one of two endcap\nstyles:` ROUND`and` FLAT`. The default value is` ROUND`. This option only\naffects the endcaps of buffered linestrings.\n- ` side`: (Optional)` STRING`allows you to specify one of three possibilities\nfor lines:` BOTH`,` LEFT`, and` RIGHT`. The default is` BOTH`. This option\nonly affects how linestrings are buffered.\n- ` use_spheroid`: (Optional)` BOOL`determines how this function measures\ndistance. If` use_spheroid`is` FALSE`, the function measures distance on\nthe surface of a perfect sphere. The` use_spheroid`parameter\ncurrently only supports the value` FALSE`. The default value of` use_spheroid`is` FALSE`.\n\n **Return type** \n\nPolygon`GEOGRAPHY`\n\n **Example** \n\nThe following example shows the result of`ST_BUFFER`on a point. A buffered\npoint is an approximated circle. 
When`num_seg_quarter_circle = 2`, there are\ntwo line segments in a quarter circle, and therefore the buffered circle has\neight sides and[ST_NUMPOINTS](#st_numpoints)returns nine vertices. When`num_seg_quarter_circle = 8`, there are eight line segments in a quarter circle,\nand therefore the buffered circle has thirty-two sides and[ST_NUMPOINTS](#st_numpoints)returns thirty-three vertices.\n\n```\nSELECT\n -- num_seg_quarter_circle=2\n ST_NUMPOINTS(ST_BUFFER(ST_GEOGFROMTEXT('POINT(1 2)'), 50, 2)) AS eight_sides,\n -- num_seg_quarter_circle=8, since 8 is the default\n ST_NUMPOINTS(ST_BUFFER(ST_GEOGFROMTEXT('POINT(100 2)'), 50)) AS thirty_two_sides;\n\n/*-------------+------------------*\n | eight_sides | thirty_two_sides |\n +-------------+------------------+\n | 9 | 33 |\n *-------------+------------------*/\n```\n\n\n"
},
{
"name": "ST_BUFFERWITHTOLERANCE",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_BUFFERWITHTOLERANCE(\n geography,\n buffer_radius,\n tolerance_meters => tolerance\n [, use_spheroid => boolean_expression]\n [, endcap => endcap_style]\n [, side => line_side])\n```\n\nReturns a`GEOGRAPHY`that represents the buffer around the input`GEOGRAPHY`.\nThis function is similar to[ST_BUFFER](#st_buffer),\nbut you provide tolerance instead of segments to determine how much the\nresulting geography can deviate from the ideal buffer radius.\n\n- ` geography`: The input` GEOGRAPHY`to encircle with the buffer radius.\n- ` buffer_radius`:` FLOAT64`that represents the radius of the\nbuffer around the input geography. The radius is in meters. Note that\npolygons contract when buffered with a negative` buffer_radius`. Polygon\nshells and holes that are contracted to a point are discarded.\n- ` tolerance_meters`:` FLOAT64`specifies a tolerance in\nmeters with which the shape is approximated. Tolerance determines how much a\npolygon can deviate from the ideal radius. Naming this argument is optional.\n- ` endcap`: (Optional)` STRING`allows you to specify one of two endcap\nstyles:` ROUND`and` FLAT`. The default value is` ROUND`. This option only\naffects the endcaps of buffered linestrings.\n- ` side`: (Optional)` STRING`allows you to specify one of three possible line\nstyles:` BOTH`,` LEFT`, and` RIGHT`. The default is` BOTH`. This option only\naffects the endcaps of buffered linestrings.\n- ` use_spheroid`: (Optional)` BOOL`determines how this function measures\ndistance. If` use_spheroid`is` FALSE`, the function measures distance on\nthe surface of a perfect sphere. The` use_spheroid`parameter\ncurrently only supports the value` FALSE`. The default value of` use_spheroid`is` FALSE`.\n\n **Return type** \n\nPolygon`GEOGRAPHY`\n\n **Example** \n\nThe following example shows the results of`ST_BUFFERWITHTOLERANCE`on a point,\ngiven two different values for tolerance but with the same buffer radius of`100`. 
A buffered point is an approximated circle. When`tolerance_meters=25`,\nthe tolerance is a large percentage of the buffer radius, and therefore only\nfive segments are used to approximate a circle around the input point. When`tolerance_meters=1`, the tolerance is a much smaller percentage of the buffer\nradius, and therefore twenty-four edges are used to approximate a circle around\nthe input point.\n\n```\nSELECT\n -- tolerance_meters=25, or 25% of the buffer radius.\n ST_NumPoints(ST_BUFFERWITHTOLERANCE(ST_GEOGFROMTEXT('POINT(1 2)'), 100, 25)) AS five_sides,\n -- tolerance_meters=1, or 1% of the buffer radius.\n st_NumPoints(ST_BUFFERWITHTOLERANCE(ST_GEOGFROMTEXT('POINT(100 2)'), 100, 1)) AS twenty_four_sides;\n\n/*------------+-------------------*\n | five_sides | twenty_four_sides |\n +------------+-------------------+\n | 6 | 24 |\n *------------+-------------------*/\n```\n\n\n"
},
{
"name": "ST_CENTROID",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_CENTROID(geography_expression)\n```\n\n **Description** \n\nReturns the *centroid* of the input`GEOGRAPHY`as a single point`GEOGRAPHY`.\n\nThe *centroid* of a`GEOGRAPHY`is the weighted average of the centroids of the\nhighest-dimensional components in the`GEOGRAPHY`. The centroid for components\nin each dimension is defined as follows:\n\n- The centroid of points is the arithmetic mean of the input coordinates.\n- The centroid of linestrings is the centroid of all the edges weighted by\nlength. The centroid of each edge is the geodesic midpoint of the edge.\n- The centroid of a polygon is its center of mass.\n\nIf the input`GEOGRAPHY`is empty, an empty`GEOGRAPHY`is returned.\n\n **Constraints** \n\nIn the unlikely event that the centroid of a`GEOGRAPHY`cannot be defined by a\nsingle point on the surface of the Earth, a deterministic but otherwise\narbitrary point is returned. This can only happen if the centroid is exactly at\nthe center of the Earth, such as the centroid for a pair of antipodal points,\nand the likelihood of this happening is vanishingly small.\n\n **Return type** \n\nPoint`GEOGRAPHY`\n\n\n\n"
},
{
"name": "ST_CENTROID_AGG",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_CENTROID_AGG(geography)\n```\n\n **Description** \n\nComputes the centroid of the set of input`GEOGRAPHY`s as a single point`GEOGRAPHY`.\n\nThe *centroid* over the set of input`GEOGRAPHY`s is the weighted average of the\ncentroid of each individual`GEOGRAPHY`. Only the`GEOGRAPHY`s with the highest\ndimension present in the input contribute to the centroid of the entire set.\nFor example, if the input contains both`GEOGRAPHY`s with lines and`GEOGRAPHY`s with only points,`ST_CENTROID_AGG`returns the weighted average\nof the`GEOGRAPHY`s with lines, since those have maximal dimension. In this\nexample,`ST_CENTROID_AGG`ignores`GEOGRAPHY`s with only points when\ncalculating the aggregate centroid.\n\n`ST_CENTROID_AGG`ignores`NULL`input`GEOGRAPHY`values.\n\nSee[ST_CENTROID](#st_centroid)for the non-aggregate version of`ST_CENTROID_AGG`and the definition of centroid for an individual`GEOGRAPHY`value.\n\n **Return type** \n\nPoint`GEOGRAPHY`\n\n **Example** \n\nThe following queries compute the aggregate centroid over a set of`GEOGRAPHY`values. The input to the first query\ncontains only points, and therefore each value contribute to the aggregate\ncentroid. Also notice that`ST_CENTROID_AGG`is *not* equivalent to calling`ST_CENTROID`on the result of`ST_UNION_AGG`; duplicates are removed by the\nunion, unlike`ST_CENTROID_AGG`. 
The input to the second query has mixed\ndimensions, and only values with the highest dimension in the set, the lines,\naffect the aggregate centroid.\n\n```\nSELECT ST_CENTROID_AGG(points) AS st_centroid_agg,\nST_CENTROID(ST_UNION_AGG(points)) AS centroid_of_union\nFROM UNNEST([ST_GEOGPOINT(1, 5),\n ST_GEOGPOINT(1, 2),\n ST_GEOGPOINT(1, -1),\n ST_GEOGPOINT(1, -1)]) points;\n\n/*---------------------------+-------------------*\n | st_centroid_agg | centroid_of_union |\n +---------------------------+-------------------+\n | POINT(1 1.24961422620969) | POINT(1 2) |\n *---------------------------+-------------------*/\n```\n\n```\nSELECT ST_CENTROID_AGG(points) AS st_centroid_agg\nFROM UNNEST([ST_GEOGPOINT(50, 26),\n ST_GEOGPOINT(34, 33.3),\n ST_GEOGFROMTEXT('LINESTRING(0 -1, 0 1)'),\n ST_GEOGFROMTEXT('LINESTRING(0 1, 0 3)')]) points;\n\n/*-----------------*\n | st_centroid_agg |\n +-----------------+\n | POINT(0 1) |\n *-----------------*/\n```\n\n\n"
},
{
"name": "ST_CLOSESTPOINT",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_CLOSESTPOINT(geography_1, geography_2[, use_spheroid])\n```\n\n **Description** \n\nReturns a`GEOGRAPHY`containing a point on`geography_1`with the smallest possible distance to`geography_2`. This implies\nthat the distance between the point returned by`ST_CLOSESTPOINT`and`geography_2`is less than or equal to the distance between any other point on`geography_1`and`geography_2`.\n\nIf either of the input`GEOGRAPHY`s is empty,`ST_CLOSESTPOINT`returns`NULL`.\n\nThe optional`use_spheroid`parameter determines how this function measures\ndistance. If`use_spheroid`is`FALSE`, the function measures distance on the\nsurface of a perfect sphere.\n\nThe`use_spheroid`parameter currently only supports\nthe value`FALSE`. The default value of`use_spheroid`is`FALSE`.\n\n **Return type** \n\nPoint`GEOGRAPHY`\n\n\n\n"
},
{
"name": "ST_CLUSTERDBSCAN",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_CLUSTERDBSCAN(geography_column, epsilon, minimum_geographies)\nOVER over_clause\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n```\n\nPerforms[DBSCAN clustering](https://en.wikipedia.org/wiki/DBSCAN)on a column of geographies. Returns a\n0-based cluster number.\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Input parameters** \n\n- ` geography_column`: A column of` GEOGRAPHY`s that\nis clustered.\n- ` epsilon`: The epsilon that specifies the radius, measured in meters, around\na core value. Non-negative` FLOAT64`value.\n- ` minimum_geographies`: Specifies the minimum number of geographies in a\nsingle cluster. Only dense input forms a cluster, otherwise it is classified\nas noise. Non-negative` INT64`value.\n\n **Geography types and the DBSCAN algorithm** \n\nThe DBSCAN algorithm identifies high-density clusters of data and marks outliers\nin low-density areas of noise. Geographies passed in through`geography_column`are classified in one of three ways by the DBSCAN algorithm:\n\n- Core value: A geography is a core value if it is within` epsilon`distance\nof` minimum_geographies`geographies, including itself. The core value\nstarts a new cluster, or is added to the same cluster as a core value within` epsilon`distance. Core values are grouped in a cluster together with all\nother core and border values that are within` epsilon`distance.\n- Border value: A geography is a border value if it is within epsilon distance\nof a core value. It is added to the same cluster as a core value within` epsilon`distance. A border value may be within` epsilon`distance of more\nthan one cluster. 
In this case, it may be arbitrarily assigned to either\ncluster and the function will produce the same result in subsequent calls.\n- Noise: A geography is noise if it is neither a core nor a border value.\nNoise values are assigned to a` NULL`cluster. An empty` GEOGRAPHY`is always classified as noise.\n\n **Constraints** \n\n- The argument` minimum_geographies`is a non-negative` INT64`and` epsilon`is a non-negative` FLOAT64`.\n- An empty geography cannot join any cluster.\n- Multiple clustering assignments could be possible for a border value. If a\ngeography is a border value,` ST_CLUSTERDBSCAN`will assign it to an\narbitrary valid cluster.\n\n **Return type** \n\n`INT64`for each geography in the geography column.\n\n **Examples** \n\nThis example performs DBSCAN clustering with a radius of 100,000 meters with a`minimum_geographies`argument of 1. The geographies being analyzed are a\nmixture of points, lines, and polygons.\n\n```\nWITH Geos as\n (SELECT 1 as row_id, ST_GEOGFROMTEXT('POINT EMPTY') as geo UNION ALL\n SELECT 2, ST_GEOGFROMTEXT('MULTIPOINT(1 1, 2 2, 4 4, 5 2)') UNION ALL\n SELECT 3, ST_GEOGFROMTEXT('POINT(14 15)') UNION ALL\n SELECT 4, ST_GEOGFROMTEXT('LINESTRING(40 1, 42 34, 44 39)') UNION ALL\n SELECT 5, ST_GEOGFROMTEXT('POLYGON((40 2, 40 1, 41 2, 40 2))'))\nSELECT row_id, geo, ST_CLUSTERDBSCAN(geo, 1e5, 1) OVER () AS cluster_num FROM\nGeos ORDER BY row_id\n\n/*--------+-----------------------------------+-------------*\n | row_id | geo | cluster_num |\n +--------+-----------------------------------+-------------+\n | 1 | GEOMETRYCOLLECTION EMPTY | NULL |\n | 2 | MULTIPOINT(1 1, 2 2, 5 2, 4 4) | 0 |\n | 3 | POINT(14 15) | 1 |\n | 4 | LINESTRING(40 1, 42 34, 44 39) | 2 |\n | 5 | POLYGON((40 2, 40 1, 41 2, 40 2)) | 2 |\n *--------+-----------------------------------+-------------*/\n```\n\n\n"
},
{
"name": "ST_CONTAINS",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_CONTAINS(geography_1, geography_2)\n```\n\n **Description** \n\nReturns`TRUE`if no point of`geography_2`is outside`geography_1`, and\nthe interiors intersect; returns`FALSE`otherwise.\n\nNOTE: A`GEOGRAPHY` *does not* contain its own\nboundary. Compare with[ST_COVERS](#st_covers).\n\n **Return type** \n\n`BOOL`\n\n **Example** \n\nThe following query tests whether the polygon`POLYGON((1 1, 20 1, 10 20, 1 1))`contains each of the three points`(0, 0)`,`(1, 1)`, and`(10, 10)`, which lie\non the exterior, the boundary, and the interior of the polygon respectively.\n\n```\nSELECT\n ST_GEOGPOINT(i, i) AS p,\n ST_CONTAINS(ST_GEOGFROMTEXT('POLYGON((1 1, 20 1, 10 20, 1 1))'),\n ST_GEOGPOINT(i, i)) AS `contains`\nFROM UNNEST([0, 1, 10]) AS i;\n\n/*--------------+----------*\n | p | contains |\n +--------------+----------+\n | POINT(0 0) | FALSE |\n | POINT(1 1) | FALSE |\n | POINT(10 10) | TRUE |\n *--------------+----------*/\n```\n\n\n"
},
{
"name": "ST_CONVEXHULL",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_CONVEXHULL(geography_expression)\n```\n\n **Description** \n\nReturns the convex hull for the input`GEOGRAPHY`. The convex hull is the\nsmallest convex`GEOGRAPHY`that covers the input. A`GEOGRAPHY`is convex if\nfor every pair of points in the`GEOGRAPHY`, the geodesic edge connecting the\npoints are also contained in the same`GEOGRAPHY`.\n\nIn most cases, the convex hull consists of a single polygon. Notable edge cases\ninclude the following:\n\n- The convex hull of a single point is also a point.\n- The convex hull of two or more collinear points is a linestring as long as\nthat linestring is convex.\n- If the input` GEOGRAPHY`spans more than a\nhemisphere, the convex hull is the full globe. This includes any input that\ncontains a pair of antipodal points.\n- ` ST_CONVEXHULL`returns` NULL`if the input is either` NULL`or the empty` GEOGRAPHY`.\n\n **Return type** \n\n`GEOGRAPHY`\n\n **Examples** \n\nThe convex hull returned by`ST_CONVEXHULL`can be a point, linestring, or a\npolygon, depending on the input.\n\n```\nWITH Geographies AS\n (SELECT ST_GEOGFROMTEXT('POINT(1 1)') AS g UNION ALL\n SELECT ST_GEOGFROMTEXT('LINESTRING(1 1, 2 2)') AS g UNION ALL\n SELECT ST_GEOGFROMTEXT('MULTIPOINT(2 11, 4 12, 0 15, 1 9, 1 12)') AS g)\nSELECT\n g AS input_geography,\n ST_CONVEXHULL(g) AS convex_hull\nFROM Geographies;\n\n/*-----------------------------------------+--------------------------------------------------------*\n | input_geography | convex_hull |\n +-----------------------------------------+--------------------------------------------------------+\n | POINT(1 1) | POINT(0.999999999999943 1) |\n | LINESTRING(1 1, 2 2) | LINESTRING(2 2, 1.49988573656168 1.5000570914792, 1 1) |\n | MULTIPOINT(1 9, 4 12, 2 11, 1 12, 0 15) | POLYGON((1 9, 4 12, 0 15, 1 9)) |\n *-----------------------------------------+--------------------------------------------------------*/\n```\n\n\n"
},
{
"name": "ST_COVEREDBY",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_COVEREDBY(geography_1, geography_2)\n```\n\n **Description** \n\nReturns`FALSE`if`geography_1`or`geography_2`is empty. Returns`TRUE`if no\npoints of`geography_1`lie in the exterior of`geography_2`.\n\nGiven two`GEOGRAPHY`s`a`and`b`,`ST_COVEREDBY(a, b)`returns the same result as[ST_COVERS](#st_covers)`(b, a)`. Note the opposite order of arguments.\n\n **Return type** \n\n`BOOL`\n\n\n\n"
},
{
"name": "ST_COVERS",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_COVERS(geography_1, geography_2)\n```\n\n **Description** \n\nReturns`FALSE`if`geography_1`or`geography_2`is empty.\nReturns`TRUE`if no points of`geography_2`lie in the exterior of`geography_1`.\n\n **Return type** \n\n`BOOL`\n\n **Example** \n\nThe following query tests whether the polygon`POLYGON((1 1, 20 1, 10 20, 1 1))`covers each of the three points`(0, 0)`,`(1, 1)`, and`(10, 10)`, which lie\non the exterior, the boundary, and the interior of the polygon respectively.\n\n```\nSELECT\n ST_GEOGPOINT(i, i) AS p,\n ST_COVERS(ST_GEOGFROMTEXT('POLYGON((1 1, 20 1, 10 20, 1 1))'),\n ST_GEOGPOINT(i, i)) AS `covers`\nFROM UNNEST([0, 1, 10]) AS i;\n\n/*--------------+--------*\n | p | covers |\n +--------------+--------+\n | POINT(0 0) | FALSE |\n | POINT(1 1) | TRUE |\n | POINT(10 10) | TRUE |\n *--------------+--------*/\n```\n\n\n"
},
{
"name": "ST_DIFFERENCE",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_DIFFERENCE(geography_1, geography_2)\n```\n\n **Description** \n\nReturns a`GEOGRAPHY`that represents the point set\ndifference of`geography_1`and`geography_2`. Therefore, the result consists of\nthe part of`geography_1`that does not intersect with`geography_2`.\n\nIf`geometry_1`is completely contained in`geometry_2`, then`ST_DIFFERENCE`returns an empty`GEOGRAPHY`.\n\n **Constraints** \n\nThe underlying geometric objects that a GoogleSQL`GEOGRAPHY`represents correspond to a *closed* point\nset. Therefore,`ST_DIFFERENCE`is the closure of the point set difference of`geography_1`and`geography_2`. This implies that if`geography_1`and`geography_2`intersect, then a portion of the boundary of`geography_2`could\nbe in the difference.\n\n **Return type** \n\n`GEOGRAPHY`\n\n **Example** \n\nThe following query illustrates the difference between`geog1`, a larger polygon`POLYGON((0 0, 10 0, 10 10, 0 0))`and`geog1`, a smaller polygon`POLYGON((4 2, 6 2, 8 6, 4 2))`that intersects with`geog1`. The result is`geog1`with a hole where`geog2`intersects with it.\n\n```\nSELECT\n ST_DIFFERENCE(\n ST_GEOGFROMTEXT('POLYGON((0 0, 10 0, 10 10, 0 0))'),\n ST_GEOGFROMTEXT('POLYGON((4 2, 6 2, 8 6, 4 2))')\n );\n\n/*--------------------------------------------------------*\n | difference_of_geog1_and_geog2 |\n +--------------------------------------------------------+\n | POLYGON((0 0, 10 0, 10 10, 0 0), (8 6, 6 2, 4 2, 8 6)) |\n *--------------------------------------------------------*/\n```\n\n\n"
},
{
"name": "ST_DIMENSION",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_DIMENSION(geography_expression)\n```\n\n **Description** \n\nReturns the dimension of the highest-dimensional element in the input`GEOGRAPHY`.\n\nThe dimension of each possible element is as follows:\n\n- The dimension of a point is` 0`.\n- The dimension of a linestring is` 1`.\n- The dimension of a polygon is` 2`.\n\nIf the input`GEOGRAPHY`is empty,`ST_DIMENSION`returns`-1`.\n\n **Return type** \n\n`INT64`\n\n\n\n"
},
{
"name": "ST_DISJOINT",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_DISJOINT(geography_1, geography_2)\n```\n\n **Description** \n\nReturns`TRUE`if the intersection of`geography_1`and`geography_2`is empty,\nthat is, no point in`geography_1`also appears in`geography_2`.\n\n`ST_DISJOINT`is the logical negation of[ST_INTERSECTS](#st_intersects).\n\n **Return type** \n\n`BOOL`\n\n\n\n"
},
{
"name": "ST_DISTANCE",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_DISTANCE(geography_1, geography_2[, use_spheroid])\n```\n\n **Description** \n\nReturns the shortest distance in meters between two non-empty`GEOGRAPHY`s.\n\nIf either of the input`GEOGRAPHY`s is empty,`ST_DISTANCE`returns`NULL`.\n\nThe optional`use_spheroid`parameter determines how this function measures\ndistance. If`use_spheroid`is`FALSE`, the function measures distance on the\nsurface of a perfect sphere. If`use_spheroid`is`TRUE`, the function measures\ndistance on the surface of the[WGS84](https://en.wikipedia.org/wiki/World_Geodetic_System)spheroid. The default value\nof`use_spheroid`is`FALSE`.\n\n **Return type** \n\n`FLOAT64`\n\n\n\n"
},
{
"name": "ST_DUMP",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_DUMP(geography[, dimension])\n```\n\n **Description** \n\nReturns an`ARRAY`of simple`GEOGRAPHY`s where each element is a component of\nthe input`GEOGRAPHY`. A simple`GEOGRAPHY`consists of a single point, linestring,\nor polygon. If the input`GEOGRAPHY`is simple, the\nresult is a single element. When the input`GEOGRAPHY`is a collection,`ST_DUMP`returns an`ARRAY`with one simple`GEOGRAPHY`for each component in the collection.\n\nIf`dimension`is provided, the function only returns`GEOGRAPHY`s of the corresponding dimension. A\ndimension of -1 is equivalent to omitting`dimension`.\n\n **Return Type** \n\n`ARRAY<GEOGRAPHY>`\n\n **Examples** \n\nThe following example shows how`ST_DUMP`returns the simple geographies within\na complex geography.\n\n```\nWITH example AS (\n SELECT ST_GEOGFROMTEXT('POINT(0 0)') AS geography\n UNION ALL\n SELECT ST_GEOGFROMTEXT('MULTIPOINT(0 0, 1 1)') AS geography\n UNION ALL\n SELECT ST_GEOGFROMTEXT('GEOMETRYCOLLECTION(POINT(0 0), LINESTRING(1 2, 2 1))'))\nSELECT\n geography AS original_geography,\n ST_DUMP(geography) AS dumped_geographies\nFROM example\n\n/*-------------------------------------+------------------------------------*\n | original_geographies | dumped_geographies |\n +-------------------------------------+------------------------------------+\n | POINT(0 0) | [POINT(0 0)] |\n | MULTIPOINT(0 0, 1 1) | [POINT(0 0), POINT(1 1)] |\n | GEOMETRYCOLLECTION(POINT(0 0), | [POINT(0 0), LINESTRING(1 2, 2 1)] |\n | LINESTRING(1 2, 2 1)) | |\n *-------------------------------------+------------------------------------*/\n```\n\nThe following example shows how`ST_DUMP`with the dimension argument only\nreturns simple geographies of the given dimension.\n\n```\nWITH example AS (\n SELECT ST_GEOGFROMTEXT('GEOMETRYCOLLECTION(POINT(0 0), LINESTRING(1 2, 2 1))') AS geography)\nSELECT\n geography AS original_geography,\n ST_DUMP(geography, 1) AS dumped_geographies\nFROM 
example\n\n/*-------------------------------------+------------------------------*\n | original_geographies | dumped_geographies |\n +-------------------------------------+------------------------------+\n | GEOMETRYCOLLECTION(POINT(0 0), | [LINESTRING(1 2, 2 1)] |\n | LINESTRING(1 2, 2 1)) | |\n *-------------------------------------+------------------------------*/\n```\n\n\n"
},
{
"name": "ST_DWITHIN",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_DWITHIN(geography_1, geography_2, distance[, use_spheroid])\n```\n\n **Description** \n\nReturns`TRUE`if the distance between at least one point in`geography_1`and\none point in`geography_2`is less than or equal to the distance given by the`distance`argument; otherwise, returns`FALSE`. If either input`GEOGRAPHY`is empty,`ST_DWithin`returns`FALSE`. The\ngiven`distance`is in meters on the surface of the Earth.\n\nThe optional`use_spheroid`parameter determines how this function measures\ndistance. If`use_spheroid`is`FALSE`, the function measures distance on the\nsurface of a perfect sphere.\n\nThe`use_spheroid`parameter currently only supports\nthe value`FALSE`. The default value of`use_spheroid`is`FALSE`.\n\n **Return type** \n\n`BOOL`\n\n\n\n"
},
{
"name": "ST_ENDPOINT",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_ENDPOINT(linestring_geography)\n```\n\n **Description** \n\nReturns the last point of a linestring geography as a point geography. Returns\nan error if the input is not a linestring or if the input is empty. Use the`SAFE`prefix to obtain`NULL`for invalid input instead of an error.\n\n **Return Type** \n\nPoint`GEOGRAPHY`\n\n **Example** \n\n```\nSELECT ST_ENDPOINT(ST_GEOGFROMTEXT('LINESTRING(1 1, 2 1, 3 2, 3 3)')) last\n\n/*--------------*\n | last |\n +--------------+\n | POINT(3 3) |\n *--------------*/\n```\n\n\n"
},
{
"name": "ST_EQUALS",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_EQUALS(geography_1, geography_2)\n```\n\n **Description** \n\nReturns`TRUE`if`geography_1`and`geography_2`represent the same\n\n`GEOGRAPHY`value. More precisely, this means that\none of the following conditions holds:\n+`ST_COVERS(geography_1, geography_2) = TRUE`and`ST_COVERS(geography_2,\n geography_1) = TRUE`+ Both`geography_1`and`geography_2`are empty.\n\nTherefore, two`GEOGRAPHY`s may be equal even if the\nordering of points or vertices differ, as long as they still represent the same\ngeometric structure.\n\n **Constraints** \n\n`ST_EQUALS`is not guaranteed to be a transitive function.\n\n **Return type** \n\n`BOOL`\n\n\n\n"
},
{
"name": "ST_EXTENT",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_EXTENT(geography_expression)\n```\n\n **Description** \n\nReturns a`STRUCT`that represents the bounding box for the set of input`GEOGRAPHY`values. The bounding box is the minimal rectangle that encloses the\ngeography. The edges of the rectangle follow constant lines of longitude and\nlatitude.\n\nCaveats:\n\n- Returns` NULL`if all the inputs are` NULL`or empty geographies.\n- The bounding box might cross the antimeridian if this allows for a smaller\nrectangle. In this case, the bounding box has one of its longitudinal bounds\noutside of the [-180, 180] range, so that` xmin`is smaller than the eastmost\nvalue` xmax`.\n- If the longitude span of the bounding box is larger than or equal to 180\ndegrees, the function returns the bounding box with the longitude range of\n[-180, 180].\n\n **Return type** \n\n`STRUCT<xmin FLOAT64, ymin FLOAT64, xmax FLOAT64, ymax FLOAT64>`.\n\nBounding box parts:\n\n- ` xmin`: The westmost constant longitude line that bounds the rectangle.\n- ` xmax`: The eastmost constant longitude line that bounds the rectangle.\n- ` ymin`: The minimum constant latitude line that bounds the rectangle.\n- ` ymax`: The maximum constant latitude line that bounds the rectangle.\n\n **Example** \n\n```\nWITH data AS (\n SELECT 1 id, ST_GEOGFROMTEXT('POLYGON((-125 48, -124 46, -117 46, -117 49, -125 48))') g\n UNION ALL\n SELECT 2 id, ST_GEOGFROMTEXT('POLYGON((172 53, -130 55, -141 70, 172 53))') g\n UNION ALL\n SELECT 3 id, ST_GEOGFROMTEXT('POINT EMPTY') g\n)\nSELECT ST_EXTENT(g) AS box\nFROM data\n\n/*----------------------------------------------*\n | box |\n +----------------------------------------------+\n | {xmin:172, ymin:46, xmax:243, ymax:70} |\n *----------------------------------------------*/\n```\n\n[ST_BOUNDINGBOX](#st_boundingbox)for the non-aggregate version of`ST_EXTENT`.\n\n\n\n"
},
{
"name": "ST_EXTERIORRING",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_EXTERIORRING(polygon_geography)\n```\n\n **Description** \n\nReturns a linestring geography that corresponds to the outermost ring of a\npolygon geography.\n\n- If the input geography is a polygon, gets the outermost ring of the polygon\ngeography and returns the corresponding linestring.\n- If the input is the full` GEOGRAPHY`, returns an empty geography.\n- Returns an error if the input is not a single polygon.\n\nUse the`SAFE`prefix to return`NULL`for invalid input instead of an error.\n\n **Return type** \n\n- Linestring` GEOGRAPHY`\n- Empty` GEOGRAPHY`\n\n **Examples** \n\n```\nWITH geo as\n (SELECT ST_GEOGFROMTEXT('POLYGON((0 0, 1 4, 2 2, 0 0))') AS g UNION ALL\n SELECT ST_GEOGFROMTEXT('''POLYGON((1 1, 1 10, 5 10, 5 1, 1 1),\n (2 2, 3 4, 2 4, 2 2))''') as g)\nSELECT ST_EXTERIORRING(g) AS ring FROM geo;\n\n/*---------------------------------------*\n | ring |\n +---------------------------------------+\n | LINESTRING(2 2, 1 4, 0 0, 2 2) |\n | LINESTRING(5 1, 5 10, 1 10, 1 1, 5 1) |\n *---------------------------------------*/\n```\n\n\n"
},
{
"name": "ST_GEOGFROM",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_GEOGFROM(expression)\n```\n\n **Description** \n\nConverts an expression for a`STRING`or`BYTES`value into a`GEOGRAPHY`value.\n\nIf`expression`represents a`STRING`value, it must be a valid`GEOGRAPHY`representation in one of the following formats:\n\n- WKT format. To learn more about this format and the requirements to use it,\nsee[ST_GEOGFROMTEXT](#st_geogfromtext).\n- WKB in hexadecimal text format. To learn more about this format and the\nrequirements to use it, see[ST_GEOGFROMWKB](#st_geogfromwkb).\n- GeoJSON format. To learn more about this format and the\nrequirements to use it, see[ST_GEOGFROMGEOJSON](#st_geogfromgeojson).\n\nIf`expression`represents a`BYTES`value, it must be a valid`GEOGRAPHY`binary expression in WKB format. To learn more about this format and the\nrequirements to use it, see[ST_GEOGFROMWKB](#st_geogfromwkb).\n\nIf`expression`is`NULL`, the output is`NULL`.\n\n **Return type** \n\n`GEOGRAPHY`\n\n **Examples** \n\nThis takes a WKT-formatted string and returns a`GEOGRAPHY`polygon:\n\n```\nSELECT ST_GEOGFROM('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))') AS WKT_format;\n\n/*------------------------------------*\n | WKT_format |\n +------------------------------------+\n | POLYGON((2 0, 2 2, 0 2, 0 0, 2 0)) |\n *------------------------------------*/\n```\n\nThis takes a WKB-formatted hexadecimal-encoded string and returns a`GEOGRAPHY`point:\n\n```\nSELECT ST_GEOGFROM(FROM_HEX('010100000000000000000000400000000000001040')) AS WKB_format;\n\n/*----------------*\n | WKB_format |\n +----------------+\n | POINT(2 4) |\n *----------------*/\n```\n\nThis takes WKB-formatted bytes and returns a`GEOGRAPHY`point:\n\n```\nSELECT ST_GEOGFROM('010100000000000000000000400000000000001040') AS WKB_format;\n\n/*----------------*\n | WKB_format |\n +----------------+\n | POINT(2 4) |\n *----------------*/\n```\n\nThis takes a GeoJSON-formatted string and returns a`GEOGRAPHY`polygon:\n\n```\nSELECT ST_GEOGFROM(\n '{ \"type\": \"Polygon\", 
\"coordinates\": [ [ [2, 0], [2, 2], [1, 2], [0, 2], [0, 0], [2, 0] ] ] }'\n) AS GEOJSON_format;\n\n/*-----------------------------------------*\n | GEOJSON_format |\n +-----------------------------------------+\n | POLYGON((2 0, 2 2, 1 2, 0 2, 0 0, 2 0)) |\n *-----------------------------------------*/\n```\n\n\n"
},
{
"name": "ST_GEOGFROMGEOJSON",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_GEOGFROMGEOJSON(geojson_string [, make_valid => constant_expression])\n```\n\n **Description** \n\nReturns a`GEOGRAPHY`value that corresponds to the\ninput[GeoJSON](https://en.wikipedia.org/wiki/GeoJSON)representation.\n\n`ST_GEOGFROMGEOJSON`accepts input that is[RFC 7946](https://tools.ietf.org/html/rfc7946)compliant.\n\nIf the parameter`make_valid`is set to`TRUE`, the function attempts to repair\npolygons that don't conform to[Open Geospatial Consortium](https://www.ogc.org/standards/sfa)semantics.\nThis parameter uses named argument syntax, and should be specified using`make_valid => argument_value`syntax.\n\nA GoogleSQL`GEOGRAPHY`has spherical\ngeodesic edges, whereas a GeoJSON`Geometry`object explicitly has planar edges.\nTo convert between these two types of edges, GoogleSQL adds additional\npoints to the line where necessary so that the resulting sequence of edges\nremains within 10 meters of the original edge.\n\nSee[ST_ASGEOJSON](#st_asgeojson)to format a`GEOGRAPHY`as GeoJSON.\n\n **Constraints** \n\nThe JSON input is subject to the following constraints:\n\n- ` ST_GEOGFROMGEOJSON`only accepts JSON geometry fragments and cannot be used\nto ingest a whole JSON document.\n- The input JSON fragment must consist of a GeoJSON geometry type, which\nincludes` Point`,` MultiPoint`,` LineString`,` MultiLineString`,` Polygon`,` MultiPolygon`, and` GeometryCollection`. Any other GeoJSON type such as` Feature`or` FeatureCollection`will result in an error.\n- A position in the` coordinates`member of a GeoJSON geometry type must\nconsist of exactly two elements. The first is the longitude and the second\nis the latitude. Therefore,` ST_GEOGFROMGEOJSON`does not support the\noptional third element for a position in the` coordinates`member.\n\n **Return type** \n\n`GEOGRAPHY`\n\n\n\n"
},
{
"name": "ST_GEOGFROMTEXT",
"arguments": [],
"category": "Geography",
"description_markdown": "<span id=\"st_geogfromtext_signature1\"></span><span id=\"st_geogfromtext_signature2\"></span>\n\n```\nST_GEOGFROMTEXT(\n wkt_string\n [ , oriented => value ]\n [ , planar => value ]\n [ , make_valid => value ]\n)\n```\n\n **Description** \n\nConverts a`STRING`[WKT](https://en.wikipedia.org/wiki/Well-known_text)geometry value into a`GEOGRAPHY`value.\n\nTo format`GEOGRAPHY`value as WKT, use[ST_ASTEXT](#st_astext).\n\n **Definitions** \n\n- ` wkt_string`: A` STRING`value that contains the[WKT](https://en.wikipedia.org/wiki/Well-known_text)format.\n- ` oriented`: A named argument with a` BOOL`literal.\n \n \n - If the value is` TRUE`, any polygons in the input are assumed to be\noriented as follows: when traveling along the boundary of the polygon\nin the order of the input vertices, the interior of the polygon is on\nthe left. This allows WKT to represent polygons larger than a\nhemisphere. See also[ST_MAKEPOLYGONORIENTED](#st_makepolygonoriented),\nwhich is similar to` ST_GEOGFROMTEXT`with` oriented=TRUE`.\n \n \n - If the value is` FALSE`or omitted, this function returns the polygon\nwith the smaller area.\n \n \n- ` planar`: A named argument with a` BOOL`literal. If the value\nis` TRUE`, the edges of the linestrings and polygons are assumed to use\nplanar map semantics, rather than GoogleSQL default spherical\ngeodesics semantics. For more information about the\ndifferences between spherical geodesics and planar lines, see[Coordinate systems and edges](/bigquery/docs/gis-data#coordinate_systems_and_edges).\n \n \n- ` make_valid`: A named argument with a` BOOL`literal. 
If the\nvalue is` TRUE`, the function attempts to repair polygons that don't\nconform to[Open Geospatial Consortium](https://www.ogc.org/standards/sfa)semantics.\n \n \n\n **Details** \n\n- The function does not support three-dimensional geometries that have a` Z`suffix, nor does it support linear referencing system geometries with an` M`suffix.\n- ` oriented`and` planar`can't be` TRUE`at the same time.\n- ` oriented`and` make_valid`can't be` TRUE`at the same time.\n\n **Example** \n\nThe following query reads the WKT string`POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))`both as a non-oriented polygon and as an oriented polygon, and checks whether\neach result contains the point`(1, 1)`.\n\n```\nWITH polygon AS (SELECT 'POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))' AS p)\nSELECT\n ST_CONTAINS(ST_GEOGFROMTEXT(p), ST_GEOGPOINT(1, 1)) AS fromtext_default,\n ST_CONTAINS(ST_GEOGFROMTEXT(p, oriented => FALSE), ST_GEOGPOINT(1, 1)) AS non_oriented,\n ST_CONTAINS(ST_GEOGFROMTEXT(p, oriented => TRUE), ST_GEOGPOINT(1, 1)) AS oriented\nFROM polygon;\n\n/*-------------------+---------------+-----------*\n | fromtext_default | non_oriented | oriented |\n +-------------------+---------------+-----------+\n | TRUE | TRUE | FALSE |\n *-------------------+---------------+-----------*/\n```\n\nThe following query converts a WKT string with an invalid polygon to`GEOGRAPHY`. The WKT string violates two properties\nof a valid polygon - the loop describing the polygon is not closed, and it\ncontains self-intersection. 
With the`make_valid`option,`ST_GEOGFROMTEXT`successfully converts it to a multipolygon shape.\n\n```\nWITH data AS (\n SELECT 'POLYGON((0 -1, 2 1, 2 -1, 0 1))' wkt)\nSELECT\n SAFE.ST_GEOGFROMTEXT(wkt) as geom,\n SAFE.ST_GEOGFROMTEXT(wkt, make_valid => TRUE) as valid_geom\nFROM data\n\n/*------+-----------------------------------------------------------------*\n | geom | valid_geom |\n +------+-----------------------------------------------------------------+\n | NULL | MULTIPOLYGON(((0 -1, 1 0, 0 1, 0 -1)), ((1 0, 2 -1, 2 1, 1 0))) |\n *------+-----------------------------------------------------------------*/\n```\n\n\n"
},
{
"name": "ST_GEOGFROMWKB",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_GEOGFROMWKB(\n wkb_bytes_expression\n [ , oriented => value ]\n [ , planar => value ]\n [ , make_valid => value ]\n)\n```\n\n```\nST_GEOGFROMWKB(\n wkb_hex_string_expression\n [ , oriented => value ]\n [ , planar => value ]\n [ , make_valid => value ]\n)\n```\n\n **Description** \n\nConverts an expression from a hexadecimal-text`STRING`or`BYTES`value into a`GEOGRAPHY`value. The expression must be in[WKB](https://en.wikipedia.org/wiki/Well-known_text#Well-known_binary)format.\n\nTo format`GEOGRAPHY`as WKB, use[ST_ASBINARY](#st_asbinary).\n\n **Definitions** \n\n- ` wkb_bytes_expression`: A` BYTES`value that contains the[WKB](https://en.wikipedia.org/wiki/Well-known_text#Well-known_binary)format.\n- ` wkb_hex_string_expression`: A` STRING`value that contains the\nhexadecimal-encoded[WKB](https://en.wikipedia.org/wiki/Well-known_text#Well-known_binary)format.\n- ` oriented`: A named argument with a` BOOL`literal.\n \n \n - If the value is` TRUE`, any polygons in the input are assumed to be\noriented as follows: when traveling along the boundary of the polygon\nin the order of the input vertices, the interior of the polygon is on\nthe left. This allows WKB to represent polygons larger than a\nhemisphere. See also[ST_MAKEPOLYGONORIENTED](#st_makepolygonoriented),\nwhich is similar to` ST_GEOGFROMWKB`with` oriented=TRUE`.\n \n \n - If the value is` FALSE`or omitted, this function returns the polygon\nwith the smaller area.\n \n \n- ` planar`: A named argument with a` BOOL`literal. If the value\nis` TRUE`, the edges of the linestrings and polygons are assumed to use\nplanar map semantics, rather than GoogleSQL default spherical\ngeodesics semantics. For more information about the\ndifferences between spherical geodesics and planar lines, see[Coordinate systems and edges](/bigquery/docs/gis-data#coordinate_systems_and_edges).\n \n \n- ` make_valid`: A named argument with a` BOOL`literal. 
If the\nvalue is` TRUE`, the function attempts to repair polygons that\ndon't conform to[Open Geospatial Consortium](https://www.ogc.org/standards/sfa)semantics.\n \n \n\n **Details** \n\n- The function does not support three-dimensional geometries that have a` Z`suffix, nor does it support linear referencing system geometries with an` M`suffix.\n- ` oriented`and` planar`can't be` TRUE`at the same time.\n- ` oriented`and` make_valid`can't be` TRUE`at the same time.\n\n **Return type** \n\n`GEOGRAPHY`\n\n **Example** \n\nThe following query reads the hex-encoded WKB data containing`LINESTRING(1 1, 3 2)`and uses it with planar and geodesic semantics. When\nplanar is used, the function approximates the planar input line using a\nline that contains a chain of geodesic segments.\n\n```\nWITH wkb_data AS (\n SELECT '010200000002000000feffffffffffef3f000000000000f03f01000000000008400000000000000040' geo\n)\nSELECT\n ST_GeogFromWkb(geo, planar=>TRUE) AS from_planar,\n ST_GeogFromWkb(geo, planar=>FALSE) AS from_geodesic,\nFROM wkb_data\n\n/*---------------------------------------+----------------------*\n | from_planar | from_geodesic |\n +---------------------------------------+----------------------+\n | LINESTRING(1 1, 2 1.5, 2.5 1.75, 3 2) | LINESTRING(1 1, 3 2) |\n *---------------------------------------+----------------------*/\n```\n\n\n"
},
{
"name": "ST_GEOGPOINT",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_GEOGPOINT(longitude, latitude)\n```\n\n **Description** \n\nCreates a`GEOGRAPHY`with a single point.`ST_GEOGPOINT`creates a point from\nthe specified`FLOAT64`longitude (in degrees,\nnegative west of the Prime Meridian, positive east) and latitude (in degrees,\npositive north of the Equator, negative south) parameters and returns that point\nin a`GEOGRAPHY`value.\n\nNOTE: Some systems present latitude first; take care with argument order.\n\n **Constraints** \n\n- Longitudes outside the range [-180, 180] are allowed;` ST_GEOGPOINT`uses\nthe input longitude modulo 360 to obtain a longitude within [-180, 180].\n- Latitudes must be in the range [-90, 90]. Latitudes outside this range\nwill result in an error.\n\n **Return type** \n\nPoint`GEOGRAPHY`\n\n\n\n"
},
{
"name": "ST_GEOGPOINTFROMGEOHASH",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_GEOGPOINTFROMGEOHASH(geohash)\n```\n\n **Description** \n\nReturns a`GEOGRAPHY`value that corresponds to a\npoint in the middle of a bounding box defined in the[GeoHash](https://en.wikipedia.org/wiki/Geohash).\n\n **Return type** \n\nPoint`GEOGRAPHY`\n\n\n\n"
},
{
"name": "ST_GEOHASH",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_GEOHASH(geography_expression[, maxchars])\n```\n\n **Description** \n\nTakes a single-point`GEOGRAPHY`and returns a[GeoHash](https://en.wikipedia.org/wiki/Geohash)representation of that`GEOGRAPHY`object.\n\n- ` geography_expression`: Represents a` GEOGRAPHY`object. Only a` GEOGRAPHY`object that represents a single point is supported. If` ST_GEOHASH`is used\nover an empty` GEOGRAPHY`object, returns` NULL`.\n- ` maxchars`: This optional` INT64`parameter specifies the maximum number of\ncharacters the hash will contain. Fewer characters corresponds to lower\nprecision (or, described differently, to a bigger bounding box).` maxchars`defaults to 20 if not explicitly specified. A valid` maxchars`value is 1\nto 20. Any value below or above is considered unspecified and the default of\n20 is used.\n\n **Return type** \n\n`STRING`\n\n **Example** \n\nReturns a GeoHash of the Seattle Center with 10 characters of precision.\n\n```\nSELECT ST_GEOHASH(ST_GEOGPOINT(-122.35, 47.62), 10) geohash\n\n/*--------------*\n | geohash |\n +--------------+\n | c22yzugqw7 |\n *--------------*/\n```\n\n\n"
},
{
"name": "ST_GEOMETRYTYPE",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_GEOMETRYTYPE(geography_expression)\n```\n\n **Description** \n\nReturns the[Open Geospatial Consortium](https://www.ogc.org/standards/sfa)(OGC) geometry type that\ndescribes the input`GEOGRAPHY`. The OGC geometry type matches the\ntypes that are used in[WKT](https://en.wikipedia.org/wiki/Well-known_text)and[GeoJSON](https://en.wikipedia.org/wiki/GeoJSON)formats and\nprinted for[ST_ASTEXT](#st_astext)and[ST_ASGEOJSON](#st_asgeojson).`ST_GEOMETRYTYPE`returns the OGC geometry type with the \"ST_\" prefix.\n\n`ST_GEOMETRYTYPE`returns the following given the type on the input:\n\n- Single point geography: Returns` ST_Point`.\n- Collection of only points: Returns` ST_MultiPoint`.\n- Single linestring geography: Returns` ST_LineString`.\n- Collection of only linestrings: Returns` ST_MultiLineString`.\n- Single polygon geography: Returns` ST_Polygon`.\n- Collection of only polygons: Returns` ST_MultiPolygon`.\n- Collection with elements of different dimensions, or the input is the empty\ngeography: Returns` ST_GeometryCollection`.\n\n **Return type** \n\n`STRING`\n\n **Example** \n\nThe following example shows how`ST_GEOMETRYTYPE`takes geographies and returns\nthe names of their OGC geometry types.\n\n```\nWITH example AS(\n SELECT ST_GEOGFROMTEXT('POINT(0 1)') AS geography\n UNION ALL\n SELECT ST_GEOGFROMTEXT('MULTILINESTRING((2 2, 3 4), (5 6, 7 7))')\n UNION ALL\n SELECT ST_GEOGFROMTEXT('GEOMETRYCOLLECTION(MULTIPOINT(-1 2, 0 12), LINESTRING(-2 4, 0 6))')\n UNION ALL\n SELECT ST_GEOGFROMTEXT('GEOMETRYCOLLECTION EMPTY'))\nSELECT\n geography AS WKT,\n ST_GEOMETRYTYPE(geography) AS geometry_type_name\nFROM example;\n\n/*-------------------------------------------------------------------+-----------------------*\n | WKT | geometry_type_name |\n +-------------------------------------------------------------------+-----------------------+\n | POINT(0 1) | ST_Point |\n | MULTILINESTRING((2 2, 3 4), (5 6, 7 7)) | ST_MultiLineString |\n | 
GEOMETRYCOLLECTION(MULTIPOINT(-1 2, 0 12), LINESTRING(-2 4, 0 6)) | ST_GeometryCollection |\n | GEOMETRYCOLLECTION EMPTY | ST_GeometryCollection |\n *-------------------------------------------------------------------+-----------------------*/\n```\n\n\n"
},
{
"name": "ST_HAUSDORFFDISTANCE",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_HAUSDORFFDISTANCE(geography_1, geography_2)\n```\n\n```\nST_HAUSDORFFDISTANCE(geography_1, geography_2, directed=>{ TRUE | FALSE })\n```\n\n **Description** \n\nGets the discrete[Hausdorff distance](http://en.wikipedia.org/wiki/Hausdorff_distance), which is the greatest of all\nthe distances from a discrete point in one geography to the closest\ndiscrete point in another geography.\n\n **Definitions** \n\n- ` geography_1`: A` GEOGRAPHY`value that represents the first geography.\n- ` geography_2`: A` GEOGRAPHY`value that represents the second geography.\n- ` directed`: Optional, required named argument that represents the type of\ncomputation to use on the input geographies. If this argument is not\nspecified,` directed=>FALSE`is used by default.\n \n \n - ` FALSE`: The largest Hausdorff distance found in\n(` geography_1`,` geography_2`) and\n(` geography_2`,` geography_1`).\n \n \n - ` TRUE`(default): The Hausdorff distance for\n(` geography_1`,` geography_2`).\n \n \n\n **Details** \n\nIf an input geography is`NULL`, the function returns`NULL`.\n\n **Return type** \n\n`FLOAT64`\n\n **Example** \n\nThe following query gets the Hausdorff distance between`geo1`and`geo2`:\n\n```\nWITH data AS (\n SELECT\n ST_GEOGFROMTEXT('LINESTRING(20 70, 70 60, 10 70, 70 70)') AS geo1,\n ST_GEOGFROMTEXT('LINESTRING(20 90, 30 90, 60 10, 90 10)') AS geo2\n)\nSELECT ST_HAUSDORFFDISTANCE(geo1, geo2, directed=>TRUE) AS distance\nFROM data;\n\n/*--------------------+\n | distance |\n +--------------------+\n | 1688933.9832041925 |\n +--------------------*/\n```\n\nThe following query gets the Hausdorff distance between`geo2`and`geo1`:\n\n```\nWITH data AS (\n SELECT\n ST_GEOGFROMTEXT('LINESTRING(20 70, 70 60, 10 70, 70 70)') AS geo1,\n ST_GEOGFROMTEXT('LINESTRING(20 90, 30 90, 60 10, 90 10)') AS geo2\n)\nSELECT ST_HAUSDORFFDISTANCE(geo2, geo1, directed=>TRUE) AS distance\nFROM data;\n\n/*--------------------+\n | distance |\n +--------------------+\n | 
5802892.745488612 |\n +--------------------*/\n```\n\nThe following query gets the largest Hausdorff distance between\n(`geo1`and`geo2`) and (`geo2`and`geo1`):\n\n```\nWITH data AS (\n SELECT\n ST_GEOGFROMTEXT('LINESTRING(20 70, 70 60, 10 70, 70 70)') AS geo1,\n ST_GEOGFROMTEXT('LINESTRING(20 90, 30 90, 60 10, 90 10)') AS geo2\n)\nSELECT ST_HAUSDORFFDISTANCE(geo1, geo2, directed=>FALSE) AS distance\nFROM data;\n\n/*--------------------+\n | distance |\n +--------------------+\n | 5802892.745488612 |\n +--------------------*/\n```\n\nThe following query produces the same results as the previous query because`ST_HAUSDORFFDISTANCE`uses`directed=>FALSE`by default.\n\n```\nWITH data AS (\n SELECT\n ST_GEOGFROMTEXT('LINESTRING(20 70, 70 60, 10 70, 70 70)') AS geo1,\n ST_GEOGFROMTEXT('LINESTRING(20 90, 30 90, 60 10, 90 10)') AS geo2\n)\nSELECT ST_HAUSDORFFDISTANCE(geo1, geo2) AS distance\nFROM data;\n```\n\n\n"
},
{
"name": "ST_INTERIORRINGS",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_INTERIORRINGS(polygon_geography)\n```\n\n **Description** \n\nReturns an array of linestring geographies that corresponds to the interior\nrings of a polygon geography. Each interior ring is the border of a hole within\nthe input polygon.\n\n- If the input geography is a polygon, excludes the outermost ring of the\npolygon geography and returns the linestrings corresponding to the interior\nrings.\n- If the input is the full` GEOGRAPHY`, returns an empty array.\n- If the input polygon has no holes, returns an empty array.\n- Returns an error if the input is not a single polygon.\n\nUse the`SAFE`prefix to return`NULL`for invalid input instead of an error.\n\n **Return type** \n\n`ARRAY<LineString GEOGRAPHY>`\n\n **Examples** \n\n```\nWITH geo AS (\n SELECT ST_GEOGFROMTEXT('POLYGON((0 0, 1 1, 1 2, 0 0))') AS g UNION ALL\n SELECT ST_GEOGFROMTEXT('POLYGON((1 1, 1 10, 5 10, 5 1, 1 1), (2 2, 3 4, 2 4, 2 2))') UNION ALL\n SELECT ST_GEOGFROMTEXT('POLYGON((1 1, 1 10, 5 10, 5 1, 1 1), (2 2.5, 3.5 3, 2.5 2, 2 2.5), (3.5 7, 4 6, 3 3, 3.5 7))') UNION ALL\n SELECT ST_GEOGFROMTEXT('fullglobe') UNION ALL\n SELECT NULL)\nSELECT ST_INTERIORRINGS(g) AS rings FROM geo;\n\n/*----------------------------------------------------------------------------*\n | rings |\n +----------------------------------------------------------------------------+\n | [] |\n | [LINESTRING(2 2, 3 4, 2 4, 2 2)] |\n | [LINESTRING(2.5 2, 3.5 3, 2 2.5, 2.5 2), LINESTRING(3 3, 4 6, 3.5 7, 3 3)] |\n | [] |\n | NULL |\n *----------------------------------------------------------------------------*/\n```\n\n\n"
},
{
"name": "ST_INTERSECTION",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_INTERSECTION(geography_1, geography_2)\n```\n\n **Description** \n\nReturns a`GEOGRAPHY`that represents the point set\nintersection of the two input`GEOGRAPHY`s. Thus,\nevery point in the intersection appears in both`geography_1`and`geography_2`.\n\nIf the two input`GEOGRAPHY`s are disjoint, that is,\nthere are no points that appear in both input`geometry_1`and`geometry_2`,\nthen an empty`GEOGRAPHY`is returned.\n\nSee[ST_INTERSECTS](#st_intersects),[ST_DISJOINT](#st_disjoint)for related\npredicate functions.\n\n **Return type** \n\n`GEOGRAPHY`\n\n\n\n"
},
{
"name": "ST_INTERSECTS",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_INTERSECTS(geography_1, geography_2)\n```\n\n **Description** \n\nReturns`TRUE`if the point set intersection of`geography_1`and`geography_2`is non-empty. Thus, this function returns`TRUE`if there is at least one point\nthat appears in both input`GEOGRAPHY`s.\n\nIf`ST_INTERSECTS`returns`TRUE`, it implies that[ST_DISJOINT](#st_disjoint)returns`FALSE`.\n\n **Return type** \n\n`BOOL`\n\n\n\n"
},
{
"name": "ST_INTERSECTSBOX",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_INTERSECTSBOX(geography, lng1, lat1, lng2, lat2)\n```\n\n **Description** \n\nReturns`TRUE`if`geography`intersects the rectangle between`[lng1, lng2]`and`[lat1, lat2]`. The edges of the rectangle follow constant lines of\nlongitude and latitude.`lng1`and`lng2`specify the westmost and eastmost\nconstant longitude lines that bound the rectangle, and`lat1`and`lat2`specify\nthe minimum and maximum constant latitude lines that bound the rectangle.\n\nSpecify all longitude and latitude arguments in degrees.\n\n **Constraints** \n\nThe input arguments are subject to the following constraints:\n\n- Latitudes should be in the` [-90, 90]`degree range.\n- Longitudes should follow either of the following rules:\n - Both longitudes are in the` [-180, 180]`degree range.\n - One of the longitudes is in the` [-180, 180]`degree range, and` lng2 - lng1`is in the` [0, 360]`interval.\n\n **Return type** \n\n`BOOL`\n\n **Example** \n\n```\nSELECT p, ST_INTERSECTSBOX(p, -90, 0, 90, 20) AS box1,\n ST_INTERSECTSBOX(p, 90, 0, -90, 20) AS box2\nFROM UNNEST([ST_GEOGPOINT(10, 10), ST_GEOGPOINT(170, 10),\n ST_GEOGPOINT(30, 30)]) p\n\n/*----------------+--------------+--------------*\n | p | box1 | box2 |\n +----------------+--------------+--------------+\n | POINT(10 10) | TRUE | FALSE |\n | POINT(170 10) | FALSE | TRUE |\n | POINT(30 30) | FALSE | FALSE |\n *----------------+--------------+--------------*/\n```\n\n\n"
},
{
"name": "ST_ISCLOSED",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_ISCLOSED(geography_expression)\n```\n\n **Description** \n\nReturns`TRUE`for a non-empty Geography, where each element in the Geography\nhas an empty boundary. The boundary for each element can be defined with[ST_BOUNDARY](#st_boundary).\n\n- A point is closed.\n- A linestring is closed if the start and end points of the linestring are\nthe same.\n- A polygon is closed only if it is a full polygon.\n- A collection is closed if and only if every element in the collection is\nclosed.\n\nAn empty`GEOGRAPHY`is not closed.\n\n **Return type** \n\n`BOOL`\n\n **Example** \n\n```\nWITH example AS(\n SELECT ST_GEOGFROMTEXT('POINT(5 0)') AS geography\n UNION ALL\n SELECT ST_GEOGFROMTEXT('LINESTRING(0 1, 4 3, 2 6, 0 1)') AS geography\n UNION ALL\n SELECT ST_GEOGFROMTEXT('LINESTRING(2 6, 1 3, 3 9)') AS geography\n UNION ALL\n SELECT ST_GEOGFROMTEXT('GEOMETRYCOLLECTION(POINT(0 0), LINESTRING(1 2, 2 1))') AS geography\n UNION ALL\n SELECT ST_GEOGFROMTEXT('GEOMETRYCOLLECTION EMPTY'))\nSELECT\n geography,\n ST_ISCLOSED(geography) AS is_closed,\nFROM example;\n\n/*------------------------------------------------------+-----------*\n | geography | is_closed |\n +------------------------------------------------------+-----------+\n | POINT(5 0) | TRUE |\n | LINESTRING(0 1, 4 3, 2 6, 0 1) | TRUE |\n | LINESTRING(2 6, 1 3, 3 9) | FALSE |\n | GEOMETRYCOLLECTION(POINT(0 0), LINESTRING(1 2, 2 1)) | FALSE |\n | GEOMETRYCOLLECTION EMPTY | FALSE |\n *------------------------------------------------------+-----------*/\n```\n\n\n"
},
{
"name": "ST_ISCOLLECTION",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_ISCOLLECTION(geography_expression)\n```\n\n **Description** \n\nReturns`TRUE`if the total number of points, linestrings, and polygons is\ngreater than one.\n\nAn empty`GEOGRAPHY`is not a collection.\n\n **Return type** \n\n`BOOL`\n\n\n\n"
},
{
"name": "ST_ISEMPTY",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_ISEMPTY(geography_expression)\n```\n\n **Description** \n\nReturns`TRUE`if the given`GEOGRAPHY`is empty; that is, the`GEOGRAPHY`does\nnot contain any points, lines, or polygons.\n\nNOTE: An empty`GEOGRAPHY`is not associated with a particular geometry shape.\nFor example, the results of expressions`ST_GEOGFROMTEXT('POINT EMPTY')`and`ST_GEOGFROMTEXT('GEOMETRYCOLLECTION EMPTY')`are identical.\n\n **Return type** \n\n`BOOL`\n\n\n\n"
},
{
"name": "ST_ISRING",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_ISRING(geography_expression)\n```\n\n **Description** \n\nReturns`TRUE`if the input`GEOGRAPHY`is a linestring and if the\nlinestring is both[ST_ISCLOSED](#st_isclosed)and\nsimple. A linestring is considered simple if it does not pass through the\nsame point twice (with the exception of the start and endpoint, which may\noverlap to form a ring).\n\nAn empty`GEOGRAPHY`is not a ring.\n\n **Return type** \n\n`BOOL`\n\n\n\n"
},
{
"name": "ST_LENGTH",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_LENGTH(geography_expression[, use_spheroid])\n```\n\n **Description** \n\nReturns the total length in meters of the lines in the input`GEOGRAPHY`.\n\nIf`geography_expression`is a point or a polygon, returns zero. If`geography_expression`is a collection, returns the length of the lines in the\ncollection; if the collection does not contain lines, returns zero.\n\nThe optional`use_spheroid`parameter determines how this function measures\ndistance. If`use_spheroid`is`FALSE`, the function measures distance on the\nsurface of a perfect sphere.\n\nThe`use_spheroid`parameter currently only supports\nthe value`FALSE`. The default value of`use_spheroid`is`FALSE`.\n\n **Return type** \n\n`FLOAT64`\n\n\n\n"
},
{
"name": "ST_LINEINTERPOLATEPOINT",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_LINEINTERPOLATEPOINT(linestring_geography, fraction)\n```\n\n **Description** \n\nGets a point at a specific fraction in a linestring`GEOGRAPHY`value.\n\n **Definitions** \n\n- ` linestring_geography`: A linestring` GEOGRAPHY`on which the target point\nis located.\n- ` fraction`: A` FLOAT64`value that represents a fraction\nalong the linestring` GEOGRAPHY`where the target point is located.\nThis should be an inclusive value between` 0`(start of the\nlinestring) and` 1`(end of the linestring).\n\n **Details** \n\n- Returns` NULL`if any input argument is` NULL`.\n- Returns an empty geography if` linestring_geography`is an empty geography.\n- Returns an error if` linestring_geography`is not a linestring or an empty\ngeography, or if` fraction`is outside the` [0, 1]`range.\n\n **Return Type** \n\n`GEOGRAPHY`\n\n **Example** \n\nThe following query returns a few points on a linestring. Notice that the\n midpoint of the linestring`LINESTRING(1 1, 5 5)`is slightly different from`POINT(3 3)`because the`GEOGRAPHY`type uses geodesic line segments.\n\n```\nWITH fractions AS (\n SELECT 0 AS fraction UNION ALL\n SELECT 0.5 UNION ALL\n SELECT 1 UNION ALL\n SELECT NULL\n )\nSELECT\n fraction,\n ST_LINEINTERPOLATEPOINT(ST_GEOGFROMTEXT('LINESTRING(1 1, 5 5)'), fraction)\n AS point\nFROM fractions\n\n/*-------------+-------------------------------------------*\n | fraction | point |\n +-------------+-------------------------------------------+\n | 0 | POINT(1 1) |\n | 0.5 | POINT(2.99633827268976 3.00182528336078) |\n | 1 | POINT(5 5) |\n | NULL | NULL |\n *-------------+-------------------------------------------*/\n```\n\n\n"
},
{
"name": "ST_LINELOCATEPOINT",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_LINELOCATEPOINT(linestring_geography, point_geography)\n```\n\n **Description** \n\nGets a section of a linestring between the start point and a selected point (a\npoint on the linestring closest to the`point_geography`argument). Returns the\npercentage that this section represents in the linestring.\n\nDetails:\n\n- To select a point on the linestring` GEOGRAPHY`(` linestring_geography`),\nthis function takes a point` GEOGRAPHY`(` point_geography`) and finds the[closest point](#st_closestpoint)to it on the linestring.\n- If two points on` linestring_geography`are an equal distance away from` point_geography`, it is not guaranteed which one will be selected.\n- The return value is an inclusive value between 0 and 1 (0-100%).\n- If the selected point is the start point on the linestring, the function\nreturns 0 (0%).\n- If the selected point is the end point on the linestring, the function\nreturns 1 (100%).\n\n`NULL`and error handling:\n\n- Returns` NULL`if any input argument is` NULL`.\n- Returns an error if` linestring_geography`is not a linestring or if` point_geography`is not a point. 
Use the` SAFE`prefix\nto obtain` NULL`for invalid input instead of an error.\n\n **Return Type** \n\n`FLOAT64`\n\n **Examples** \n\n```\nWITH geos AS (\n SELECT ST_GEOGPOINT(0, 0) AS point UNION ALL\n SELECT ST_GEOGPOINT(1, 0) UNION ALL\n SELECT ST_GEOGPOINT(1, 1) UNION ALL\n SELECT ST_GEOGPOINT(2, 2) UNION ALL\n SELECT ST_GEOGPOINT(3, 3) UNION ALL\n SELECT ST_GEOGPOINT(4, 4) UNION ALL\n SELECT ST_GEOGPOINT(5, 5) UNION ALL\n SELECT ST_GEOGPOINT(6, 5) UNION ALL\n SELECT NULL\n )\nSELECT\n point AS input_point,\n ST_LINELOCATEPOINT(ST_GEOGFROMTEXT('LINESTRING(1 1, 5 5)'), point)\n AS percentage_from_beginning\nFROM geos\n\n/*-------------+---------------------------*\n | input_point | percentage_from_beginning |\n +-------------+---------------------------+\n | POINT(0 0) | 0 |\n | POINT(1 0) | 0 |\n | POINT(1 1) | 0 |\n | POINT(2 2) | 0.25015214685147907 |\n | POINT(3 3) | 0.5002284283637185 |\n | POINT(4 4) | 0.7501905913884388 |\n | POINT(5 5) | 1 |\n | POINT(6 5) | 1 |\n | NULL | NULL |\n *-------------+---------------------------*/\n```\n\n\n"
},
{
"name": "ST_LINESUBSTRING",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_LINESUBSTRING(linestring_geography, start_fraction, end_fraction);\n```\n\n **Description** \n\nGets a segment of a linestring at a specific starting and ending fraction.\n\n **Definitions** \n\n- ` linestring_geography`: The LineString` GEOGRAPHY`value that represents the\nlinestring from which to extract a segment.\n- ` start_fraction`:` FLOAT64`value that represents\nthe starting fraction of the total length of` linestring_geography`.\nThis must be an inclusive value between 0 and 1 (0-100%).\n- ` end_fraction`:` FLOAT64`value that represents\nthe ending fraction of the total length of` linestring_geography`.\nThis must be an inclusive value between 0 and 1 (0-100%).\n\n **Details** \n\n`end_fraction`must be greater than or equal to`start_fraction`.\n\nIf`start_fraction`and`end_fraction`are equal, a linestring with only\none point is produced.\n\n **Return type** \n\n- LineString` GEOGRAPHY`if the resulting geography has more than one point.\n- Point` GEOGRAPHY`if the resulting geography has only one point.\n\n **Example** \n\nThe following query returns the second half of the linestring:\n\n```\nWITH data AS (\n SELECT ST_GEOGFROMTEXT('LINESTRING(20 70, 70 60, 10 70, 70 70)') AS geo1\n)\nSELECT ST_LINESUBSTRING(geo1, 0.5, 1) AS segment\nFROM data;\n\n/*-------------------------------------------------------------+\n | segment |\n +-------------------------------------------------------------+\n | LINESTRING(49.4760661523471 67.2419539103851, 10 70, 70 70) |\n +-------------------------------------------------------------*/\n```\n\nThe following query returns a linestring that only contains one point:\n\n```\nWITH data AS (\n SELECT ST_GEOGFROMTEXT('LINESTRING(20 70, 70 60, 10 70, 70 70)') AS geo1\n)\nSELECT ST_LINESUBSTRING(geo1, 0.5, 0.5) AS segment\nFROM data;\n\n/*------------------------------------------+\n | segment |\n +------------------------------------------+\n | POINT(49.4760661523471 67.2419539103851) |\n 
+------------------------------------------*/\n```\n\n\n"
},
{
"name": "ST_MAKELINE",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_MAKELINE(geography_1, geography_2)\n```\n\n```\nST_MAKELINE(array_of_geography)\n```\n\n **Description** \n\nCreates a`GEOGRAPHY`with a single linestring by\nconcatenating the point or line vertices of each of the input`GEOGRAPHY`s in the order they are given.\n\n`ST_MAKELINE`comes in two variants. For the first variant, input must be two`GEOGRAPHY`s. For the second, input must be an`ARRAY`of type`GEOGRAPHY`. In\neither variant, each input`GEOGRAPHY`must consist of one of the following\nvalues:\n\n- Exactly one point.\n- Exactly one linestring.\n\nFor the first variant of`ST_MAKELINE`, if either input`GEOGRAPHY`is`NULL`,`ST_MAKELINE`returns`NULL`. For the second variant, if input`ARRAY`or any\nelement in the input`ARRAY`is`NULL`,`ST_MAKELINE`returns`NULL`.\n\n **Constraints** \n\nEvery edge must span strictly less than 180 degrees.\n\nNOTE: The GoogleSQL snapping process may discard sufficiently short\nedges and snap the two endpoints together. For instance, if two input`GEOGRAPHY`s each contain a point and the two points are separated by a distance\nless than the snap radius, the points will be snapped together. In such a case\nthe result will be a`GEOGRAPHY`with exactly one point.\n\n **Return type** \n\nLineString`GEOGRAPHY`\n\n\n\n"
},
{
"name": "ST_MAKEPOLYGON",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_MAKEPOLYGON(polygon_shell[, array_of_polygon_holes])\n```\n\n **Description** \n\nCreates a`GEOGRAPHY`containing a single polygon\nfrom linestring inputs, where each input linestring is used to construct a\npolygon ring.\n\n`ST_MAKEPOLYGON`comes in two variants. For the first variant, the input\nlinestring is provided by a single`GEOGRAPHY`containing exactly one\nlinestring. For the second variant, the input consists of a single`GEOGRAPHY`and an array of`GEOGRAPHY`s, each containing exactly one linestring.\n\nThe first`GEOGRAPHY`in either variant is used to construct the polygon shell.\nAdditional`GEOGRAPHY`s provided in the input`ARRAY`specify a polygon hole.\nFor every input`GEOGRAPHY`containing exactly one linestring, the following\nmust be true:\n\n- The linestring must consist of at least three distinct vertices.\n- The linestring must be closed: that is, the first and last vertex have to be\nthe same. If the first and last vertex differ, the function constructs a\nfinal edge from the first vertex to the last.\n\nFor the first variant of`ST_MAKEPOLYGON`, if either input`GEOGRAPHY`is`NULL`,`ST_MAKEPOLYGON`returns`NULL`. For the second variant, if\ninput`ARRAY`or any element in the`ARRAY`is`NULL`,`ST_MAKEPOLYGON`returns`NULL`.\n\nNOTE:`ST_MAKEPOLYGON`accepts an empty`GEOGRAPHY`as input.`ST_MAKEPOLYGON`interprets an empty`GEOGRAPHY`as having an empty linestring, which will\ncreate a full loop: that is, a polygon that covers the entire Earth.\n\n **Constraints** \n\nTogether, the input rings must form a valid polygon:\n\n- The polygon shell must cover each of the polygon holes.\n- There can be only one polygon shell (which has to be the first input ring).\nThis implies that polygon holes cannot be nested.\n- Polygon rings may only intersect in a vertex on the boundary of both rings.\n\nEvery edge must span strictly less than 180 degrees.\n\nEach polygon ring divides the sphere into two regions. 
The first input linestring\nto`ST_MAKEPOLYGON`forms the polygon shell, and the interior is chosen to be\nthe smaller of the two regions. Each subsequent input linestring specifies a\npolygon hole, so the interior of the polygon is already well-defined. In order\nto define a polygon shell such that the interior of the polygon is the larger of\nthe two regions, see[ST_MAKEPOLYGONORIENTED](#st_makepolygonoriented).\n\nNOTE: The GoogleSQL snapping process may discard sufficiently\nshort edges and snap the two endpoints together. Hence, when vertices are\nsnapped together, it is possible that a polygon hole that is sufficiently small\nmay disappear, or the output`GEOGRAPHY`may contain only a line or a\npoint.\n\n **Return type** \n\n`GEOGRAPHY`\n\n\n\n"
},
{
"name": "ST_MAKEPOLYGONORIENTED",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_MAKEPOLYGONORIENTED(array_of_geography)\n```\n\n **Description** \n\nLike`ST_MAKEPOLYGON`, but the vertex ordering of each input linestring\ndetermines the orientation of each polygon ring. The orientation of a polygon\nring defines the interior of the polygon as follows: if someone walks along the\nboundary of the polygon in the order of the input vertices, the interior of the\npolygon is on the left. This applies for each polygon ring provided.\n\nThis variant of the polygon constructor is more flexible since`ST_MAKEPOLYGONORIENTED`can construct a polygon such that the interior is on\neither side of the polygon ring. However, proper orientation of polygon rings is\ncritical in order to construct the desired polygon.\n\nIf the input`ARRAY`or any element in the`ARRAY`is`NULL`,`ST_MAKEPOLYGONORIENTED`returns`NULL`.\n\nNOTE: The input argument for`ST_MAKEPOLYGONORIENTED`may contain an empty`GEOGRAPHY`.`ST_MAKEPOLYGONORIENTED`interprets an empty`GEOGRAPHY`as having\nan empty linestring, which will create a full loop: that is, a polygon that\ncovers the entire Earth.\n\n **Constraints** \n\nTogether, the input rings must form a valid polygon:\n\n- The polygon shell must cover each of the polygon holes.\n- There must be only one polygon shell, which must be the first input ring.\nThis implies that polygon holes cannot be nested.\n- Polygon rings may only intersect in a vertex on the boundary of both rings.\n\nEvery edge must span strictly less than 180 degrees.\n\n`ST_MAKEPOLYGONORIENTED`relies on the ordering of the input vertices of each\nlinestring to determine the orientation of the polygon. This applies to the\npolygon shell and any polygon holes.`ST_MAKEPOLYGONORIENTED`expects all\npolygon holes to have the opposite orientation of the shell. 
See[ST_MAKEPOLYGON](#st_makepolygon)for an alternate polygon constructor, and\nother constraints on building a valid polygon.\n\nNOTE: Due to the GoogleSQL snapping process, edges with a sufficiently\nshort length will be discarded and the two endpoints will be snapped to a single\npoint. Therefore, it is possible that vertices in a linestring may be snapped\ntogether such that one or more edge disappears. Hence, it is possible that a\npolygon hole that is sufficiently small may disappear, or the resulting`GEOGRAPHY`may contain only a line or a point.\n\n **Return type** \n\n`GEOGRAPHY`\n\n\n\n"
},
{
"name": "ST_MAXDISTANCE",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_MAXDISTANCE(geography_1, geography_2[, use_spheroid])\n```\n\n **Description** \n\nReturns the longest distance in meters between two non-empty`GEOGRAPHY`s; that is, the distance between two\nvertices where the first vertex is in the first`GEOGRAPHY`, and the second vertex is in the second`GEOGRAPHY`. If`geography_1`and`geography_2`are the\nsame`GEOGRAPHY`, the function returns the distance\nbetween the two most distant vertices in that`GEOGRAPHY`.\n\nIf either of the input`GEOGRAPHY`s is empty,`ST_MAXDISTANCE`returns`NULL`.\n\nThe optional`use_spheroid`parameter determines how this function measures\ndistance. If`use_spheroid`is`FALSE`, the function measures distance on the\nsurface of a perfect sphere.\n\nThe`use_spheroid`parameter currently only supports\nthe value`FALSE`. The default value of`use_spheroid`is`FALSE`.\n\n **Return type** \n\n`FLOAT64`\n\n\n\n"
},
{
"name": "ST_NPOINTS",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_NPOINTS(geography_expression)\n```\n\n **Description** \n\nAn alias of[ST_NUMPOINTS](#st_numpoints).\n\n\n\n"
},
{
"name": "ST_NUMGEOMETRIES",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_NUMGEOMETRIES(geography_expression)\n```\n\n **Description** \n\nReturns the number of geometries in the input`GEOGRAPHY`. For a single point,\nlinestring, or polygon,`ST_NUMGEOMETRIES`returns`1`. For any collection of\ngeometries,`ST_NUMGEOMETRIES`returns the number of geometries making up the\ncollection.`ST_NUMGEOMETRIES`returns`0`if the input is the empty`GEOGRAPHY`.\n\n **Return type** \n\n`INT64`\n\n **Example** \n\nThe following example computes`ST_NUMGEOMETRIES`for a single point geography,\ntwo collections, and an empty geography.\n\n```\nWITH example AS(\n SELECT ST_GEOGFROMTEXT('POINT(5 0)') AS geography\n UNION ALL\n SELECT ST_GEOGFROMTEXT('MULTIPOINT(0 1, 4 3, 2 6)') AS geography\n UNION ALL\n SELECT ST_GEOGFROMTEXT('GEOMETRYCOLLECTION(POINT(0 0), LINESTRING(1 2, 2 1))') AS geography\n UNION ALL\n SELECT ST_GEOGFROMTEXT('GEOMETRYCOLLECTION EMPTY'))\nSELECT\n geography,\n ST_NUMGEOMETRIES(geography) AS num_geometries,\nFROM example;\n\n/*------------------------------------------------------+----------------*\n | geography | num_geometries |\n +------------------------------------------------------+----------------+\n | POINT(5 0) | 1 |\n | MULTIPOINT(0 1, 4 3, 2 6) | 3 |\n | GEOMETRYCOLLECTION(POINT(0 0), LINESTRING(1 2, 2 1)) | 2 |\n | GEOMETRYCOLLECTION EMPTY | 0 |\n *------------------------------------------------------+----------------*/\n```\n\n\n"
},
{
"name": "ST_NUMPOINTS",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_NUMPOINTS(geography_expression)\n```\n\n **Description** \n\nReturns the number of vertices in the input`GEOGRAPHY`. This includes the number of points, the\nnumber of linestring vertices, and the number of polygon vertices.\n\nNOTE: The first and last vertex of a polygon ring are counted as distinct\nvertices.\n\n **Return type** \n\n`INT64`\n\n\n\n"
},
{
"name": "ST_PERIMETER",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_PERIMETER(geography_expression[, use_spheroid])\n```\n\n **Description** \n\nReturns the length in meters of the boundary of the polygons in the input`GEOGRAPHY`.\n\nIf`geography_expression`is a point or a line, returns zero. If`geography_expression`is a collection, returns the perimeter of the polygons\nin the collection; if the collection does not contain polygons, returns zero.\n\nThe optional`use_spheroid`parameter determines how this function measures\ndistance. If`use_spheroid`is`FALSE`, the function measures distance on the\nsurface of a perfect sphere.\n\nThe`use_spheroid`parameter currently only supports\nthe value`FALSE`. The default value of`use_spheroid`is`FALSE`.\n\n **Return type** \n\n`FLOAT64`\n\n\n\n"
},
{
"name": "ST_POINTN",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_POINTN(linestring_geography, index)\n```\n\n **Description** \n\nReturns the Nth point of a linestring geography as a point geography, where N is\nthe index. The index is 1-based. Negative values are counted backwards from the\nend of the linestring, so that -1 is the last point. Returns an error if the\ninput is not a linestring, if the input is empty, or if there is no vertex at\nthe given index. Use the`SAFE`prefix to obtain`NULL`for invalid input\ninstead of an error.\n\n **Return Type** \n\nPoint`GEOGRAPHY`\n\n **Example** \n\nThe following example uses`ST_POINTN`,[ST_STARTPOINT](#st_startpoint)and[ST_ENDPOINT](#st_endpoint)to extract points from a linestring.\n\n```\nWITH linestring AS (\n SELECT ST_GEOGFROMTEXT('LINESTRING(1 1, 2 1, 3 2, 3 3)') g\n)\nSELECT ST_POINTN(g, 1) AS first, ST_POINTN(g, -1) AS last,\n ST_POINTN(g, 2) AS second, ST_POINTN(g, -2) AS second_to_last\nFROM linestring;\n\n/*--------------+--------------+--------------+----------------*\n | first | last | second | second_to_last |\n +--------------+--------------+--------------+----------------+\n | POINT(1 1) | POINT(3 3) | POINT(2 1) | POINT(3 2) |\n *--------------+--------------+--------------+----------------*/\n```\n\n\n"
},
{
"name": "ST_SIMPLIFY",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_SIMPLIFY(geography, tolerance_meters)\n```\n\n **Description** \n\nReturns a simplified version of`geography`, the given input`GEOGRAPHY`. The input`GEOGRAPHY`is simplified by replacing nearly straight\nchains of short edges with a single long edge. The input`geography`will not\nchange by more than the tolerance specified by`tolerance_meters`. Thus,\nsimplified edges are guaranteed to pass within`tolerance_meters`of the *original* positions of all vertices that were removed from that edge. The given`tolerance_meters`is in meters on the surface of the Earth.\n\nNote that`ST_SIMPLIFY`preserves topological relationships, which means that\nno new crossing edges will be created and the output will be valid. For a large\nenough tolerance, adjacent shapes may collapse into a single object, or a shape\ncould be simplified to a shape with a smaller dimension.\n\n **Constraints** \n\nFor`ST_SIMPLIFY`to have any effect,`tolerance_meters`must be non-zero.\n\n`ST_SIMPLIFY`returns an error if the tolerance specified by`tolerance_meters`is one of the following:\n\n- A negative tolerance.\n- Greater than ~7800 kilometers.\n\n **Return type** \n\n`GEOGRAPHY`\n\n **Examples** \n\nThe following example shows how`ST_SIMPLIFY`simplifies the input line`GEOGRAPHY`by removing intermediate vertices.\n\n```\nWITH example AS\n (SELECT ST_GEOGFROMTEXT('LINESTRING(0 0, 0.05 0, 0.1 0, 0.15 0, 2 0)') AS line)\nSELECT\n line AS original_line,\n ST_SIMPLIFY(line, 1) AS simplified_line\nFROM example;\n\n/*---------------------------------------------+----------------------*\n | original_line | simplified_line |\n +---------------------------------------------+----------------------+\n | LINESTRING(0 0, 0.05 0, 0.1 0, 0.15 0, 2 0) | LINESTRING(0 0, 2 0) |\n *---------------------------------------------+----------------------*/\n```\n\nThe following example illustrates how the result of`ST_SIMPLIFY`can have a\nlower dimension than the original shape.\n\n```\nWITH 
example AS\n (SELECT\n ST_GEOGFROMTEXT('POLYGON((0 0, 0.1 0, 0.1 0.1, 0 0))') AS polygon,\n t AS tolerance\n FROM UNNEST([1000, 10000, 100000]) AS t)\nSELECT\n polygon AS original_triangle,\n tolerance AS tolerance_meters,\n ST_SIMPLIFY(polygon, tolerance) AS simplified_result\nFROM example\n\n/*-------------------------------------+------------------+-------------------------------------*\n | original_triangle | tolerance_meters | simplified_result |\n +-------------------------------------+------------------+-------------------------------------+\n | POLYGON((0 0, 0.1 0, 0.1 0.1, 0 0)) | 1000 | POLYGON((0 0, 0.1 0, 0.1 0.1, 0 0)) |\n | POLYGON((0 0, 0.1 0, 0.1 0.1, 0 0)) | 10000 | LINESTRING(0 0, 0.1 0.1) |\n | POLYGON((0 0, 0.1 0, 0.1 0.1, 0 0)) | 100000 | POINT(0 0) |\n *-------------------------------------+------------------+-------------------------------------*/\n```\n\n\n"
},
{
"name": "ST_SNAPTOGRID",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_SNAPTOGRID(geography_expression, grid_size)\n```\n\n **Description** \n\nReturns the input`GEOGRAPHY`, where each vertex has\nbeen snapped to a longitude/latitude grid. The grid size is determined by the`grid_size`parameter, which is given in degrees.\n\n **Constraints** \n\nArbitrary grid sizes are not supported. The`grid_size`parameter is rounded so\nthat it is of the form`10^n`, where`-10 < n < 0`.\n\n **Return type** \n\n`GEOGRAPHY`\n\n\n\n"
},
{
"name": "ST_STARTPOINT",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_STARTPOINT(linestring_geography)\n```\n\n **Description** \n\nReturns the first point of a linestring geography as a point geography. Returns\nan error if the input is not a linestring or if the input is empty. Use the`SAFE`prefix to obtain`NULL`for invalid input instead of an error.\n\n **Return Type** \n\nPoint`GEOGRAPHY`\n\n **Example** \n\n```\nSELECT ST_STARTPOINT(ST_GEOGFROMTEXT('LINESTRING(1 1, 2 1, 3 2, 3 3)')) first\n\n/*--------------*\n | first |\n +--------------+\n | POINT(1 1) |\n *--------------*/\n```\n\n\n"
},
{
"name": "ST_TOUCHES",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_TOUCHES(geography_1, geography_2)\n```\n\n **Description** \n\nReturns`TRUE`provided the following two conditions are satisfied:\n\n1. ` geography_1`intersects` geography_2`.\n1. The interior of` geography_1`and the interior of` geography_2`are\ndisjoint.\n\n **Return type** \n\n`BOOL`\n\n\n\n"
},
{
"name": "ST_UNION",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_UNION(geography_1, geography_2)\n```\n\n```\nST_UNION(array_of_geography)\n```\n\n **Description** \n\nReturns a`GEOGRAPHY`that represents the point set\nunion of all input`GEOGRAPHY`s.\n\n`ST_UNION`comes in two variants. For the first variant, input must be two`GEOGRAPHY`s. For the second, the input is an`ARRAY`of type`GEOGRAPHY`.\n\nFor the first variant of`ST_UNION`, if an input`GEOGRAPHY`is`NULL`,`ST_UNION`returns`NULL`.\nFor the second variant, if the input`ARRAY`value\nis`NULL`,`ST_UNION`returns`NULL`.\nFor a non-`NULL`input`ARRAY`, the union is computed\nand`NULL`elements are ignored so that they do not affect the output.\n\nSee[ST_UNION_AGG](#st_union_agg)for the aggregate version of`ST_UNION`.\n\n **Return type** \n\n`GEOGRAPHY`\n\n **Example** \n\n```\nSELECT ST_UNION(\n ST_GEOGFROMTEXT('LINESTRING(-122.12 47.67, -122.19 47.69)'),\n ST_GEOGFROMTEXT('LINESTRING(-122.12 47.67, -100.19 47.69)')\n) AS results\n\n/*---------------------------------------------------------*\n | results |\n +---------------------------------------------------------+\n | LINESTRING(-100.19 47.69, -122.12 47.67, -122.19 47.69) |\n *---------------------------------------------------------*/\n```\n\n\n"
},
{
"name": "ST_UNION_AGG",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_UNION_AGG(geography)\n```\n\n **Description** \n\nReturns a`GEOGRAPHY`that represents the point set\nunion of all input`GEOGRAPHY`s.\n\n`ST_UNION_AGG`ignores`NULL`input`GEOGRAPHY`values.\n\nSee[ST_UNION](#st_union)for the non-aggregate version of`ST_UNION_AGG`.\n\n **Return type** \n\n`GEOGRAPHY`\n\n **Example** \n\n```\nSELECT ST_UNION_AGG(items) AS results\nFROM UNNEST([\n ST_GEOGFROMTEXT('LINESTRING(-122.12 47.67, -122.19 47.69)'),\n ST_GEOGFROMTEXT('LINESTRING(-122.12 47.67, -100.19 47.69)'),\n ST_GEOGFROMTEXT('LINESTRING(-122.12 47.67, -122.19 47.69)')]) as items;\n\n/*---------------------------------------------------------*\n | results |\n +---------------------------------------------------------+\n | LINESTRING(-100.19 47.69, -122.12 47.67, -122.19 47.69) |\n *---------------------------------------------------------*/\n```\n\n\n"
},
{
"name": "ST_WITHIN",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_WITHIN(geography_1, geography_2)\n```\n\n **Description** \n\nReturns`TRUE`if no point of`geography_1`is outside of`geography_2`and\nthe interiors of`geography_1`and`geography_2`intersect.\n\nGiven two geographies`a`and`b`,`ST_WITHIN(a, b)`returns the same result\nas[ST_CONTAINS](#st_contains)`(b, a)`. Note the opposite order of arguments.\n\n **Return type** \n\n`BOOL`\n\n\n\n"
},
{
"name": "ST_X",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_X(point_geography_expression)\n```\n\n **Description** \n\nReturns the longitude in degrees of the single-point input`GEOGRAPHY`.\n\nFor any input`GEOGRAPHY`that is not a single point,\nincluding an empty`GEOGRAPHY`,`ST_X`returns an\nerror. Use the`SAFE.`prefix to obtain`NULL`.\n\n **Return type** \n\n`FLOAT64`\n\n **Example** \n\nThe following example uses`ST_X`and`ST_Y`to extract coordinates from\nsingle-point geographies.\n\n```\nWITH points AS\n (SELECT ST_GEOGPOINT(i, i + 1) AS p FROM UNNEST([0, 5, 12]) AS i)\n SELECT\n p,\n ST_X(p) as longitude,\n ST_Y(p) as latitude\nFROM points;\n\n/*--------------+-----------+----------*\n | p | longitude | latitude |\n +--------------+-----------+----------+\n | POINT(0 1) | 0.0 | 1.0 |\n | POINT(5 6) | 5.0 | 6.0 |\n | POINT(12 13) | 12.0 | 13.0 |\n *--------------+-----------+----------*/\n```\n\n\n"
},
{
"name": "ST_Y",
"arguments": [],
"category": "Geography",
"description_markdown": "```\nST_Y(point_geography_expression)\n```\n\n **Description** \n\nReturns the latitude in degrees of the single-point input`GEOGRAPHY`.\n\nFor any input`GEOGRAPHY`that is not a single point,\nincluding an empty`GEOGRAPHY`,`ST_Y`returns an\nerror. Use the`SAFE.`prefix to return`NULL`instead.\n\n **Return type** \n\n`FLOAT64`\n\n **Example** \n\nSee[ST_X](#st_x)for example usage.\n\n\n<span id=\"hash_functions\">\n## Hash functions\n\n</span>\nGoogleSQL for BigQuery supports the following hash functions.\n\n\n\n"
},
{
"name": "SUBSTR",
"arguments": [],
"category": "String",
"description_markdown": "```\nSUBSTR(value, position[, length])\n```\n\n **Description** \n\nGets a portion (substring) of the supplied`STRING`or`BYTES`value.\n\nThe`position`argument is an integer specifying the starting position of the\nsubstring.\n\n- If` position`is` 1`, the substring starts from the first character or byte.\n- If` position`is` 0`or less than` -LENGTH(value)`,` position`is set to` 1`,\nand the substring starts from the first character or byte.\n- If` position`is greater than the length of` value`, the function produces\nan empty substring.\n- If` position`is negative, the function counts from the end of` value`,\nwith` -1`indicating the last character or byte.\n\nThe`length`argument specifies the maximum number of characters or bytes to\nreturn.\n\n- If` length`is not specified, the function produces a substring that starts\nat the specified position and ends at the last character or byte of` value`.\n- If` length`is` 0`, the function produces an empty substring.\n- If` length`is negative, the function produces an error.\n- The returned substring may be shorter than` length`, for example, when` length`exceeds the length of` value`, or when the starting position of the\nsubstring plus` length`is greater than the length of` value`.\n\n **Return type** \n\n`STRING`or`BYTES`\n\n **Examples** \n\n```\nWITH items AS\n (SELECT 'apple' as item\n UNION ALL\n SELECT 'banana' as item\n UNION ALL\n SELECT 'orange' as item)\n\nSELECT\n SUBSTR(item, 2) as example\nFROM items;\n\n/*---------*\n | example |\n +---------+\n | pple |\n | anana |\n | range |\n *---------*/\n```\n\n```\nWITH items AS\n (SELECT 'apple' as item\n UNION ALL\n SELECT 'banana' as item\n UNION ALL\n SELECT 'orange' as item)\n\nSELECT\n SUBSTR(item, 2, 2) as example\nFROM items;\n\n/*---------*\n | example |\n +---------+\n | pp |\n | an |\n | ra |\n *---------*/\n```\n\n```\nWITH items AS\n (SELECT 'apple' as item\n UNION ALL\n SELECT 'banana' as item\n UNION ALL\n SELECT 'orange' as 
item)\n\nSELECT\n SUBSTR(item, -2) as example\nFROM items;\n\n/*---------*\n | example |\n +---------+\n | le |\n | na |\n | ge |\n *---------*/\n```\n\n```\nWITH items AS\n (SELECT 'apple' as item\n UNION ALL\n SELECT 'banana' as item\n UNION ALL\n SELECT 'orange' as item)\n\nSELECT\n SUBSTR(item, 1, 123) as example\nFROM items;\n\n/*---------*\n | example |\n +---------+\n | apple |\n | banana |\n | orange |\n *---------*/\n```\n\n```\nWITH items AS\n (SELECT 'apple' as item\n UNION ALL\n SELECT 'banana' as item\n UNION ALL\n SELECT 'orange' as item)\n\nSELECT\n SUBSTR(item, 123) as example\nFROM items;\n\n/*---------*\n | example |\n +---------+\n | |\n | |\n | |\n *---------*/\n```\n\n```\nWITH items AS\n (SELECT 'apple' as item\n UNION ALL\n SELECT 'banana' as item\n UNION ALL\n SELECT 'orange' as item)\n\nSELECT\n SUBSTR(item, 123, 5) as example\nFROM items;\n\n/*---------*\n | example |\n +---------+\n | |\n | |\n | |\n *---------*/\n```\n\n\n"
},
{
"name": "SUBSTRING",
"arguments": [],
"category": "String",
"description_markdown": "```\nSUBSTRING(value, position[, length])\n```\n\nAlias for[SUBSTR](#substr).\n\n\n\n"
},
{
"name": "SUM",
"arguments": [],
"category": "Aggregate",
"description_markdown": "```\nSUM(\n [ DISTINCT ]\n expression\n)\n[ OVER over_clause ]\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n [ window_frame_clause ]\n```\n\n **Description** \n\nReturns the sum of non-`NULL`values in an aggregated group.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nThis function can be used with the[AGGREGATION_THRESHOLD clause](/bigquery/docs/reference/standard-sql/query-syntax#agg_threshold_clause).\n\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n`SUM`can be used with differential privacy. For more information, see[Differentially private aggregate functions](#aggregate-dp-functions).\n\nCaveats:\n\n- If the aggregated group is empty or the argument is` NULL`for all rows in\nthe group, returns` NULL`.\n- If the argument is` NaN`for any row in the group, returns` NaN`.\n- If the argument is` [+|-]Infinity`for any row in the group, returns either` [+|-]Infinity`or` NaN`.\n- If there is numeric overflow, produces an error.\n- If a[floating-point type](/bigquery/docs/reference/standard-sql/data-types#floating_point_types)is returned, the result is[non-deterministic](/bigquery/docs/reference/standard-sql/data-types#floating-point-semantics), which means you might receive a\ndifferent result each time you use this function.\n\n **Supported Argument Types** \n\n- Any supported numeric data type\n- ` INTERVAL`\n\n **Return Data Types** \n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` | `INTERVAL` |\n| --- | --- | --- | --- | --- | --- |\n| OUTPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` | `INTERVAL` |\n\n **Examples** 
\n\n```\nSELECT SUM(x) AS sum\nFROM UNNEST([1, 2, 3, 4, 5, 4, 3, 2, 1]) AS x;\n\n/*-----*\n | sum |\n +-----+\n | 25 |\n *-----*/\n```\n\n```\nSELECT SUM(DISTINCT x) AS sum\nFROM UNNEST([1, 2, 3, 4, 5, 4, 3, 2, 1]) AS x;\n\n/*-----*\n | sum |\n +-----+\n | 15 |\n *-----*/\n```\n\n```\nSELECT\n x,\n SUM(x) OVER (PARTITION BY MOD(x, 3)) AS sum\nFROM UNNEST([1, 2, 3, 4, 5, 4, 3, 2, 1]) AS x;\n\n/*---+-----*\n | x | sum |\n +---+-----+\n | 3 | 6 |\n | 3 | 6 |\n | 1 | 10 |\n | 4 | 10 |\n | 4 | 10 |\n | 1 | 10 |\n | 2 | 9 |\n | 5 | 9 |\n | 2 | 9 |\n *---+-----*/\n```\n\n```\nSELECT\n x,\n SUM(DISTINCT x) OVER (PARTITION BY MOD(x, 3)) AS sum\nFROM UNNEST([1, 2, 3, 4, 5, 4, 3, 2, 1]) AS x;\n\n/*---+-----*\n | x | sum |\n +---+-----+\n | 3 | 3 |\n | 3 | 3 |\n | 1 | 5 |\n | 4 | 5 |\n | 4 | 5 |\n | 1 | 5 |\n | 2 | 7 |\n | 5 | 7 |\n | 2 | 7 |\n *---+-----*/\n```\n\n```\nSELECT SUM(x) AS sum\nFROM UNNEST([]) AS x;\n\n/*------*\n | sum |\n +------+\n | NULL |\n *------*/\n```\n\n\n<span id=\"approximate_aggregate_functions\">\n## Approximate aggregate functions\n\n</span>\nGoogleSQL for BigQuery supports approximate aggregate functions.\nTo learn about the syntax for aggregate function calls, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nApproximate aggregate functions are scalable in terms of memory usage and time,\nbut produce approximate results instead of exact results. These functions\ntypically require less memory than[exact aggregation functions](#aggregate_functions)like`COUNT(DISTINCT ...)`, but also introduce statistical uncertainty.\nThis makes approximate aggregation appropriate for large data streams for\nwhich linear memory usage is impractical, as well as for data that is\nalready approximate.\n\nThe approximate aggregate functions in this section work directly on the\ninput data, rather than an intermediate estimation of the data. 
These functions *do not allow* users to specify the precision for the estimation with\nsketches. If you would like to specify precision with sketches, see:\n\n- [HyperLogLog++ functions](#hyperloglog_functions)to estimate cardinality.\n\n\n\n"
},
{
"name": "TAN",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nTAN(X)\n```\n\n **Description** \n\nComputes the tangent of X where X is specified in radians. Generates an error if\noverflow occurs.\n\n| X | TAN(X) |\n| --- | --- |\n| `+inf` | `NaN` |\n| `-inf` | `NaN` |\n| `NaN` | `NaN` |\n\n\n\n"
},
{
"name": "TANH",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nTANH(X)\n```\n\n **Description** \n\nComputes the hyperbolic tangent of X where X is specified in radians. Does not\nfail.\n\n| X | TANH(X) |\n| --- | --- |\n| `+inf` | 1.0 |\n| `-inf` | -1.0 |\n| `NaN` | `NaN` |\n\n\n\n"
},
{
"name": "TIME",
"arguments": [],
"category": "Time",
"description_markdown": "```\n1. TIME(hour, minute, second)\n2. TIME(timestamp, [time_zone])\n3. TIME(datetime)\n```\n\n **Description** \n\n1. Constructs a` TIME`object using` INT64`values representing the hour, minute, and second.\n1. Constructs a` TIME`object using a` TIMESTAMP`object. It supports an\noptional\nparameter to[specify a time zone](#timezone_definitions). If no\ntime zone is specified, the default time zone, UTC, is\nused.\n1. Constructs a` TIME`object using a` DATETIME`object.\n\n **Return Data Type** \n\n`TIME`\n\n **Example** \n\n```\nSELECT\n TIME(15, 30, 00) as time_hms,\n TIME(TIMESTAMP \"2008-12-25 15:30:00+08\", \"America/Los_Angeles\") as time_tstz;\n\n/*----------+-----------*\n | time_hms | time_tstz |\n +----------+-----------+\n | 15:30:00 | 23:30:00 |\n *----------+-----------*/\n```\n\n```\nSELECT TIME(DATETIME \"2008-12-25 15:30:00.000000\") AS time_dt;\n\n/*----------*\n | time_dt |\n +----------+\n | 15:30:00 |\n *----------*/\n```\n\n\n"
},
{
"name": "TIMESTAMP",
"arguments": [],
"category": "Timestamp",
"description_markdown": "```\nTIMESTAMP(string_expression[, time_zone])\nTIMESTAMP(date_expression[, time_zone])\nTIMESTAMP(datetime_expression[, time_zone])\n```\n\n **Description** \n\n- ` string_expression[, time_zone]`: Converts a string to a\ntimestamp.` string_expression`must include a\ntimestamp literal.\nIf` string_expression`includes a time zone in the timestamp literal, do\nnot include an explicit` time_zone`argument.\n- ` date_expression[, time_zone]`: Converts a date to a timestamp.\nThe value returned is the earliest timestamp that falls within\nthe given date.\n- ` datetime_expression[, time_zone]`: Converts a\ndatetime to a timestamp.\n\nThis function supports an optional\nparameter to[specify a time zone](#timezone_definitions). If\nno time zone is specified, the default time zone, UTC,\nis used.\n\n **Return Data Type** \n\n`TIMESTAMP`\n\n **Examples** \n\n```\nSELECT TIMESTAMP(\"2008-12-25 15:30:00+00\") AS timestamp_str;\n\n-- Display of results may differ, depending upon the environment and time zone where this query was executed.\n/*-------------------------*\n | timestamp_str |\n +-------------------------+\n | 2008-12-25 15:30:00 UTC |\n *-------------------------*/\n```\n\n```\nSELECT TIMESTAMP(\"2008-12-25 15:30:00\", \"America/Los_Angeles\") AS timestamp_str;\n\n-- Display of results may differ, depending upon the environment and time zone where this query was executed.\n/*-------------------------*\n | timestamp_str |\n +-------------------------+\n | 2008-12-25 23:30:00 UTC |\n *-------------------------*/\n```\n\n```\nSELECT TIMESTAMP(\"2008-12-25 15:30:00 UTC\") AS timestamp_str;\n\n-- Display of results may differ, depending upon the environment and time zone where this query was executed.\n/*-------------------------*\n | timestamp_str |\n +-------------------------+\n | 2008-12-25 15:30:00 UTC |\n *-------------------------*/\n```\n\n```\nSELECT TIMESTAMP(DATETIME \"2008-12-25 15:30:00\") AS timestamp_datetime;\n\n-- Display of 
results may differ, depending upon the environment and time zone where this query was executed.\n/*-------------------------*\n | timestamp_datetime |\n +-------------------------+\n | 2008-12-25 15:30:00 UTC |\n *-------------------------*/\n```\n\n```\nSELECT TIMESTAMP(DATE \"2008-12-25\") AS timestamp_date;\n\n-- Display of results may differ, depending upon the environment and time zone where this query was executed.\n/*-------------------------*\n | timestamp_date |\n +-------------------------+\n | 2008-12-25 00:00:00 UTC |\n *-------------------------*/\n```\n\n\n"
},
{
"name": "TIMESTAMP_ADD",
"arguments": [],
"category": "Timestamp",
"description_markdown": "```\nTIMESTAMP_ADD(timestamp_expression, INTERVAL int64_expression date_part)\n```\n\n **Description** \n\nAdds`int64_expression`units of`date_part`to the timestamp, independent of\nany time zone.\n\n`TIMESTAMP_ADD`supports the following values for`date_part`:\n\n- ` MICROSECOND`\n- ` MILLISECOND`\n- ` SECOND`\n- ` MINUTE`\n- ` HOUR`. Equivalent to 60` MINUTE`parts.\n- ` DAY`. Equivalent to 24` HOUR`parts.\n\n **Return Data Types** \n\n`TIMESTAMP`\n\n **Example** \n\n```\nSELECT\n TIMESTAMP(\"2008-12-25 15:30:00+00\") AS original,\n TIMESTAMP_ADD(TIMESTAMP \"2008-12-25 15:30:00+00\", INTERVAL 10 MINUTE) AS later;\n\n-- Display of results may differ, depending upon the environment and time zone where this query was executed.\n/*-------------------------+-------------------------*\n | original | later |\n +-------------------------+-------------------------+\n | 2008-12-25 15:30:00 UTC | 2008-12-25 15:40:00 UTC |\n *-------------------------+-------------------------*/\n```\n\n\n"
},
{
"name": "TIMESTAMP_BUCKET",
"arguments": [],
"category": "Time_series",
"description_markdown": " **Preview** \n\nThis product or feature is subject to the \"Pre-GA Offerings Terms\"\n in the General Service Terms section of the[Service Specific Terms](/terms/service-terms).\n Pre-GA products and features are available \"as is\" and might have\n limited support. For more information, see the[launch stage descriptions](/products#product-launch-stages).\n\n **Note:** To provide feedback or request support for this feature, send an email to[bigquery-time-series-preview-support@google.com](mailto:bigquery-time-series-preview-support@google.com).```\nTIMESTAMP_BUCKET(timestamp_in_bucket, bucket_width)\n```\n\n```\nTIMESTAMP_BUCKET(timestamp_in_bucket, bucket_width, bucket_origin_timestamp)\n```\n\n **Description** \n\nGets the lower bound of the timestamp bucket that contains a timestamp.\n\n **Definitions** \n\n- ` timestamp_in_bucket`: A` TIMESTAMP`value that you can use to look up a\ntimestamp bucket.\n- ` bucket_width`: An` INTERVAL`value that represents the width of\na timestamp bucket. A[single interval](/bigquery/docs/reference/standard-sql/data-types#single_datetime_part_interval)with[date and time parts](/bigquery/docs/reference/standard-sql/data-types#interval_datetime_parts)is supported.\n- ` bucket_origin_timestamp`: A` TIMESTAMP`value that represents a point in\ntime. All buckets expand left and right from this point. If this argument\nis not set,` 1950-01-01 00:00:00`is used by default.\n\n **Return type** \n\n`TIMESTAMP`\n\n **Examples** \n\nIn the following example, the origin is omitted and the default origin,`1950-01-01 00:00:00`is used. All buckets expand in both directions from the\norigin, and the size of each bucket is 12 hours. 
The lower bound of the bucket\nin which`my_timestamp`belongs is returned:\n\n```\nWITH some_timestamps AS (\n SELECT TIMESTAMP '1949-12-30 13:00:00.00' AS my_timestamp UNION ALL\n SELECT TIMESTAMP '1949-12-31 00:00:00.00' UNION ALL\n SELECT TIMESTAMP '1949-12-31 13:00:00.00' UNION ALL\n SELECT TIMESTAMP '1950-01-01 00:00:00.00' UNION ALL\n SELECT TIMESTAMP '1950-01-01 13:00:00.00' UNION ALL\n SELECT TIMESTAMP '1950-01-02 00:00:00.00'\n)\nSELECT TIMESTAMP_BUCKET(my_timestamp, INTERVAL 12 HOUR) AS bucket_lower_bound\nFROM some_timestamps;\n\n-- Display of results may differ, depending upon the environment and\n-- time zone where this query was executed.\n /*-------------------------+\n | bucket_lower_bound |\n +-------------------------+\n | 1949-12-30 12:00:00 UTC |\n | 1949-12-31 00:00:00 UTC |\n | 1949-12-31 12:00:00 UTC |\n | 1950-01-01 00:00:00 UTC |\n | 1950-01-01 12:00:00 UTC |\n | 1950-01-02 00:00:00 UTC |\n +-------------------------*/\n\n-- Some timestamp buckets that originate from 1950-01-01 00:00:00:\n-- + Bucket: ...\n-- + Bucket: [1949-12-30 00:00:00.00 UTC, 1949-12-30 12:00:00.00 UTC)\n-- + Bucket: [1949-12-30 12:00:00.00 UTC, 1949-12-31 00:00:00.00 UTC)\n-- + Origin: [1950-01-01 00:00:00.00 UTC]\n-- + Bucket: [1950-01-01 00:00:00.00 UTC, 1950-01-01 12:00:00.00 UTC)\n-- + Bucket: [1950-01-01 12:00:00.00 UTC, 1950-01-02 00:00:00.00 UTC)\n-- + Bucket: ...\n```\n\nIn the following example, the origin has been changed to`2000-12-22 12:00:00`,\nand all buckets expand in both directions from this point. The size of each\nbucket is seven days. 
The lower bound of the bucket in which`my_timestamp`belongs is returned:\n\n```\nWITH some_timestamps AS (\n SELECT TIMESTAMP '2000-12-20 00:00:00.00' AS my_timestamp UNION ALL\n SELECT TIMESTAMP '2000-12-21 00:00:00.00' UNION ALL\n SELECT TIMESTAMP '2000-12-22 00:00:00.00' UNION ALL\n SELECT TIMESTAMP '2000-12-23 00:00:00.00' UNION ALL\n SELECT TIMESTAMP '2000-12-24 00:00:00.00' UNION ALL\n SELECT TIMESTAMP '2000-12-25 00:00:00.00'\n)\nSELECT TIMESTAMP_BUCKET(\n my_timestamp,\n INTERVAL 7 DAY,\n TIMESTAMP '2000-12-22 12:00:00.00') AS bucket_lower_bound\nFROM some_timestamps;\n\n-- Display of results may differ, depending upon the environment and\n-- time zone where this query was executed.\n /*-------------------------+\n | bucket_lower_bound |\n +-------------------------+\n | 2000-12-15 12:00:00 UTC |\n | 2000-12-15 12:00:00 UTC |\n | 2000-12-15 12:00:00 UTC |\n | 2000-12-22 12:00:00 UTC |\n | 2000-12-22 12:00:00 UTC |\n | 2000-12-22 12:00:00 UTC |\n +-------------------------*/\n\n-- Some timestamp buckets that originate from 2000-12-22 12:00:00:\n-- + Bucket: ...\n-- + Bucket: [2000-12-08 12:00:00.00 UTC, 2000-12-15 12:00:00.00 UTC)\n-- + Bucket: [2000-12-15 12:00:00.00 UTC, 2000-12-22 12:00:00.00 UTC)\n-- + Origin: [2000-12-22 12:00:00.00 UTC]\n-- + Bucket: [2000-12-22 12:00:00.00 UTC, 2000-12-29 12:00:00.00 UTC)\n-- + Bucket: [2000-12-29 12:00:00.00 UTC, 2001-01-05 12:00:00.00 UTC)\n-- + Bucket: ...\n```\n\n\n<span id=\"timestamp_functions\">\n## Timestamp functions\n\n</span>\nGoogleSQL for BigQuery supports the following timestamp functions.\n\nIMPORTANT: Before working with these functions, you need to understand\nthe difference between the formats in which timestamps are stored and displayed,\nand how time zones are used for the conversion between these formats.\nTo learn more, see[How time zones work with timestamp functions](#timezone_definitions).\n\nNOTE: These functions return a runtime error if overflow occurs; result\nvalues are bounded by the 
defined[DATE range](/bigquery/docs/reference/standard-sql/data-types#date_type)and[TIMESTAMP range](/bigquery/docs/reference/standard-sql/data-types#timestamp_type).\n\n\n\n"
},
{
"name": "TIMESTAMP_DIFF",
"arguments": [],
"category": "Timestamp",
"description_markdown": "```\nTIMESTAMP_DIFF(end_timestamp, start_timestamp, granularity)\n```\n\n **Description** \n\nGets the number of unit boundaries between two`TIMESTAMP`values\n(`end_timestamp`-`start_timestamp`) at a particular time granularity.\n\n **Definitions** \n\n- ` start_timestamp`: The starting` TIMESTAMP`value.\n- ` end_timestamp`: The ending` TIMESTAMP`value.\n- ` granularity`: The timestamp part that represents the granularity.\nThis can be:\n \n \n - ` MICROSECOND`\n - ` MILLISECOND`\n - ` SECOND`\n - ` MINUTE`\n - ` HOUR`. Equivalent to 60` MINUTE`s.\n - ` DAY`. Equivalent to 24` HOUR`s.\n\n **Details** \n\nIf`end_timestamp`is earlier than`start_timestamp`, the output is negative.\nProduces an error if the computation overflows, such as if the difference\nin microseconds\nbetween the two`TIMESTAMP`values overflows.\n\n **Note:** The behavior of the this function follows the type of arguments passed in.\nFor example,`TIMESTAMP_DIFF(DATE, DATE, PART)`behaves like`DATE_DIFF(DATE, DATE, PART)`. 
**Return Data Type** \n\n`INT64`\n\n **Example** \n\n```\nSELECT\n TIMESTAMP(\"2010-07-07 10:20:00+00\") AS later_timestamp,\n TIMESTAMP(\"2008-12-25 15:30:00+00\") AS earlier_timestamp,\n TIMESTAMP_DIFF(TIMESTAMP \"2010-07-07 10:20:00+00\", TIMESTAMP \"2008-12-25 15:30:00+00\", HOUR) AS hours;\n\n-- Display of results may differ, depending upon the environment and time zone where this query was executed.\n/*-------------------------+-------------------------+-------*\n | later_timestamp | earlier_timestamp | hours |\n +-------------------------+-------------------------+-------+\n | 2010-07-07 10:20:00 UTC | 2008-12-25 15:30:00 UTC | 13410 |\n *-------------------------+-------------------------+-------*/\n```\n\nIn the following example, the first timestamp occurs before the\nsecond timestamp, resulting in a negative output.\n\n```\nSELECT TIMESTAMP_DIFF(TIMESTAMP \"2018-08-14\", TIMESTAMP \"2018-10-14\", DAY) AS negative_diff;\n\n/*---------------*\n | negative_diff |\n +---------------+\n | -61 |\n *---------------*/\n```\n\nIn this example, the result is 0 because only the number of whole specified`HOUR`intervals are included.\n\n```\nSELECT TIMESTAMP_DIFF(\"2001-02-01 01:00:00\", \"2001-02-01 00:00:01\", HOUR) AS diff;\n\n/*---------------*\n | diff |\n +---------------+\n | 0 |\n *---------------*/\n```\n\n\n"
},
{
"name": "TIMESTAMP_MICROS",
"arguments": [],
"category": "Timestamp",
"description_markdown": "```\nTIMESTAMP_MICROS(int64_expression)\n```\n\n **Description** \n\nInterprets`int64_expression`as the number of microseconds since 1970-01-01\n00:00:00 UTC and returns a timestamp.\n\n **Return Data Type** \n\n`TIMESTAMP`\n\n **Example** \n\n```\nSELECT TIMESTAMP_MICROS(1230219000000000) AS timestamp_value;\n\n-- Display of results may differ, depending upon the environment and time zone where this query was executed.\n/*-------------------------*\n | timestamp_value |\n +-------------------------+\n | 2008-12-25 15:30:00 UTC |\n *-------------------------*/\n```\n\n\n"
},
{
"name": "TIMESTAMP_MILLIS",
"arguments": [],
"category": "Timestamp",
"description_markdown": "```\nTIMESTAMP_MILLIS(int64_expression)\n```\n\n **Description** \n\nInterprets`int64_expression`as the number of milliseconds since 1970-01-01\n00:00:00 UTC and returns a timestamp.\n\n **Return Data Type** \n\n`TIMESTAMP`\n\n **Example** \n\n```\nSELECT TIMESTAMP_MILLIS(1230219000000) AS timestamp_value;\n\n-- Display of results may differ, depending upon the environment and time zone where this query was executed.\n/*-------------------------*\n | timestamp_value |\n +-------------------------+\n | 2008-12-25 15:30:00 UTC |\n *-------------------------*/\n```\n\n\n"
},
{
"name": "TIMESTAMP_SECONDS",
"arguments": [],
"category": "Timestamp",
"description_markdown": "```\nTIMESTAMP_SECONDS(int64_expression)\n```\n\n **Description** \n\nInterprets`int64_expression`as the number of seconds since 1970-01-01 00:00:00\nUTC and returns a timestamp.\n\n **Return Data Type** \n\n`TIMESTAMP`\n\n **Example** \n\n```\nSELECT TIMESTAMP_SECONDS(1230219000) AS timestamp_value;\n\n-- Display of results may differ, depending upon the environment and time zone where this query was executed.\n/*-------------------------*\n | timestamp_value |\n +-------------------------+\n | 2008-12-25 15:30:00 UTC |\n *-------------------------*/\n```\n\n\n"
},
{
"name": "TIMESTAMP_SUB",
"arguments": [],
"category": "Timestamp",
"description_markdown": "```\nTIMESTAMP_SUB(timestamp_expression, INTERVAL int64_expression date_part)\n```\n\n **Description** \n\nSubtracts`int64_expression`units of`date_part`from the timestamp,\nindependent of any time zone.\n\n`TIMESTAMP_SUB`supports the following values for`date_part`:\n\n- ` MICROSECOND`\n- ` MILLISECOND`\n- ` SECOND`\n- ` MINUTE`\n- ` HOUR`. Equivalent to 60` MINUTE`parts.\n- ` DAY`. Equivalent to 24` HOUR`parts.\n\n **Return Data Type** \n\n`TIMESTAMP`\n\n **Example** \n\n```\nSELECT\n TIMESTAMP(\"2008-12-25 15:30:00+00\") AS original,\n TIMESTAMP_SUB(TIMESTAMP \"2008-12-25 15:30:00+00\", INTERVAL 10 MINUTE) AS earlier;\n\n-- Display of results may differ, depending upon the environment and time zone where this query was executed.\n/*-------------------------+-------------------------*\n | original | earlier |\n +-------------------------+-------------------------+\n | 2008-12-25 15:30:00 UTC | 2008-12-25 15:20:00 UTC |\n *-------------------------+-------------------------*/\n```\n\n\n"
},
{
"name": "TIMESTAMP_TRUNC",
"arguments": [],
"category": "Timestamp",
"description_markdown": "```\nTIMESTAMP_TRUNC(timestamp_expression, date_time_part[, time_zone])\n```\n\n **Description** \n\nTruncates a timestamp to the granularity of`date_time_part`.\nThe timestamp is always rounded to the beginning of`date_time_part`,\nwhich can be one of the following:\n\n- ` MICROSECOND`: If used, nothing is truncated from the value.\n- ` MILLISECOND`: The nearest lessor or equal millisecond.\n- ` SECOND`: The nearest lessor or equal second.\n- ` MINUTE`: The nearest lessor or equal minute.\n- ` HOUR`: The nearest lessor or equal hour.\n- ` DAY`: The day in the Gregorian calendar year that contains the` TIMESTAMP`value.\n- ` WEEK`: The first day of the week in the week that contains the` TIMESTAMP`value. Weeks begin on Sundays.` WEEK`is equivalent to` WEEK(SUNDAY)`.\n- ` WEEK(WEEKDAY)`: The first day of the week in the week that contains the` TIMESTAMP`value. Weeks begin on` WEEKDAY`.` WEEKDAY`must be one of the\nfollowing:` SUNDAY`,` MONDAY`,` TUESDAY`,` WEDNESDAY`,` THURSDAY`,` FRIDAY`,\nor` SATURDAY`.\n- ` ISOWEEK`: The first day of the[ISO 8601 week](https://en.wikipedia.org/wiki/ISO_week_date)in the\nISO week that contains the` TIMESTAMP`value. The ISO week begins on\nMonday. The first ISO week of each ISO year contains the first Thursday of the\ncorresponding Gregorian calendar year.\n- ` MONTH`: The first day of the month in the month that contains the` TIMESTAMP`value.\n- ` QUARTER`: The first day of the quarter in the quarter that contains the` TIMESTAMP`value.\n- ` YEAR`: The first day of the year in the year that contains the` TIMESTAMP`value.\n- ` ISOYEAR`: The first day of the[ISO 8601](https://en.wikipedia.org/wiki/ISO_8601)week-numbering year\nin the ISO year that contains the` TIMESTAMP`value. The ISO year is the\nMonday of the first week whose Thursday belongs to the corresponding\nGregorian calendar year.\n\n`TIMESTAMP_TRUNC`function supports an optional`time_zone`parameter. 
This\nparameter applies to the following`date_time_part`:\n\n- ` MINUTE`\n- ` HOUR`\n- ` DAY`\n- ` WEEK`\n- ` WEEK(<WEEKDAY>)`\n- ` ISOWEEK`\n- ` MONTH`\n- ` QUARTER`\n- ` YEAR`\n- ` ISOYEAR`\n\nUse this parameter if you want to use a time zone other than the\ndefault time zone, UTC, as part of the\ntruncate operation.\n\nWhen truncating a timestamp to`MINUTE`or`HOUR`parts,`TIMESTAMP_TRUNC`determines the civil time of the\ntimestamp in the specified (or default) time zone\nand subtracts the minutes and seconds (when truncating to`HOUR`) or the seconds\n(when truncating to`MINUTE`) from that timestamp.\nWhile this provides intuitive results in most cases, the result is\nnon-intuitive near daylight savings transitions that are not hour-aligned.\n\n **Return Data Type** \n\n`TIMESTAMP`\n\n **Examples** \n\n```\nSELECT\n TIMESTAMP_TRUNC(TIMESTAMP \"2008-12-25 15:30:00+00\", DAY, \"UTC\") AS utc,\n TIMESTAMP_TRUNC(TIMESTAMP \"2008-12-25 15:30:00+00\", DAY, \"America/Los_Angeles\") AS la;\n\n-- Display of results may differ, depending upon the environment and time zone where this query was executed.\n/*-------------------------+-------------------------*\n | utc | la |\n +-------------------------+-------------------------+\n | 2008-12-25 00:00:00 UTC | 2008-12-25 08:00:00 UTC |\n *-------------------------+-------------------------*/\n```\n\nIn the following example,`timestamp_expression`has a time zone offset of +12.\nThe first column shows the`timestamp_expression`in UTC time. The second\ncolumn shows the output of`TIMESTAMP_TRUNC`using weeks that start on Monday.\nBecause the`timestamp_expression`falls on a Sunday in UTC,`TIMESTAMP_TRUNC`truncates it to the preceding Monday. The third column shows the same function\nwith the optional[Time zone definition](#timezone_definitions)argument 'Pacific/Auckland'. 
Here, the function truncates the`timestamp_expression`using New Zealand Daylight Time, where it falls on a\nMonday.\n\n```\nSELECT\n timestamp_value AS timestamp_value,\n TIMESTAMP_TRUNC(timestamp_value, WEEK(MONDAY), \"UTC\") AS utc_truncated,\n TIMESTAMP_TRUNC(timestamp_value, WEEK(MONDAY), \"Pacific/Auckland\") AS nzdt_truncated\nFROM (SELECT TIMESTAMP(\"2017-11-06 00:00:00+12\") AS timestamp_value);\n\n-- Display of results may differ, depending upon the environment and time zone where this query was executed.\n/*-------------------------+-------------------------+-------------------------*\n | timestamp_value | utc_truncated | nzdt_truncated |\n +-------------------------+-------------------------+-------------------------+\n | 2017-11-05 12:00:00 UTC | 2017-10-30 00:00:00 UTC | 2017-11-05 11:00:00 UTC |\n *-------------------------+-------------------------+-------------------------*/\n```\n\nIn the following example, the original`timestamp_expression`is in the\nGregorian calendar year 2015. However,`TIMESTAMP_TRUNC`with the`ISOYEAR`date\npart truncates the`timestamp_expression`to the beginning of the ISO year, not\nthe Gregorian calendar year. The first Thursday of the 2015 calendar year was\n2015-01-01, so the ISO year 2015 begins on the preceding Monday, 2014-12-29.\nTherefore the ISO year boundary preceding the`timestamp_expression`2015-06-15 00:00:00+00 is 2014-12-29.\n\n```\nSELECT\n TIMESTAMP_TRUNC(\"2015-06-15 00:00:00+00\", ISOYEAR) AS isoyear_boundary,\n EXTRACT(ISOYEAR FROM TIMESTAMP \"2015-06-15 00:00:00+00\") AS isoyear_number;\n\n-- Display of results may differ, depending upon the environment and time zone where this query was executed.\n/*-------------------------+----------------*\n | isoyear_boundary | isoyear_number |\n +-------------------------+----------------+\n | 2014-12-29 00:00:00 UTC | 2015 |\n *-------------------------+----------------*/\n```\n\n\n"
},
{
"name": "TIME_ADD",
"arguments": [],
"category": "Time",
"description_markdown": "```\nTIME_ADD(time_expression, INTERVAL int64_expression part)\n```\n\n **Description** \n\nAdds`int64_expression`units of`part`to the`TIME`object.\n\n`TIME_ADD`supports the following values for`part`:\n\n- ` MICROSECOND`\n- ` MILLISECOND`\n- ` SECOND`\n- ` MINUTE`\n- ` HOUR`\n\nThis function automatically adjusts when values fall outside of the 00:00:00 to\n24:00:00 boundary. For example, if you add an hour to`23:30:00`, the returned\nvalue is`00:30:00`.\n\n **Return Data Types** \n\n`TIME`\n\n **Example** \n\n```\nSELECT\n TIME \"15:30:00\" as original_time,\n TIME_ADD(TIME \"15:30:00\", INTERVAL 10 MINUTE) as later;\n\n/*-----------------------------+------------------------*\n | original_time | later |\n +-----------------------------+------------------------+\n | 15:30:00 | 15:40:00 |\n *-----------------------------+------------------------*/\n```\n\n\n"
},
{
"name": "TIME_DIFF",
"arguments": [],
"category": "Time",
"description_markdown": "```\nTIME_DIFF(start_time, end_time, granularity)\n```\n\n **Description** \n\nGets the number of unit boundaries between two`TIME`values (`end_time`-`start_time`) at a particular time granularity.\n\n **Definitions** \n\n- ` start_time`: The starting` TIME`value.\n- ` end_time`: The ending` TIME`value.\n- ` granularity`: The time part that represents the granularity.\nThis can be:\n \n \n - ` MICROSECOND`\n - ` MILLISECOND`\n - ` SECOND`\n - ` MINUTE`\n - ` HOUR`\n\n **Details** \n\nIf`end_time`is earlier than`start_time`, the output is negative.\nProduces an error if the computation overflows, such as if the difference\nin microseconds\nbetween the two`TIME`values overflows.\n\n **Note:** The behavior of the this function follows the type of arguments passed in.\nFor example,`TIME_DIFF(TIMESTAMP, TIMESTAMP, PART)`behaves like`TIMESTAMP_DIFF(TIMESTAMP, TIMESTAMP, PART)`. **Return Data Type** \n\n`INT64`\n\n **Example** \n\n```\nSELECT\n TIME \"15:30:00\" as first_time,\n TIME \"14:35:00\" as second_time,\n TIME_DIFF(TIME \"15:30:00\", TIME \"14:35:00\", MINUTE) as difference;\n\n/*----------------------------+------------------------+------------------------*\n | first_time | second_time | difference |\n +----------------------------+------------------------+------------------------+\n | 15:30:00 | 14:35:00 | 55 |\n *----------------------------+------------------------+------------------------*/\n```\n\n\n"
},
{
"name": "TIME_SUB",
"arguments": [],
"category": "Time",
"description_markdown": "```\nTIME_SUB(time_expression, INTERVAL int64_expression part)\n```\n\n **Description** \n\nSubtracts`int64_expression`units of`part`from the`TIME`object.\n\n`TIME_SUB`supports the following values for`part`:\n\n- ` MICROSECOND`\n- ` MILLISECOND`\n- ` SECOND`\n- ` MINUTE`\n- ` HOUR`\n\nThis function automatically adjusts when values fall outside of the 00:00:00 to\n24:00:00 boundary. For example, if you subtract an hour from`00:30:00`, the\nreturned value is`23:30:00`.\n\n **Return Data Type** \n\n`TIME`\n\n **Example** \n\n```\nSELECT\n TIME \"15:30:00\" as original_date,\n TIME_SUB(TIME \"15:30:00\", INTERVAL 10 MINUTE) as earlier;\n\n/*-----------------------------+------------------------*\n | original_date | earlier |\n +-----------------------------+------------------------+\n | 15:30:00 | 15:20:00 |\n *-----------------------------+------------------------*/\n```\n\n\n"
},
{
"name": "TIME_TRUNC",
"arguments": [],
"category": "Time",
"description_markdown": "```\nTIME_TRUNC(time_expression, time_part)\n```\n\n **Description** \n\nTruncates a`TIME`value to the granularity of`time_part`. The`TIME`value\nis always rounded to the beginning of`time_part`, which can be one of the\nfollowing:\n\n- ` MICROSECOND`: If used, nothing is truncated from the value.\n- ` MILLISECOND`: The nearest lessor or equal millisecond.\n- ` SECOND`: The nearest lessor or equal second.\n- ` MINUTE`: The nearest lessor or equal minute.\n- ` HOUR`: The nearest lessor or equal hour.\n\n **Return Data Type** \n\n`TIME`\n\n **Example** \n\n```\nSELECT\n TIME \"15:30:00\" as original,\n TIME_TRUNC(TIME \"15:30:00\", HOUR) as truncated;\n\n/*----------------------------+------------------------*\n | original | truncated |\n +----------------------------+------------------------+\n | 15:30:00 | 15:00:00 |\n *----------------------------+------------------------*/\n```\n\n\n<span id=\"time_series_functions\">\n## Time series functions\n\n</span>\nGoogleSQL for BigQuery supports the following time series functions.\n\n\n\n"
},
{
"name": "TO_BASE32",
"arguments": [],
"category": "String",
"description_markdown": "```\nTO_BASE32(bytes_expr)\n```\n\n **Description** \n\nConverts a sequence of`BYTES`into a base32-encoded`STRING`. To convert a\nbase32-encoded`STRING`into`BYTES`, use[FROM_BASE32](#from_base32).\n\n **Return type** \n\n`STRING`\n\n **Example** \n\n```\nSELECT TO_BASE32(b'abcde\\xFF') AS base32_string;\n\n/*------------------*\n | base32_string |\n +------------------+\n | MFRGGZDF74====== |\n *------------------*/\n```\n\n\n"
},
{
"name": "TO_BASE64",
"arguments": [],
"category": "String",
"description_markdown": "```\nTO_BASE64(bytes_expr)\n```\n\n **Description** \n\nConverts a sequence of`BYTES`into a base64-encoded`STRING`. To convert a\nbase64-encoded`STRING`into`BYTES`, use[FROM_BASE64](#from_base64).\n\nThere are several base64 encodings in common use that vary in exactly which\nalphabet of 65 ASCII characters are used to encode the 64 digits and padding.\nSee[RFC 4648](https://tools.ietf.org/html/rfc4648#section-4)for details. This\nfunction adds padding and uses the alphabet`[A-Za-z0-9+/=]`.\n\n **Return type** \n\n`STRING`\n\n **Example** \n\n```\nSELECT TO_BASE64(b'\\377\\340') AS base64_string;\n\n/*---------------*\n | base64_string |\n +---------------+\n | /+A= |\n *---------------*/\n```\n\nTo work with an encoding using a different base64 alphabet, you might need to\ncompose`TO_BASE64`with the`REPLACE`function. For instance, the`base64url`url-safe and filename-safe encoding commonly used in web programming\nuses`-_=`as the last characters rather than`+/=`. To encode a`base64url`-encoded string, replace`+`and`/`with`-`and`_`respectively.\n\n```\nSELECT REPLACE(REPLACE(TO_BASE64(b'\\377\\340'), '+', '-'), '/', '_') as websafe_base64;\n\n/*----------------*\n | websafe_base64 |\n +----------------+\n | _-A= |\n *----------------*/\n```\n\n\n"
},
{
"name": "TO_CODE_POINTS",
"arguments": [],
"category": "String",
"description_markdown": "```\nTO_CODE_POINTS(value)\n```\n\n **Description** \n\nTakes a`STRING`or`BYTES`value and returns an array of`INT64`values that\nrepresent code points or extended ASCII character values.\n\n- If` value`is a` STRING`, each element in the returned array represents a[code point](https://en.wikipedia.org/wiki/Code_point). Each code point falls\nwithin the range of [0, 0xD7FF] and [0xE000, 0x10FFFF].\n- If` value`is` BYTES`, each element in the array is an extended ASCII\ncharacter value in the range of [0, 255].\n\nTo convert from an array of code points to a`STRING`or`BYTES`, see[CODE_POINTS_TO_STRING](#code_points_to_string)or[CODE_POINTS_TO_BYTES](#code_points_to_bytes).\n\n **Return type** \n\n`ARRAY<INT64>`\n\n **Examples** \n\nThe following example gets the code points for each element in an array of\nwords.\n\n```\nSELECT word, TO_CODE_POINTS(word) AS code_points\nFROM UNNEST(['foo', 'bar', 'baz', 'giraffe', 'llama']) AS word;\n\n/*---------+------------------------------------*\n | word | code_points |\n +---------+------------------------------------+\n | foo | [102, 111, 111] |\n | bar | [98, 97, 114] |\n | baz | [98, 97, 122] |\n | giraffe | [103, 105, 114, 97, 102, 102, 101] |\n | llama | [108, 108, 97, 109, 97] |\n *---------+------------------------------------*/\n```\n\nThe following example converts integer representations of`BYTES`to their\ncorresponding ASCII character values.\n\n```\nSELECT word, TO_CODE_POINTS(word) AS bytes_value_as_integer\nFROM UNNEST([b'\\x00\\x01\\x10\\xff', b'\\x66\\x6f\\x6f']) AS word;\n\n/*------------------+------------------------*\n | word | bytes_value_as_integer |\n +------------------+------------------------+\n | \\x00\\x01\\x10\\xff | [0, 1, 16, 255] |\n | foo | [102, 111, 111] |\n *------------------+------------------------*/\n```\n\nThe following example demonstrates the difference between a`BYTES`result and a`STRING`result.\n\n```\nSELECT TO_CODE_POINTS(b'Ā') AS b_result, 
TO_CODE_POINTS('Ā') AS s_result;\n\n/*------------+----------*\n | b_result | s_result |\n +------------+----------+\n | [196, 128] | [256] |\n *------------+----------*/\n```\n\nNotice that the character, Ā, is represented as a two-byte Unicode sequence. As\na result, the`BYTES`version of`TO_CODE_POINTS`returns an array with two\nelements, while the`STRING`version returns an array with a single element.\n\n\n\n"
},
{
"name": "TO_HEX",
"arguments": [],
"category": "String",
"description_markdown": "```\nTO_HEX(bytes)\n```\n\n **Description** \n\nConverts a sequence of`BYTES`into a hexadecimal`STRING`. Converts each byte\nin the`STRING`as two hexadecimal characters in the range`(0..9, a..f)`. To convert a hexadecimal-encoded`STRING`to`BYTES`, use[FROM_HEX](#from_hex).\n\n **Return type** \n\n`STRING`\n\n **Example** \n\n```\nWITH Input AS (\n SELECT b'\\x00\\x01\\x02\\x03\\xAA\\xEE\\xEF\\xFF' AS byte_str UNION ALL\n SELECT b'foobar'\n)\nSELECT byte_str, TO_HEX(byte_str) AS hex_str\nFROM Input;\n\n/*----------------------------------+------------------*\n | byte_string | hex_string |\n +----------------------------------+------------------+\n | \\x00\\x01\\x02\\x03\\xaa\\xee\\xef\\xff | 00010203aaeeefff |\n | foobar | 666f6f626172 |\n *----------------------------------+------------------*/\n```\n\n\n"
},
{
"name": "TO_JSON",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nTO_JSON(sql_value[, stringify_wide_numbers=>{ TRUE | FALSE }])\n```\n\n **Description** \n\nConverts a SQL value to a JSON value.\n\nArguments:\n\n- ` sql_value`: The SQL value to convert to a JSON value. You can review the\nGoogleSQL data types that this function supports and their\nJSON encodings[here](#json_encodings).\n- ` stringify_wide_numbers`: Optional mandatory-named argument that is either` TRUE`or` FALSE`(default).\n \n \n - If` TRUE`, numeric values outside of the` FLOAT64`type domain are encoded as strings.\n - If` FALSE`(default), numeric values outside of the` FLOAT64`type domain are not encoded as strings,\nbut are stored as JSON numbers. If a numerical value cannot be stored in\nJSON without loss of precision, an error is thrown. The following numerical data types are affected by the` stringify_wide_numbers`argument:\n \n \n- ` INT64`\n \n \n- ` NUMERIC`\n \n \n- ` BIGNUMERIC`\n \n If one of these numerical data types appears in a container data type\nsuch as an` ARRAY`or` STRUCT`, the` stringify_wide_numbers`argument is\napplied to the numerical data types in the container data type.\n \n \n\n **Return type** \n\n`JSON`\n\n **Examples** \n\nIn the following example, the query converts rows in a table to JSON values.\n\n```\nWith CoordinatesTable AS (\n (SELECT 1 AS id, [10, 20] AS coordinates) UNION ALL\n (SELECT 2 AS id, [30, 40] AS coordinates) UNION ALL\n (SELECT 3 AS id, [50, 60] AS coordinates))\nSELECT TO_JSON(t) AS json_objects\nFROM CoordinatesTable AS t;\n\n/*--------------------------------*\n | json_objects |\n +--------------------------------+\n | {\"coordinates\":[10,20],\"id\":1} |\n | {\"coordinates\":[30,40],\"id\":2} |\n | {\"coordinates\":[50,60],\"id\":3} |\n *--------------------------------*/\n```\n\nIn the following example, the query returns a large numerical value as a\nJSON string.\n\n```\nSELECT TO_JSON(9007199254740993, stringify_wide_numbers=>TRUE) as 
stringify_on;\n\n/*--------------------*\n | stringify_on |\n +--------------------+\n | \"9007199254740993\" |\n *--------------------*/\n```\n\nIn the following example, both queries return a large numerical value as a\nJSON number.\n\n```\nSELECT TO_JSON(9007199254740993, stringify_wide_numbers=>FALSE) as stringify_off;\nSELECT TO_JSON(9007199254740993) as stringify_off;\n\n/*------------------*\n | stringify_off |\n +------------------+\n | 9007199254740993 |\n *------------------*/\n```\n\nIn the following example, only large numeric values are converted to\nJSON strings.\n\n```\nWith T1 AS (\n (SELECT 9007199254740993 AS id) UNION ALL\n (SELECT 2 AS id))\nSELECT TO_JSON(t, stringify_wide_numbers=>TRUE) AS json_objects\nFROM T1 AS t;\n\n/*---------------------------*\n | json_objects |\n +---------------------------+\n | {\"id\":\"9007199254740993\"} |\n | {\"id\":2} |\n *---------------------------*/\n```\n\nIn this example, the values`9007199254740993`(`INT64`)\nand`2.1`(`FLOAT64`) are converted\nto the common supertype`FLOAT64`, which is not\naffected by the`stringify_wide_numbers`argument.\n\n```\nWith T1 AS (\n (SELECT 9007199254740993 AS id) UNION ALL\n (SELECT 2.1 AS id))\nSELECT TO_JSON(t, stringify_wide_numbers=>TRUE) AS json_objects\nFROM T1 AS t;\n\n/*------------------------------*\n | json_objects |\n +------------------------------+\n | {\"id\":9.007199254740992e+15} |\n | {\"id\":2.1} |\n *------------------------------*/\n```\n\n\n"
},
{
"name": "TO_JSON_STRING",
"arguments": [],
"category": "JSON",
"description_markdown": "```\nTO_JSON_STRING(value[, pretty_print])\n```\n\n **Description** \n\nConverts a SQL value to a JSON-formatted`STRING`value.\n\nArguments:\n\n- ` value`: A SQL value. You can review the GoogleSQL data types that\nthis function supports and their JSON encodings[here](#json_encodings).\n- ` pretty_print`: Optional boolean parameter. If` pretty_print`is` true`, the\nreturned value is formatted for easy readability.\n\n **Return type** \n\nA JSON-formatted`STRING`\n\n **Examples** \n\nConvert rows in a table to JSON-formatted strings.\n\n```\nWith CoordinatesTable AS (\n (SELECT 1 AS id, [10, 20] AS coordinates) UNION ALL\n (SELECT 2 AS id, [30, 40] AS coordinates) UNION ALL\n (SELECT 3 AS id, [50, 60] AS coordinates))\nSELECT id, coordinates, TO_JSON_STRING(t) AS json_data\nFROM CoordinatesTable AS t;\n\n/*----+-------------+--------------------------------*\n | id | coordinates | json_data |\n +----+-------------+--------------------------------+\n | 1 | [10, 20] | {\"id\":1,\"coordinates\":[10,20]} |\n | 2 | [30, 40] | {\"id\":2,\"coordinates\":[30,40]} |\n | 3 | [50, 60] | {\"id\":3,\"coordinates\":[50,60]} |\n *----+-------------+--------------------------------*/\n```\n\nConvert rows in a table to JSON-formatted strings that are easy to read.\n\n```\nWith CoordinatesTable AS (\n (SELECT 1 AS id, [10, 20] AS coordinates) UNION ALL\n (SELECT 2 AS id, [30, 40] AS coordinates))\nSELECT id, coordinates, TO_JSON_STRING(t, true) AS json_data\nFROM CoordinatesTable AS t;\n\n/*----+-------------+--------------------*\n | id | coordinates | json_data |\n +----+-------------+--------------------+\n | 1 | [10, 20] | { |\n | | | \"id\": 1, |\n | | | \"coordinates\": [ |\n | | | 10, |\n | | | 20 |\n | | | ] |\n | | | } |\n +----+-------------+--------------------+\n | 2 | [30, 40] | { |\n | | | \"id\": 2, |\n | | | \"coordinates\": [ |\n | | | 30, |\n | | | 40 |\n | | | ] |\n | | | } |\n *----+-------------+--------------------*/\n```\n\n\n"
},
{
"name": "TRANSLATE",
"arguments": [],
"category": "String",
"description_markdown": "```\nTRANSLATE(expression, source_characters, target_characters)\n```\n\n **Description** \n\nIn`expression`, replaces each character in`source_characters`with the\ncorresponding character in`target_characters`. All inputs must be the same\ntype, either`STRING`or`BYTES`.\n\n- Each character in` expression`is translated at most once.\n- A character in` expression`that is not present in` source_characters`is left\nunchanged in` expression`.\n- A character in` source_characters`without a corresponding character in` target_characters`is omitted from the result.\n- A duplicate character in` source_characters`results in an error.\n\n **Return type** \n\n`STRING`or`BYTES`\n\n **Examples** \n\n```\nWITH example AS (\n SELECT 'This is a cookie' AS expression, 'sco' AS source_characters, 'zku' AS\n target_characters UNION ALL\n SELECT 'A coaster' AS expression, 'co' AS source_characters, 'k' as\n target_characters\n)\nSELECT expression, source_characters, target_characters, TRANSLATE(expression,\nsource_characters, target_characters) AS translate\nFROM example;\n\n/*------------------+-------------------+-------------------+------------------*\n | expression | source_characters | target_characters | translate |\n +------------------+-------------------+-------------------+------------------+\n | This is a cookie | sco | zku | Thiz iz a kuukie |\n | A coaster | co | k | A kaster |\n *------------------+-------------------+-------------------+------------------*/\n```\n\n\n"
},
{
"name": "TRIM",
"arguments": [],
"category": "String",
"description_markdown": "```\nTRIM(value_to_trim[, set_of_characters_to_remove])\n```\n\n **Description** \n\nTakes a`STRING`or`BYTES`value to trim.\n\nIf the value to trim is a`STRING`, removes from this value all leading and\ntrailing Unicode code points in`set_of_characters_to_remove`.\nThe set of code points is optional. If it is not specified, all\nwhitespace characters are removed from the beginning and end of the\nvalue to trim.\n\nIf the value to trim is`BYTES`, removes from this value all leading and\ntrailing bytes in`set_of_characters_to_remove`. The set of bytes is required.\n\n **Return type** \n\n- ` STRING`if` value_to_trim`is a` STRING`value.\n- ` BYTES`if` value_to_trim`is a` BYTES`value.\n\n **Examples** \n\nIn the following example, all leading and trailing whitespace characters are\nremoved from`item`because`set_of_characters_to_remove`is not specified.\n\n```\nWITH items AS\n (SELECT ' apple ' as item\n UNION ALL\n SELECT ' banana ' as item\n UNION ALL\n SELECT ' orange ' as item)\n\nSELECT\n CONCAT('#', TRIM(item), '#') as example\nFROM items;\n\n/*----------*\n | example |\n +----------+\n | #apple# |\n | #banana# |\n | #orange# |\n *----------*/\n```\n\nIn the following example, all leading and trailing`*`characters are removed\nfrom`item`.\n\n```\nWITH items AS\n (SELECT '***apple***' as item\n UNION ALL\n SELECT '***banana***' as item\n UNION ALL\n SELECT '***orange***' as item)\n\nSELECT\n TRIM(item, '*') as example\nFROM items;\n\n/*---------*\n | example |\n +---------+\n | apple |\n | banana |\n | orange |\n *---------*/\n```\n\nIn the following example, all leading and trailing`x`,`y`, and`z`characters\nare removed from`item`.\n\n```\nWITH items AS\n (SELECT 'xxxapplexxx' as item\n UNION ALL\n SELECT 'yyybananayyy' as item\n UNION ALL\n SELECT 'zzzorangezzz' as item\n UNION ALL\n SELECT 'xyzpearxyz' as item)\n\nSELECT\n TRIM(item, 'xyz') as example\nFROM items;\n\n/*---------*\n | example |\n +---------+\n | apple |\n | banana |\n | 
orange |\n | pear |\n *---------*/\n```\n\nIn the following example, examine how`TRIM`interprets characters as\nUnicode code points. If your trailing character set contains a combining\ndiacritic mark over a particular letter,`TRIM`might strip the\nsame diacritic mark from a different letter.\n\n```\nSELECT\n TRIM('abaW̊', 'Y̊') AS a,\n TRIM('W̊aba', 'Y̊') AS b,\n TRIM('abaŪ̊', 'Y̊') AS c,\n TRIM('Ū̊aba', 'Y̊') AS d;\n\n/*------+------+------+------*\n | a | b | c | d |\n +------+------+------+------+\n | abaW | W̊aba | abaŪ | Ūaba |\n *------+------+------+------*/\n```\n\nIn the following example, all leading and trailing`b'n'`,`b'a'`, and`b'\\xab'`bytes are removed from`item`.\n\n```\nWITH items AS\n(\n SELECT b'apple' as item UNION ALL\n SELECT b'banana' as item UNION ALL\n SELECT b'\\xab\\xcd\\xef\\xaa\\xbb' as item\n)\nSELECT item, TRIM(item, b'na\\xab') AS example\nFROM items;\n\n-- Note that the result of TRIM is of type BYTES, displayed as a base64-encoded string.\n/*----------------------+------------------*\n | item | example |\n +----------------------+------------------+\n | YXBwbGU= | cHBsZQ== |\n | YmFuYW5h | Yg== |\n | q83vqrs= | ze+quw== |\n *----------------------+------------------*/\n```\n\n\n"
},
{
"name": "TRUNC",
"arguments": [],
"category": "Mathematical",
"description_markdown": "```\nTRUNC(X [, N])\n```\n\n **Description** \n\nIf only X is present,`TRUNC`rounds X to the nearest integer whose absolute\nvalue is not greater than the absolute value of X. If N is also present,`TRUNC`behaves like`ROUND(X, N)`, but always rounds towards zero and never overflows.\n\n| X | TRUNC(X) |\n| --- | --- |\n| 2.0 | 2.0 |\n| 2.3 | 2.0 |\n| 2.8 | 2.0 |\n| 2.5 | 2.0 |\n| -2.3 | -2.0 |\n| -2.8 | -2.0 |\n| -2.5 | -2.0 |\n| 0 | 0 |\n| `+inf` | `+inf` |\n| `-inf` | `-inf` |\n| `NaN` | `NaN` |\n\n **Return Data Type** \n\n| INPUT | `INT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n| --- | --- | --- | --- | --- |\n| OUTPUT | `FLOAT64` | `NUMERIC` | `BIGNUMERIC` | `FLOAT64` |\n\n\n<span id=\"navigation_functions\">\n## Navigation functions\n\n</span>\nGoogleSQL for BigQuery supports navigation functions.\nNavigation functions are a subset of window functions. To create a\nwindow function call and learn about the syntax for window functions,\nsee[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\nNavigation functions generally compute some`value_expression`over a different row in the window frame from the\ncurrent row. The`OVER`clause syntax varies across navigation functions.\n\nFor all navigation functions, the result data type is the same type as`value_expression`.\n\n\n\n"
},
{
"name": "UNICODE",
"arguments": [],
"category": "String",
"description_markdown": "```\nUNICODE(value)\n```\n\n **Description** \n\nReturns the Unicode[code point](https://en.wikipedia.org/wiki/Code_point)for the first character in`value`. Returns`0`if`value`is empty, or if the resulting Unicode code\npoint is`0`.\n\n **Return type** \n\n`INT64`\n\n **Examples** \n\n```\nSELECT UNICODE('âbcd') as A, UNICODE('â') as B, UNICODE('') as C, UNICODE(NULL) as D;\n\n/*-------+-------+-------+-------*\n | A | B | C | D |\n +-------+-------+-------+-------+\n | 226 | 226 | 0 | NULL |\n *-------+-------+-------+-------*/\n```\n\n\n"
},
{
"name": "UNIX_DATE",
"arguments": [],
"category": "Date",
"description_markdown": "```\nUNIX_DATE(date_expression)\n```\n\n **Description** \n\nReturns the number of days since`1970-01-01`.\n\n **Return Data Type** \n\n`INT64`\n\n **Example** \n\n```\nSELECT UNIX_DATE(DATE '2008-12-25') AS days_from_epoch;\n\n/*-----------------*\n | days_from_epoch |\n +-----------------+\n | 14238 |\n *-----------------*/\n```\n\n\n<span id=\"datetime_functions\">\n## Datetime functions\n\n</span>\nGoogleSQL for BigQuery supports the following datetime functions.\n\nAll outputs are automatically formatted as per[ISO 8601](https://en.wikipedia.org/wiki/ISO_8601),\nseparating date and time with a`T`.\n\n\n\n"
},
{
"name": "UNIX_MICROS",
"arguments": [],
"category": "Timestamp",
"description_markdown": "```\nUNIX_MICROS(timestamp_expression)\n```\n\n **Description** \n\nReturns the number of microseconds since`1970-01-01 00:00:00 UTC`.\n\n **Return Data Type** \n\n`INT64`\n\n **Examples** \n\n```\nSELECT UNIX_MICROS(TIMESTAMP \"2008-12-25 15:30:00+00\") AS micros;\n\n/*------------------*\n | micros |\n +------------------+\n | 1230219000000000 |\n *------------------*/\n```\n\n\n"
},
{
"name": "UNIX_MILLIS",
"arguments": [],
"category": "Timestamp",
"description_markdown": "```\nUNIX_MILLIS(timestamp_expression)\n```\n\n **Description** \n\nReturns the number of milliseconds since`1970-01-01 00:00:00 UTC`. Truncates\nhigher levels of precision by rounding down to the beginning of the millisecond.\n\n **Return Data Type** \n\n`INT64`\n\n **Examples** \n\n```\nSELECT UNIX_MILLIS(TIMESTAMP \"2008-12-25 15:30:00+00\") AS millis;\n\n/*---------------*\n | millis |\n +---------------+\n | 1230219000000 |\n *---------------*/\n```\n\n```\nSELECT UNIX_MILLIS(TIMESTAMP \"1970-01-01 00:00:00.0018+00\") AS millis;\n\n/*---------------*\n | millis |\n +---------------+\n | 1 |\n *---------------*/\n```\n\n\n"
},
{
"name": "UNIX_SECONDS",
"arguments": [],
"category": "Timestamp",
"description_markdown": "```\nUNIX_SECONDS(timestamp_expression)\n```\n\n **Description** \n\nReturns the number of seconds since`1970-01-01 00:00:00 UTC`. Truncates higher\nlevels of precision by rounding down to the beginning of the second.\n\n **Return Data Type** \n\n`INT64`\n\n **Examples** \n\n```\nSELECT UNIX_SECONDS(TIMESTAMP \"2008-12-25 15:30:00+00\") AS seconds;\n\n/*------------*\n | seconds |\n +------------+\n | 1230219000 |\n *------------*/\n```\n\n```\nSELECT UNIX_SECONDS(TIMESTAMP \"1970-01-01 00:00:01.8+00\") AS seconds;\n\n/*------------*\n | seconds |\n +------------+\n | 1 |\n *------------*/\n```\n\n\n"
},
{
"name": "UPPER",
"arguments": [],
"category": "String",
"description_markdown": "```\nUPPER(value)\n```\n\n **Description** \n\nFor`STRING`arguments, returns the original string with all alphabetic\ncharacters in uppercase. Mapping between uppercase and lowercase is done\naccording to the[Unicode Character Database](http://unicode.org/ucd/)without taking into account language-specific mappings.\n\nFor`BYTES`arguments, the argument is treated as ASCII text, with all bytes\ngreater than 127 left intact.\n\n **Return type** \n\n`STRING`or`BYTES`\n\n **Examples** \n\n```\nWITH items AS\n (SELECT\n 'foo' as item\n UNION ALL\n SELECT\n 'bar' as item\n UNION ALL\n SELECT\n 'baz' as item)\n\nSELECT\n UPPER(item) AS example\nFROM items;\n\n/*---------*\n | example |\n +---------+\n | FOO |\n | BAR |\n | BAZ |\n *---------*/\n```\n\n\n<span id=\"table_functions_built_in\">\n## Table functions (built in)\n\n</span>\nGoogleSQL for BigQuery supports built-in table functions.\n\nThis topic includes functions that produce columns of a table.\nYou can only use these functions in the`FROM`clause.\n\n\n\n"
},
{
"name": "VARIANCE",
"arguments": [],
"category": "Statistical_aggregate",
"description_markdown": "```\nVARIANCE(\n [ DISTINCT ]\n expression\n)\n[ OVER over_clause ]\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n [ window_frame_clause ]\n```\n\n **Description** \n\nAn alias of[VAR_SAMP](#var_samp).\n\n\n<span id=\"string_functions\">\n## String functions\n\n</span>\nGoogleSQL for BigQuery supports string functions.\nThese string functions work on two different values:`STRING`and`BYTES`data types.`STRING`values must be well-formed UTF-8.\n\nFunctions that return position values, such as[STRPOS](#strpos),\nencode those positions as`INT64`. The value`1`refers to the first character (or byte),`2`refers to the second, and so on.\nThe value`0`indicates an invalid position. When working on`STRING`types, the\nreturned positions refer to character positions.\n\nAll string comparisons are done byte-by-byte, without regard to Unicode\ncanonical equivalence.\n\n\n\n"
},
{
"name": "VAR_POP",
"arguments": [],
"category": "Statistical_aggregate",
"description_markdown": "```\nVAR_POP(\n [ DISTINCT ]\n expression\n)\n[ OVER over_clause ]\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n [ window_frame_clause ]\n```\n\n **Description** \n\nReturns the population (biased) variance of the values. The return result is\nbetween`0`and`+Inf`.\n\nAll numeric types are supported. If the\ninput is`NUMERIC`or`BIGNUMERIC`then the internal aggregation is\nstable with the final output converted to a`FLOAT64`.\nOtherwise the input is converted to a`FLOAT64`before aggregation, resulting in a potentially unstable result.\n\nThis function ignores any`NULL`inputs. If all inputs are ignored, this\nfunction returns`NULL`. If this function receives a single non-`NULL`input,\nit returns`0`.\n\n`NaN`is produced if:\n\n- Any input value is` NaN`\n- Any input value is positive infinity or negative infinity.\n\nIf this function is used with the`OVER`clause, it's part of a\nwindow function call. 
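For example, the following illustrative query (a sketch based on the same input values as the examples below, not taken from the reference) computes a running population variance in a window function call:\n\n```\nSELECT x, VAR_POP(x) OVER (ORDER BY x) AS running_var_pop\nFROM UNNEST([10, 14, 18]) AS x;\n\n/*----+--------------------*\n | x | running_var_pop |\n +----+--------------------+\n | 10 | 0 |\n | 14 | 4 |\n | 18 | 10.666666666666666 |\n *----+--------------------*/\n```\n\n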
In a window function call,\naggregate function clauses can't be used.\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Return Data Type** \n\n`FLOAT64`\n\n **Examples** \n\n```\nSELECT VAR_POP(x) AS results FROM UNNEST([10, 14, 18]) AS x\n\n/*--------------------*\n | results |\n +--------------------+\n | 10.666666666666666 |\n *--------------------*/\n```\n\n```\nSELECT VAR_POP(x) AS results FROM UNNEST([10, 14, NULL]) AS x\n\n/*---------*\n | results |\n +---------+\n | 4 |\n *---------*/\n```\n\n```\nSELECT VAR_POP(x) AS results FROM UNNEST([10, NULL]) AS x\n\n/*---------*\n | results |\n +---------+\n | 0 |\n *---------*/\n```\n\n```\nSELECT VAR_POP(x) AS results FROM UNNEST([NULL]) AS x\n\n/*---------*\n | results |\n +---------+\n | NULL |\n *---------*/\n```\n\n```\nSELECT VAR_POP(x) AS results FROM UNNEST([10, 14, CAST('Infinity' as FLOAT64)]) AS x\n\n/*---------*\n | results |\n +---------+\n | NaN |\n *---------*/\n```\n\n\n"
},
{
"name": "VAR_SAMP",
"arguments": [],
"category": "Statistical_aggregate",
"description_markdown": "```\nVAR_SAMP(\n [ DISTINCT ]\n expression\n)\n[ OVER over_clause ]\n\nover_clause:\n { named_window | ( [ window_specification ] ) }\n\nwindow_specification:\n [ named_window ]\n [ PARTITION BY partition_expression [, ...] ]\n [ ORDER BY expression [ { ASC | DESC } ] [, ...] ]\n [ window_frame_clause ]\n```\n\n **Description** \n\nReturns the sample (unbiased) variance of the values. The return result is\nbetween`0`and`+Inf`.\n\nAll numeric types are supported. If the\ninput is`NUMERIC`or`BIGNUMERIC`then the internal aggregation is\nstable with the final output converted to a`FLOAT64`.\nOtherwise the input is converted to a`FLOAT64`before aggregation, resulting in a potentially unstable result.\n\nThis function ignores any`NULL`inputs. If there are fewer than two non-`NULL`inputs, this function returns`NULL`.\n\n`NaN`is produced if:\n\n- Any input value is` NaN`\n- Any input value is positive infinity or negative infinity.\n\nTo learn more about the optional aggregate clauses that you can pass\ninto this function, see[Aggregate function calls](/bigquery/docs/reference/standard-sql/aggregate-function-calls).\n\nThis function can be used with the[AGGREGATION_THRESHOLD clause](/bigquery/docs/reference/standard-sql/query-syntax#agg_threshold_clause).\n\nIf this function is used with the`OVER`clause, it's part of a\nwindow function call. 
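For example, the following illustrative query (a sketch based on the same input values as the examples below, not taken from the reference) computes a running sample variance in a window function call; the first row is`NULL`because its window frame contains fewer than two non-`NULL`inputs:\n\n```\nSELECT x, VAR_SAMP(x) OVER (ORDER BY x) AS running_var_samp\nFROM UNNEST([10, 14, 18]) AS x;\n\n/*----+------------------*\n | x | running_var_samp |\n +----+------------------+\n | 10 | NULL |\n | 14 | 8 |\n | 18 | 16 |\n *----+------------------*/\n```\n\n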
In a window function call,\naggregate function clauses can't be used.\nTo learn more about the`OVER`clause and how to use it, see[Window function calls](/bigquery/docs/reference/standard-sql/window-function-calls).\n\n **Return Data Type** \n\n`FLOAT64`\n\n **Examples** \n\n```\nSELECT VAR_SAMP(x) AS results FROM UNNEST([10, 14, 18]) AS x\n\n/*---------*\n | results |\n +---------+\n | 16 |\n *---------*/\n```\n\n```\nSELECT VAR_SAMP(x) AS results FROM UNNEST([10, 14, NULL]) AS x\n\n/*---------*\n | results |\n +---------+\n | 8 |\n *---------*/\n```\n\n```\nSELECT VAR_SAMP(x) AS results FROM UNNEST([10, NULL]) AS x\n\n/*---------*\n | results |\n +---------+\n | NULL |\n *---------*/\n```\n\n```\nSELECT VAR_SAMP(x) AS results FROM UNNEST([NULL]) AS x\n\n/*---------*\n | results |\n +---------+\n | NULL |\n *---------*/\n```\n\n```\nSELECT VAR_SAMP(x) AS results FROM UNNEST([10, 14, CAST('Infinity' as FLOAT64)]) AS x\n\n/*---------*\n | results |\n +---------+\n | NaN |\n *---------*/\n```\n\n\n"
},
{
"name": "VECTOR_SEARCH",
"arguments": [],
"category": "Search",
"description_markdown": " **Preview** \n\nThis product or feature is subject to the \"Pre-GA Offerings Terms\"\n in the General Service Terms section of the[Service Specific Terms](/terms/service-terms).\n Pre-GA products and features are available \"as is\" and might have\n limited support. For more information, see the[launch stage descriptions](/products#product-launch-stages).\n\nTo provide feedback or request support for this feature, send email to[bq-vector-search@google.com](mailto:bq-vector-search@google.com).\n\n```\nVECTOR_SEARCH(\n {TABLE base_table | base_table_query_statement},\n column_to_search,\n TABLE query_table\n [, query_column_to_search => query_column_to_search_value]\n [, top_k => top_k_value ]\n [, distance_type => distance_type_value ]\n [, options => options_value ]\n)\n```\n\n```\nVECTOR_SEARCH(\n {TABLE base_table | base_table_query_statement},\n column_to_search,\n (query_statement)\n [, query_column_to_search => query_column_to_search_value]\n [, top_k => top_k_value ]\n [, distance_type => distance_type_value ]\n [, options => options_value ]\n)\n```\n\n **Description** \n\nThe`VECTOR_SEARCH`function lets you search embeddings to find semantically\nsimilar entities.\n\nEmbeddings are high-dimensional numerical vectors that represent a given entity,\nlike a piece of text or an audio file. Machine learning (ML) models use\nembeddings to encode semantics about such entities to make it easier to\nreason about and compare them. For example, a common operation in clustering,\nclassification, and recommendation models is to measure the distance between\nvectors in an[embedding space](https://en.wikipedia.org/wiki/Latent_space)to\nfind items that are most semantically similar.\n\n **Definitions** \n\n- ` base_table`: The table to search for nearest neighbor embeddings.\n- ` base_table_query_statement`: A query that you can use to pre-filter the base\ntable. 
Only` SELECT`,` FROM`, and` WHERE`clauses are allowed in this query.\nDon't apply any filters to the embedding column.\nYou can't use[logical views](/bigquery/docs/views-intro)in this query.\nUsing a[subquery](/bigquery/docs/reference/standard-sql/subqueries)might\ninterfere with index usage or cause your query to fail.\nIf the base table is indexed and the` WHERE`clause contains columns that are\nnot stored in the index, then` VECTOR_SEARCH`post-filters on those columns\ninstead. To learn more and enable pre-filtering, see[Store columns and pre-filter](/bigquery/docs/vector-index#stored-columns).\n- ` column_to_search`: The name of the base table column\nto search for nearest neighbor embeddings. The column must have\na type of` ARRAY<FLOAT64>`. All elements in the array must be non-` NULL`, and\nall values in the column must have the same array dimensions.\nIf the column has a vector index, BigQuery attempts to use it.\nTo determine if an index was used in the vector search, see[Vector index usage](/bigquery/docs/vector-index#vector_index_usage).\n- ` query_table`: The table that provides the\nembeddings for which to find nearest neighbors. All columns are passed\nthrough as output columns.\n- ` query_statement`: A query that provides the\nembeddings for which to find nearest neighbors. All columns are passed\nthrough as output columns.\n- ` query_column_to_search`: An optional` STRING`positional-named argument.` query_column_to_search_value`specifies the name of the column in the query\ntable or statement that contains the embeddings for which to find nearest\nneighbors. The column must have a type of` ARRAY<FLOAT64>`. All elements in\nthe array must be non-` NULL`and all values in the column must have the same\narray dimensions as the values in the` column_to_search`column. 
If you don't\nspecify` query_column_to_search_value`, the function uses the` column_to_search`value.\n- ` top_k`: An optional` INT64`mandatory-named argument.` top_k_value`specifies the number of nearest neighbors to\nreturn. The default is` 10`. A negative value is treated as infinity, meaning\nthat all values are counted as neighbors and returned.\n- ` distance_type`: An optional` STRING`mandatory-named argument.` distance_type_value`specifies the type of metric to use to\ncompute the distance between two vectors. Supported distance types are[EUCLIDEAN](https://en.wikipedia.org/wiki/Euclidean_distance)and[COSINE](https://en.wikipedia.org/wiki/Cosine_similarity#Cosine_Distance). The default is` EUCLIDEAN`.\n \n If you don't specify` distance_type_value`and the` column_to_search`column has a vector index that is used,` VECTOR_SEARCH`uses the distance\ntype specified in the[distance_type option](/bigquery/docs/reference/standard-sql/data-definition-language#vector_index_option_list)of the` CREATE VECTOR INDEX`statement.\n \n \n- ` options`: An optional JSON-formatted` STRING`mandatory-named argument.` options_value`is a literal that specifies the following vector search\noptions:\n \n \n - ` fraction_lists_to_search`: A JSON number that specifies the\npercentage of lists to search. For example,` options => '{\"fraction_lists_to_search\":0.15}'`. 
The` fraction_lists_to_search`value must be in the range` 0.0`to` 1.0`,\nexclusive.\n \n Specifying a higher percentage leads to higher recall and slower\nperformance, and the converse is true when specifying a lower percentage.\n \n ` fraction_lists_to_search`is only used when a vector index is also used.\nIf you don't specify a` fraction_lists_to_search`value but an index is\nmatched, the default number of lists to scan is calculated as` min(0.002 * number_of_lists, 10)`.\n \n The number of available lists to search is determined by the[num_lists option](/bigquery/docs/reference/standard-sql/data-definition-language#vector_index_option_list)in the` ivf_options`option of the` CREATE VECTOR INDEX`statement if that is specified. Otherwise,\nBigQuery calculates an appropriate number.\n \n You can't specify` fraction_lists_to_search`when` use_brute_force`is\nset to` true`.\n \n \n - ` use_brute_force`: A JSON boolean that determines whether to use brute\nforce search by skipping the vector index if one is available. For\nexample,` options => '{\"use_brute_force\":true}'`. The\ndefault is` false`. If you specify` use_brute_force=false`and there is\nno useable vector index available, brute force is used anyway.\n \n ` options`defaults to` '{}'`to denote that all underlying options use their\ncorresponding default values.\n \n \n\n **Details** \n\nYou can optionally use`VECTOR_SEARCH`with a[vector index](/bigquery/docs/vector-index). When\na vector index is used,`VECTOR_SEARCH`uses the[Approximate Nearest\nNeighbor](https://en.wikipedia.org/wiki/Nearest_neighbor_search#Approximation_methods)search technique to help improve vector search performance, with\nthe trade-off of reducing[recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall#recallsearch_term_rules)and so returning more approximate\nresults. 
Brute force is used to return exact results when a vector index isn't\navailable, and you can choose to use brute force to get exact results even when\na vector index is available.\n\n **Output** \n\nFor each row in the query data, the output contains multiple rows from the\nbase table that satisfy the search criteria. The number of result rows per\nquery table row is either 10 or the`top_k`value if it is specified. The\norder of the output isn't guaranteed.\n\nThe output includes the following columns:\n\n- ` query`: A` STRUCT`value that contains all selected columns from the query\ndata.\n- ` base`: A` STRUCT`value that contains all columns from` base_table`or a\nsubset of the columns from` base_table`that you selected in the` base_table_query_statement`query.\n- ` distance`: A` FLOAT64`value that represents the distance between the base\ndata and the query data.\n\n **Limitations** \n\nBigQuery data security and governance rules apply to the use of`VECTOR_SEARCH`, which results in the following behavior:\n\n- If the base table has[row-level security policies](/bigquery/docs/row-level-security-intro),` VECTOR_SEARCH`applies the row-level\naccess policies to the query results.\n- If the indexed column from the base table has[data masking policies](/bigquery/docs/column-data-masking-intro),` VECTOR_SEARCH`succeeds only if the user\nrunning the query has the[Fine-Grained Reader](/iam/docs/understanding-roles#datacatalog.categoryFineGrainedReader)role on the policy tags\nthat are used. 
Otherwise,` VECTOR_SEARCH`fails with an invalid query error.\n- If any base table column or any column in the query table or statement has[column-level security policies](/bigquery/docs/column-level-security)and you don't have appropriate\npermissions to access the column,` VECTOR_SEARCH`fails with a permission\ndenied error.\n \n \n- The project that runs the query containing` VECTOR_SEARCH`must match the\nproject that contains the base table.\n \n \n\n **Examples** \n\nThe following queries create test tables`table1`and`table2`to use in\nsubsequent query examples:\n\n```\nCREATE OR REPLACE TABLE mydataset.table1\n(\n id INT64,\n my_embedding ARRAY<FLOAT64>\n);\n\nINSERT mydataset.table1 (id, my_embedding)\nVALUES(1, [1.0, 2.0]),\n(2, [2.0, 4.0]),\n(3, [1.5, 7.0]),\n(4, [1.0, 3.2]),\n(5, [5.0, 5.4]),\n(6, [3.7, 1.8]),\n(7, [4.4, 2.9]);\n```\n\n```\nCREATE OR REPLACE TABLE mydataset.table2\n(\n query_id STRING,\n embedding ARRAY<FLOAT64>\n);\n\nINSERT mydataset.table2 (query_id, embedding)\nVALUES('dog', [1.0, 2.0]),\n('cat', [3.0, 5.2]);\n```\n\nThe following example searches the`my_embedding`column of`table1`for the top\ntwo embeddings that match each row of data in the`embedding`column of`table2`:\n\n```\nSELECT *\nFROM\n VECTOR_SEARCH(\n TABLE mydataset.table1,\n 'my_embedding',\n (SELECT query_id, embedding FROM mydataset.table2),\n 'embedding',\n top_k => 2);\n\n/*----------------+-----------------+---------+-------------------+--------------------*\n | query.query_id | query.embedding | base.id | base.my_embedding | distance |\n +----------------+-----------------+---------+-------------------+--------------------+\n | dog | 1.0 | 1 | 1.0 | 0 |\n | | 2.0 | | 2.0 | |\n +----------------+-----------------+---------+-------------------+--------------------+\n | dog | 1.0 | 4 | 1.0 | 1.2000000000000002 |\n | | 2.0 | | 3.2 | |\n +----------------+-----------------+---------+-------------------+--------------------+\n | cat | 3.0 | 2 | 2.0 | 1.5620499351813311 
|\n | | 5.2 | | 4.0 | |\n +----------------+-----------------+---------+-------------------+--------------------+\n | cat | 3.0 | 5 | 5.0 | 2.0099751242241779 |\n | | 5.2 | | 5.4 | |\n *----------------+-----------------+---------+-------------------+--------------------*/\n```\n\nThe following example pre-filters `table1` to rows where `id` is not equal to\n4 and then searches the `my_embedding` column of `table1` for the top\ntwo embeddings that match each row of data in the `embedding` column of\n`table2`. The query also sets the `use_brute_force` option to `true`, so the\nsearch uses brute force to return exact results. To enable pre-filtering, fill out the [enrollment form](https://docs.google.com/forms/d/e/1FAIpQLSfMD2Ebj9JRaB3Hy83ZJDCjkZmMcaazYlwQT1H1CSmM7ks51w/viewform).\n\n```\nSELECT *\nFROM\n VECTOR_SEARCH(\n (SELECT * FROM mydataset.table1 WHERE id != 4),\n 'my_embedding',\n (SELECT query_id, embedding FROM mydataset.table2),\n 'embedding',\n top_k => 2,\n options => '{\"use_brute_force\":true}');\n\n/*----------------+-----------------+---------+-------------------+--------------------*\n | query.query_id | query.embedding | base.id | base.my_embedding | distance |\n +----------------+-----------------+---------+-------------------+--------------------+\n | dog | 1.0 | 1 | 1.0 | 0 |\n | | 2.0 | | 2.0 | |\n +----------------+-----------------+---------+-------------------+--------------------+\n | dog | 1.0 | 2 | 2.0 | 2.23606797749979 |\n | | 2.0 | | 4.0 | |\n +----------------+-----------------+---------+-------------------+--------------------+\n | cat | 3.0 | 2 | 2.0 | 1.5620499351813311 |\n | | 5.2 | | 4.0 | |\n +----------------+-----------------+---------+-------------------+--------------------+\n | cat | 3.0 | 5 | 5.0 | 2.0099751242241779 |\n | | 5.2 | | 5.4 | |\n *----------------+-----------------+---------+-------------------+--------------------*/\n```\n\nThe following example searches the `my_embedding` column of `table1` for the top\ntwo embeddings that match each row of data in the `embedding` column of\n`table2`, and uses the `COSINE` distance type to measure the distance 
between\nthe embeddings:\n\n```\nSELECT *\nFROM\n VECTOR_SEARCH(\n TABLE mydataset.table1,\n 'my_embedding',\n TABLE mydataset.table2,\n 'embedding',\n top_k => 2,\n distance_type => 'COSINE');\n\n/*----------------+-----------------+---------+-------------------+-----------------------*\n | query.query_id | query.embedding | base.id | base.my_embedding | distance |\n +----------------+-----------------+---------+-------------------+-----------------------+\n | dog | 1.0 | 2 | 2.0 | 0 |\n | | 2.0 | | 4.0 | |\n +----------------+-----------------+---------+-------------------+-----------------------+\n | dog | 1.0 | 1 | 1.0 | 0 |\n | | 2.0 | | 2.0 | |\n +----------------+-----------------+---------+-------------------+-----------------------+\n | cat | 3.0 | 2 | 2.0 | 0.0017773842088002478 |\n | | 5.2 | | 4.0 | |\n +----------------+-----------------+---------+-------------------+-----------------------+\n | cat | 3.0 | 1 | 1.0 | 0.0017773842088002478 |\n | | 5.2 | | 2.0 | |\n *----------------+-----------------+---------+-------------------+-----------------------*/\n```\n\n\n<span id=\"security_functions\">\n## Security functions\n\n</span>\nGoogleSQL for BigQuery supports the following security functions.\n\n\n\n"
}
]