CREATE TASK command in Snowflake - SQL Syntax and Examples
CREATE TASK Description
Preview Feature — Open
CREATE TASK command Syntax
CREATE [ OR REPLACE ] TASK [ IF NOT EXISTS ] <name>
WAREHOUSE = <string>
[ SCHEDULE = '{ <num> MINUTE | USING CRON <expr> <time_zone> }' ]
[ <session_parameter> = <value> [ , <session_parameter> = <value> ... ] ]
[ USER_TASK_TIMEOUT_MS = <num> ]
[ COPY GRANTS ]
[ COMMENT = '<string_literal>' ]
[ AFTER <string> ]
[ WHEN <boolean_expr> ]
AS
<sql>
CREATE TASK command Examples
Create a task that inserts the current timestamp into a table every hour starting at 9 AM and ending at 5 PM on Sundays (America/Los_Angeles time zone). The task sets the TIMESTAMP_INPUT_FORMAT parameter for the session in which the task runs:
CREATE TASK mytask_hour
WAREHOUSE = mywh
SCHEDULE = 'USING CRON 0 9-17 * * SUN America/Los_Angeles'
TIMESTAMP_INPUT_FORMAT = 'YYYY-MM-DD HH24'
AS
INSERT INTO mytable(ts) VALUES(CURRENT_TIMESTAMP);
Additional timing examples
SCHEDULE Value | Description
---|---
USING CRON * * * * * UTC | Every minute. UTC time zone.
USING CRON 0 2 * * * UTC | Every night at 2 AM. UTC time zone.
USING CRON 0 5,17 * * * UTC | Twice daily, at 5 AM and 5 PM (at the top of the hour). UTC time zone.
USING CRON 30 2 L 6 * UTC | In June, on the last day of the month, at 2:30 AM. UTC time zone.
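For instance, a task using the last schedule in the table above might look like the following sketch (the task and table names are hypothetical):

```sql
-- Runs on the last day of June at 2:30 AM UTC
-- ('L' in the day-of-month field means "last day of the month")
CREATE TASK mytask_june
  WAREHOUSE = mywh
  SCHEDULE = 'USING CRON 30 2 L 6 * UTC'
AS
INSERT INTO mytable(ts) VALUES(CURRENT_TIMESTAMP);
```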
Create a task that inserts the current timestamp into a table every 5 minutes:
CREATE TASK mytask_minute
WAREHOUSE = mywh
SCHEDULE = '5 MINUTE'
AS
INSERT INTO mytable(ts) VALUES(CURRENT_TIMESTAMP);
Create a task that inserts change tracking data for INSERT operations from a stream into a table every 5 minutes. The task polls the stream using the SYSTEM$STREAM_HAS_DATA function to determine whether change data exists and, if the result is FALSE, skips the current run:
CREATE TASK mytask1
WAREHOUSE = mywh
SCHEDULE = '5 minute'
WHEN
SYSTEM$STREAM_HAS_DATA('MYSTREAM')
AS
INSERT INTO mytable1(id,name) SELECT id, name FROM mystream WHERE METADATA$ACTION = 'INSERT';
Create a simple tree of tasks by specifying the existing mytask1 task as the predecessor task that triggers the new mytask2 task when run successfully. The new task queries the mytable1 table and inserts the query results into another table:
CREATE TASK mytask2
WAREHOUSE = mywh
AFTER mytask1
AS
INSERT INTO mytable2(id,name) SELECT id, name FROM mytable1;
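Additional child tasks can hang off the same predecessor to widen the tree. A sketch, assuming the objects above exist (mytask3 and mytable3 are hypothetical names); note that the root task generally must be suspended before tasks are added to its tree:

```sql
-- Suspend the root task before modifying the tree
ALTER TASK mytask1 SUSPEND;

-- A second child task triggered by the same predecessor
CREATE TASK mytask3
  WAREHOUSE = mywh
  AFTER mytask1
AS
INSERT INTO mytable3(id,name) SELECT id, name FROM mytable1;

-- Resume the root task when the tree is complete
ALTER TASK mytask1 RESUME;
```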
Create a task named my_copy_task that calls a stored procedure to unload data from the mytable table to the named mystage stage (using COPY INTO <location>) every hour:
-- Create a stored procedure that unloads data from a table
-- The COPY statement in the stored procedure unloads data to files in a path identified by epoch time (using the Date.now() method)
create or replace procedure my_unload_sp()
returns string not null
language javascript
as
$$
var my_sql_command = "".concat("copy into @mystage", "/", Date.now(), "/", " from mytable overwrite=true;");
var statement1 = snowflake.createStatement( {sqlText: my_sql_command} );
var result_set1 = statement1.execute();
return my_sql_command; // Statement returned for info/debug purposes
$$;
-- Create a task that calls the stored procedure every hour
create or replace task my_copy_task
warehouse = mywh
schedule = '60 minute'
as
call my_unload_sp();
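Once created, the task is suspended until resumed, after which its runs can be inspected with the TASK_HISTORY table function. A sketch, assuming the task above exists:

```sql
-- Resume the task so it begins running on its schedule
ALTER TASK my_copy_task RESUME;

-- Review recent runs of the task
SELECT name, state, scheduled_time, completed_time
  FROM TABLE(information_schema.task_history(task_name => 'MY_COPY_TASK'))
  ORDER BY scheduled_time DESC;
```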
CREATE TASK command Usage
Executing this command requires the following privileges:
CREATE TASK on the schema.
USAGE on the warehouse in the task definition.
Any privileges required to execute the SQL statement or stored procedure in the task definition.
In addition, the task owner (i.e. the role with the OWNERSHIP privilege on the task) must have the global EXECUTE TASK privilege in order for tasks to run. Revoking the EXECUTE TASK privilege from a role prevents all subsequent task runs from starting under that role.
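A sketch of granting the privileges listed above to a task-owning role (the database, schema, warehouse, and role names are hypothetical):

```sql
USE ROLE ACCOUNTADMIN;

-- Allow the role to create tasks in the schema
GRANT CREATE TASK ON SCHEMA mydb.myschema TO ROLE taskadmin;

-- Allow the role to use the warehouse named in task definitions
GRANT USAGE ON WAREHOUSE mywh TO ROLE taskadmin;

-- Global privilege required for tasks owned by the role to run
GRANT EXECUTE TASK ON ACCOUNT TO ROLE taskadmin;
```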
After creating a task, you must execute ALTER TASK … RESUME before the task will run based on the parameters specified in the task definition. Note that accounts are currently limited to a maximum of 10,000 resumed tasks.
In addition, when a task is cloned, execution of the cloned task is suspended by default and must be enabled explicitly using the same command.
Tasks run using the role that has the OWNERSHIP privilege on the task. SQL statements executed by the task can only operate on Snowflake objects on which the role has the required privileges.
By default, a DML statement executed without explicitly starting a transaction is automatically committed on success or rolled back on failure at the end of the statement. This behavior is called autocommit and is controlled with the AUTOCOMMIT parameter. This parameter must be set to TRUE. If the AUTOCOMMIT parameter is set to FALSE at the account level, then set the parameter to TRUE for the individual task (using ALTER TASK … SET AUTOCOMMIT = TRUE); otherwise, any DML statement executed by the task fails.
Multiple tasks that consume change data from a single table stream retrieve different deltas. When a task consumes the change data in a stream using a DML statement, the stream advances the offset. The change data is no longer available for the next task to consume. Currently, we recommend that only a single task consumes the change data from a stream. Multiple streams can be created for the same table and consumed by different tasks.
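The recommendation above can be sketched as follows (stream, task, and target table names are hypothetical): each task consumes its own stream on the same source table, so each receives the full set of change records rather than competing for a single stream's offset.

```sql
-- Two independent streams on the same table
CREATE STREAM mystream_a ON TABLE mytable;
CREATE STREAM mystream_b ON TABLE mytable;

-- Each task consumes only its own stream
CREATE TASK consumer_a
  WAREHOUSE = mywh
  SCHEDULE = '5 minute'
  WHEN SYSTEM$STREAM_HAS_DATA('MYSTREAM_A')
AS
INSERT INTO target_a(id,name) SELECT id, name FROM mystream_a WHERE METADATA$ACTION = 'INSERT';

CREATE TASK consumer_b
  WAREHOUSE = mywh
  SCHEDULE = '5 minute'
  WHEN SYSTEM$STREAM_HAS_DATA('MYSTREAM_B')
AS
INSERT INTO target_b(id,name) SELECT id, name FROM mystream_b WHERE METADATA$ACTION = 'INSERT';
```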
When the CREATE OR REPLACE syntax is used, the existing task is dropped and recreated using the specified definition. Any current run of the task (i.e. a run with an EXECUTING state in the TASK_HISTORY output) is completed.