imports_put_files_csv {civis}    R Documentation

Replace all attributes of this CSV Import

Description

Replace all attributes of this CSV Import

Usage

imports_put_files_csv(
  id,
  source,
  destination,
  first_row_is_header,
  name = NULL,
  column_delimiter = NULL,
  escaped = NULL,
  compression = NULL,
  existing_table_rows = NULL,
  max_errors = NULL,
  table_columns = NULL,
  loosen_types = NULL,
  execution = NULL,
  redshift_destination_options = NULL
)

Arguments

id

integer required. The ID for the import.

source

list required. A list containing the following elements:

  • fileIds array, The file ID(s) to import, if importing Civis file(s).

  • storagePath list. A list containing the following elements:

    • storageHostId integer, The ID of the source storage host.

    • credentialId integer, The ID of the credentials for the source storage host.

    • filePaths array, The file or directory path(s) within the bucket from which to import. For example, the file_path for "s3://mybucket/files/all/" would be "/files/all/". If specifying a directory path, the job will import every file found under that path. All files must have the same column layout and file format (e.g., compression, columnDelimiter).
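
For illustration, a source list might be built either from Civis file IDs or from a storage path; all IDs and paths below are placeholders, not values from this documentation:

# Import existing Civis file(s) by file ID (placeholder ID)
source <- list(fileIds = list(123456))

# Or import from a source storage host (placeholder host, credential, and path)
source <- list(storagePath = list(
  storageHostId = 10,
  credentialId = 20,
  filePaths = list("/files/all/")
))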

destination

list required. A list containing the following elements:

  • schema string, The destination schema name.

  • table string, The destination table name.

  • remoteHostId integer, The ID of the destination database host.

  • credentialId integer, The ID of the credentials for the destination database.

  • primaryKeys array, A list of column(s) which together uniquely identify a row in the destination table. These columns must not contain NULL values. If the import mode is "upsert", this field is required; see the Civis Helpdesk article on "Advanced CSV Imports via the Civis API" for more information.

  • lastModifiedKeys array, A list of the columns indicating a record has been updated. If the destination table does not exist and the import mode is "upsert", this field is required.
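
For example, a destination list for an upsert might look like the following sketch; the schema, table, IDs, and key columns are placeholders:

destination <- list(
  schema = "myschema",
  table = "mytable",
  remoteHostId = 42,         # placeholder database host ID
  credentialId = 99,         # placeholder credential ID
  primaryKeys = list("id"),  # required when existing_table_rows = "upsert"
  lastModifiedKeys = list("updated_at")
)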

first_row_is_header

boolean required. A boolean value indicating whether or not the first row of the source file is a header row.

name

string optional. The name of the import.

column_delimiter

string optional. The column delimiter for the file. Valid arguments are "comma", "tab", and "pipe". Defaults to "comma".

escaped

boolean optional. A boolean value indicating whether or not the source file has quotes escaped with a backslash. Defaults to false.

compression

string optional. The type of compression of the source file. Valid arguments are "gzip" and "none". Defaults to "none".

existing_table_rows

string optional. The behavior if a destination table with the requested name already exists. One of "fail", "truncate", "append", "drop", or "upsert". Defaults to "fail".

max_errors

integer optional. The maximum number of rows with errors to ignore before failing. This option is not supported for Postgres databases.

table_columns

array optional. An array containing the following fields:

  • name string, The column name.

  • sqlType string, The SQL type of the column.
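
For example, table_columns could be supplied as a list of column definitions; the column names and SQL types below are illustrative only:

table_columns <- list(
  list(name = "id", sqlType = "INTEGER"),
  list(name = "email", sqlType = "VARCHAR(256)")
)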

loosen_types

boolean optional. If true, SQL types with precisions/lengths will have these values increased to accommodate data growth in future loads. Type loosening only occurs on table creation. Defaults to false.

execution

string optional. In upsert mode, controls when the upserted data is moved. If set to "delayed", the data will be moved after a brief delay. If set to "immediate", the data will be moved immediately. In non-upsert modes, controls how quickly detailed column stats appear in the data catalogue. Defaults to "delayed", which accommodates concurrent upserts to the same table and speedier non-upsert imports.

redshift_destination_options

list optional. A list containing the following elements:

  • diststyle string, The diststyle to use for the table. One of "even", "all", or "key".

  • distkey string, The distkey for this table in Redshift.

  • sortkeys array, The sortkeys for this table in Redshift. Provide at most two.
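
As a sketch, redshift_destination_options might be built like this; the column names are placeholders:

redshift_destination_options <- list(
  diststyle = "key",
  distkey = "id",
  sortkeys = list("id", "updated_at")  # at most two sortkeys
)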

Value

A list containing the following elements:

id

integer, The ID for the import.

name

string, The name of the import.

source

list, A list containing the following elements:

  • fileIds array, The file ID(s) to import, if importing Civis file(s).

  • storagePath list. A list containing the following elements:

    • storageHostId integer, The ID of the source storage host.

    • credentialId integer, The ID of the credentials for the source storage host.

    • filePaths array, The file or directory path(s) within the bucket from which to import. For example, the file_path for "s3://mybucket/files/all/" would be "/files/all/". If specifying a directory path, the job will import every file found under that path. All files must have the same column layout and file format (e.g., compression, columnDelimiter).

destination

list, A list containing the following elements:

  • schema string, The destination schema name.

  • table string, The destination table name.

  • remoteHostId integer, The ID of the destination database host.

  • credentialId integer, The ID of the credentials for the destination database.

  • primaryKeys array, A list of column(s) which together uniquely identify a row in the destination table. These columns must not contain NULL values. If the import mode is "upsert", this field is required; see the Civis Helpdesk article on "Advanced CSV Imports via the Civis API" for more information.

  • lastModifiedKeys array, A list of the columns indicating a record has been updated. If the destination table does not exist and the import mode is "upsert", this field is required.

firstRowIsHeader

boolean, A boolean value indicating whether or not the first row of the source file is a header row.

columnDelimiter

string, The column delimiter for the file. Valid arguments are "comma", "tab", and "pipe". Defaults to "comma".

escaped

boolean, A boolean value indicating whether or not the source file has quotes escaped with a backslash. Defaults to false.

compression

string, The type of compression of the source file. Valid arguments are "gzip" and "none". Defaults to "none".

existingTableRows

string, The behavior if a destination table with the requested name already exists. One of "fail", "truncate", "append", "drop", or "upsert". Defaults to "fail".

maxErrors

integer, The maximum number of rows with errors to ignore before failing. This option is not supported for Postgres databases.

tableColumns

array, An array containing the following fields:

  • name string, The column name.

  • sqlType string, The SQL type of the column.

loosenTypes

boolean, If true, SQL types with precisions/lengths will have these values increased to accommodate data growth in future loads. Type loosening only occurs on table creation. Defaults to false.

execution

string, In upsert mode, controls when the upserted data is moved. If set to "delayed", the data will be moved after a brief delay. If set to "immediate", the data will be moved immediately. In non-upsert modes, controls how quickly detailed column stats appear in the data catalogue. Defaults to "delayed", which accommodates concurrent upserts to the same table and speedier non-upsert imports.

redshiftDestinationOptions

list, A list containing the following elements:

  • diststyle string, The diststyle to use for the table. One of "even", "all", or "key".

  • distkey string, The distkey for this table in Redshift.

  • sortkeys array, The sortkeys for this table in Redshift. Provide at most two.

hidden

boolean, The hidden status of the item.

myPermissionLevel

string, Your permission level on the object. One of "read", "write", or "manage".
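
Examples

A minimal sketch of a call combining the pieces above; every ID, schema, table, and credential value shown is a placeholder and must be replaced with real values for your Civis account:

## Not run: 
imports_put_files_csv(
  id = 1234,
  source = list(fileIds = list(123456)),
  destination = list(
    schema = "myschema",
    table = "mytable",
    remoteHostId = 42,
    credentialId = 99
  ),
  first_row_is_header = TRUE,
  name = "Example CSV import",
  column_delimiter = "comma",
  existing_table_rows = "truncate"
)

## End(Not run)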


[Package civis version 3.1.2 Index]