The tbl2sde command converts a table to an ArcSDE geodatabase table. The input table format may be a geodatabase, INFO, or dBase table.
Note: This function is not supported on 64-bit Windows, Linux, or HP-Itanium.
tbl2sde -o append -t <table> -f <file_name>
-T {dBASE | INFO | SDE}
[-I] [-a {all | file=<file_name>}] [-c <commit_interval>]
[-i {<service> | <port#> | <direct connection>}] [-s <server_name>]
[-D <database_name>] -u <DB_user_name> [-p <DB_user_password>]
[-v] [-w <"where_clause">]
tbl2sde -o create -t <table> -f <file_name>
-T {dBASE | INFO | SDE}
[-I]
[-a {all | file=<file_name>}] [-c <commit_interval>]
[-k <config_keyword>] [-w <"where_clause">]
[-s <server_name>]
[-i {<service> | <port#> | <direct connection>}] [-D <database_name>]
-u <DB_user_name> [-p <DB_user_password>] [-v]
tbl2sde -o init -t <table> -f <file_name>
-T {dBASE | INFO | SDE}
[-I]
[-a {all | file=<file_name>}] [-c <commit_interval>]
[-i {<service> | <port#> | <direct connection>}] [-s <server_name>]
[-D <database_name>]
-u <DB_user_name> [-p <DB_user_password>]
[-v]
[-w <"where_clause">]
tbl2sde -h
tbl2sde -?
Operation | Description |
append | Adds records to an existing DBMS table (the default) |
create | Creates a new table and imports records into it. An error is returned if the table already exists. |
init | Deletes all records in an existing DBMS table before importing new records |
Options | Description |
-a | Attribute modes: all: Loads all columns (the default). If the table exists, the incoming schema must be union compatible with the table when using the append or init operation. file=<file_name>: Reads column definitions from the named file; each entry uses the format <fr_ColName> [to_ColName] [type] [size] [nDecs] [NOT_NULL]. The fr_ColName is the column in the table being brought into the geodatabase, and the to_ColName is the new name of the column in the ArcSDE geodatabase table. The type specifies a legal data type, the size is the maximum size of the column, and nDecs is the number of digits to the right of the decimal point for floating-point data types. NOT_NULL, if specified, requires that the column have a non-NULL value. The allowed type, size, and nDecs (number of decimal places) values vary according to each DBMS. |
-c | Commit rate (default: the AUTOCOMMIT value from the SERVER_CONFIG or sde_server_config table) |
-D | Database name (not supported on Oracle) |
-f | Input table name. Define the table type with the -T option. |
-h or -? | Use either of these options to see the usage and options for the command. Note: If using a C shell, use -h or "-\?". |
-i | ArcSDE service name, port number, or direct connection information (default: esri_sde or 5151) |
-I | Disables buffered inserts (buffered inserts are on by default) |
-k | Configuration keyword present in the DBTUNE table (default: DEFAULTS) |
-o | Operation |
-p | DBMS user password |
-s | ArcSDE server host name (default: localhost) |
-t | Output table name |
-T | Input table type: dBASE, INFO, or SDE |
-u | DBMS user name |
-v | Verbose option; reports records committed at the commit interval (see the example following this table) |
-w | SQL WHERE clause; used only if the input table type is SDE |
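The -c and -v options can be combined with any of the three operations. The following is a hypothetical example (the parcels and parcel_updates table names are placeholders) that appends a dBASE table to an existing, union-compatible geodatabase table, committing every 500 records and reporting each commit:
tbl2sde -o append -t parcels -f parcel_updates -T dBASE -c 500 -v -u abc -p mo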
The tbl2sde command converts INFO and dBASE tables to ArcSDE geodatabase tables. You can also use tbl2sde to selectively copy columns from one geodatabase table to another geodatabase table. The example below converts a dBASE table called census_data into a table called block_attr. By specifying all with the -a option, all attributes from the census_data table are brought into the geodatabase table.
tbl2sde -o create -t block_attr -f census_data -T dBASE -a all -k block_attr -u abc -p mo
While converting the INFO or dBASE table, you can convert string columns to NSTRING (Unicode). This is done using the -a option with the create operation to specify a file that contains the definition for the attribute columns in the table. In the file, the entries should be in the following format:
<fr_ColName> [to_ColName] [type] [size] [nDecs] [NOT_NULL]
The fr_ColName is the column name in the dBASE or INFO table, and the to_ColName is the new name of the column in the ArcSDE geodatabase. The type specifies a legal data type. The size is the maximum size of the column, and nDecs is the number of digits to the right of the decimal point for floating-point data types. NOT_NULL, if specified, requires that the column have a non-NULL value.
As indicated by the brackets, all but the first parameter are optional. If you do not supply values for the other parameters, default values are used. In the case of type, if the data type was character in the source table, the default in the geodatabase table will be STRING. Therefore, to create an NSTRING column, you must specify the type.
If you do not specify the NOT_NULL parameter, the column in the geodatabase table will allow null values. If you do not specify to_ColName, size, or nDecs, they will be the same as they were in the source table.
For a table, study_areas.dbf, with the following definition:
Column name | Type | Width | Decimal Places |
NAME | C | 40 | - |
ID | N | 8 | - |
A file, convert_study_area, can be created to define the NAME column as data type NSTRING in the table in the geodatabase. The entries in the file could be as follows:
NAME NAME NSTRING 45 0
ID ID INT32 10 0 NOT_NULL
The file indicates that both columns will have a greater length in the table in the geodatabase (45 and 10 instead of 40 and 8), the ID column will not allow null values, and the NAME column will use the NSTRING data type to store data.
For this example, the following command is executed to convert the table to a table in the ArcSDE geodatabase:
tbl2sde -o create -t study_area -f study_areas -T dBASE -a file=d:\mydata\convert_study_area.txt -i 4000 -D clienta -u jfr
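The same file-based approach applies when the input table type is SDE, which is how columns are selectively copied from one geodatabase table to another. A hypothetical example is shown below; block_summary and the column definition file d:\mydata\copy_columns.txt are placeholder names, and the file would list only the columns to carry over from the block_attr table created earlier:
tbl2sde -o create -t block_summary -f block_attr -T SDE -a file=d:\mydata\copy_columns.txt -u abc -p mo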
If the table already exists in the geodatabase, you can add more records with the append operation. You can also remove the records from an existing table with the init operation before loading more records. In both cases, the incoming schema must be union compatible with the business table to which it is being imported.
tbl2sde -o append -t sherds -f site3 -T INFO -u abc -p mo
tbl2sde -o init -t catchbasin -f storms -T SDE -u abc -p mo
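Because the -w option is used only when the input table type is SDE, a WHERE clause can restrict which records are copied from a geodatabase table. A hypothetical example, assuming the storms table has a numeric basin_id column, appends only the matching records:
tbl2sde -o append -t catchbasin -f storms -T SDE -w "basin_id = 4" -u abc -p mo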