The Import Data Tool imports data in a variety of formats and inserts it into database tables; it can also generate an SQL script file instead of inserting directly. The following steps describe how to use the Import Data Tool.
Import will not work for Generic JDBC or Generic ODBC connections.
If you want to import files that contain a Binary Large Object (BLOB) data type, make sure to enable the option Convert binary to hex.
1. Select Tools > Import Data from the menu bar. This prompts you to choose the server into which to import data. Navigate to your server, select it, and click OK to open the Import dialog. Alternatively, right-click within the schema browser and select Tools > Import Data to launch the Import Tool.
2. The first tab in the Import Data dialog is the General tab. Browse to and select the file to import. Once the file is selected, a sample of it is displayed in the bottom grid to show how it will be split into columns on import. Then select the encoding and the platform the file is formatted for; the sample columns refresh as these options change. Select whether the file is delimited or has fixed-width columns. For fixed-width columns, type the column widths separated by commas (e.g., 15,25,35,60). The sample data does not update while you type the widths, so click the Fixed Width radio button to refresh it. Indicate whether the first row of the file contains column names, which helps the import tool map them to the table. Last, select the quote identifier for data values. Make sure the sample data is formatted correctly before proceeding, then click Next.
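The fixed-width option can be pictured as slicing each line at the cumulative column widths. Below is a minimal sketch of that idea, for illustration only (this is not Aqua Data Studio's internal code; the sample record and the widths 4,10,6 are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class FixedWidthDemo {
    // Split one line into columns of the given widths, mirroring the
    // "Fixed Width" option described above (illustrative sketch only).
    static List<String> splitFixedWidth(String line, int[] widths) {
        List<String> cols = new ArrayList<>();
        int pos = 0;
        for (int w : widths) {
            int end = Math.min(pos + w, line.length());
            cols.add(line.substring(pos, end).trim()); // trim padding spaces
            pos = end;
        }
        return cols;
    }

    public static void main(String[] args) {
        // Hypothetical record: id (4 chars), name (10 chars), amount (6 chars),
        // i.e. the widths "4,10,6" as they would be typed in the dialog.
        String line = "0001Smith      42.50";
        System.out.println(splitFixedWidth(line, new int[]{4, 10, 6}));
        // prints: [0001, Smith, 42.50]
    }
}
```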
3. In the Format tab, select the database, schema, and table into which to import the data. To import into a new table, click the "..." button, which opens a Create Table dialog (see the Create Tables page for more) with the columns defined in the sample file. If needed, change the names and datatypes of the table's columns, then click OK. At this point, you can import into the newly created table. If the sample file contains column names, Aqua Data Studio attempts to match them to the column names of the table. You can reorder the column mapping by changing a Position value to match the column number in the sample file, or remove the Position value to exclude that column from the import. Once all columns are mapped, click Create in the upper-left corner of the Create Table dialog. This returns you to the Format tab with your table details adjusted accordingly. Continue by clicking the Next button.
4. In the Options tab, begin by selecting whether "(null)" text values should be converted to NULL values. Then select the date and time formats; these describe how date and time values are formatted in the text file so that they can be imported into date/time columns. If a date string is imported into a VARCHAR column, the format does not apply. A sample of the file's values is provided below the options to help you configure the date/time formats. You can either import directly into the database or generate an SQL file of INSERT statements to run against the database later. If the Import Tool is being used to generate a script file for use in Aqua Data Studio's Query Analyzer, it is strongly suggested that the statement separator "GO" be selected. When ready, click the Next button to import the data.
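When a script file is generated, the output is a series of INSERT statements, each followed by the chosen statement separator. The sketch below shows roughly what producing one such statement involves, including the "(null)"-to-NULL conversion described above. It assumes a hypothetical customers table and simplified quoting; it is not Aqua Data Studio's actual generator:

```java
import java.util.List;

public class InsertScriptDemo {
    // Build an INSERT statement for one row, followed by the "GO"
    // separator recommended above for Query Analyzer scripts.
    // Table/column names are hypothetical; quoting is simplified.
    static String toInsert(String table, List<String> columns, List<String> values) {
        StringBuilder sb = new StringBuilder("INSERT INTO ").append(table).append(" (");
        sb.append(String.join(", ", columns)).append(") VALUES (");
        for (int i = 0; i < values.size(); i++) {
            if (i > 0) sb.append(", ");
            String v = values.get(i);
            if (v.equals("(null)")) {
                sb.append("NULL"); // the "(null)" text converted to a SQL NULL
            } else {
                sb.append("'").append(v.replace("'", "''")).append("'"); // escape quotes
            }
        }
        sb.append(")\nGO");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toInsert("customers",
                List.of("name", "city"),
                List.of("O'Brien", "(null)")));
        // prints: INSERT INTO customers (name, city) VALUES ('O''Brien', NULL)
        //         GO
    }
}
```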
5. In the Transaction tab, select the Transaction Type, Batch Size, Threshold, and/or Wait Time that best suits your environment. By default, Transaction Type is set to Full, so the import takes place in a single transaction. The Batch type commits a specific number of records per transaction, entered in the Batch Size field. The Threshold type executes a specific number of transactions within a specified number of milliseconds, as indicated in the Threshold field. Wait Time is the pause between transactions; if set to -1, there is no wait between transactions.
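Batch mode can be thought of as committing after every Batch Size records, with a final commit for any remainder. A small sketch of that arithmetic, for illustration only:

```java
public class BatchDemo {
    // Number of transactions needed in Batch mode: one commit per full
    // batch, plus a final commit for any leftover records (ceiling division).
    static int commitsNeeded(int totalRecords, int batchSize) {
        return (totalRecords + batchSize - 1) / batchSize;
    }

    public static void main(String[] args) {
        // e.g. 10,000 records with a Batch Size of 300 -> 34 transactions
        System.out.println(commitsNeeded(10000, 300)); // prints: 34
    }
}
```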
If you are importing files that contain BLOB data types, make sure to select Batch mode in the Transaction tab.
6. Once the Status tab takes focus, the import has begun. You can cancel the import at any time by clicking the Cancel button at the bottom of the dialog. Any errors or warnings that occur are displayed in the message text window.
Date and time formats are specified by date and time pattern strings. Within date and time pattern strings, unquoted letters from 'A' to 'Z' and from 'a' to 'z' are interpreted as pattern letters representing the components of a date or time string. Text can be quoted using single quotes (') to avoid interpretation, and '' represents a single quote. All other characters are not interpreted; they are simply copied into the output string during formatting or matched against the input string during parsing. For more information on how to configure Date, Time, and Date/Time formats (including user-customized formatting) throughout Aqua Data Studio, see the Options page for Date, Time, and Date/Time.
The following examples show how date and time patterns are interpreted in the U.S. locale. The given date and time are 2001-07-04 12:08:56 local time in the U.S. Pacific Time time zone.
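These pattern letters match the conventions of Java's SimpleDateFormat class (the sample timestamp above is the same one used in Java's own documentation). The snippet below formats that timestamp with a few common patterns; the patterns shown are illustrations, not required settings:

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Locale;
import java.util.TimeZone;

public class DatePatternDemo {
    public static void main(String[] args) {
        // The sample instant from above: 2001-07-04 12:08:56, US Pacific time
        TimeZone pacific = TimeZone.getTimeZone("America/Los_Angeles");
        Calendar cal = Calendar.getInstance(pacific);
        cal.clear();
        cal.set(2001, Calendar.JULY, 4, 12, 8, 56);

        String[] patterns = {
            "yyyy-MM-dd HH:mm:ss",  // 2001-07-04 12:08:56
            "MM/dd/yyyy",           // 07/04/2001
            "EEE, MMM d, ''yy",     // Wed, Jul 4, '01  ('' yields a literal quote)
            "hh 'o''clock' a"       // 12 o'clock PM    (quoted text is copied as-is)
        };
        for (String p : patterns) {
            SimpleDateFormat fmt = new SimpleDateFormat(p, Locale.US);
            fmt.setTimeZone(pacific);
            System.out.println(p + "  ->  " + fmt.format(cal.getTime()));
        }
    }
}
```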
Import of a JSON file for MongoDB
Right-click in the schema tree and select Tools > Import Data, or select Tools > Import Data from the menu bar.
Aqua Data Studio supports four types of JSON files for import:
Date, Time, and Number Formatting options are not available for this feature.