alter table
Description
• Adds new columns to a table; drops or modifies existing columns; adds,
changes, or drops constraints; changes properties of an existing table;
enables or disables triggers on a table.
• Supports adding, dropping, and modifying computed columns, and allows
the materialized property, nullability, or definition of an existing
computed column to be changed.
• Partitions and repartitions a table with a specified partition strategy, or
adds partitions to a table with existing partitions. Syntax for altering
table partitions is listed separately.
Syntax
alter table [[database.]owner.]table_name
{ add column_name datatype}
[default {constant_expression | user | null}]
{identity | null | not null}
[off row | in row]
[ [constraint constraint_name]
{ { unique | primary key }
[clustered | nonclustered]
[asc | desc]
[with { fillfactor = pct,
max_rows_per_page = num_rows,
reservepagegap = num_pages }]
[on segment_name]
| references [[database.]owner.]ref_table
[(ref_column)]
[match full]
| check (search_condition) ] ... }
[, next_column]...
| add {[constraint constraint_name]
{ unique | primary key}
[clustered | nonclustered]
(column_name [asc | desc]
[, column_name [asc | desc]...])
[with { fillfactor = pct,
max_rows_per_page = num_rows,
reservepagegap = num_pages}]
[on segment_name]
| foreign key (column_name [{, column_name}...])
references [[database.]owner.]ref_table
[(ref_column [{, ref_column}...])]
[match full]
| check (search_condition)}
| drop {column_name [, column_name]...
| constraint constraint_name }
| modify column_name datatype [null | not null]
[, next_column]...
| replace column_name
default { constant_expression | user | null}
| { enable | disable } trigger
| lock {allpages | datarows | datapages } }
| with exp_row_size=num_bytes
| partition number_of_partitions
| unpartition
| partition_clause
| add_partition_clause
alter table syntax for partitions
partition_clause::=
partition by range ( column_name[, column_name ]...)
( [ partition_name ] values <= ( { constant | MAX }
[, { constant | MAX } ] ...) [ on segment_name ]
[, [ partition_name ] values <= ( { constant | MAX }
[, { constant | MAX } ] ...) [ on segment_name ] ]...)
| partition by hash (column_name[, column_name ]...)
{ ( partition_name [ on segment_name ]
[, partition_name [ on segment_name ] ]...)
| number_of_partitions
[ on (segment_name[, segment_name ] ...) ] }
| partition by list (column_name)
( [ partition_name ] values ( constant[, constant ] ...)
[ on segment_name ]
[, [ partition_name ] values ( constant[, constant ] ...)
[ on segment_name ] ] ...)
| partition by roundrobin
{ ( partition_name [ on segment_name ]
[, partition_name [ on segment_name ] ]...)
| number_of_partitions
[ on ( segment_name [, segment_name ]...) ] }
add_partition_clause::=
add partition
{ ( [ partition_name ] values <= ( { constant | MAX }
[, { constant | MAX } ]...)
[ on segment_name ]
[, [ partition_name ] values <= ( { constant | MAX }
[, { constant | MAX } ] ...)
[ on segment_name ] ]... )
| ( [ partition_name ] values ( constant[, constant ] ...)
[ on segment_name ]
[, [ partition_name ] values ( constant[, constant ] ...)
[ on segment_name ] ] ...) }
alter table syntax for computed columns
alter table table_name
{ ADD column_name {datatype | {COMPUTE | AS}
      computed_column_expression
      [MATERIALIZED | NOT MATERIALIZED]}
| DROP column_name
| MODIFY column_name
      {datatype [null | not null]
      | {MATERIALIZED | NOT MATERIALIZED} [null | not null]
      | {COMPUTE | AS} computed_column_expression
        [MATERIALIZED | NOT MATERIALIZED]
        [null | not null]}}
Parameters
table_name
is the name of the table to change. Specify the database name if the table is
in another database, and specify the owner’s name if more than one table of
that name exists in the database. The default value for owner is the current
user, and the default value for database is the current database.
add
specifies the name of the column or constraint to add to the table. If
Component Integration Services is enabled, you cannot use add for remote
servers.
column_name
is the name of a column in that table. If Java is enabled in the database, the
column can be a Java-SQL column.
datatype
is any system datatype except bit or any user-defined datatype except those
based on bit.
If Java is enabled in the database, datatype can also be the name of a Java
class installed in the database, either a system class or a user-defined class.
default
specifies a default value for a column. If you specify a default and the user
does not provide a value for this column when inserting data, Adaptive
Server inserts this value. The default can be a constant_expression, user (to
insert the name of the user who is inserting the data), or null (to insert the null
value).
Adaptive Server generates a name for the default in the form of
tabname_colname_objid, where tabname is the first 10 characters of the table
name, colname is the first 5 characters of the column name, and objid is the
object ID number for the default. Setting the default to null drops the default.
If Component Integration Services is enabled, you cannot use default for
remote servers.
constant_expression
is a constant expression to use as a default value for a column. It cannot
include global variables, the name of any columns, or other database objects,
but can include built-in functions. This default value must be compatible
with the datatype of the column.
user
specifies that Adaptive Server should insert the user name as the default if
the user does not supply a value. The datatype of the column must be either
char(30), varchar(30), or a type that Adaptive Server implicitly converts to
char; however, if the datatype is not char(30) or varchar(30), truncation may
occur.
null | not null
specifies Adaptive Server’s behavior during data insertion if no default exists.
null specifies that a column is added that allows nulls. Adaptive Server
assigns a null value during inserts if a user does not provide a value.
not null specifies that a column is added that does not allow nulls. Users must
provide a non-null value during inserts if no default exists.
If you do not specify null or not null, Adaptive Server uses not null by default.
However, you can switch this default using sp_dboption to make the default
compatible with the SQL standards. If you specify (or imply) not null for the
newly added column, a default clause is required. The default value is used
for all existing rows of the newly added column, and applies to future inserts
as well.
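For instance, adding a not-null column to a table that already contains rows succeeds only when a default is supplied for those rows. A minimal sketch against the authors table used in the examples below (the column name and default value are illustrative):

alter table authors
add au_status varchar(10)
default "active" not null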
identity
indicates that the column has the IDENTITY property. Each table in a
database can have one IDENTITY column with a datatype of:
• Exact numeric with a scale of 0, or
• Any of the integer datatypes, including signed or unsigned bigint, int,
smallint, or tinyint.
IDENTITY columns are not updatable and do not allow nulls.
IDENTITY columns store sequential numbers, such as invoice numbers or
employee numbers, automatically generated by Adaptive Server. The value
of the IDENTITY column uniquely identifies each row in a table.
off row | in row
specifies whether the Java-SQL column is stored separate from the row or
in storage allocated directly in the row.
The storage for an in row column must not exceed 16K bytes, depending on
the page size of the database server and other variables. The default value is
off row.
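As a sketch, assuming a Java class named Address has been installed in the database (the class and column names here are hypothetical), a Java-SQL column can be placed in row:

alter table authors
add au_address Address null in row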
constraint
introduces the name of an integrity constraint. If Component Integration
Services is enabled, you cannot use constraint for remote servers.
constraint_name
is the name of the constraint. It must conform to the rules for identifiers and
be unique in the database. If you do not specify the name for a table-level
constraint, Adaptive Server generates a name in the form of
tabname_colname_objectid, where tabname is the first 10 characters of the
table name, colname is the first 5 characters of the column name, and objectid
is the object ID number for the constraint. If you do not specify the name for
a unique or primary key constraint, Adaptive Server generates a name in the
format tabname_colname_tabindid, where tabindid is a string concatenation
of the table ID and index ID.
Constraints do not apply to the data that already exists in the table at the time
the constraint is added.
unique
constrains the values in the indicated column or columns so that no two rows
can have the same non-null value. This constraint creates a unique index that
can be dropped only if the constraint is dropped. You cannot use this option
along with the null option described above.
primary key
constrains the values in the indicated column or columns so that no two rows
can have the same value and so that the value cannot be NULL. This
constraint creates a unique index that can be dropped only if the constraint
is dropped.
clustered | nonclustered
specifies that the index created by a unique or primary key constraint is a
clustered or nonclustered index. clustered is the default (unless a clustered
index already exists for the table) for primary key constraints; nonclustered
is the default for unique constraints. There can be only one clustered index
per table.
asc | desc
specifies whether the index is to be created in ascending (asc) or descending
(desc) order. The default is ascending order.
with fillfactor=pct
specifies how full to make each page when Adaptive Server creates a new
index on existing data. “pct” stands for percentage. The fillfactor percentage
is relevant only when the index is created. As the data changes, the pages are
not maintained at any particular level of fullness.
Warning! Creating a clustered index with a fillfactor affects the amount of
storage space your data occupies, since Adaptive Server redistributes the data
as it creates the clustered index.
The default for fillfactor is 0; this is used when you do not include with
fillfactor in the create index statement (unless the value has been changed
with sp_configure). When specifying a fillfactor, use a value between 1 and 100.
A fillfactor of 0 creates clustered indexes with completely full pages and
nonclustered indexes with completely full leaf pages. It leaves a comfortable
amount of space within the index B-tree in both clustered and nonclustered
indexes. There is seldom a reason to change the fillfactor.
If the fillfactor is set to 100, Adaptive Server creates both clustered and
nonclustered indexes with each page 100 percent full. A fillfactor of 100
makes sense only for read-only tables—tables to which no additional data
will ever be added.
fillfactor values smaller than 100 (except 0, which is a special case) cause
Adaptive Server to create new indexes with pages that are not completely
full. A fillfactor of 10 might be a reasonable choice if you are creating an
index on a table that will eventually hold a great deal more data, but small
fillfactor values cause each index (or index and data) to take more storage space.
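As an illustration, the fillfactor is supplied in the with clause of a constraint, here leaving the pages of the constraint's index 75 percent full when it is built (the constraint name is illustrative):

alter table authors
add constraint au_uq unique nonclustered (au_id)
with fillfactor = 75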
max_rows_per_page = num_rows
limits the number of rows on data pages and the leaf level pages of indexes.
Unlike fillfactor, the max_rows_per_page value is maintained until it is
changed with sp_chgattribute.
If you do not specify a value for max_rows_per_page, Adaptive Server uses
a value of 0 when creating the index. When specifying max_rows_per_page
for data pages, use a value between 0 and 256. The maximum number of
rows per page for nonclustered indexes depends on the size of the index key;
Adaptive Server returns an error message if the specified value is too high.
For indexes created by constraints, a max_rows_per_page setting of 0
creates clustered indexes with full pages and nonclustered indexes with full
leaf pages. A setting of 0 leaves a comfortable amount of space within the
index B-tree in both clustered and nonclustered indexes.
If max_rows_per_page is set to 1, Adaptive Server creates both clustered and
nonclustered leaf index pages with one row per page at the leaf level. You
can use this to reduce lock contention on frequently accessed data.
Low max_rows_per_page values cause Adaptive Server to create new
indexes with pages that are not completely full, use more storage space, and
may cause more page splits.
Warning! Creating a clustered index with max_rows_per_page can affect the
amount of storage space your data occupies, since Adaptive Server
redistributes the data as it creates the clustered index.
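For example, to reduce lock contention by placing one row per leaf page on the index created for a unique constraint (a sketch; the constraint name is illustrative):

alter table titles
add constraint title_uq unique nonclustered (title_id)
with max_rows_per_page = 1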
reservepagegap = num_pages
specifies a ratio of filled pages to empty pages to be left during extent I/O
allocation operations for the index created by the constraint. For each
specified num_pages, an empty page is left for future expansion of the table.
Valid values are 0 – 255. The default value, 0, leaves no empty pages.
on segment_name
specifies that the index is to be created on the named segment. Before the on
segment_name option can be used, the device must be initialized with disk
init, and the segment must be added to the database with the sp_addsegment
system procedure. See your System Administrator or use sp_helpsegment
for a list of the segment names available in your database.
If you specify clustered and use the on segment_name option, the entire table
migrates to the segment you specify, since the leaf level of the index
contains the actual data pages.
references
specifies a column list for a referential integrity constraint. You can specify
only one column value for a column-constraint. By including this constraint
with a table that references another table, any data inserted into the
referencing table must already exist in the referenced table.
To use this constraint, you must have references permission on the
referenced table. The specified columns in the referenced table must be
constrained by a unique index (created by either a unique constraint or a
create index statement). If no columns are specified, there must be a primary
key constraint on the appropriate columns in the referenced table. Also, the
datatypes of the referencing table columns must exactly match the datatype
of the referenced table columns.
If Component Integration Services is enabled, you cannot use references for
remote servers.
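A foreign-key constraint that references another table can be sketched as follows (the referencing table and constraint names are illustrative; titles.title_id is assumed to be protected by a primary key or unique constraint):

alter table salesdetail
add constraint title_fk
foreign key (title_id) references titles (title_id)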
foreign key
specifies that the listed column(s) are foreign keys in this table whose
matching primary keys are the columns listed in the references clause.
ref_table
is the name of the table that contains the referenced columns. You can
reference tables in another database. Constraints can reference up to 192
user tables and internally generated worktables. Use the system procedure
sp_helpconstraint to check a table’s referential constraints.
ref_column
is the name of the column or columns in the referenced table.
match full
specifies that if all values in the referencing columns of a referencing row are:
• Null – the referential integrity condition is true.
• Non-null – and there is a referenced row in which each corresponding
column is equal, the referential integrity condition is true.
Otherwise, the referential integrity condition is false when:
• All values are non-null and not equal, or
• Some of the values in the referencing columns of a referencing row are
non-null, while others are null.
check
specifies a search_condition constraint that Adaptive Server enforces for all
the rows in the table. If Component Integration Services is enabled, you
cannot use check for remote servers.
search_condition
is a boolean expression that defines the check constraint on the column
values. These constraints can include:
• A list of constant expressions introduced with in.
• A set of conditions, which may contain wildcard characters, introduced
with like.
An expression can include arithmetic operations and Transact-SQL
functions. The search_condition cannot contain subqueries, aggregate
functions, parameters, or host variables.
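For instance, a check constraint may use an in list of constants (a sketch; the column and values are illustrative):

alter table authors
add constraint state_chk
check (state in ("CA", "KS", "OR"))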
next_column
includes additional column definitions (separated by commas) using the
same syntax described for a column definition.
drop
specifies the name of a column or constraint to drop from the table. If
Component Integration Services is enabled, you cannot use drop for remote
servers.
modify
specifies the name of the column whose datatype or nullability you are
changing.
replace
specifies the column whose default value you want to change with the new
value specified by a following default clause. If Component Integration
Services is enabled, you cannot use replace for remote servers.
enable | disable trigger
Enables or disables a trigger. For more information, see the System
Administration Guide.
lock datarows | datapages | allpages
changes the locking scheme to be used for the table.
with exp_row_size=num_bytes
specifies the expected row size. You can only apply with
exp_row_size=num_bytes:
• To datarows and datapages locking schemes
• To tables with variable-length rows
• When alter table performs a data copy, such as with alter table add or
modify. You cannot use with exp_row_size=num_bytes with alter table
lock change operations.
Valid values are 0, 1, and any value between the minimum and maximum
row length for the table. The default value is 0, which means a server-wide
setting is applied.
partition number_of_partitions
adds (number_of_partitions –1) empty partitions to an unpartitioned
(single-partitioned) table. Thus, the total number of partitions for the table
becomes number_of_partitions. Even if Component Integration Services
(CIS) is enabled, you cannot use partition for remote servers.
unpartition
changes a round-robin partitioned table without indexes, to an unpartitioned
table. Even if CIS is enabled, you cannot use unpartition for remote servers.
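For example, an unpartitioned titles table can be sliced into five partitions and later collapsed again (a sketch; unpartition requires that the table have no indexes):

alter table titles partition 5
alter table titles unpartition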
partition by range
specifies records are to be partitioned according to values in the
column or columns. Each partitioning column value is compared with sets
of user-supplied upper and lower bounds to determine partition assignment.
partition_name
specifies the name of a new partition on which table records are to be stored.
Partition names must be unique within the set of partitions on a table or
index. Partition names can be delimited identifiers if set quoted_identifier is
on. Otherwise, they must be valid identifiers.
If partition_name is omitted, Adaptive Server creates a name in the form
table_name_partition_id. Adaptive Server truncates partition names that
exceed the allowed maximum length.
values <= constant | MAX
specifies the inclusive upper bound of values for a named partition.
Specifying a constant value for the highest partition bound imposes an
implicit integrity constraint on the table. The keyword MAX specifies the
maximum value in a given datatype.
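For example, MAX can serve as the upper bound of the last range partition so that every remaining value is accepted (a sketch in the style of Example 7 below; partition names are illustrative):

alter table titles partition by range (total_sales)
(smallsales values <= (5000),
bigsales values <= (MAX))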
on segment_name
specifies the name of the segment on which to place the partition. When
using on segment_name, the logical device must already have been assigned
to the database with create database or alter database, and the segment must
have been created in the database with sp_addsegment. See your System
Administrator or use sp_helpsegment for a list of the segment names
available in your database.
partition by hash
specifies records are to be partitioned by a system-supplied hash function.
The function computes the hash value of the partition keys that specify the
partition to which records are assigned.
partition by list
specifies records are to be partitioned according to literal values specified in
the named column. The partition key contains only one column. You can list
up to 250 constants as the partition values for each list partition.
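As a sketch, a list partition assigns rows by the literal values of its single partitioning column (the partition and segment names are illustrative):

alter table titles partition by list (type)
(bus_part values ("business") on seg1,
cook_part values ("mod_cook", "trad_cook") on seg2)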
partition by roundrobin
specifies records are to be partitioned in a sequential manner. A round-robin
partitioned table has no partitioning key. Neither the user nor the optimizer
knows in which partition a particular record resides.
add partition
applies only to range- or list-partitioned tables:
• For range-partitioned tables – adds one or more partitions to the upper
end of a range partitioned table.
• For list-partitioned tables – adds one or more partitions with a new set
of values.
compute | as
adds or drops a computed column. Follow the same rules defined for the
existing create table command and for alter table add.
computed_column_expression
defines a computed column. It can be a regular column name, constant,
function, global variable, or any combination of these, connected by one or
more operators. This expression cannot be a subquery, and it is verified for
correctness. Columns and functions referenced must exist, parameters must
match the function signature, and so forth.
materialized | not materialized
reserved keywords in the modify clause that specify whether the computed
column is materialized, or physically stored in the table. By default, a
computed column is not materialized (that is, not physically stored in the
table). You can also use this syntax to change the definitions of
existing virtual computed columns; that is, to materialize them.
compute column_name
recomputes a materialized computed column. Remember, the new value
may differ from the old one if the expression is not deterministic.
Examples
Example 1 Adds a column to a table. For each existing row in the table,
Adaptive Server assigns a NULL column value:
alter table publishers
add manager_name varchar(40) null
Example 2 Adds an IDENTITY column to a table. For each existing row in
the table, Adaptive Server assigns a unique, sequential column value. Note that
the IDENTITY column can be of type numeric or integer, with a scale of zero.
The precision determines the maximum value (10^5 – 1, or 99,999) that can be
inserted into the column:
alter table sales_daily
add ord_num numeric(5,0) identity
Example 3 Adds a primary key constraint to the authors table. If there is an
existing primary key or unique constraint on the table, the existing constraint
must be dropped first (see Example 5):
alter table authors
add constraint au_identification
primary key (au_id, au_lname, au_fname)
Example 4 Creates an index on authors; the index has a reservepagegap value
of 16, leaving 1 empty page in the index for each 15 allocated pages:
alter table authors
add constraint au_identification
primary key (au_id, au_lname, au_fname)
with reservepagegap = 16
Example 5 Drops the au_identification constraint:
alter table titles
drop constraint au_identification
Example 6 Removes the default constraint on the phone column in the authors
table. If the column allows NULL values, NULL is inserted if no column value
is specified. If the column does not allow NULL values, an insert that does not
specify a column value fails:
alter table authors
replace phone default null
Example 7 Changes an unpartitioned table to a range-partitioned table with
three partitions, each of which is on a different segment:
alter table titles partition by range (total_sales)
(smallsales values <= (500) on seg1,
mediumsales values <= (5000) on seg2,
bigsales values <= (25000) on seg3)
Example 8 Adds another range partition to the titles table:
alter table titles add partition
(vbigsales values <= (40000) on seg4)
Example 9 Changes the locking scheme for the titles table to datarows locking:
alter table titles lock datarows
Example 10 Adds the not-null column author_type to the authors table with a
default of primary_author:
alter table authors
add author_type varchar(20)
default "primary_author" not null
Example 11 Drops the advance, notes, and contract columns from the titles table:
alter table titles
drop advance, notes, contract
Example 12 Modifies the city column of the authors table to be a varchar(30)
with a default of NULL:
alter table authors
modify city varchar(30) null
Example 13 Modifies the stor_name column of the stores table to be NOT
NULL. Note that its datatype, varchar(40), remains unchanged:
alter table stores
modify stor_name not null
Example 14 Modifies the type column of the titles table and changes the
locking scheme of the titles table from allpages to datarows:
alter table titles
modify type varchar(10)
lock datarows
Example 15 Modifies the notes column of the titles table from varchar(200) to
varchar(150), changes the default value from NULL to NOT NULL, and
specifies an exp_row_size of 40:
alter table titles
modify notes varchar(150) not null
with exp_row_size = 40
Example 16 Adds, modifies, and drops a column, and then adds another column in one
query. Alters the locking scheme and specifies the exp_row_size of the new column:
alter table titles
add author_type varchar(30) null
modify city varchar(30)
drop notes
add sec_advance money default 1000 not null
lock datarows
with exp_row_size = 40
Example 17 Adds a virtual computed column:
alter table authors
add fullname compute au_fname + ' ' + au_lname
Example 18 Changes a virtual computed column to a materialized computed column:
alter table authors modify fullname materialized
Usage
• If stored procedures using select * reference a table that has been altered,
no new columns appear in the result set, even if you use the with recompile
option. You must drop the procedure and re-create it to include these new
columns. Otherwise, a statement such as insert into table1 select * from table2
inside the procedure can produce wrong results after new columns have been
added to the tables.
• When the table owner uses alter table, Adaptive Server disables access
rules during the execution of the command and enables them upon
completion of the command. The access rules are disabled to avoid
filtering of the table data during alter table.
Restrictions
Warning! Do not alter the system tables.
• You cannot add a column of datatype bit to an existing table unless you
specify a default value. This default value must be 0 or 1.
• The maximum number of columns in a table is:
• 1024 for fixed-length columns in both all-pages-locked (APL) and
data-only-locked (DOL) tables.
• 254 for variable-length columns in APL tables.
• 1024 for variable-length columns in DOL tables.
• alter table raises an error if the number of variable-length columns in an
APL table exceeds 254.
• The maximum length for in-row Java columns is determined by the
maximum size of a variable-length column for the table’s schema, locking
style, and page size.
• When converting a table to a different locking scheme, the data in the
source table cannot violate the limits of the target table. For example, if
you attempt to convert a DOL with more than 254 variable-length columns
to an APL table, alter table fails because an APL table is restricted to
having no more than 254 variable-length columns.
• Columns with fixed-length data (for example char, binary, and so on) have
the maximum sizes shown:
APL tables
Page size = 2K (2048 bytes); Maximum row length = 1962; Maximum column length = 1960 bytes
Page size = 4K (4096 bytes); Maximum row length = 4010; Maximum column length = 4008 bytes
Page size = 8K (8192 bytes); Maximum row length = 8106; Maximum column length = 8104 bytes
Page size = 16K (16384 bytes); Maximum row length = 16298; Maximum column length = 16296 bytes
DOL tables
Page size = 2K (2048 bytes); Maximum row length = 1964; Maximum column length = 1958 bytes
Page size = 4K (4096 bytes); Maximum row length = 4012; Maximum column length = 4006 bytes
Page size = 8K (8192 bytes); Maximum row length = 8108; Maximum column length = 8102 bytes
Page size = 16K (16384 bytes); Maximum row length = 16300; Maximum column length = 16294 bytes – if the table does not include any variable-length columns
Page size = 16K (16384 bytes); Maximum row length = 16300 (subject to a maximum start offset of varlen = 8191); Maximum column length = 8191 – 6 – 2 = 8183 bytes – if the table includes at least one variable-length column.*
* This size includes six bytes for the row overhead and two bytes for the row length field.
• The maximum number of bytes of variable-length data per row depends on
the locking scheme for the table. The following lists the maximum column
sizes for an
APL table:
Page size = 2K (2048 bytes); Maximum row length = 1960; Maximum column length = 1960
Page size = 4K (4096 bytes); Maximum row length = 4008; Maximum column length = 4008
Page size = 8K (8192 bytes); Maximum row length = 8104; Maximum column length = 8157
Page size = 16K (16384 bytes); Maximum row length = 16296; Maximum column length = 16227
DOL table:
Page size = 2K (2048 bytes); Maximum row length = 1960; Maximum column length = 1958
Page size = 4K (4096 bytes); Maximum row length = 4008; Maximum column length = 4006
Page size = 8K (8192 bytes); Maximum row length = 8157; Maximum column length = 8102
Page size = 16K (16384 bytes); Maximum row length = 16294; Maximum column length = 16294
• You cannot issue the alter table command with a partition or unpartition
clause within a user-defined transaction.
• You cannot use alter table to add a declarative or check constraint and then
insert data into the table in the same batch or procedure. Either separate the
alter and insert statements into two different batches or procedures, or use
execute to perform the actions separately.
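For example, with a hypothetical table t1, the isql go terminator separates the alter and the insert into two different batches:

alter table t1
add constraint c1_chk check (c1 > 0)
go
insert t1 values (5)
go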
• You cannot use variables in alter table statements that include defaults.
For example, the following is not allowed:
declare @a int
select @a = 2
alter table t2 add c3 int
default @a
Doing so results in error message 154, which says, “Variable is not
allowed in default.”
Getting information about tables
• For information about a table and its columns, use sp_help.
• To rename a table, execute the system procedure sp_rename (do not
rename the system tables).
• For information about integrity constraints (unique, primary key,
references, and check) or the default clause, see create table in this chapter.
Specifying ascending or descending ordering in indexes
• Use the asc and desc keywords after index column names to specify the
sort order for the index. Creating indexes so that columns are in the same
order specified in the order by clause of queries eliminates the sorting step
during query processing. For more information, see Chapter 8, “Indexing
for Performance” in the Performance and Tuning Guide.
Using cross-database referential integrity constraints
• When you create a cross-database constraint, Adaptive Server stores the
following information in the sysreferences system table of each database:
Key column IDs – refkey1 through refkey16 (referenced table); fokey1 through fokey16 (referencing table)
Table ID – reftabid (referenced table); tableid (referencing table)
Database ID – pmrydbid (referenced table); frgndbid (referencing table)
Database name – pmrydbname (referenced table); frgndbname (referencing table)
• When you drop a referencing table or its database, Adaptive Server
removes the foreign key information from the referenced database.
• Because the referencing table depends on information from the referenced
table, Adaptive Server does not allow you to:
• Drop the referenced table,
• Drop the external database that contains the referenced table, or
• Rename either database with sp_renamedb.
You must first remove the cross-database constraint with alter table.
• Each time you add or remove a cross-database constraint, or drop a table
that contains a cross-database constraint, dump both of the affected
databases.
Warning! Loading earlier dumps of these databases could cause database
corruption.
• The sysreferences system table stores the name and the ID number of the
external database. Adaptive Server cannot guarantee referential integrity
if you use load database to change the database name or to load it onto a
different server.
Warning! Before dumping a database in order to load it with a different
name or move it to another Adaptive Server, use alter table to drop all
external referential integrity constraints.
Changing defaults
• You can create column defaults in two ways: by declaring the default as a
column constraint in the create table or alter table statement or by creating
the default using the create default statement and binding it to a column
using sp_bindefault.
• You cannot replace a user-defined default bound to the column with
sp_bindefault. Unbind the default with sp_unbindefault first.
• If you declare a default column value with create table or alter table, you
cannot bind a default to that column with sp_bindefault. Drop the default
by altering it to NULL, then bind the user-defined default. Changing the
default to NULL unbinds the default and deletes it from the sysobjects
table.
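For example, assuming a titles table with a default declared on its price column (names are illustrative), you might change the declared default to NULL and then bind a user-defined default:

    alter table titles
    replace price default null

    sp_bindefault pricedflt, "titles.price"

Changing the declared default to NULL removes it from sysobjects, which then allows sp_bindefault to succeed.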
Setting space management properties for indexes
• The space management properties fillfactor, max_rows_per_page, and
reservepagegap in the alter table statement apply to indexes that are
created for primary key or unique constraints. The space management
properties affect the data pages of the table if the constraint creates a
clustered index on an allpages-locked table.
• Use sp_chgattribute to change max_rows_per_page or reservepagegap for
a table or an index, to change the exp_row_size value for a table, or to store
fillfactor values.
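For example, these illustrative calls (titles is an assumed table name) store space management values that later alter table or reorg rebuild commands can apply:

    sp_chgattribute titles, "max_rows_per_page", 50
    sp_chgattribute titles, "fillfactor", 90
    sp_chgattribute titles, "reservepagegap", 16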
• Space management properties for indexes are applied:
• When indexes are re-created as a result of an alter table command that
changes the locking scheme for a table from allpages locking to
data-only locking or vice versa.
• When indexes are automatically rebuilt as part of a reorg rebuild
command.
• To see the space management properties currently in effect for a table, use
sp_help. To see the space management properties currently in effect for an
index, use sp_helpindex.
• The space management properties fillfactor, max_rows_per_page, and
reservepagegap help manage space usage for tables and indexes in the
following ways:
• fillfactor leaves extra space on pages when indexes are created, but the
fillfactor is not maintained over time. It applies to all locking schemes.
• max_rows_per_page limits the number of rows on a data or index
page. Its main use is to improve concurrency in allpages-locked
tables.
• reservepagegap specifies the ratio of empty pages to full pages to
apply for commands that perform extent allocation. It applies to all
locking schemes.
Space management properties can be stored for tables and indexes so that
they are applied during alter table and reorg rebuild commands.
• The following table shows the valid combinations of space management
properties and locking schemes. If an alter table command changes the
table so that the combination is no longer compatible, the values stored
in the system tables remain there but are not applied during operations
on the table. If the locking scheme for the table later changes so that the
properties become valid, they are then used.
Parameter allpages datapages datarows
max_rows_per_page Yes No No
reservepagegap Yes Yes Yes
fillfactor Yes Yes Yes
exp_row_size No Yes Yes
• The following table shows the default values and the effects of using the
default values for the space management properties.
Parameter Default Effect of using the default
max_rows_per_page 0 Fits as many rows as possible on the page, up to a maximum of 255
reservepagegap 0 Leaves no gaps
fillfactor 0 Fully packs leaf pages
Conversion of max_rows_per_page to exp_row_size
• If a table has max_rows_per_page set, and the table is converted from
allpages locking to data-only locking, the value is converted to an
exp_row_size value before the alter table...lock command copies the table
to its new location. The exp_row_size is enforced during the copy. The
following table shows how the values are converted.
If max_rows_per_page is set to Set exp_row_size to
0 Percentage value set by default exp_row_size percent
255 1, that is, fully packed pages
1–254 The smaller of:
• maximum row size
• 2002/max_rows_per_page value
Using reservepagegap
• Commands that use large amounts of space allocate new space by
allocating an extent rather than allocating single pages. The
reservepagegap keyword causes these commands to leave empty pages so
that future page allocations take place close to the page that is being split
or to the page from which a row is being forwarded.
• The reservepagegap value for a table is stored in sysindexes, and is applied
when the locking scheme for a table is changed from allpages locking to
data-only locking or vice versa. To change the stored value, use the system
procedure sp_chgattribute before running alter table.
• reservepagegap specified with the clustered keyword on an
allpages-locked table overwrites any value previously specified with
create table or alter table.
Partitioning tables for improved performance
• You can partition an unpartitioned table or repartition an already
partitioned table using the partition by clause. This task requires a data
copy; all data rows are redistributed according to the specified partition
criteria. The task may run in parallel if Adaptive Server is configured for
parallel processing. You must set the select into/bulkcopy/pllsort option to
true. If the table has indexes, you must drop the indexes before you can:
• Change an unpartitioned table into a semantic-partitioned table
• Change the partitioning strategy
• Change the partitioning key – You need not drop indexes to change
other attributes of the partitions, such as number of partitions,
partition bounds, or partition location; the indexes are built
automatically. See create table on page 135 for more information on
partition key and bound restrictions.
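As an illustrative sketch (database, table, column, and partition names are assumed), a table might be range-partitioned by quarter:

    sp_dboption mydb, "select into/bulkcopy/pllsort", true
    go
    alter table sales
    partition by range (date_sold)
    (q1 values <= ("3/31/2005"),
     q2 values <= ("6/30/2005"),
     q3 values <= ("9/30/2005"),
     q4 values <= ("12/31/2005"))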
• You can use the add partition clause to add partitions to list- or
range-partitioned tables, but not to hash or round-robin partitioned tables.
• The partition number_of_partition and unpartition clauses are provided for
compatibility with versions of Adaptive Server earlier than 15.0. These
commands do not require data movement. You can use the partition
number_of_partition clause only on unpartitioned tables, to add
(number_of_partition - 1) empty round-robin partitions; existing data is
placed in the first partition, and subsequent data is distributed among all
partitions. You can use the unpartition clause only on
round-robin-partitioned tables without indexes.
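For example (mytable is an assumed name):

    alter table mytable partition 4
    /* adds 3 empty round-robin partitions; existing rows
       remain in the first partition */

    alter table mytable unpartition
    /* allowed only on a round-robin-partitioned table
       that has no indexes */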
• You cannot partition remote proxy tables.
• You cannot partition system tables.
• You cannot issue partition-related alter table commands within a
user-defined transaction.
• You cannot drop a column that is part of a partitioning key.
• Alter key columns with care. In some cases, modifying the datatype of a
key column might redistribute data among partitions.
Using computed columns
• When you add a new computed column without specifying nullability and
the materialization property, the default option is nullable and not
materialized.
• When you add a new materialized computed column, the
computed_column_expression is evaluated for each existing row in the
table, and the result is stored in the table.
• You cannot add new computed columns and add or modify their base
columns at the same time.
• You can modify the entire definition of an existing computed column. This
is a quick way to drop the computed column and add a new one with the
same name. Such a column behaves like a new computed column: if you do
not specify these options, it defaults to nullable and not materialized.
• You can modify the materialization property of an existing computed
column without changing its other properties, such as the expression that
defines it or its nullability.
• When you modify a not-null, materialized computed column into a virtual
column, you must specify “null” in the modify clause.
• When you modify a computed column that is not materialized, to
materialize it, the computed_column_expression is evaluated for each
existing row in the table, and the result is stored in the table.
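For example, this illustrative sequence (orders, qty, and price are assumed names) adds a virtual computed column and later materializes it:

    alter table orders
    add total_price as qty * price
    /* defaults to nullable and not materialized */

    alter table orders
    modify total_price materialized
    /* the expression is evaluated for each existing row
       and the result is stored in the table */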
• If you modify existing columns that are index keys, the index is rebuilt.
• You cannot modify a materialized computed column into a virtual column
if it has been used as an index key; you must first drop the index.
• You cannot modify a regular column to become a computed column, or a
computed column to become a regular column.
• You cannot modify or drop the base column referenced by a computed
column.
• You cannot drop a computed column if it is used as an index key.
Adding IDENTITY columns
• When adding a numeric IDENTITY column to a table, make sure the
column precision is large enough to accommodate the number of existing
rows. If the number of rows exceeds 10^precision - 1, Adaptive Server prints
an error message and does not add the column.
• When adding an IDENTITY column to a table, Adaptive Server:
• Locks the table until all the IDENTITY column values have been
generated. If a table contains a large number of rows, this process may
be time-consuming.
• Assigns each existing row a unique, sequential IDENTITY column
value, beginning with the value 1.
• Logs each insert operation into the table. Use dump transaction to clear
the database’s transaction log before adding an IDENTITY column to
a table with a large number of rows.
• Each time you insert a row into the table, Adaptive Server generates an
IDENTITY column value that is one higher than the last value. This value
takes precedence over any defaults declared for the column in the alter
table statement or bound to it with sp_bindefault.
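For example (database and table names are assumed; a precision of 10 accommodates up to 10^10 - 1 rows):

    dump transaction mydb with truncate_only
    go
    alter table mytable
    add row_id numeric(10,0) identity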
Altering table schema
• The add, drop, modify, and lock sub-clauses are useful for changing an
existing table's schema. A single statement can contain any number of
these sub-clauses, in any order, as long as the same column name is not
referenced more than once in the statement.
• To ensure that triggers fire properly, you must drop and re-create all
triggers on an altered table after you perform an add, drop, modify, or lock
operation.
• If stored procedures using select * reference a table that has been altered,
no new columns appear in the result set, even if you use the with recompile
option. You must drop the procedure and re-create it to include these new
columns.
• Adaptive Server issues an error message if you use alter table to add a
NOT NULL column without specifying a default clause.
• You cannot drop all the columns in a table. Also, you cannot drop the last
remaining column from a table (for example, if you drop four columns
from a five-column table, you cannot then drop the remaining column). To
remove a table from the database, use drop table.
• Data copy is required:
• To drop a column
• To add a NOT NULL column
• For most alter table ... modify commands
Use set noexec on and showplan on options to determine if a data copy is
required for a particular alter table command.
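For example, to check whether a particular modify would require a data copy without actually running it (titles and price are assumed names):

    set noexec on
    set showplan on
    go
    alter table titles modify price numeric(10,2)
    go
    set noexec off
    go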
• You can specify a change in the locking scheme for the modified table with
other alter table commands (add, drop, or modify) when the other alter table
command requires a data copy.
• If alter table performs a data copy, select into/bulkcopy/pllsort must be
turned on in the database that includes the table whose schema you are
changing.
• The modified table retains the existing space management properties
(max_rows_per_page, fillfactor, and so on) and indexes of the table.
• alter table that requires a data copy does not fire any triggers.
• You can use alter table to change the schema of remote proxy tables created
and maintained by Component Integration Services (CIS).
• You cannot perform a data copy and add a table level or referential
integrity constraint in the same statement.
• You cannot perform a data copy and create a clustered index in the same
statement.
• If you add a NOT NULL column, you must also specify a default clause.
This rule has one exception: if you add a user-defined type column, and
the type has a default bound to it, you do not need to specify a default
clause.
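For example (table and column names are illustrative):

    alter table titles
    add num_copies int default 0 not null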
• You can always add, drop, or modify a column in an allpages-locked
table. However, there are restrictions for adding, dropping, or modifying
a column in a data-only-locked table, which are described in the following
table:
Type of index   All-pages locked,   All-pages locked,     Data-only locked,   Data-only locked,
                partitioned table   unpartitioned table   partitioned table   unpartitioned table
Clustered       Yes                 Yes                   No                  Yes
Nonclustered    Yes                 Yes                   Yes                 Yes
If you need to add, drop, or modify a column in a data-only-locked,
partitioned table with a clustered index, you can:
a Drop the clustered index.
b Alter the (data-only locked) table.
c Re-create the clustered index.
• You cannot add a NOT NULL Java object as a column. All Java columns
have a default value of NULL, and are stored as either varbinary strings
or as image datatypes.
• You cannot modify a partitioned table that contains a Java column if the
modification requires a data copy. Instead, first unpartition the table, run
the alter table command, then repartition the table.
• You cannot drop the key column from an index or a referential integrity
constraint. To drop a key column, first drop the index or referential
integrity constraint, then drop the key column.
• You can drop columns that have defaults or rules bound to them. Any
column-specific defaults are also dropped when you drop the column. You
cannot drop columns that have check constraints or referential constraints
bound to them. Instead, first drop the check constraint or referential
constraint, then drop the column. Use sp_helpconstraint to identify any
constraints on a table, and use sp_depends to identify any column-level
dependencies.
• You cannot drop a column from a system table. Also, you cannot drop
columns from user tables that are created and used by Sybase-provided
tools and stored procedures.
• You can generally modify the datatype of an existing column to any other
datatype if the table is empty. If the table is not empty, you can modify the
column to any datatype to which the original datatype is explicitly
convertible.
• You can:
• Add a new IDENTITY column.
• Drop an existing IDENTITY column.
• Modify the size of an existing IDENTITY column.
• Altering the schema of a table increments the schema count, causing
existing stored procedures that access this table to be renormalized the
next time they are executed. Changes in datatype-dependent stored
procedures or views may fail with datatype normalization type errors. You
must update these dependent objects so they refer to the modified schema
of the table.
Restrictions for modifying a table schema
• You cannot run alter table from inside a transaction.
• Altering a table’s schema can invalidate backups that you made using bcp.
These backups may use a table schema that is no longer compatible with
the table’s current schema.
• You can add NOT NULL columns with check constraints; however,
Adaptive Server does not validate the constraint against existing data.
• You cannot change the locking scheme of a table using the alter table . . .
add, drop, or modify commands if the table has a clustered index and the
operation requires a data copy. Instead, you can:
a Drop the clustered index.
b Alter the table’s schema.
c Re-create the clustered index.
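The workaround, sketched with assumed table and index names:

    drop index titles.title_clust
    go
    alter table titles
    drop notes
    lock datarows
    go
    create clustered index title_clust on titles (title_id)
    go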
• You cannot alter a table’s schema if there are any active open cursors on
the table.
Restrictions for modifying text and image columns
• You can only add text or image columns that accept null values.
To add a text or image column so it contains only non-null values, first add
a column that only accepts null values and then update it to the non-null
values.
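For example, to end up with non-null values in a new text column (authors and bio are assumed names):

    alter table authors
    add bio text null
    go
    update authors set bio = "no biography on file"
    go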
• You can only modify a column from text datatype to the following
datatypes:
• [n]char
• [n]varchar
• unichar
• univarchar
• nchar
• nvarchar
• You can only modify a column from image datatype to a varbinary
datatype, and the column can only include non-null data.
• You can modify text or image columns to any other datatypes only if the
table is empty.
• You cannot add a new text or image column and then drop an existing text
or image column in the same statement.
• You cannot modify a column to either text or image datatype.
Modifying tables with unitext columns
The following restrictions apply when you use alter table to modify unitext
columns:
• You can add a new unitext column that accepts NULL values.
• You can modify a column from unitext only to the following datatypes:
• [n]char
• [n]varchar
• unichar
• univarchar
• binary
• varbinary
• You cannot modify a column to the unitext datatype.
• You cannot add a unitext column and drop an existing unitext column in
the same statement.
Changing locking schemes
• alter table supports changing from any locking scheme to any other locking
scheme. You can change:
• From allpages to datapages or vice versa
• From allpages to datarows or vice versa
• From datapages to datarows or vice versa
• Before you change from allpages locking to a data-only locking scheme,
or vice versa, use sp_dboption to set the database option
select into/bulkcopy/pllsort to true, then run checkpoint in the database if any
of the tables are partitioned and the sorts for the indexes require a parallel
sort.
• After changing the locking scheme from allpages-locking to data-only
locking or vice versa, the use of the dump transaction command to back up
the transaction log is prohibited; you must first perform a full database
dump.
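A typical sequence for switching a table to datarows locking, with assumed database and table names:

    sp_dboption mydb, "select into/bulkcopy/pllsort", true
    go
    checkpoint
    go
    alter table titles lock datarows
    go
    dump database mydb
    go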
• When you use alter table...lock to change the locking scheme for a table
from allpages locking to data-only locking or vice versa, Adaptive Server
makes a copy of the table’s data pages. There must be enough room on the
segment where the table resides for a complete copy of the data pages.
There must be space on the segment where the indexes reside to rebuild
the indexes.
Clustered indexes for data-only-locked tables have a leaf level above the
data pages. If you are altering a table with a clustered index from
allpages-locking to a data-only-locking, the resulting clustered index
requires more space. The additional space required depends on the size of
the index keys.
Use sp_spaceused to determine how much space is currently occupied by
the table, and use sp_helpsegment to see the space available to store the
table.
• When you change the locking scheme for a table from allpages locking to
datapages locking or vice versa, the space management properties are
applied to the tables, as the data rows are copied, and to the indexes, as
they are re-created. When you change from one data-only locking scheme
to another, the data pages are not copied, and the space management
properties are not applied.
• If a table is partitioned, changing the locking scheme performs a
partition-to-partition copy of the rows. It does not balance the data on the
partitions during the copy.
• When you change the locking scheme for a table, the alter table...lock
command acquires an exclusive lock on the table until the command
completes.
• When you use alter table...lock to change from datapages locking to
datarows locking, the command does not copy data pages or rebuild
indexes. It only updates system tables.
• Changing the locking scheme while other users are active on the system
may have the following effects on user activity:
• Query plans in the procedure cache that access the table will be
recompiled the next time they are run.
• Active multi-statement procedures that use the table are recompiled
before continuing with the next step.
• Ad hoc batch transactions that use the table are terminated.
Warning! Changing the locking scheme for a table while a bulk copy
operation is active can cause table corruption. Bulk copy operates by
first obtaining information about the table; it does not hold a lock
between the time it reads the table information and the time it starts
sending rows, leaving a small window in which an alter table...lock
command can start.
Adding Java-SQL columns
• If Java is enabled in the database, you can add Java-SQL columns to a
table.
• The declared class (datatype) of the new Java-SQL column must
implement either the Serializable or Externalizable interface.
• When you add a Java-SQL column to a table, the Java-SQL column cannot
be specified:
• As a foreign key
• In a references clause
• As having the UNIQUE property
• As the primary key
• If in row is specified, then the value stored cannot exceed 16K bytes,
depending on the page size of the data server.
• If off row is specified, then:
• The column cannot be referenced in a check constraint.
• The column cannot be referenced in a select that specifies distinct.
• The column cannot be specified in a comparison operator, in a
predicate, or in a group by clause.
Standards ANSI SQL – Compliance level: Transact-SQL extension.
Permissions alter table permission defaults to the table owner; it cannot be transferred
except to the Database Owner, who can impersonate the table owner by
running the setuser command. A System Administrator can also alter user
tables.
Auditing Values in event and extrainfo columns are:
Event   Audit option   Command or access audited   Information in extrainfo
3       alter          alter table                 • Roles – current active roles
                                                   • Keywords or options – add column, drop column,
                                                     modify column, add constraint, or drop constraint
                                                   • Previous value – NULL
                                                   • Current value – NULL
                                                   • Other information – NULL
                                                   • Proxy information – original login name, if a
                                                     set proxy is in effect