TE0-123 Teradata 12 Physical Design & Implementation Exam Set 3

An existing personnel application is utilizing 300 GB spread out over five base tables. The OURGUYS table is 50 GB containing employee data. The LEARNING table is 150 GB containing the training records for all employees. The STAFFING table is 50 GB containing data on prospective new employees. The THEIRGUYS table is 40 GB containing data on contractors. The HIERARCHY table is 10 GB containing both financial and organization hierarchies. The training department is launching a new course that all employees will need to complete this year which will increase the training records by 10%. The database administrator is planning to compress the three employee tables and estimates that 40% compression will be achieved. What is the closest space estimate for the personnel application after the changes?


Options are :

  • 215 GB
  • 210 GB
  • 180 GB
  • 195 GB

Answer : 215 GB

An insurance company is developing a web-based application to provide member access to claims history. There is no Service Level Agreement (SLA) for the initial launch of the application. The data model contains claims history and member identification tables which will be joined to a set of security tables to support a set of canned queries. The claims history table is loaded daily with MultiLoad while the other tables are refreshed monthly using FastLoad. The database architect is considering denormalizing all the application tables into a single table to support the queries. The ETL architect is willing to make any changes necessary to support the new application. Which statement is true concerning denormalizing the tables into a single table?


Options are :

  • Advanced indexing on the existing tables might support the queries without requiring a denormalized table.
  • Multiple indexes will need to be created on the denormalized table in order to support the queries.
  • Creating a denormalized table for web-access only is the most optimal data integration option.
  • The creation and maintenance of the denormalized table will require a change in load strategy in order to support the queries.

Answer : Multiple indexes will need to be created on the denormalized table in order to support the queries.

As part of a query tuning effort, which two sources of information will help determine whether statistics should be collected on additional columns? (Choose two.)


Options are :

  • the estimated execution frequency of the query
  • EXPLAIN
  • output from the HELP STATISTICS command
  • SQL of the query
  • the priority of the workload

Answer : the estimated execution frequency of the query; SQL of the query
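For reference, here is a hedged sketch of how the two sources named in the options are typically consulted; the table and column names are hypothetical.

  -- EXPLAIN exposes the Optimizer's row estimates and confidence levels,
  -- which hint at where statistics may be missing or stale.
  EXPLAIN
  SELECT o.order_id, c.cust_name
  FROM   orders o
  JOIN   customer c ON c.cust_id = o.cust_id
  WHERE  o.order_date > DATE '2012-01-01';

  -- HELP STATISTICS lists the columns that already have collected statistics
  -- and when they were last refreshed.
  HELP STATISTICS orders;

  -- If the review suggests a gap, statistics can be collected on the extra column:
  COLLECT STATISTICS ON orders COLUMN (order_date);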


Given the following incomplete partitioning expression: PARTITION BY (RANGE_N ... How would an administrator complete this expression for an MLPPI table that needs to be partitioned into 10 store locations within weeks?


Options are :

  • (sale_date BETWEEN DATE '2010-01-01' AND DATE '2010-12-31' EACH INTERVAL '1' WEEK), RANGE_N (store_location BETWEEN 1 AND 10 EACH 1))
  • (store_location BETWEEN 1 AND 10 EACH 1), RANGE_N (sale_date BETWEEN DATE '2010-01-01' AND DATE '2010-12-31' EACH INTERVAL '1' WEEK))
  • (sale_date BETWEEN DATE '2010-01-01' AND DATE '2010-12-31' EACH INTERVAL '7' DAY), RANGE_N (store_location BETWEEN 1 AND 10 EACH 1))
  • (store_location BETWEEN 1 AND 10 EACH 1), RANGE_N (sale_date BETWEEN DATE '2010-01-01' AND DATE '2010-12-31' EACH INTERVAL '7' DAY))

Answer : (sale_date BETWEEN DATE '2010-01-01' AND DATE '2010-12-31' EACH INTERVAL '1' WEEK), RANGE_N (store_location BETWEEN 1 AND 10 EACH 1))
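As an illustration only (table and column names other than those in the question are hypothetical), the completed expression might appear in a CREATE TABLE such as this, with weeks at the higher partitioning level and the 10 store locations nested within each week:

  CREATE TABLE sales_txn
  (
    store_location INTEGER NOT NULL,
    sale_date      DATE    NOT NULL,
    item_id        INTEGER,
    sale_amt       DECIMAL(10,2)
  )
  PRIMARY INDEX (store_location, item_id)
  PARTITION BY (
    RANGE_N (sale_date BETWEEN DATE '2010-01-01' AND DATE '2010-12-31'
             EACH INTERVAL '1' WEEK),
    RANGE_N (store_location BETWEEN 1 AND 10 EACH 1)
  );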

Which two column level attributes affect table sizing estimates? (Choose two.)


Options are :

  • journaling
  • fallback
  • Unicode
  • compression

Answer : fallback; compression
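A minimal sketch (hypothetical table) showing how the two answer attributes appear in DDL; FALLBACK roughly doubles the perm space a table consumes, while multi-value compression removes the listed values from the rows they occur in:

  CREATE TABLE party_dim ,FALLBACK        -- protection copy roughly doubles perm usage
  (
    party_id   INTEGER NOT NULL,
    country_cd CHAR(2) COMPRESS ('US','CA','GB')   -- listed values are compressed out of the row
  )
  PRIMARY INDEX (party_id);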

A database conversion needs to modify most of the rows in a 4 billion row transaction table. The conversion also needs to modify the code tables referenced by the transaction table to implement a revised set of codes. The job contains a set of SQL requests all of which can operate independently and be run multiple times without negative impact to the data. The available space for before images does not permit the transaction table to be updated using a single transaction. Which approach and session mode should be used? (Choose two.)


Options are :

  • The job should use ANSI session mode to reduce before image journal space.
  • The updates to the transaction table need to include a COMMIT statement after each request.
  • The transaction table needs to be updated with multiple SQL requests each processing a percentage of the table.
  • The job can use either ANSI or Teradata session mode as session mode does not impact before image journal space.
  • The job should use Teradata session mode to reduce before image journal space.

Answer : The job can use either ANSI or Teradata session mode as session mode does not impact before image journal space. The job should use Teradata session mode to reduce before image journal space.


Which three statements are true about implementing Referential Integrity? (Choose three.)


Options are :

  • When enforced, a row cannot exist in a table with a value for a referenced column if no equal value exists in its referencing column.
  • The keyword REFERENCES describes the table and column set in which the Primary Key image of the Foreign Key resides.
  • Foreign Keys are used to maintain Referential Integrity among related tables.
  • When enforced, a row cannot be deleted from the referencing table as long as there is a row in the referenced table whose Foreign Key matches it.
  • Referential Integrity constraints are frequently used to optimize join plans.

Answer : The keyword REFERENCES describes the table and column set in which the Primary Key image of the Foreign Key resides. When enforced, a row cannot be deleted from the referencing table as long as there is a row in the referenced table whose Foreign Key matches it. Referential Integrity constraints are frequently used to optimize join plans.

A user submits a series of multi-statement requests against multiple tables in a database. Some requests are performing updates to a large percentage of the table rows, while other requests are performing inserts. Which two statements are true regarding locking for the multi-statement request? (Choose two.)


Options are :

  • Locks are released automatically as part of normal transaction processing.
  • The most exclusive locks are held for the entire request.
  • Statements are performed in order, with knowledge of dependencies, so lock conflicts do not occur.
  • Locks must be explicitly released when transaction processing completes.

Answer : Statements are performed in order, with knowledge of dependencies, so lock conflicts do not occur. Locks must be explicitly released when transaction processing completes.

An existing credit card application is being migrated in the data warehouse from another database platform. The TRANSACTION table in the existing application has 20 dollar-amount columns defined with large numeric data types. Analysis of the 20 columns shows that the largest amount is 6 billion (6 × 10**9). Which data type should be selected for the amount columns to ensure that these amounts will migrate successfully without truncation but require the least storage space?


Options are :

  • DECIMAL(38,2)
  • DECIMAL(10,2)
  • NUMERIC(8,2)
  • NUMERIC(18,2)

Answer : DECIMAL(10,2)

A large transaction table containing five years of data is being implemented, including the following columns: Location_Id, Item_Id, and Date_Id. The following data demographics are available:

  • Location_Id has 500 distinct values
  • Item_Id has 75,000 distinct values
  • Date_Id typically has 10,000,000 rows/value

What is the optimal implementation of a PPI on a single table join index if a majority of the join access occurs on Item_Id and Date_Id?


Options are :

  • Implement PPI on Date_Id.
  • Implement PPI on Item_Id.
  • Implement MLPPI on Item_Id (highest) and Date_Id (lowest).
  • Implement MLPPI on Date_Id (highest) and Item_Id (lowest).

Answer : Implement MLPPI on Item_Id (highest) and Date_Id (lowest).
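A rough sketch of the chosen ordering on a single-table join index, assuming hypothetical data types, ranges, and base table name, and a release that supports a partitioned join index; Item_Id forms the higher partitioning level and Date_Id the lower:

  CREATE JOIN INDEX txn_ji AS
    SELECT Item_Id, Date_Id, Location_Id, sale_amt
    FROM   transaction_fact
  PRIMARY INDEX (Item_Id)
  PARTITION BY (
    RANGE_N (Item_Id BETWEEN 1 AND 75000 EACH 2500),
    RANGE_N (Date_Id BETWEEN DATE '2009-01-01' AND DATE '2013-12-31'
             EACH INTERVAL '1' MONTH)
  );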

A source system supplies data files where columns are updated, deleted, or recycled on a periodic basis, may be incomplete, and are supposed to uniquely identify the row. Which two approaches should be implemented as the primary index to reduce data volatility during loads? (Choose two.)


Options are :

  • Composite Natural Key
  • Surrogate Key
  • Identity Column
  • Natural Key

Answer : Composite Natural Key; Surrogate Key


Which two conditions would most likely reduce the value of a multi-column statistic? (Choose two.)


Options are :

  • The columns within the multi-column statistic are used only for joining, not selection.
  • The first column listed in the table definition has a data type of CHAR(16).
  • Single-column statistics have already been collected on the participating columns.
  • The total length of the concatenated values in a multi-column statistic is 56 bytes.

Answer : Single-column statistics have already been collected on the participating columns. The total length of the concatenated values in a multi-column statistic is 56 bytes.

After creating a partitioned table to improve data access, it is decided to enforce Referential Integrity by adding a Primary Key constraint to the table. Which two load approaches could be used on this table? (Choose two.)


Options are :

  • FastLoad
  • TPump
  • INSERT SELECT
  • MultiLoad

Answer : TPump; MultiLoad

Analysis of a set of queries against a large table with 150 columns has identified that only five of the columns have a high access frequency. One of these columns is an INTEGER and is used to qualify rows using a range condition. Which index(es) could vertically partition the five columns to improve performance to the greatest degree?


Options are :

  • Create multiple single-column NUSIs to cover the query.
  • Create a covering value ordered NUSI.
  • Create a covering hash ordered NUSI.
  • Create a covering composite NUSI.

Answer : Create a covering hash ordered NUSI.


Which two statements are true about a CREATE TABLE statement that defines a UPI and a Primary Key? (Choose two.)


Options are :

  • One may use value compression while the other may not.
  • One may reference multiple columns while the other may not.
  • One of them is physical while the other is logical.
  • One implies an access path while the other does not.
  • Only one of them may allow a null.

Answer : One may use value compression while the other may not. One may reference multiple columns while the other may not.

In what order are the columns concatenated for multi-column statistics?


Options are :

  • column size order (smallest to largest)
  • alphabetical order of the columns
  • field ID order
  • order of columns in the COLLECT STATISTICS statement

Answer : field ID order
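A short sketch (hypothetical table) of what this means in practice: however the columns are listed in the COLLECT STATISTICS statement, the values are concatenated in field ID order, which follows the order the columns were defined in the table.

  CREATE TABLE sales_fact
  (
    sale_date DATE,        -- defined first, so it has the lower field ID
    store_id  INTEGER,
    sale_amt  DECIMAL(10,2)
  )
  PRIMARY INDEX (store_id);

  -- Even though the statement lists (store_id, sale_date), the multi-column
  -- statistic concatenates the values in field ID order: sale_date, then store_id.
  COLLECT STATISTICS ON sales_fact COLUMN (store_id, sale_date);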


Analysis of a query finds that a large amount of I/O is being used to redistribute and sort approximately 543 million rows in preparation for joining two large tables together. Which index could cover this query by avoiding the redistribution, sorting, and join?


Options are :

  • global join index
  • sparse join index
  • single table join index
  • multi-table join index

Answer : single table join index
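A minimal sketch (hypothetical table and column names) of the idea behind this answer: a single-table join index that carries only the columns the query needs, hashed on the join column, so the rows are already distributed on the join key and the redistribution and sort can drop out of the plan.

  CREATE JOIN INDEX big_txn_ji AS
    SELECT acct_nbr, txn_dt, txn_amt
    FROM   big_transaction
  PRIMARY INDEX (acct_nbr);   -- hashed on the column used to join to the other large table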

A new application is being developed that will have two large base tables, TRANSACTION_HEADER and TRANSACTION_DETAIL, which will need to be joined together frequently to provide transaction reporting to the end users. The TRANSACTION_HEADER table has a UPI of TRANS_ID and the TRANSACTION_DETAIL table has a UPI of (TRANS_ID, TRANS_DT, LINE_NUMBER). It is common for the TRANSACTION_DETAIL table to have a transaction with more than 1000 lines of transaction details. During initial user application testing, excessive query times were reported. There is concern about skewed queries running on the database. Which two changes might be made to improve query performance? (Choose two.)


Options are :

  • Change the TRANSACTION_DETAIL table to a PPI table with a primary index on (TRANS_ID, TRANS_DT) and a partitioning expression using TRANS_DT and implement a secondary index for TRANS_ID.
  • Implement a single table join index on the TRANSACTION_DETAIL table with a NUPI on TRANS_ID.
  • Implement a secondary index for TRANS_ID on the TRANSACTION_DETAIL table to minimize query skew.
  • Change the primary index of the TRANSACTION_DETAIL table to a NUPI on TRANS_ID.

Answer : Implement a secondary index for TRANS_ID on the TRANSACTION_DETAIL table to minimize query skew. Change the primary index of the TRANSACTION_DETAIL table to a NUPI on TRANS_ID.
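A hedged sketch of the second change, with hypothetical column types; a NUPI on TRANS_ID co-locates every detail row with its header row (TRANSACTION_HEADER has a UPI on TRANS_ID), making the frequent header/detail join AMP-local at the cost of 1000+ detail rows sharing a primary index value.

  CREATE TABLE transaction_detail_new
  (
    trans_id    INTEGER NOT NULL,
    trans_dt    DATE    NOT NULL,
    line_number INTEGER NOT NULL,
    line_amt    DECIMAL(18,2)
  )
  PRIMARY INDEX (trans_id);      -- NUPI replacing the original UPI (TRANS_ID, TRANS_DT, LINE_NUMBER)

  INSERT INTO transaction_detail_new
  SELECT * FROM transaction_detail;   -- assumes the column order matches the original table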

A user is looking at an Explain plan and notices the Optimizer is going to duplicate a large table on all AMPs instead of performing a product join. What is the most likely reason the Optimizer chose this plan?


Options are :

  • A hash join will be used in the current step instead of a product join.
  • The plan will save CPU by duplicating large tables onto all AMPs.
  • The statistics might be stale.
  • The join column is not indexed on either table.

Answer : A hash join will be used in the current step instead of a product join.


A large transaction table for a global implementation is being designed where the access demographics show that a majority of access is based on filtering of the Region_Cd and Transaction_Dt. The data demographics reveal the following:

  • Region_Cd has 10 distinct values
  • Transaction_Dt has 100,000 typical rows/value

What are two goals in determining the order (highest to lowest) of the partition columns? (Choose two.)


Options are :

  • Minimize contiguous internal partitions to scan and skip at the higher level.
  • Maximize contiguous internal partitions to scan and skip at the higher level.
  • Maximize repositioning to the next internal partition to be read at the lower level.
  • Minimize repositioning to the next internal partition to be read at the lower level.

Answer : Minimize contiguous internal partitions to scan and skip at the higher level. Maximize repositioning to the next internal partition to be read at the lower level.

Random AMP sampling provides demographic information about which two? (Choose two.)


Options are :

  • number of distinct values in a NUPI
  • skew in a table's distribution of rows
  • table row counts
  • number of distinct values in each NUSI

Answer : number of distinct values in a NUPI; number of distinct values in each NUSI

Which situation is most likely to be a good candidate to consider for a multi-column statistic?


Options are :

  • multiple columns participating in a NUSI
  • multi-level partitioning columns
  • multi-column USI
  • multi-column non-equality join columns

Answer : multi-level partitioning columns


Given the following steps:

  1. Equality join between a small table and a large table on non-primary index columns
  2. Large table is not placed into spool
  3. Small table is read into a single partition
  4. Return rows that have matching row hash and data values

Which type of join is being performed?


Options are :

  • Dynamic Hash Join
  • Nested Join
  • Sliding Window Merge Join
  • Inclusion Merge Join

Answer : Sliding Window Merge Join

Which two are limitations of a UPI? (Choose two.)


Options are :

  • It may not be defined in a single table join index.
  • It may not be compressed.
  • It may not be defined on a Volatile Table.
  • It may contain no more than 32 columns.

Answer : It may not be defined in a single table join index. It may not be defined on a Volatile Table.

Which statement is true when determining if a VARCHAR column should be changed to CHAR to reduce perm space usage?


Options are :

  • Changing the column to CHAR is more efficient when the difference between maximum and average field length is high and compressibility is high.
  • Leaving the column as VARCHAR is more efficient when the difference between maximum and average field length is low and compressibility is low.
  • Changing the column to CHAR is more efficient when the difference between maximum and average field length is low and compressibility is low.
  • Leaving the column as VARCHAR is more efficient when the difference between maximum and average field length is high and compressibility is low.

Answer : Leaving the column as VARCHAR is more efficient when the difference between maximum and average field length is low and compressibility is low.


Given the following EXPLAIN text: We do an all-AMPs JOIN step from Spool 4 (Last Use) by way of an all-rows scan, which is joined to Spool 5 (Last Use) by way of an all-rows scan. Spool 4 and Spool 5 are joined using a ________ join, with a join condition of ("(1=1)"). Which join method is represented in this EXPLAIN text?


Options are :

  • Hash
  • Product
  • Partition
  • Nested
  • Merge

Answer : Hash

For performance reasons, what will Teradata use to hash distribute the reference index subtable when Referential Integrity is defined?


Options are :

  • the Primary Key value of the referencing table
  • the Foreign Key value of the referenced table
  • the Primary Key value of the referenced table
  • the Foreign Key value of the referencing table

Answer : the Primary Key value of the referencing table

What will be the resulting number of characters when casting INTEGER to CHAR?


Options are :

  • 11
  • 10
  • 4
  • 8

Answer : 4


An address field contains multiple lines. Within the field, a null character '00'XB defines the end of a line. Which data type will minimize storage for this data?


Options are :

  • CHAR
  • BYTE
  • VARBYTE
  • VARCHAR

Answer : BYTE

A consumer risk application uses three base tables:

  • customer data stored in PARTY with a primary index of PARTY_ID
  • account data stored in ACCT with a primary index of ACCT_ID
  • a PARTY_TO_ACCT associative table, which contains the relationship between customers and accounts, with a primary index of ACCT_ID

A database administrator is considering a single table join index on the associative table to improve query processing when the three tables are joined together. Which type of indexing strategy should be employed on the single table join index to gain the performance improvement?


Options are :

  • a NUPI on ACCT_ID to enable the join to the ACCT table
  • a NUSI on PARTY_ID to enable the join to the PARTY table
  • a NUPI on PARTY_ID to enable the join to the PARTY table
  • a NUSI on PARTY_ID to enable the join to the ACCT table
  • a NUPI on (PARTY_ID, ACCT_ID) for better data distribution and a NUSI on PARTY_ID to enable the join to the PARTY table

Answer : a NUSI on PARTY_ID to enable the join to the ACCT table
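For reference, a generic sketch of a single-table join index on the associative table (the index names are hypothetical; the base table and columns come from the question). The PRIMARY INDEX clause and a follow-up CREATE INDEX are where whichever distribution and secondary-index strategy is chosen above would be expressed.

  CREATE JOIN INDEX party_to_acct_ji AS
    SELECT PARTY_ID, ACCT_ID
    FROM   PARTY_TO_ACCT
  PRIMARY INDEX (ACCT_ID);

  -- A NUSI can then be defined on the join index itself:
  CREATE INDEX (PARTY_ID) ON party_to_acct_ji;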

Which scenario makes the strongest case for creating a single table join index?


Options are :

  • Processing time and row redistribution effort can be saved by using a join index when a self-join of a large table is frequently required.
  • To avoid the effort of joining tables at the time the query runs, a join index can perform the join once, materializing the result as though it were a single table.
  • Join queries against a large base table can avoid costly redistributions by replicating some or all of the large table's columns to a join index.
  • An application needs a persistent image of a table that will be representative of how that table looked at a given point in time.

Answer : Processing time and row redistribution effort can be saved by using a join index when a self-join of a large table is frequently required.


Which two tasks conserve disk space? (Choose two.)


Options are :

  • Run PACKDISK.
  • Remove fallback from non-critical tables.
  • Add Permanent Journals to non-critical tables.
  • Run Filer.

Answer : Run PACKDISK. Run Filer.

Given the following EXPLAIN text: We do an all-AMPs JOIN step from Spool 3 (Last Use) by way of an all-rows scan, which is joined to Spool 4 (Last Use) by way of an all-rows scan. Spool 3 and Spool 4 are joined using a single partition _________ join, with a join condition of ("Job_Code = Job_Code"). Which join method is represented in this EXPLAIN text?


Options are :

  • Hash
  • Nested
  • Merge
  • Product

Answer : Product

From which two tables should data be deleted on a periodic basis to conserve disk space? (Choose two.)


Options are :

  • DBC.ResUsageScpu
  • DBC.AccessLog
  • DBCmngr.LogSpool
  • DBC.AccessRight
  • DBCmngr.LogWDSummary

Answer : DBCmngr.LogSpool; DBC.AccessRight


Which two statements are true when defining an identity column during table creation? (Choose two.)


Options are :

  • GENERATED ALWAYS ensures that identity column values are always system-generated.
  • GENERATED BY DEFAULT means identity column values can be system-generated or user-inserted, depending on the circumstance.
  • An error is returned if a MAXVALUE with NO CYCLE is specified.
  • Identity column values can be manually assigned when using GENERATED ALWAYS AS IDENTITY.

Answer : GENERATED ALWAYS ensures that identity column values are always system-generated. An error is returned if a MAXVALUE with NO CYCLE is specified.
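A small hedged sketch (hypothetical table) showing the two GENERATED variants the options refer to:

  -- GENERATED ALWAYS: values are always system-generated; user-supplied values are rejected.
  CREATE TABLE claim_stage
  (
    claim_key INTEGER GENERATED ALWAYS AS IDENTITY
              (START WITH 1 INCREMENT BY 1),
    claim_no  VARCHAR(20)
  )
  PRIMARY INDEX (claim_key);

  -- GENERATED BY DEFAULT: the system supplies a value only when the INSERT does not.
  CREATE TABLE claim_stage_alt
  (
    claim_key INTEGER GENERATED BY DEFAULT AS IDENTITY
              (START WITH 1 INCREMENT BY 1),
    claim_no  VARCHAR(20)
  )
  PRIMARY INDEX (claim_key);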

What are two reasons denormalization may not be used? (Choose two.)


Options are :

  • It is not suitable with mini-batch or frequent loading.
  • Third party tools may not support denormalized data.
  • It may increase hardware costs.
  • It is less flexible in the face of changing requirements.
  • Maintaining views with derived column definitions is complex.

Answer : Third party tools may not support denormalized data. Maintaining views with derived column definitions is complex.


An administrator has been asked to add a new column to the largest table in the database system. The table contains 50 billion rows and 10 NUSIs. This change needs to be made as quickly as possible. The system has plenty of free space available. Which approach is least invasive and maximizes data availability?


Options are :

  • Create a new table that contains the new column, then INSERT SELECT the data into the new table, recreate the NUSIs, use TSET to copy statistics, drop the old table, then rename the new table to the name of the old table.
  • Leave the NUSIs in place, then alter the table to add the new column.
  • Drop the NUSIs, then alter the table to add the new column.
  • Create a new table that contains the new column, then INSERT SELECT the data into the new table, do an ALTER TABLE to add the new column, recreate the NUSIs, use TSET to copy statistics, drop the old table, then rename the new table to the name of the old table.

Answer : Create a new table that contains the new column, then INSERT SELECT the data into the new table, do an ALTER TABLE to add the new column, recreate the NUSIs, use TSET to copy statistics, drop the old table, then rename the new table to the name of the old table.

A very large table partitioned by day has the UNKNOWN and NO RANGE partitions defined. At the end of the month, the partitions for the days of the oldest month are dropped and partitions for the days of the next month are added. What happens to the data in the dropped partitions?


Options are :

  • The data is placed into the save table.
  • The data is placed in the UNKNOWN partition.
  • The data is placed into the permanent journal.
  • The data is placed in the NO RANGE partition.
  • The data is dropped.

Answer : The data is placed into the save table.
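For reference, a hedged sketch (hypothetical table, date ranges, and save-table name) of the month-end maintenance described; the WITH INSERT INTO clause names the table that receives rows displaced by the dropped ranges, and the exact behavior also depends on how the NO RANGE and UNKNOWN partitions are defined.

  ALTER TABLE daily_txn
    MODIFY PRIMARY INDEX
    DROP RANGE BETWEEN DATE '2011-01-01' AND DATE '2011-01-31'
         EACH INTERVAL '1' DAY
    ADD  RANGE BETWEEN DATE '2012-02-01' AND DATE '2012-02-29'
         EACH INTERVAL '1' DAY
    WITH INSERT INTO daily_txn_save;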

An EXPLAIN plan shows that the Optimizer has chosen to duplicate a large table when joining to a small table. The user cannot understand why the Optimizer has chosen to do this. Which is the most likely reason that the Optimizer duplicated the large table?


Options are :

  • so a hash join can be performed in a later step
  • to save I/O and CPU
  • because statistics are stale
  • to perform a product join

Answer : to perform a product join

