Avoiding Product Joins

How to eliminate Product Joins

What is a product join?

Product join is one of the implementation methods of an SQL JOIN operation.
Do not confuse it with the cross join (Cartesian product), which is one of the SQL join types.

SQL join types, e.g.: inner join, left outer join, full outer join, cross (Cartesian) join
Join implementation types, e.g.: nested join, merge join, hash join, product join.

A product join (of tables A and B) is the simplest join implementation method:

  • Produce all the <A;B> record combinations, i.e. take each record from A and match it with each record of B one by one.
  • Test the join condition on each produced <A;B> record pair, and eliminate those combinations where the condition fails.
The two steps are usually combined: the “testing phase” is executed right after a record combination is generated, and the invalid combinations are dropped immediately. This saves a lot of temp space.

Why don’t we like product joins?

Well, it has a really bad reputation: it is slow, it stuffs the CPU, etc.
Yes, it usually is and does. It is the brute force method of executing a join, with a cost in the order of N*M (where N and M are the record counts of the joined tables).

Still, there are situations when it is the best choice, or the only feasible way.

When is it good or necessary?

Please note that the product join is the method that is always applicable, regardless of the circumstances.

Good

The product join is typically a simple, dumb and slow algorithm – this is why we do not like it – but it has a very important advantage: it requires no pre-processing.* This is why we LIKE IT :)
If we have to join a really large table to a very small one (a couple of records), the product join is by far the most effective method, since sorting a very large table (order of N*logN) can cost a lot, while joining each record to 1-2 records is really not a big deal.

Necessary

There are join situations where the only way to go is the product join. Why? Because of the join condition. The “clever joins” (merge, hash) require some information and/or condition that enables cheating the full A x B comparison: reducing it to only those comparisons that are really necessary, done in a more effective manner.

* OK, in Teradata this only means that the matchable records of both tables must be on the same AMP. This implies duplicating the “small” table to all AMPs.

Merge join example


select ...
from A
join  B on A.customer_id = B.customer_id
       and A.trx_dt between B.eff_dt and B.exp_dt

  • The customer_id condition is AND-ed together with the others
  • customer_id is selective enough that hash(customer_id) can reduce the comparisons reasonably
  • Note that A and B must be sorted (re-sorted) by the hash of customer_id

Product join example


select ...
from A
join   B on substr(A.telephone_no,1,B.prefix_length) = B.telephone_no_prefix

  • There is no partial condition that could reduce the number of comparisons
  • Note that neither table is required to be sorted in any specific order.

Unavoidable product joins

  • Non-equality condition
  • Function used (e.g. substr())
  • Dependent expression is used (e.g. A.x+B.y = A.z)
  • Cross join: intentional Cartesian product

Avoidable product joins

Data type mismatch

The merge join example above works only if customer_id in tables A and B has the same “style” of data type, since their hash values will match only in this case. Say hash(13674) <> hash(‘13674’); however, integer is compatible with decimal, and char is compatible with varchar.
Pay attention to data type consistency during physical data modeling.
  • Use domains to eliminate the possibility of a mismatch
  • Align to the used data types when defining temp tables, or use “create table as …” statements
  • If you cannot avoid the mismatch, relocate the necessary data to temp tables with the proper data types during processing (see the sketch below).
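A minimal sketch of the “relocate to a temp table” workaround, assuming for illustration that customer_id is stored as VARCHAR in table B while it is INTEGER in table A (B_tmp and all other names are placeholders):

create volatile table B_tmp as
( select cast(customer_id as integer) as customer_id
       , eff_dt
       , exp_dt
  from B
) with data
primary index (customer_id)
on commit preserve rows;

select ...
from A
join  B_tmp on A.customer_id = B_tmp.customer_id
           and A.trx_dt between B_tmp.eff_dt and B_tmp.exp_dt;

Now both sides of the equality condition have the same data type, so the hash values can match and a merge join becomes possible.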

OR condition

Let’s assume the following join condition:

select ...
from A
join  B on A.col1 = B.Col1
        OR A.Col2 = B.Col2

This is equivalent, without a compulsory product join:

select ...
from A
join  B on A.col1 = B.Col1

UNION

select ...
from A
join  B on A.Col2 = B.Col2

Missing/stale statistics

As I mentioned before, the product join is the most effective join between a very large and a really small (couple of records) table. If the optimizer thinks that a table is pretty small, but it actually is not, it may choose a product join in all good faith, misled by stale or missing statistics.
Define and keep fresh those statistics that let the optimizer determine the size of the joinable record sets properly.
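A minimal sketch with placeholder names – collect (or refresh) the statistics on the joined columns of the small table, then check what statistics exist and how fresh they are:

collect statistics column (customer_id) on B;

help statistics B;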

How to find avoidable product joins

It is not trivial to list the avoidable product joins. Practically all product joins need to be examined one by one and judged to be avoidable or not – and if avoidable, what to do about them.

I strongly recommend using PRISE Tuning Assistant both for finding the product joins and for analyzing the possibility and necessity of their elimination:

  • List top consuming queries with product join(s)
  • Check the PROD JOIN steps: which tables are processed that way
  • Check those join conditions for the cases described above

What to do if it cannot be avoided?

In this case I recommend trying the decomposition technique, described here.
It can help reduce the number of comparisons, saving CPU and runtime.

 
Have a successful optimization!
 

The “No more spool space” Error Explained

Why do I get the “No more spool space” error?

This is the most familiar error message in the Teradata world:
“Failure 2646 No more spool space”
What does it really mean? What is it caused by?
Let’s get back to the basics.

What is spool space?

Spool space is a temporary area that is used to store partial results during query processing, as well as volatile tables. Technically, all free space in the database that is not allocated by PERM data can be used as spool area, since PERM data does not claim that area.

Each database user may have a “spool limit” that prevents the user from allocating more spool area at a time than that limit. Keep in mind that all active sessions of a username share the spool limit together.

Teradata is a massively parallel system, therefore the spool limit must be interpreted at AMP level.
E.g. on a 100-AMP system, a user with a 10G spool limit gets 100M of spool per AMP.
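If you are unsure what limit a given user has, the defined spool limit can be read from the data dictionary – a minimal sketch, assuming the standard dbc.DatabasesV view (<user> is a placeholder):

select DatabaseName,
       cast(SpoolSpace /1024/1024/1024 as decimal(15,3)) Spool_limit_GB,
       cast(SpoolSpace / (hashamp()+1) /1024/1024 as decimal(15,3)) Spool_limit_per_AMP_MB
from dbc.DatabasesV
where DatabaseName = '<user>';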

What is spool space limit good for?

This limitation method is a quite simple way to cut off those queries that would suck up too many resources from the system. There is no exact relationship between high spool usage and an ineffective query, but statistically the correlation is high.
Practically: a bad query is kicked off before it unnecessarily consumes too many resources.

No more spool space scenarios

System ran out of spool space

This is the most rare situation, forget about. There are too few free space on the system, but this situation used to be avoided by defining a “SpoolReserve” database, where no objects are created, this way that area is always available for spool.
If many “big spool limit” users run high spool queries parallel, then this rare situation can yet occure.

Multiple sessions of the user are active together

This is also a quite rare situation. Check the active sessions of the user in dbc.SessionInfo.
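A quick way to check this – a minimal sketch, assuming the standard dbc.SessionInfoV view:

select UserName, count(*) Active_sessions
from dbc.SessionInfoV
group by 1
order by 2 desc;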

Volatile tables

All existing volatile tables reside in your available spool space, reducing the available amount. If you create many of them, especially with skewed distribution, you can fill your spool up. Choose the primary index carefully when defining volatile tables, too.

Improper execution plan

These make up >90% of the cases that cause “No more spool space” errors. Let’s see how:
  • “Duplication to all AMPs” of a non-small set of records
    The root cause is typically missing or stale statistics. Either the system thinks that much fewer records will be duplicated than in reality (sometimes billions of records get into this kind of spool), or it knows the number exactly, but on another branch of the query there are more low-quality estimations, and this execution plan seems to be cheaper.
  • Redistribution of records by a hash that causes skewed distribution
    Check the corresponding blog post: Accelerate skewed joins
  • Retrieving a huge amount of records into spool (locally or redistributed onto the AMPs)
    Specific query structures imply this execution, e.g. joining to a view that “union all”-s big tables.
I suggest using PRISE Tuning Assistant to identify what the problem is. It spectacularly displays which execution step runs into the problems above.
Increasing the spool limit will not solve the problem in most cases.

Too big task

Sometimes a given SQL query requires a big spool area to be performed, even with the best execution plan.

This is the only case when raising the spool limit is the solution. But first you have to make sure that the task is really that big. PRISE Tuning Assistant is a good tool for identifying this in a minute.

5 tips for Teradata Fastload


Teradata Fastload – use effectively

Teradata Fastload is a very fast and efficient tool to load huge amounts of data into a Teradata database. It has good documentation to browse as a reference; however, there are a few small things that are good to know in order to use it more comfortably – even in challenging situations.

Tip1: How to understand “Error1” table

When a record fails to load because of any Teradata error (conversion, null problem, etc.), the record gets into the “Error1” table, which is set in the “BEGIN LOADING” section of the control script.
The structure of the “Error1” table is fixed:

  1. ErrorFieldName:
    Which column was problematic
  2. ErrorCode:
    The Teradata error code; look it up in DBC.ErrorMsgs if the documentation is not at hand (see the lookup example after this list)
  3. DataParcel:
    This field contains the record transformed into a hexadecimal string format
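If the documentation is not at hand, the error text behind the ErrorCode can be looked up in the dictionary – a minimal sketch (2679 is just an example error code):

select ErrorCode, ErrorText
from dbc.ErrorMsgs
where ErrorCode = 2679;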

Unfortunately, DataParcel’s hexadecimal codes are unreadable for a human in Queryman or other GUIs.
With this short BTEQ script you can export the record in its original layout into a file:

.SET RECORDMODE ON
.EXPORT DATA FILE=badrecord.out
SELECT DataParcel FROM <Errortable1>;

Tip2: How to switch off deduplication?

There is no way to switch it off. Teradata Fastload will always do a full-record-layout deduplication. No matter whether we use a non-unique primary index (NUPI) and a MULTISET table definition, Teradata will deduplicate eagerly.

If you must load every record, use a preprocessor script (Perl, AWK, etc.) that adds a running-number column to your data. This way there will surely be no identical records at all.

Tip3: Deduplication vs. “Error2” table

Why do we have an “Error2” table if Teradata Fastload always does a deduplication? This error table contains those records which passed the full-record-layout deduplication, but failed the unique primary index (UPI) duplication check. That is, the data contained another record with the same PI value, but differing in one or more other columns. To load those records as well, use a NUPI or NO PI table.

Tip4: What is the good “Sessions” number?

SESSIONS is a parameter to be specified in the “BEGIN LOADING” section. It can fall between 1 and the number of AMPs. The bigger the number, the better the loading performance, but the longer the logon time. So for a small file use 1, for a large number of rows use the maximum allowed. For medium files I generally use 10.

If a lot of small files are to be loaded into different tables, consider using another technique, since Teradata Fastload has no advantage in that league.

Tip5: Table is “Being loaded”…

When a Teradata Fastload session is stopped for reaching the error limit, the target and the error tables cannot be accessed due to locking:
Error 2652: Operation not allowed: <db>.<table> is being Loaded.

Run this Fastload script to release the locks and get access to the tables:

BEGIN LOADING <db>.<table>;

END LOADING;

After this script finishes, the error tables become readable so you can identify the root causes of the load problems – see the “Error1” table tip above.

Teradata performance optimization project
I. Basics and prerequisites


Introduction

The Teradata database is a pretty expensive box that serves a lot of people in the company – in a direct or indirect way. The efficient utilization of its resources is essential, from both the financial and the user satisfaction aspects.


This series of posts will follow a real-life Teradata optimization project’s methodology, steps and results, with this achievement:

System level gain: 31.6%

Levels of performance optimization

We can typically apply 3 levels of Teradata performance optimization:

Application

Meaning: analyze the user requests and system activity, then refine them, eliminating unnecessary activities.
This level checks whether the executed processes are really needed and whether they do what the users really need. E.g.: eliminate unnecessary data loads or reports; limit a report to a shorter period.

  • Big gains are possible without any trade-off
  • Requires comprehensive analysis (incl. finding and interviewing the appropriate persons)

SQL

Meaning: pick individual SQL statements and optimize their execution.
This approach finds the top resource consumer SQL statements and reduces their resource consumption while producing the same results.
E.g.: change the PI; add/refresh statistics; rewrite the SQL

  • Easy analysis
  • Low risk (small effect, good testing possibilities)
  • Needs SQL level analysis

Infrastructure

Meaning: tune low level system (RDBMS, OS, hardware) parameters.
Tuning at the infrastructure level is a global intervention, regardless of users/business areas or importance.

  • Miracle in some special cases
  • High risk (affects the whole system)
  • Complex cross-dependencies

I prefer and recommend starting with the SQL level optimization, because it delivers high value along with low risk and good project scalability. Later on the other levels can come into focus.

Goal of SQL optimization

The goals of an optimization project can be different. Some examples:
“Finish load procedure until 6am”
“Present my report 2 hours earlier”
“Reduce CPU load by 20%”
“Let hardware extension postponed by 6 months”

The goal of our project was to speed up the “load batch” process, so that a specific, identified data area finishes loading and becomes available for further usage 1 hour earlier than before.

Prerequisites

The key input for an SQL optimization is the information about what exactly happens:

What, when, how:

Which SQL statements ran, when, what steps were executed, how many resources did they consume, with what skewness, etc.
All these can be found in the DBQL tables of Teradata (dbc.dbql*).
Switching them on will not cause significant extra load, but they must be archived/purged regularly (see the PDCR tool in the Teradata package).
Teradata Administrator enables switching logging on (Tools/Query logging…), and one can check the actual settings:

select * from dbc.dbqlruletbl;
The Explain/Obj/Sql/Step flags should be set to ‘T’, the others are not necessary. Global logging (1 record with UserID: 00000000) is recommended.
If continuous logging is not possible, logging should be switched on 1 week before starting the optimization project.
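Once DBQL logging is switched on, the collected data can be queried directly. A minimal sketch, assuming the standard dbc.QryLogV view (the “top 20” and the date filter are arbitrary examples):

select top 20
       UserName, QueryBand, StartTime, AMPCPUTime, TotalIOCount, QueryText
from dbc.QryLogV
where cast(StartTime as date) = date - 1
order by AMPCPUTime desc;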

What / Who runs the statements?

Teradata provides a great feature for this: Query Banding.
All executed SQL statements can be flagged with several pieces of information regarding the environment, job, script, version etc., which helps to identify a specific one.
The query band data will appear in the DBQL logs – if the running environment is configured to flag the queries being run.
Query banding is not essential for an optimization, but it helps a lot to identify the exact point in the data warehouse ecosystem where the modifications should be done (which job/job step/script).
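A minimal sketch of flagging a session – the name=value pairs below are arbitrary examples, agree on a convention in your own environment:

set query_band = 'Job=daily_load;Script=load_customer.sql;Step=030;' for session;

The value will then appear in the QueryBand column of the DBQL logs for every statement the session runs.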

What are the environmental conditions?

E.g. data volumes, statistics, etc.
This information is available in the data dictionary (DBC tables and views), but only as a current snapshot – not as of the time the statement ran.
In most cases this is OK for us, but this property needs to be kept in mind.

When all these are in place, we are ready to do a successful performance optimization project.

The next post is about how to identify the good candidate SQL statements for optimization.

Stay tuned!

How to speed up Teradata Fastexport

About Teradata Fastexport

Teradata Fastexport is a nice, powerful tool to export mass volumes of data to files. Its default execution flow is the following:

  1. Apply locks on the affected tables
  2. Execute the SELECT statement
  3. Place the result in a SPOOL
  4. Release the locks
  5. Export the data from the SPOOL

It has an advantage: the locks are released immediately after the result data has been copied into the SPOOL. Therefore there is no need to wait until the data is written to the export file outside the database – which is typically a single-process operation and can last for a long time.

The price to be paid for this advantage: all the exported data must first be copied into the SPOOL. When the SELECT is a simple operation (no join, sort or other extras), the copy phase can be spared – a phase that can approach 50% of the total resource and time consumption.

The NOSPOOL trick

For these simple cases one can use the NOSPOOL/NOSPOOLONLY mode.
While NOSPOOL is a soft switch, which is ignored if not applicable, NOSPOOLONLY will make the statement fail if the nospool option is not available for the given query.

In NOSPOOL mode the database will not make a copy of the result data. Instead it passes the data directly to the client, which has another benefit: the execution of the SELECT runs dominantly in parallel with the file write operation, instead of sequentially as in the SPOOL (default) case.

Usage:
.BEGIN EXPORT
SESSIONS …
[SPOOLMODE SPOOL|NOSPOOL|NOSPOOLONLY];

The tradeoff

The NOSPOOL operation only supports “simple” SELECT statements. The unsupported operations:

  • SORT (ORDER BY, HAVING)
  • JOIN
  • Aggregations (GROUP BY, SUM())
  • OLAP functions
  • Non-table data (date, session, user)
  • Multiple statements

But it allows (see the sketch after this list):

  • Sampling
  • Column selection
  • Scalar functions (like col1 + col2, substr())
  • “Where” clause (incl. partition elimination)
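To illustrate the two lists above – a minimal sketch with placeholder object names:

/* Qualifies for NOSPOOL: column selection, a scalar function and a WHERE clause only */
select customer_id, substr(telephone_no,1,7) phone_prefix
from <db>.<big_table>
where load_dt >= date '2014-01-01';

/* Does not qualify (spooled, or fails with NOSPOOLONLY): contains a JOIN and an ORDER BY */
select a.customer_id, b.segment_name
from <db>.<big_table> a
join <db>.<segment_table> b on a.segment_id = b.segment_id
order by a.customer_id;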

The disadvantages of NOSPOOL mode:

  • No ActivityCount is available
  • Locks maintained till the end of the export session
  • Data conversion errors can happen during the exporting phase
  • (Only available from V13.10)

What is it good for, then?
It can almost double the throughput in the case of:

  • DataMart export (to another database)
  • Archive dumps
  • Copy data to test environment
  • Migration to other database systems

Measurements

We executed a Teradata to Netezza migration, where we had to transport ~12 TBytes of data through a file interface. After we activated the NOSPOOL option, the average export speed grew by 84%.

Quicktips – Rowcount

Quick rowcount

Sometimes we are interested in how many records are in a table.
It is easy: we issue the typical row count SQL statement:

select count(*) from <table>;

Although Teradata can serve this request effectively by scanning only the cylinder index (not all the data blocks), it can still take a long time in the case of big tables and heavily loaded systems.

One can get a quite good estimate of the record count from the statistics. This method is implemented in the PRISE Tuning Assistant software.

V13.10 and before

help stat <table>;

Choose any existing statistics – the latest is the best (by “Date”/“Time”) – and then issue this, replacing “xxx” with the chosen stat column(s):

help stat <table> column (xxx);

The “Number of rows” column shows the estimated record count. The same value appears for each bucket; take the value in the first row.

V14.00 and after

select databasename, tablename, RowCount,LastCollectTimestamp from dbc.statsv
where databasename='<db>' and tablename='<table>'
qualify sum(1) over (partition by databasename,tablename order by lastcollecttimestamp desc rows unbounded preceding) = 1
;

Please note that this data may be an estimation (sample statistics) or outdated (stale statistics).

Without statistics

In this case we can deduce the order of magnitude from the allocated size of the table.
Naturally, the record count strongly depends on the average record size.

select databasename,tablename, sum(currentperm) /1024/1024 Table_MBytes
from dbc.allspaceV
where databasename='<db>' and tablename = '<table>'
group by 1,2;
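If you know (or can guess) the average record size, a very rough row count estimate can be derived from the allocated space – a minimal sketch, where <avg_row_size_bytes> is a placeholder you have to supply:

select databasename, tablename,
       cast(sum(currentperm) / <avg_row_size_bytes> as bigint) Estimated_rowcount
from dbc.allspaceV
where databasename='<db>' and tablename = '<table>'
group by 1,2;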

QUICKTIPS – DELETE or CREATE+INSERT?

Delete or Insert? Which one is faster?

The question

If I have to delete rows from a table, practically I have two options:

  • Issue a DELETE command with the desired WHERE / JOIN condition
  • Create an empty table with the same structure, INSERT the remaining records using the negated WHERE condition, and then rename the tables
The first one is simple and straightforward, the second is complicated and requires more perm space. So is it a question at all?
Yes, it is, because of how the execution works in the background:
  • DELETE
    It will modify each block that contains record(s) to be deleted. It has to write the Transient Journal (transaction log) to enable transaction-related functionality. In addition, those data blocks whose fill-up rate falls below a threshold (e.g. 50%) will be merged. All this may cause a really large I/O overhead. In some cases the deletion of the rows happens in a “random-like” order, therefore the I/O access will be random, which is painfully slow.
  • INSERT
    In this case it is quite simple: we insert into an empty table, which causes almost zero Transient Journal data, and the system has to write only as many data blocks as are needed to store the remaining records – each block only once. An additional advantage: the I/O will always be “linear” instead of “random-like”, therefore it can be a multi-block operation, which is pretty fast (see the sketch below).
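A minimal sketch of the CREATE+INSERT+RENAME variant mentioned above (TABLE1 and the delete condition are placeholders):

create table TABLE1_NEW as TABLE1 with no data;   -- same structure, empty

insert into TABLE1_NEW
select * from TABLE1
where not (<condition_of_records_to_delete>);     -- keep only the surviving records

rename table TABLE1     to TABLE1_OLD;
rename table TABLE1_NEW to TABLE1;
drop table TABLE1_OLD;                            -- once everything has been verified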

Shall we forget about the delete?

The answer

No, absolutely not. DELETE is good enough if the number of records to be deleted is not that big.
We have two factors that will help us in the decision:

  • How many records to be deleted?
  • How many records will remain after deletion?

I do not have well defined universal thresholds for when it is worth choosing one or the other – it depends.
I usually use the CREATE+INSERT+RENAME method if

    • The table is big enough (> 0.1-0.5 GBytes or 10-50k rows / AMP)
      and
      I have to delete more than 20..40% of the records
    OR
  • The table has a high hash collision (HC) rate on the PI
    and
    The table is MULTISET*
    and
    The delete is a JOIN-ed one

So it depends – you have to measure. PRISE Tuning Assistant can help you build up your own rule of thumb, applicable to your environment, by showing the resource consumption in great detail.

* If the HC rate is high and the table is SET, then the INSERT will be very slow because of the duplication check method.

QUICKTIPS – DELETE+INSERT – together or separately?

Shall I run DELETE and INSERT separately?

We often have permanent or temporary tables in our data warehouse that must be purged before they are populated again. We can do the two operations within one transaction or in separate ones. It is not the same…

Options – (example)

The examples assume Teradata (BTET) mode, which is auto-commit by default.

  A. Together: both statements run in the same transaction
    1. Explicit transaction
      begin transaction;
      delete from TABLE1;
      insert into TABLE1 SELECT …..;
      end transaction;

    2. Multi-statement
      delete from TABLE1
      ;insert into TABLE1 SELECT …..;
  B. Separately: the statements run in separate transactions
    delete from TABLE1 [ALL];
    insert into TABLE1 SELECT …..;

     (Please note that if you run this in Teradata Administrator aka. WinDDI, it will run in one explicit transaction, since the application places a BT; before the script and an ET; after it.)

    In this case the ALL clause is only cosmetic: a DELETE without a WHERE condition is handled internally as “ALL”.

What is the difference?

The result

If everything is OK, we will not experience any difference at the end – from the result aspect. The same data will be found in TABLE1 whichever way we do it. The difference shows up if the INSERT step fails, e.g. for running out of perm space:

  • In the “A.” cases the whole transaction will be rolled back, therefore we will see all the data in the table that we had before the delete.
  • In the “B.” case the data is deleted from the table regardless of the success of the INSERT phase. After a failed INSERT the table will be empty.

There is another, transient difference:

  • In the A. cases we will never “see” an empty table from another session at any point in time: up to a point the table appears as the “before” version, and when the transaction finishes it switches to the “after” image for the other sessions in no time.
  • In the B. case, once the DELETE has finished, the table appears empty up to the moment the INSERT statement is committed. So if the INSERT lasts 2 hours, others will see an empty TABLE1 for 2 hours.

So I do not recommend using option B. if the “empty table” state is not acceptable.

The performance

The A. and B. options are executed in a significantly different manner. The DELETE operation knows that all the data will be dropped, so internally it just administers the data blocks as free instead of clearing their content record by record – BUT only if the statement is the only statement in the transaction! In this case only the data block administration must be logged (to be able to roll back for some reason) instead of every block change. This results in far less Transient Journal activity besides fewer block I/Os.

Let’s see some measurements.
I put 10M records into a table, and then executed the same DELETE + INSERT with both options. The results are the following:

  A. CPU: 9 sec,    I/O: 39K
  B. CPU: ~0.1 sec, I/O: 4K

It is worth minding the difference if you have to do it with billions of records.

Monitor ongoing SQLs – progress indication

How can I monitor the progress of my SQL?

We often have SQL operations that last hours or more, and we want to know when they are expected to finish. There is no progress bar or similar tool in Teradata, but we have some useful pieces of information that can be used to make our estimations.

Let’s see how.

Execution steps

SQL statements are broken down into execution steps by the Parsing Engine (PE), and these steps are executed sometimes sequentially, sometimes in parallel with each other.
This level of monitoring can easily be done with either the Teradata Viewpoint or the DB_MON tool. They will show you which step(s) was/is/will be executed, and some information about them:
  • What the estimated runtime is (for all steps)
  • How long the finished ones ran (only for the finished ones)
  • For how long the currently executed one(s) has/have been running (only for the running ones)
First we can compare the estimated and the actual runtimes of the finished steps. This gives a guess rate we can use for the further estimations. Naturally it is far from an exact calculation, it is just a guideline. The difference comes from estimation bias and server load/priority conditions.
So, in the end, we can use the sum of the estimations multiplied by the finished steps’ actual/estimated ratio as a very rough guess.
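For example: if the finished steps were estimated at 100 seconds in total but actually ran for 150 seconds (a rate of 1.5), and the sum of all step estimations is 400 seconds, then the rough total guess is 400 * 1.5 = 600 seconds.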

Within a step

Typically a long running SQL statement contains a few long running steps besides more lightweight ones. So the runtime is dominated by a few heavy steps, and in this case the interesting question is: what is the progress rate of the currently running step?
The answer is not that simple; one must know – more or less – what happens in the background.
Before examining exact cases, I provide some basic information and tricks.

Tricks to know

SQL scripts I typically use

Current spool used by a user (1)

We can query the data dictionary for the spool space currently allocated by a user. It is very important that this relates to a user, not a session (it is the sum over all logged-in sessions of that specific user).
The current spool covers all objects currently stored in spool:
  • spool used by the currently executed query step
  • all the other spools generated by previous steps and not yet dropped
  • all existing volatile tables as well.

So be careful with it.

The current spool information is usually interesting from two aspects:
  • Total spool usage of a user, now (a):
    SELECT cast(sum(currentspool) /1024/1024/1024 as decimal(15,3)) Current_spool,
    Current_spool / (hashamp()+1) Average_spool, cast(max(currentspool) /1024/1024/1024 as decimal(15,3)) MaxAMP_spool,
    cast(100.0 - ((Average_spool / nullifzero(MaxAMP_spool)) * 100.0) as decimal(4,1)) Spool_skew
    FROM dbc.diskspace WHERE databasename = '<user>';
  • Spool usage by AMP of a user, now (b):
    SELECT vproc, cast(currentspool /1024/1024 as decimal(15,3)) Current_AMP_spool_MB
    FROM dbc.diskspace WHERE databasename = '<user>'
    ORDER BY 2 DESC;

Current size of a table (2)

  • Total size of a table (a):
    SELECT databasename,tablename,cast(sum(currentperm) /1024/1024/1024 as decimal(15,1)) Current_perm_GB
    FROM dbc.allspace
    WHERE databasename = '<db_name>' and tablename = '<table_name>'
    GROUP BY 1,2;
  • Table size by AMP (b)
    SELECT databasename,tablename,vproc,cast(currentperm /1024/1024 as decimal(15,3)) Current_perm_MB
    FROM dbc.allspace
    WHERE databasename = '<db_name>' and tablename = '<table_name>'
    ORDER BY 4 DESC;

Current allocated and free space in a database (3)

  • Total allocated/free space in a database, currently (a):
    SELECT databasename,
    cast(sum(currentperm) /1024/1024/1024 as decimal(15,1)) Used_perm_GB,
    cast(sum(maxperm-currentperm)/1024/1024/1024 as decimal(15,1)) Free_perm_GB
    FROM dbc.diskspace
    WHERE databasename = '<db_name>'
    GROUP BY 1;
  • Sizes in a database by AMP (b)
    Run before the operation:
    CREATE VOLATILE TABLE CPERM as
    (SELECT vproc,currentperm, maxperm-currentperm freeperm FROM dbc.diskspace WHERE databasename = '<db_name>') with data PRIMARY INDEX( vproc) on commit preserve rows;

    Run during the operation any time of interest:
    SELECT databasename,
    cast(sum(a.currentperm-b.currentperm) /1024/1024/1024 as decimal(15,1)) Used_perm_GB,
    cast(sum(a.maxperm-a.currentperm-b.freeperm)/1024/1024/1024 as decimal(15,1)) Free_perm_GB
    FROM dbc.diskspace a join CPERM b on a.vproc=b.vproc
    WHERE databasename = '<db_name>' GROUP BY 1;

Transient journal size and free space in DBC (4)

When running an SQL statement that modifies table data, it will generate more or less transaction log (called the Transient Journal in Teradata). Its size sometimes helps to determine the progress of an operation. Please note that this information is only indicative, since we do not know how much journal will be written; in addition, all the transactions use the same journal table, and the purge of the released journal area is an asynchronous task. It is important that the journal must fit into DBC, so the following query shows DBC’s free space along with the current journal size. Please note: this query examines the conditions at total level instead of AMP level, so if you run out of DBC space on one AMP, the system will fail, even though you see lots of total free space.
  • How’s going with the transient journal:
    SELECT tj.Current_perm_GB TransientJournal_current_GB, dbspc.Free_perm_GB DBC_free_perm_GB FROM
    (SELECT databasename,tablename,cast(sum(currentperm) /1024/1024/1024 as decimal(15,1)) Current_perm_GB
    FROM dbc.allspace WHERE databasename = 'DBC' and tablename = 'TransientJournal' GROUP BY 1,2 ) tj cross join
    (SELECT cast(sum(maxperm-currentperm)/1024/1024/1024 as decimal(15,1)) Free_perm_GB FROM dbc.diskspace WHERE databasename = 'DBC') dbspc;

Within a step, continued…

The progress examination method differs depending on the type of the operation – unfortunately that is how it goes. The reasons are hidden in the internal processes of Teradata…

INSERT INTO xxx SELECT … FROM …

This type of operation typically consists of the following steps:

  1. Retrieve data from the source into a SpoolX
    This step can be examined by checking the executing user’s spool space (query 1.a/1.b). If the data is skewed, it is worth checking the spool at AMP level (1.b), mainly after the growth of the total gets slower. This is because most of the AMPs have finished already, but a few of them still must process the “peak”.
  2. Merge SpoolX into the “xxx” table
    I monitor this step in two sections: first check the target table’s size (query 2.a/2.b), which will increase as the merge process advances. The target table’s size will increase by the size of the source spool (except in some special cases like SET table duplicate elimination), so if the target table was not empty before the insert, just watch the difference between the initial and the current size. It grows linearly over time up to a specific point, when the first AMP finishes its work. When an AMP has finished the merge, it releases the corresponding source spool area (which was produced by step 1). So when the growth of the target slows down, switch to checking the executing user’s spool space by AMP (query 1.b), and watch how many of the AMPs have zero or minimal spool occupied (the non-finished ones show the full spool size and do not decrease until they finish). This phase is quite short if the data is not skewed AND the system is well balanced (co-existing node generations…) AND other sessions do not cause a skewed load on the system; otherwise the first and the last finishing AMP may differ significantly. In addition, the last AMP can run extremely long when the hash collision problem escalates. Unfortunately the progress within a single AMP is not visible* (see the note below).
    When the target table was non-empty when the INSERT started, we usually experience a significant increase in the size of the Transient Journal (query 4). This is not a good progress indicator, since we have no guideline for how big the final journal size will be, and other transactions use the same journal. In addition, the release of the unused journal area is an asynchronous process, which may be delayed relative to the SQL operation’s finish. The purpose of monitoring this object – together with the DBC free space – is to ensure that we do not eat up all the free DBC space. Of course it is a rare situation, but it is a very unpleasant one when a half-day run is rolled back because we ran out of DBC space…
  3. Commit transaction
    This step usually lasts a very short time, except when the table is very skewed. Usually I do not monitor this step.

CREATE TABLE AS …

The basic execution structure (from our aspect) does not differ in major things, except one. The “preparation step(s)”, e.g. retrieve into a spool, can be measured exactly the same way as in the “INSERT… SELECT…” case, but the MERGE is different: at the time of execution we do not have the target object in the data dictionary, whose size we could measure. However, we are not completely lost: the free space of the target database (query 3.a/3.b) is maintained during the MERGE, so if we measure it and compare it to the amount when the process started, we get a good progress indicator of the MERGE step. Take care: it only works if nobody modifies the MaxPerm (size) of the target database, and no other process puts data into or removes data from it during our operation.

  1. Save the initial state into a volatile table (it needs only a very small spool)
  2. Query the current increment globally, at any time of interest

When the size increase slows down, we can go back to checking the spool size at AMP level, and watch how many AMPs have already finished.

* Note: The progress of a single AMP is not measurable using the provided methods, but we can make some indirect deductions in the MERGE step. If we know the expected size or record count ratio of the smallest and the biggest AMP, and we can measure the finish time of the smallest AMP (when the spool size on an AMP begins to fall to zero or to a very low value, the MERGE is finished on that AMP), we can extrapolate the runtime of the biggest AMP (this will be the total time). But only if the hash collision is not significant: MULTISET table, or low “high mode frequency” of the PI columns; otherwise the runtime is dominantly determined by the hash collision level (O(sum(Nx*Nx)), where Nx are the top mode-frequency values of the PI), not by the number of records.