Oracle SQL

Yearly Archives: 2013

SQL*Plus tips #6: Colorizing output

Posted on May 22, 2013 by Sayan Malakshinov. Posted in oracle, SQL*Plus, SQL*Plus tips

If you have seen colorized scripts such as "Session Snapper v.4" by Tanel Poder or OraLatencyMap by Luca Canali, you may want to colorize your own scripts too.
I've created a script for this purpose with a set of predefined substitution variables.
Just download colors.sql and try this script:

@colors.sql;
prompt ::: &_C_RED ***  TEST PASSED  *** &_C_RESET :::
prompt ::: &_C_RED *** &_C_BLINK TEST PASSED &_C_BLINK_OFF *** &_C_RESET :::

You will get something like this:
(screenshot: sqltips6)
All these variables have the prefix "_C"; background colors use the prefix "_CB".

Do not forget to close text attributes: for example, "&_C_BOLD; bold text &_C_BOLD_OFF;" enables the "bold" attribute and disables it again after "bold text".

Full list of the predefined variables:

Predefined variables

Variable(s)                        Description
_C_RESET                           Reset formatting: turn off all attributes
_C_BOLD / _C_BOLD_OFF              Set bright mode on/off
_C_UNDERLINE / _C_UNDERLINE_OFF    Set underline mode on/off
_C_BLINK / _C_BLINK_OFF            Set blink mode on/off
_C_REVERSE / _C_REVERSE_OFF        Exchange foreground and background colors
_C_HIDE / _C_HIDE_OFF              Hide text (foreground color becomes the same as the background)
_C_BLACK, _C_RED, _C_GREEN, _C_YELLOW, _C_BLUE,
_C_MAGENTA, _C_CYAN, _C_WHITE, _C_DEFAULT          Font colors
_CB_BLACK, _CB_RED, _CB_GREEN, _CB_YELLOW, _CB_BLUE,
_CB_MAGENTA, _CB_CYAN, _CB_WHITE, _CB_DEFAULT      Background colors
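Under the hood such substitution variables are just ANSI escape sequences (SGR codes). A minimal Python sketch of what colors.sql presumably defines; the exact codes in colors.sql may differ:

```python
# Sketch of the ANSI SGR escape sequences (ESC[<n>m) that variables like
# _C_RED and _CB_RED presumably expand to. This is an assumption about
# colors.sql, not its actual contents.
ESC = chr(27)  # the escape character, \x1b

def sgr(n):
    """Build a Select Graphic Rendition sequence like ESC[31m."""
    return f"{ESC}[{n}m"

C = {
    "_C_RESET":   sgr(0),   # turn off all attributes
    "_C_BOLD":    sgr(1),
    "_C_BLINK":   sgr(5),
    "_C_REVERSE": sgr(7),
    "_C_RED":     sgr(31),  # foreground colors are 30..37
    "_C_GREEN":   sgr(32),
    "_CB_RED":    sgr(41),  # background colors are 40..47
}

print(C["_C_RED"] + "*** TEST PASSED ***" + C["_C_RESET"])
```

Any ANSI-capable terminal interprets these sequences, which is why the trick works the same way from SQL*Plus prompts.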


In addition, I want to show a simple example of printing histograms.
We can print a simple histogram using the following query:

-- loading colors variables:
@inc/colors;
-- set max length of bar:
def _max_length=80;
-- columns formatting:
col bar format a&_max_length;
-- clear screen:
prompt &_CLS
with t as (-- just some test values for this example:
            select level id
                 , round(dbms_random.value(1,100)) val
            from dual
            connect by level<=10
          )
select t.*
      -- bar length is just "(value / max_value) * max_length" in characters:
      ,floor( val * &_max_length / max(val)over()
            ) as bar_length
      -- generating the bar:
      ,lpad( chr(176)
            ,ceil(val * &_max_length / max(val)over())
            ,chr(192)
           ) as bar
from t;
(screenshot: sqltips6-1)
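The bar-length arithmetic from the query can be sketched outside SQL as well; here is the same proportional scaling in Python (the "#" fill character is just a stand-in for chr(176)/chr(192)):

```python
import math

def bar(val, max_val, max_length=80, ch="#"):
    # bar length is (value / max_value) * max_length characters,
    # rounded up, just like the ceil(...) argument of lpad in the query
    return ch * math.ceil(val * max_length / max_val)

vals = [10, 50, 100]
for v in vals:
    print(f"{v:4d} {bar(v, max(vals))}")
```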

And now we can colorize it:

Colorized script

[sourcecode language="sql"]
-- loading color variables:
@inc/colors;
-- set max length of bar:
def _max_length=100;
-- column formatting:
col bar format a&_max_length;
-- clear screen:
prompt &_CLS

-- test query which prints a histogram (or maybe simply bars?):
with t as (-- just some test values for this example:
            select level id
                 , round(dbms_random.value(1,100)) val
            from dual
            connect by level<=10
          )
select
       id
      ,val
      , case
           when pct >= 0.9 then '&_C_RED'
           when pct <= 0.4 then '&_C_GREEN'
           else '&_C_YELLOW'
        end
        -- string generation:
      ||lpad( chr(192)
             ,ceil(pct * &_max_length)-9 -- color is 5 chars and reset is 4
             ,chr(192)
            )
      ||'&_C_RESET'
       as bar
from (
     select
        t.*
       ,val / max(val)over() as pct -- as a fraction of the max value
     from t
     ) t2
/
[/sourcecode]

(screenshot: sqltips6-2)
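The "-9" in the colorized version compensates for the invisible characters inside the column: a foreground color sequence is 5 characters and the reset is 4, and SQL*Plus counts them toward the column width. A quick Python check of that accounting:

```python
# The escape sequences are real characters in the string, even though the
# terminal renders them as zero-width color changes.
RED   = "\x1b[31m"   # 5 characters: ESC [ 3 1 m
RESET = "\x1b[0m"    # 4 characters: ESC [ 0 m

print(len(RED), len(RESET))   # 5 4

# To fill a 20-character column, only 20 - 5 - 4 = 11 visible chars fit:
colored_bar = RED + "#" * (20 - len(RED) - len(RESET)) + RESET
print(len(colored_bar))       # 20
```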

SQL*Plus tips #5: sql_text/sql_fulltext formatting (SQL beautifier)

Posted on April 29, 2013 by Sayan Malakshinov. Posted in oracle, SQL*Plus, SQL*Plus tips

Sometimes I get tired of reading unformatted query text from v$sqlarea or dba_hist_sqltext in SQL*Plus, so I decided to include automatic query formatting in my scripts.
I assumed there would be many libraries for this purpose in the languages I know, and that is true, but it turned out that many of them are not suitable for Oracle.
So I took the most suitable one, the Perl module SQL::Beautify, and corrected it. Now I can share my first solution.

Here is how it looks:
(screenshot: sql_textf)

What you need to do:
1. If you are on Windows and have installed neither Oracle RDBMS nor cygwin, you need to install Perl. This can be done, for example, with ActivePerl or StrawberryPerl, but I recommend cygwin.
2. Save sql_format_standalone.pl in the $SQL_PATH/inc directory.

inc/sql_format_standalone.pl

[sourcecode language="perl"]
package OraTokenizer;

use warnings;
use strict;

use 5.006002;

use Exporter;

our @ISA = qw(Exporter);

our @EXPORT_OK= qw(tokenize_sql);

our $VERSION= '0.24';

my $re= qr{
(
(?:--)[\ \t\S]* # single line comments
|
(?:<>|<=>|>=|<=|==|=|!=|!|<<|>>|<|>|\|\||\||&&|&|-|\+|\*(?!/)|/(?!\*)|\%|~|\^|\?)
# operators and tests
|
\(\+\)
# oracle join
|
[\[\]\(\),;.] # punctuation (parenthesis, comma)
|
\'\'(?!\') # empty single quoted string
|
\"\"(?!\") # empty double quoted string
|
"(?>(?:(?>[^"\\]+)|""|\\.)*)+"
# anything inside double quotes, ungreedy
|
`(?>(?:(?>[^`\\]+)|``|\\.)*)+`
# anything inside backticks quotes, ungreedy
|
'(?>(?:(?>[^'\\]+)|''|\\.)*)+'
# anything inside single quotes, ungreedy.
|
/\*[\ \t\r\n\S]*?\*/ # C style comments
|
(?:[\w:@]+(?:[.\$](?:\w+|\*)?)*)
# words, standard named placeholders, db.table.*, db.*
|
\n # newline
|
[\t\ ]+ # any kind of white spaces
)
}smx;

sub tokenize_sql {
my ( $query, $remove_white_tokens )= @_;

my @query= $query =~ m{$re}smxg;

if ($remove_white_tokens) {
@query= grep( !/^[\s\n\r]*$/, @query );
}

return wantarray ? @query : \@query;
}

sub tokenize {
my $class= shift;
return tokenize_sql(@_);
}

1;

=pod

=head1 NAME

OraTokenizer - A simple SQL tokenizer.

=head1 VERSION

0.24

=head1 SYNOPSIS

use OraTokenizer qw(tokenize_sql);

my $query= q{SELECT 1 + 1};
my @tokens= OraTokenizer->tokenize($query);

# @tokens now contains ('SELECT', ' ', '1', ' ', '+', ' ', '1')

@tokens= tokenize_sql($query); # procedural interface

=head1 DESCRIPTION

OraTokenizer is a simple tokenizer for SQL queries. It does not claim to be
a parser or query verifier. It just creates sane tokens from a valid SQL
query.

It supports SQL with comments like:

-- This query is used to insert a message into
-- logs table
INSERT INTO log (application, message) VALUES (?, ?)

Also supports C<''>, C<""> and C<\'> escaping methods, so tokenizing queries
like the one below should not be a problem:

INSERT INTO log (application, message)
VALUES ('myapp', 'Hey, this is a ''single quoted string''!')

=head1 API

=over 4

=item tokenize_sql

use OraTokenizer qw(tokenize_sql);

my @tokens= tokenize_sql($query);
my $tokens= tokenize_sql($query);

$tokens= tokenize_sql( $query, $remove_white_tokens );

C<tokenize_sql> can be imported to the current namespace on request. It receives a
SQL query, and returns an array of tokens if called in list context, or an
arrayref if called in scalar context.

=item tokenize

my @tokens= OraTokenizer->tokenize($query);
my $tokens= OraTokenizer->tokenize($query);

$tokens= OraTokenizer->tokenize( $query, $remove_white_tokens );

This is the only available class method. It receives a SQL query, and returns an
array of tokens if called in list context, or an arrayref if called in scalar
context.

If C<$remove_white_tokens> is true, white spaces only tokens will be removed from
result.

=back

=head1 ACKNOWLEDGEMENTS

=over 4

=item

Evan Harris, for implementing Shell comment style and SQL operators.

=item

Charlie Hills, for spotting a lot of important issues I haven't thought of.

=item

Jonas Kramer, for fixing MySQL quoted strings and treating dot as punctuation character correctly.

=item

Emanuele Zeppieri, for asking to fix OraTokenizer to support dollars as well.

=item

Nigel Metheringham, for extending the dollar signal support.

=item

Devin Withers, for making it not choke on CR+LF in comments.

=item

Luc Lanthier, for simplifying the regex and make it not choke on backslashes.

=back

=head1 AUTHOR

Copyright (c) 2007, 2008, 2009, 2010, 2011 Igor Sutton Lopes "<IZUT@cpan.org>". All rights
reserved.

This module is free software; you can redistribute it and/or modify it under
the same terms as Perl itself.

=cut

###################################

package OraBeautify;

use strict;
use warnings;

our $VERSION = 0.04;

use Carp;

# Keywords from SQL-92, SQL-99 and SQL-2003.
use constant KEYWORDS => qw(
ABSOLUTE ACTION ADD AFTER ALL ALLOCATE ALTER AND ANY ARE ARRAY AS ASC
ASENSITIVE ASSERTION ASYMMETRIC AT ATOMIC AUTHORIZATION AVG BEFORE BEGIN
BETWEEN BIGINT BINARY BIT BIT_LENGTH BLOB BOOLEAN BOTH BREADTH BY CALL
CALLED CASCADE CASCADED CASE CAST CATALOG CHAR CHARACTER CHARACTER_LENGTH
CHAR_LENGTH CHECK CLOB CLOSE COALESCE COLLATE COLLATION COLUMN COMMIT
CONDITION CONNECT CONNECTION CONSTRAINT CONSTRAINTS CONSTRUCTOR CONTAINS
CONTINUE CONVERT CORRESPONDING COUNT CREATE CROSS CUBE CURRENT CURRENT_DATE
CURRENT_DEFAULT_TRANSFORM_GROUP CURRENT_PATH CURRENT_ROLE CURRENT_TIME
CURRENT_TIMESTAMP CURRENT_TRANSFORM_GROUP_FOR_TYPE CURRENT_USER CURSOR
CYCLE DATA DATE DAY DEALLOCATE DEC DECIMAL DECLARE DEFAULT DEFERRABLE
DEFERRED DELETE DEPTH DEREF DESC DESCRIBE DESCRIPTOR DETERMINISTIC
DIAGNOSTICS DISCONNECT DISTINCT DO DOMAIN DOUBLE DROP DYNAMIC EACH ELEMENT
ELSE ELSEIF END EPOCH EQUALS ESCAPE EXCEPT EXCEPTION EXEC EXECUTE EXISTS
EXIT EXTERNAL EXTRACT FALSE FETCH FILTER FIRST FLOAT FOR FOREIGN FOUND FREE
FROM FULL FUNCTION GENERAL GET GLOBAL GO GOTO GRANT GROUP GROUPING HANDLER
HAVING HOLD HOUR IDENTITY IF IMMEDIATE IN INDICATOR INITIALLY INNER INOUT
INPUT INSENSITIVE INSERT INT INTEGER INTERSECT INTERVAL INTO IS ISOLATION
ITERATE JOIN KEY LANGUAGE LARGE LAST LATERAL LEADING LEAVE LEFT LEVEL LIKE
LIMIT LOCAL LOCALTIME LOCALTIMESTAMP LOCATOR LOOP LOWER MAP MATCH MAX
MEMBER MERGE METHOD MIN MINUTE MODIFIES MODULE MONTH MULTISET NAMES
NATIONAL NATURAL NCHAR NCLOB NEW NEXT NO NONE NOT NULL NULLIF NUMERIC
OBJECT OCTET_LENGTH OF OLD ON ONLY OPEN OPTION OR ORDER ORDINALITY OUT
OUTER OUTPUT OVER OVERLAPS PAD PARAMETER PARTIAL PARTITION PATH POSITION
PRECISION PREPARE PRESERVE PRIMARY PRIOR PRIVILEGES PROCEDURE PUBLIC RANGE
READ READS REAL RECURSIVE REF REFERENCES REFERENCING RELATIVE RELEASE
REPEAT RESIGNAL RESTRICT RESULT RETURN RETURNS REVOKE RIGHT ROLE ROLLBACK
ROLLUP ROUTINE ROW ROWS PIVOT UNPIVOT XMLTABLE XMLSEQUENCE XMLQUERY
SAVEPOINT SCHEMA SCOPE SCROLL SEARCH SECOND SECTION SEQUENCE
SELECT SENSITIVE SESSION SESSION_USER SET SETS SIGNAL SIMILAR SIZE SMALLINT
SOME SPACE SPECIFIC SPECIFICTYPE SQL SQLCODE SQLERROR SQLEXCEPTION SQLSTATE
SQLWARNING START STATE STATIC SUBMULTISET SUBSTRING SUM SYMMETRIC SYSTEM
SYSTEM_USER TABLE TABLESAMPLE TEMPORARY TEXT THEN TIME TIMESTAMP
TIMEZONE_HOUR TIMEZONE_MINUTE TINYINT TO TRAILING TRANSACTION TRANSLATE
TRANSLATION TREAT TRIGGER TRIM TRUE UNDER UNDO UNION UNIQUE UNKNOWN UNNEST
UNTIL UPDATE UPPER USAGE USER USING VALUE VALUES VARCHAR VARYING VIEW WHEN
WHENEVER WHERE WHILE WINDOW WITH WITHIN WITHOUT WORK WRITE YEAR ZONE
);

sub new {
my ($class, %options) = @_;

my $self = bless { %options }, $class;

# Set some defaults.
$self->{query} = '' unless defined($self->{query});
$self->{spaces} = 4 unless defined($self->{spaces});
$self->{space} = ' ' unless defined($self->{space});
$self->{break} = "\n" unless defined($self->{break});
$self->{wrap} = {} unless defined($self->{wrap});
$self->{keywords} = [] unless defined($self->{keywords});
$self->{rules} = {} unless defined($self->{rules});
$self->{uc_keywords} = 0 unless defined $self->{uc_keywords};

push @{$self->{keywords}}, KEYWORDS;

# Initialize internal stuff.
$self->{_level} = 0;

return $self;
}

# Add more SQL.
sub add {
my ($self, $addendum) = @_;

$addendum =~ s/^\s*/ /;

$self->{query} .= $addendum;
}

# Set SQL to beautify.
sub query {
my ($self, $query) = @_;

$self->{query} = $query if(defined($query));

return $self->{query};
}

# Beautify SQL.
sub beautify {
my ($self) = @_;

$self->{_output} = '';
$self->{_level_stack} = [];
$self->{_new_line} = 1;

my $last;

$self->{_tokens} = [ OraTokenizer->tokenize($self->query, 1) ];

while(defined(my $token = $self->_token)) {
my $rule = $self->_get_rule($token);

# Allow custom rules to override defaults.
if($rule) {
$self->_process_rule($rule, $token);
}

elsif($token eq '(') {
$self->_add_token($token);
$self->_new_line;
push @{$self->{_level_stack}}, $self->{_level};
$self->_over unless $last and uc($last) eq 'WHERE';
}

elsif($token eq ')') {
$self->_new_line;
$self->{_level} = pop(@{$self->{_level_stack}}) || 0;
$self->_add_token($token);
$self->_new_line;
}

elsif($token eq ',') {
$self->_add_token($token);
$self->_new_line;
}

elsif($token eq ';') {
$self->_add_token($token);
$self->_new_line;

# End of statement; remove all indentation.
@{$self->{_level_stack}} = ();
$self->{_level} = 0;
}

elsif($token =~ /^(?:SELECT|FROM|WHERE|HAVING)$/i) {
$self->_back unless $last and $last eq '(';
$self->_new_line;
$self->_add_token($token);
$self->_new_line if($self->_next_token and $self->_next_token ne '(');
$self->_over;
}

elsif($token =~ /^(?:GROUP|ORDER|LIMIT)$/i) {
$self->_back;
$self->_new_line;
$self->_add_token($token);
}

elsif($token =~ /^(?:BY)$/i) {
$self->_add_token($token);
$self->_new_line;
$self->_over;
}

elsif($token =~ /^(?:UNION|INTERSECT|EXCEPT)$/i) {
$self->_new_line;
$self->_add_token($token);
$self->_new_line;
}

elsif($token =~ /^(?:LEFT|RIGHT|INNER|OUTER|CROSS)$/i) {
$self->_back;
$self->_new_line;
$self->_add_token($token);
$self->_over;
}

elsif($token =~ /^(?:JOIN)$/i) {
if($last and $last !~ /^(?:LEFT|RIGHT|INNER|OUTER|CROSS)$/) {
$self->_new_line;
}

$self->_add_token($token);
}

elsif($token =~ /^(?:AND|OR)$/i) {
$self->_new_line;
$self->_add_token($token);
$self->_new_line;
}

else {
$self->_add_token($token, $last);
}

$last = $token;
}

$self->_new_line;

$self->{_output};
}

# Add a token to the beautified string.
sub _add_token {
my ($self, $token, $last_token) = @_;

if($self->{wrap}) {
my $wrap;

if($self->_is_keyword($token)) {
$wrap = $self->{wrap}->{keywords};
}
elsif($self->_is_constant($token)) {
$wrap = $self->{wrap}->{constants};
}

if($wrap) {
$token = $wrap->[0] . $token . $wrap->[1];
}
}

my $last_is_dot =
defined($last_token) && $last_token eq '.';

if(!$self->_is_punctuation($token) and !$last_is_dot) {
$self->{_output} .= $self->_indent;
}

# uppercase keywords
$token = uc $token
if $self->_is_keyword($token) and $self->{uc_keywords};

$self->{_output} .= $token;

# This can't be the beginning of a new line anymore.
$self->{_new_line} = 0;
}

# Increase the indentation level.
sub _over {
my ($self) = @_;

++$self->{_level};
}

# Decrease the indentation level.
sub _back {
my ($self) = @_;

--$self->{_level} if($self->{_level} > 0);
}

# Return a string of spaces according to the current indentation level and the
# spaces setting for indenting.
sub _indent {
my ($self) = @_;

if($self->{_new_line}) {
return $self->{space} x ($self->{spaces} * $self->{_level});
}
else {
return $self->{space};
}
}

# Add a line break, but make sure there are no empty lines.
sub _new_line {
my ($self) = @_;

$self->{_output} .= $self->{break} unless($self->{_new_line});
$self->{_new_line} = 1;
}

# Have a look at the token that's coming up next.
sub _next_token {
my ($self) = @_;

return @{$self->{_tokens}} ? $self->{_tokens}->[0] : undef;
}

# Get the next token, removing it from the list of remaining tokens.
sub _token {
my ($self) = @_;

return shift @{$self->{_tokens}};
}

# Check if a token is a known SQL keyword.
sub _is_keyword {
my ($self, $token) = @_;

return ~~ grep { $_ eq uc($token) } @{$self->{keywords}};
}

# Add new keywords to highlight.
sub add_keywords {
my $self = shift;

for my $keyword (@_) {
push @{$self->{keywords}}, ref($keyword) ? @{$keyword} : $keyword;
}
}

# Add new rules.
sub add_rule {
my ($self, $format, $token) = @_;

my $rules = $self->{rules} ||= {};
my $group = $rules->{$format} ||= [];

push @{$group}, ref($token) ? @{$token} : $token;
}

# Find custom rule for a token.
sub _get_rule {
my ($self, $token) = @_;

values %{$self->{rules}}; # Reset iterator.

while(my ($rule, $list) = each %{$self->{rules}}) {
return $rule if(grep { uc($token) eq uc($_) } @$list);
}

return undef;
}

sub _process_rule {
my ($self, $rule, $token) = @_;

my $format = {
break => sub { $self->_new_line },
over => sub { $self->_over },
back => sub { $self->_back },
token => sub { $self->_add_token($token) },
push => sub { push @{$self->{_level_stack}}, $self->{_level} },
pop => sub { $self->{_level} = pop(@{$self->{_level_stack}}) || 0 },
reset => sub { $self->{_level} = 0; @{$self->{_level_stack}} = (); },
};

for(split /-/, lc $rule) {
&{$format->{$_}} if($format->{$_});
}
}

# Check if a token is a constant.
sub _is_constant {
my ($self, $token) = @_;

return ($token =~ /^\d+$/ or $token =~ /^(['"`]).*\1$/);
}

# Check if a token is punctuation.
sub _is_punctuation {
my ($self, $token) = @_;

return ($token =~ /^[,;.]$/);
}

1;

=pod

=head1 NAME

OraBeautify - Beautify SQL statements by adding line breaks and indentation

=head1 SYNOPSIS

my $sql = OraBeautify->new;

$sql->query($sql_query);

my $nice_sql = $sql->beautify;

=head1 DESCRIPTION

Beautifies SQL statements by adding line breaks and indentation.

=head1 METHODS

=over 4

=item B<new>(query => '', spaces => 4, space => ' ', break => "\n", wrap => {})

Constructor. Takes a few options.

=over 4

=item B<query> => ''

Initialize the instance with a SQL string. Defaults to an empty string.

=item B<spaces> => 4

Number of spaces that make one indentation level. Defaults to 4.

=item B<space> => ' '

A string that is used as space. Default is an ASCII space character.

=item B<break> => "\n"

String that is used for linebreaks. Default is "\n".

=item B<wrap> => {}

Use this if you want to surround certain tokens with markup stuff. Known token
types are "keywords" and "constants" for now. The value of each token type
should be an array with two elements, one that is placed before the token and
one that is placed behind it. For example, to make keywords red using terminal
color escape sequences:

{ keywords => [ "\x1B[0;31m", "\x1B[0m" ] }

=item B<uc_keywords> => 1|0

When true (1) all SQL keywords will be uppercased in output. Default is false (0).

=back

=item B<add>($more_sql)

Appends another chunk of SQL.

=item B<query>($query)

Sets the query to the new query string. Overwrites anything that was added with
prior calls to B<query> or B<add>.

=item B<beautify>

Beautifies the internally saved SQL string and returns the result.

=item B<add_keywords>($keyword, $another_keyword, \@more_keywords)

Add any number of keywords or arrays of keywords to highlight.

=item B<add_rule>($rule, $token)

Add a custom formatting rule. The first argument is the rule, a string
containing one or more commands (explained below), separated by dashes. The
second argument may be either a token (string) or a list of strings. Tokens are
grouped by rules internally, so you may call this method multiple times with
the same rule string and different tokens, and the rule will apply to all of
the tokens.

The following formatting commands are known at the moment:

=over 4

=item B<token> - insert the token this rule applies to

=item B<over> - increase indentation level

=item B<back> - decrease indentation level

=item B<break> - insert line break

=item B<push> - push current indentation level to an internal stack

=item B<pop> - restore last indentation level from the stack

=item B<reset> - reset internal indentation level stack

=back

B<push>, B<pop> and B<reset> should be rarely needed.

B<NOTE>:
Custom rules override default rules. Some default rules do things that
can't be done using custom rules, such as changing the format of a token
depending on the last or next token.

B<NOTE>:
I'm trying to provide sane default rules. If you find that a custom
rule of yours would make more sense as a default rule, please create a ticket.

=back

=head1 BUGS

Needs more tests.

Please report bugs in the CPAN bug tracker.

This module is not complete (known SQL keywords, special formatting of
keywords), so if you want to see something added, just send me a patch.

=head1 COPYRIGHT

Copyright (C) 2009 by Jonas Kramer. Published under the terms of the Artistic
License 2.0.

=cut

########################################
package main;
use strict;

open (SQL, "<", $ARGV[0]) || die ('File not found!');
my $query = join("\n",<SQL>);

my $beautifier = OraBeautify->new;
$beautifier -> add_keywords(qw{
pivot unpivot
model dimension measures rules
xmltable xmlsequence columns});

$beautifier->query(
$query,
spaces => 4,
space => ' ',
break => "\n",
wrap => {'$','$'}
);

my $nice_sql = $beautifier->beautify;

print $nice_sql ."\n";
__END__

[/sourcecode]


3. Create an empty "tmp" directory within $SQL_PATH.
4. Now you can use it, for example, as I did in sql_textf.sql:
sql_textf.sql

[sourcecode language="sql"]
set timing off head off
col qtext format a150
prompt ################################ Original query text: ################################################;
spool tmp/to_format.sql
select
coalesce(
(select sql_fulltext from v$sqlarea a where a.sql_id='&1')
, (select sql_text from dba_hist_sqltext a where a.sql_id='&1' and dbid=(select dbid from v$database))
) qtext
from dual
;
spool off

prompt ################################ Formatted query text #################################################;
host perl inc/sql_format_standalone.pl tmp/to_format.sql
prompt ################################ Formatted query text End #############################################;
set termout on head on
[/sourcecode]
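The overall approach of the Perl script above (tokenize the SQL, then emit line breaks around clause keywords) can be illustrated with a toy Python sketch. This is only an illustration of the idea; the real module handles far more cases (comments, strings, nested parentheses, indentation levels):

```python
import re

# Toy tokenize-then-break beautifier: split the statement into tokens,
# then start a new line before each major clause keyword.
TOKEN_RE = re.compile(r"--[^\n]*|'(?:[^']|'')*'|[\w$#.]+|[(),;]|\S")
BREAK_BEFORE = {"SELECT", "FROM", "WHERE", "GROUP", "ORDER", "HAVING"}

def beautify(sql):
    out, line = [], []
    for tok in TOKEN_RE.findall(sql):
        if tok.upper() in BREAK_BEFORE and line:
            out.append(" ".join(line))  # flush the current line
            line = []
        line.append(tok)
    out.append(" ".join(line))
    return "\n".join(out)

print(beautify("select a, b from t where a=1 order by b"))
```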


I also use it in other scripts, like sqlid.sql:

[sourcecode language="sql"]
@inc/input_vars_init;

REM ############### COMMON FORMATTING #######################
col SQL_ID for a13
col sql_child_number head CH# for 999
col SQL_PROFILE head PROFILE for a19
-----------------------------------------
— params check:
set termout off timing off
def _sqlid=&1
col _child new_val _child noprint
select
case
when translate('&2','x0123456789','x') is null
then nvl('&2','%')
else '%'
end "_CHILD"
from dual;
-----------------------------------------
set termout on

prompt ####################################################################################################;
prompt # Show SQL text, child cursors and execution stats for SQLID &1 child &2
prompt ####################################################################################################;

REM ################### SHOW SQL TEXT ############################
@sql_textf &_sqlid

REM ################### SHOW V$SQL ##############################
col proc_name for a30
col P_schema for a20

select
s.sql_id
,s.CHILD_NUMBER sql_child_number
,s.address parent_handle
,s.child_address object_handle
,s.PLAN_HASH_VALUE plan_hv
,s.hash_value hv
,s.SQL_PROFILE sql_profile
,decode(s.EXECUTIONS,0,0, s.ELAPSED_TIME/1e6/s.EXECUTIONS) elaexe
,s.EXECUTIONS cnt
,s.FETCHES fetches
,s.END_OF_FETCH_COUNT end_of_fetch_count
,s.FIRST_LOAD_TIME first_load_time
,s.PARSE_CALLS parse_calls
,decode(s.executions,0,0, s.DISK_READS /s.executions) disk_reads
,decode(s.executions,0,0, s.BUFFER_GETS /s.executions) buffer_gets
,decode(s.executions,0,0, s.DIRECT_WRITES /s.executions) direct_writes
,decode(s.executions,0,0, s.APPLICATION_WAIT_TIME/1e6/s.executions) app_wait
,decode(s.executions,0,0, s.CONCURRENCY_WAIT_TIME/1e6/s.executions) concurrency
,decode(s.executions,0,0, s.USER_IO_WAIT_TIME /1e6/s.executions) io_wait
,decode(s.executions,0,0, s.PLSQL_EXEC_TIME /1e6/s.executions) plsql_t
,decode(s.executions,0,0, s.java_exec_time /1e6/s.executions) java_exec_t
,s.ROWS_PROCESSED row_processed
,s.OPTIMIZER_MODE opt_mode
,s.OPTIMIZER_COST cost
,s.OPTIMIZER_ENV_HASH_VALUE env_hash
,s.PARSING_SCHEMA_NAME P_schema
,decode(s.executions,0,0, s.CPU_TIME/1e6/s.executions) CPU_TIME
,s.PROGRAM_ID
,(select object_name from dba_objects o where o.object_id=s.PROGRAM_ID) proc_name
,s.PROGRAM_LINE# proc_line
from v$sql s
where
sql_id = ('&_sqlid')
and child_number like '&_child'
order by
sql_id,
hash_value,
child_number
/
REM ##################### END V$SQL ##############################

REM ################### PLSQL OBJECT ##############################
col owner for a10
col object_name for a30
col text for a120

select
a.SQL_ID,a.SQL_PROFILE
,p.owner,p.object_name
,s.line
,rtrim(s.text,chr(10)) text
from
v$sqlarea a
left join dba_procedures p
on a.PROGRAM_ID=p.OBJECT_ID
left join dba_source s
on p.owner=s.owner
and p.OBJECT_NAME=s.name
and s.line between a.PROGRAM_LINE#-5 and a.PROGRAM_LINE#+5
where a.SQL_ID='&_sqlid'
/
REM ################### EXECUTIONS IN SQL_MONITOR ######################
@if "'&_O_RELEASE'>'11.2'" then

col error_message for a40
@rtsm/execs "&_sqlid" "&_child"

/* end if */
REM ########################### clearing ############################
col SQL_PROFILE clear
col owner clear
col object_name clear
col text clear
col error_message clear
@inc/input_vars_undef;
[/sourcecode]
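Note how every per-execution average in the V$SQL query is wrapped in decode(s.executions,0,0,…) to avoid dividing by zero for never-executed cursors. The same guard, sketched in Python:

```python
def per_exec(total, executions, scale=1e6):
    # decode(executions, 0, 0, total/scale/executions) in SQL terms:
    # never divide when there were no executions; microseconds -> seconds
    return 0 if executions == 0 else total / scale / executions

print(per_exec(3_000_000, 2))  # 1.5 seconds per execution
print(per_exec(3_000_000, 0))  # 0, not a division error
```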



SQL*Plus tips #4: Branching execution

Posted on April 17, 2013 by Sayan Malakshinov. Posted in oracle, SQL*Plus, SQL*Plus tips

Today I'll show a trick for branching the execution of SQL*Plus scripts.
I previously showed conditional execution of scripts, which can also be used for branching, but today I'll show how to do it without splitting the script into several smaller scripts, using a method different from conditional script execution.

It is very simple, as usual: if you want to execute only one part of a script, you can just comment out everything unnecessary. So, depending on a condition, we can execute a script that opens a comment.

Suppose we need to create a script that, depending on an input parameter, executes a specific query.
See how this can be done:
1. “test.sql”:

def param = &1

@if &param=1
   select 1 from dual;
/* end_if */

@if &param=2
   select 2 from dual;
/* end_if */

@if &param=3
   select 3 from dual;
/* end_if */

2. “if.sql”:

col do_next new_val do_next noprint;
select 
      case 
         when &1 then 'inc/null'
         else 'inc/comment_on'
      end as do_next
from dual;
@&do_next

3. "inc/comment_on.sql" contains only 2 characters:

/*

4. "inc/null.sql" is the same as in the previous examples: just an empty file.
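So the whole branching trick reduces to choosing which file SQL*Plus runs next: an empty script when the condition holds (the following lines execute), or one that opens a comment when it does not (the following lines are swallowed). A Python sketch of that dispatch, using the file names from this post:

```python
def next_script(condition):
    # if.sql picks "inc/null" (an empty file, so the code after @if runs)
    # when the condition holds, otherwise "inc/comment_on" (which contains
    # only "/*", commenting everything out until the next "*/")
    return "inc/null" if condition else "inc/comment_on"

print(next_script(1 == 1))   # inc/null
print(next_script(1 == 2))   # inc/comment_on
```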

OK, let's test it:

SQL> @test 1

         1
----------
         1


SQL> @test 2


         2
----------
         2


SQL> @test 3


         3
----------
         3

As you see, we got what we wanted. Please note that we must close the multiline comments only in the right places (/* end_if */), so we cannot use another "*/" inside these sections. You can, however, use it in child scripts.

In the same way we can make an analogue of switch/case:
“test2.sql”:

@switch &1

   @when 1 then
      select 1 from dual;
   /* end when */

   @when 2 then
      select 2 from dual;
   /* end when */

   @when 3 then
      select 3 from dual;
   /* end when */

/* end switch */

switch.sql

[sourcecode language="sql"]
def switch_param=&1
[/sourcecode]


when.sql

[sourcecode language="sql"]
col do_next new_val do_next noprint;
select
   case
      when &1 = &switch_param then 'inc/null'
      else 'inc/comment_on'
   end as do_next
from dual;
@&do_next
[/sourcecode]


Example:

SQL> @test2 2

         2
----------
         2

SQL> @test2 3

         3
----------
         3

SQL*Plus tips #3: Iterators and recursions

Posted on April 9, 2013 by Sayan Malakshinov. Posted in oracle, SQL*Plus, SQL*Plus tips

We are all used to iterators, recursion and branching in programming, but SQL*Plus has no such commands. Today I will show how to implement iteration and recursion.

Suppose we want to call a script 3 times. It would be convenient to do it like this:

@iterate 3 @some_script

It is very easy: we can call the script recursively, decrementing a variable while it is greater than zero; when the variable reaches zero, we call an empty "NULL" script.
iterate.sql:

-- initializing input parameters as I showed in previous tips:
@inc/input_params_init.sql;
-- for example max number of input parameters = 14:
def _INIT_PARAMS = '"&3" "&4" "&5" "&6" "&7" "&8" "&9" "&10" "&11" "&12" "&13" "&14" "&15" "&16"'
def _ITER = "&1"

prompt ------   iteration &_ITER start: "@&2  -----------;
set termout off


col _SCRIPT    new_val _SCRIPT   noprint;
col _CONTINUE  new_val _CONTINUE noprint;
col _PARAMS    new_val _PARAMS   noprint;
select 
   replace('"&3" "&4" "&5" "&6" "&7" "&8" "&9" "&10" "&11" "&12" "&13" "&14" "&15" "&16"'
          ,'${ITER}'
          ,'&_ITER'
          )          as "_PARAMS"
  ,case 
      when &1 > 0 then 'iterate '||(&1 - 1)
      else 'inc/null'
   end               as "_CONTINUE"
  ,case 
      when &1 > 0 then '&2' 
      else 'inc/null'
   end               as "_SCRIPT"
from dual;
set termout on

@&_script            &_PARAMS
@&_CONTINUE &_SCRIPT &_INIT_PARAMS

As you can see, I also added an option to pass the iteration number as an input parameter, so we can call it like: @iterate 3 script.sql ${ITER}
And null.sql in the "inc" directory is just an empty file.
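The counting-down recursion behind iterate.sql can be sketched in Python. Here script stands for the @&_script call, and only the ${ITER} substitution is modeled (the real script also forwards up to 14 parameters):

```python
def iterate(n, script, *params):
    # substitute ${ITER} with the current iteration number, like the
    # replace(...) over the parameter string in iterate.sql
    resolved = [str(n) if p == "${ITER}" else p for p in params]
    if n > 0:
        script(*resolved)
        iterate(n - 1, script, *params)  # recursive "tail call"

calls = []
iterate(3, lambda *a: calls.append(a), "1", "2", "${ITER}")
print(calls)   # [('1', '2', '3'), ('1', '2', '2'), ('1', '2', '1')]
```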

Let's test it:

SQL> $cat test.sql
select &1 a,'&2' b,'&3' c from dual;

SQL> @iterate 3 test.sql 1 2 3
------   iteration 3 start: "@test.sql  -----------

         A B C
---------- - -
         1 2 3

------   iteration 2 start: "@test.sql  -----------

         A B C
---------- - -
         1 2 3

------   iteration 1 start: "@test.sql  -----------

         A B C
---------- - -
         1 2 3

------   iteration 0 start: "@test.sql  -----------

SQL> @iterate 2 test.sql 1 2 ${ITER}
------   iteration 2 start: "@test.sql  -----------

         A B C
---------- - -
         1 2 2

------   iteration 1 start: "@test.sql  -----------

         A B C
---------- - -
         1 2 1

------   iteration 0 start: "@test.sql  -----------

Note that the last line, "iteration 0 start", just means it is the final, empty call. I did this only to show how we can do postprocessing after all iterations.
It works well, and with a small modification we can also add an option to call such a script with a list of values:

final iterate.sql

[sourcecode language="sql"]
@inc/input_params_init.sql;

set termout off
def _INIT_PARAMS = '"&3" "&4" "&5" "&6" "&7" "&8" "&9" "&10" "&11" "&12" "&13" "&14" "&15" "&16"'
col _ITER_CURR new_val _ITER_CURR noprint
col _ITER_NEXT new_val _ITER_NEXT noprint
select
   case
      -- simple N iterations:
      when translate('&1','x0123456789','x') is null
      then '&1'
      -- list iteration:
      when substr('&1',1,1)='('
      then decode( instr('&1',',')
                  ,0,substr('&1',2,length('&1')-2)
                  ,substr('&1',2,instr('&1',',')-2)
                 )
   end "_ITER_CURR",
   case
      when translate('&1','x0123456789','x') is null
      then
         case
            when '&1'>1 then 'iterate '||('&1' - 1)
            else 'inc/null'
         end
      when substr('&1',1,1)='('
      then
         case
            when instr('&1',',')=0 or '&1'='()' then 'inc/null'
            else 'iterate '
                 ||'('
                 ||decode( instr('&1',',')
                          ,0,')'
                          ,substr('&1',instr('&1',',')+1)
                         )
         end
   end "_ITER_NEXT"
from dual;

set termout on
prompt ****** iteration &_ITER_CURR start: "@&2 ***********;
set termout off

col _SCRIPT new_val _SCRIPT noprint;
col _PARAMS new_val _PARAMS noprint;
select
   replace('"&3" "&4" "&5" "&6" "&7" "&8" "&9" "&10" "&11" "&12" "&13" "&14" "&15" "&16"'
          ,'${ITER}'
          ,'&_ITER_CURR'
          ) as "_PARAMS"
  ,case
      when nvl('&_ITER_CURR',0) in ('0','()') then 'inc/null'
      else '&2'
   end as "_SCRIPT"
from dual;
set termout on

@&_script &_PARAMS
@&_ITER_NEXT &_SCRIPT &_INIT_PARAMS
[/sourcecode]


See how it works:

-- iterate by list of values  (7,3,10):
SQL> @iterate (7,3,10) test.sql 1 2 ${ITER}
******   iteration 7 start: "@test.sql  ***********

         A B C
---------- - -
         1 2 7

******   iteration 3 start: "@test.sql  ***********

         A B C
---------- - -
         1 2 3

******   iteration 10 start: "@test.sql  ***********

         A B C
---------- - --
         1 2 10

-- now with simple 2 iterations:
SQL> @iterate 2 test.sql 1 2 ${ITER}
******   iteration 2 start: "@test.sql  ***********

         A B C
---------- - -
         1 2 2

******   iteration 1 start: "@test.sql  ***********

         A B C
---------- - -
         1 2 1

So if you call the script as "iterate N ..." it will iterate N times, and if you call it as "iterate (X,Y,...,Z)" it will iterate over that list.
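The recursion trick that drives iterate.sql can be reduced to a minimal self-calling sketch (countdown.sql is a hypothetical file name; inc/null.sql is an empty script, as in the article):

[sourcecode language="sql"]
-- countdown.sql: prints N, N-1, ..., 1 by re-invoking itself
set termout off
col _next new_value _NEXT noprint
-- decide which script to run next: ourselves with N-1, or the empty inc/null:
select case when &1 > 1 then 'countdown '||(&1 - 1) else 'inc/null' end "_next"
from dual;
set termout on
prompt iteration &1
@&_NEXT
[/sourcecode]

Called as "@countdown 3", it should print the prompts for iterations 3, 2 and 1.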

PS. I will write about branching later...

SQL*Plus

SQL*Plus tips. #2

Posted on April 2, 2013 by Sayan Malakshinov Posted in oracle, SQL*Plus, SQL*PLus tips 3,516 Page views 10 Comments

I think you know the famous print_table procedure by Tom Kyte. It is really great, but a little heavyweight (it has to be created as a procedure or placed in an anonymous block), and in recent Oracle versions we can do the same with one simple query using xmltable/xmlsequence:

SQL> select *
  2  from
  3     xmltable( '/ROWSET/ROW/*'
  4               passing xmltype(cursor(select * from hr.employees where rownum<3))
  5               columns
  6                  col varchar2(100) path 'name()'
  7                 ,val varchar2(100) path '.'
  8        );

COL                            VAL
------------------------------ ------------------------------------------------------
EMPLOYEE_ID                    198
FIRST_NAME                     Donald
LAST_NAME                      OConnell
EMAIL                          DOCONNEL
PHONE_NUMBER                   650.507.9833
HIRE_DATE                      21-JUN-07
JOB_ID                         SH_CLERK
SALARY                         2600
MANAGER_ID                     124
DEPARTMENT_ID                  50
EMPLOYEE_ID                    199
FIRST_NAME                     Douglas
LAST_NAME                      Grant
EMAIL                          DGRANT
PHONE_NUMBER                   650.507.9844
HIRE_DATE                      13-JAN-08
JOB_ID                         SH_CLERK
SALARY                         2600
MANAGER_ID                     124
DEPARTMENT_ID                  50

20 rows selected.

It is very easy, but for convenience we need to add a row number:

SQL> select *
  2  from
  3     xmltable( 'for $a at $i in /ROWSET/ROW
  4                   ,$r in $a/*
  5                     return element ROW{
  6                                       element ROW_NUM{$i}
  7                                      ,element COL_NAME{$r/name()}
  8                                      ,element COL_VALUE{$r/text()}
  9                                      }'
 10               passing xmltype(cursor(select * from hr.employees where rownum<3))
 11               columns
 12                  row_num   int
 13                 ,col_name  varchar2(30)
 14                 ,col_value varchar2(100)
 15        );

   ROW_NUM COL_NAME                       COL_VALUE
---------- ------------------------------ ------------------------------------------
         1 EMPLOYEE_ID                    198
         1 FIRST_NAME                     Donald
         1 LAST_NAME                      OConnell
         1 EMAIL                          DOCONNEL
         1 PHONE_NUMBER                   650.507.9833
         1 HIRE_DATE                      21-JUN-07
         1 JOB_ID                         SH_CLERK
         1 SALARY                         2600
         1 MANAGER_ID                     124
         1 DEPARTMENT_ID                  50
         2 EMPLOYEE_ID                    199
         2 FIRST_NAME                     Douglas
         2 LAST_NAME                      Grant
         2 EMAIL                          DGRANT
         2 PHONE_NUMBER                   650.507.9844
         2 HIRE_DATE                      13-JAN-08
         2 JOB_ID                         SH_CLERK
         2 SALARY                         2600
         2 MANAGER_ID                     124
         2 DEPARTMENT_ID                  50

20 rows selected.

Now we can create a simple script for it with formatting:

print_table.sql

[sourcecode language="sql"]
-- show output
set termout on
-- but without echo
set echo off
-- without newpage on start:
set embedded on
-- scrolling control
set pause on
-- two lines between rows:
set newpage 2
-- text for prompt after each page:
set pause "Press Enter to view next row..."
-- new page on new "row_num"
break on row_num skip page

-- main query:
select *
from
   xmltable( 'for $a at $i in /ROWSET/ROW
                 ,$r in $a/*
               return element ROW{
                          element ROW_NUM{$i}
                         ,element COL_NAME{$r/name()}
                         ,element COL_VALUE{$r/text()}
                         }'
             passing xmltype(cursor( &1 ))
             columns
                row_num   int
               ,col_name  varchar2(30)
               ,col_value varchar2(100)
      );
-- disabling pause and breaks:
set pause off
clear breaks
[/sourcecode]

[collapse]

Usage example:print_table example
As you can see, the script works fine, but it requires passing the query as a parameter, which is sometimes inconvenient. For example, we may have already run a query and only then decide to display its result with print_table. In that case we can create a script using the tricks from the previous part:
print_last.sql

[sourcecode language="sql"]
store set settings.sql replace
-- saving previous query:
save tmp.sql replace

-- OS-dependent removal of the trailing slash from the file, choose one:
-- 1. for *nix through head:
!head -1 tmp.sql >tmp2.sql

-- 2. for *nix through grep:
--!grep -v "/" tmp.sql >tmp2.sql

-- 3. for windows without grep and head:
--$cmd /C findstr /v /C:"/" tmp.sql > tmp2.sql

-- 4. for windows with "head" (eg from cygwin):
--$cmd /C head -1 tmp.sql > tmp2.sql

-- 5. for windows with "grep":
--$cmd /C grep -v "/" tmp.sql > tmp2.sql

-- same settings as in print_table:
set termout on echo off embedded on pause on newpage 2
set pause "Press Enter to view next row..."
break on row_num skip page

-- main query:
select *
from
   xmltable( 'for $a at $i in /ROWSET/ROW
                 ,$r in $a/*
               return element ROW{
                          element ROW_NUM{$i}
                         ,element COL_NAME{$r/name()}
                         ,element COL_VALUE{$r/text()}
                         }'
             passing dbms_xmlgen.getxmltype(
                        q'[
@tmp2.sql
                        ]'
                     )
             columns
                row_num   int
               ,col_name  varchar2(30)
               ,col_value varchar2(100)
      );
-- disabling pause and breaks:
set pause off
clear breaks
@settings.sql
[/sourcecode]

[collapse]

Example:

[sourcecode language="sql"]
SQL> select * from hr.employees where rownum<3;

EMPLOYEE_ID FIRST_NAME LAST_NAME EMAIL    PHONE_NUMBER HIRE_DATE JOB_ID       SALARY COMMISSION_PCT MANAGER_ID DEPARTMENT_ID
----------- ---------- --------- -------- ------------ --------- -------- ---------- -------------- ---------- -------------
        198 Donald     OConnell  DOCONNEL 650.507.9833 21-JUN-07 SH_CLERK       2600                       124            50
        199 Douglas    Grant     DGRANT   650.507.9844 13-JAN-08 SH_CLERK       2600                       124            50

Elapsed: 00:00:00.01
SQL> @print_last
Wrote file settings.sql
Wrote file tmp.sql

   ROW_NUM COL_NAME                       COL_VALUE
---------- ------------------------------ ----------------------------------------------------------------------------------------------------
         1 EMPLOYEE_ID                    198
           FIRST_NAME                     Donald
           LAST_NAME                      OConnell
           EMAIL                          DOCONNEL
           PHONE_NUMBER                   650.507.9833
           HIRE_DATE                      21-JUN-07
           JOB_ID                         SH_CLERK
           SALARY                         2600
           MANAGER_ID                     124
           DEPARTMENT_ID                  50
Press Enter to view next row...
[/sourcecode]

[collapse]

PS. If you use xmltype(cursor(...)) on versions below 11.2.0.3, you may get errors related to XML query rewrite. In this case you need to disable it:

alter session set events '19027 trace name context forever, level 0x1';

Update:
Vladimir Przyjalkowski rightly pointed out that this approach will be suboptimal for large amounts of data:

A nice toy, but i’m sure it will not tolerate big data.

That is absolutely true, because "xmltype(cursor(...))" first aggregates all the data into an xmltype, and only then does xmltable return rows.
But I use print_table for small amounts of data only, and when I want to see a sample of the data, I usually add a rownum limitation, which is also convenient because it automatically switches the optimizer to first-K-rows mode when the parameter "_optimizer_rownum_pred_based_fkr" is enabled (it is by default):

SQL> @param_ rownum

NAME                               VALUE   DEFLT  TYPE     DESCRIPTION
---------------------------------- ------- ------ -------- ------------------------------------------------------
_optimizer_rownum_bind_default     10      TRUE   number   Default value to use for rownum bind
_optimizer_rownum_pred_based_fkr   TRUE    TRUE   boolean  enable the use of first K rows due to rownum predicate
_px_rownum_pd                      TRUE    TRUE   boolean  turn off/on parallel rownum pushdown optimization
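For example, such a sample-limited call of the print_table script from above might look like this:

[sourcecode language="sql"]
SQL> @print_table "select * from hr.employees where rownum <= 5"
[/sourcecode]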

However, we can easily change our query to make it work well with a lot of data too:

select row_num
      ,t2.*
from 
   (select rownum row_num
         , column_value x
    from table(xmlsequence(cursor( &1 )))
   ) t1
  ,xmltable( '/ROW/*'
             passing t1.x
             columns
                col_num for ordinality
               ,col_name  varchar2(30) path 'name()'
               ,col_value varchar2(100) path '.'
      )(+) t2;

Let's test it with a pipelined function that pipes rows infinitely and logs the count of fetched rows:

-- drop table xt_log purge;
-- drop function f_infinite;

create table xt_log(n int);

create function f_infinite 
   return sys.ku$_objnumset pipelined 
as
   i int:=0;
   pragma autonomous_transaction;
begin
   loop 
      i:=i+1;
      insert into xt_log values(i);
      commit;
      pipe row(i);
   end loop;

exception 
   when NO_DATA_NEEDED then
      commit;
end;
/

New version of print_table: print_table2.sql

[sourcecode language="sql"]
-- set a small arraysize, so sqlplus does not fetch extra rows:
set arraysize 2
-- show output
set termout on
-- but without echo
set echo off
-- without newpage on start:
set embedded on
-- scrolling control
set pause on
-- two lines between rows:
set newpage 2
-- text for prompt after each page:
set pause "Press Enter to view next row..."
-- new page on new "row_num"
break on row_num skip page

-- main query:
select row_num
      ,t2.*
from
   (select rownum row_num
         , column_value x
    from table(xmlsequence(cursor( &1 )))
   ) t1
  ,xmltable( '/ROW/*'
             passing t1.x
             columns
                col_num for ordinality
               ,col_name  varchar2(30) path 'name()'
               ,col_value varchar2(100) path '.'
      )(+) t2;
-- disabling pause and breaks:
set pause off
clear breaks
[/sourcecode]

[collapse]

And here is the final test:

SQL> select * from xt_log;

no rows selected

SQL> @print_table2 "select * from table(f_infinite)"
Press Enter to view next row...


   ROW_NUM    COL_NUM COL_NAME                       COL_VALUE
---------- ---------- ------------------------------ -----------
         1          1 COLUMN_VALUE                   1
Press Enter to view next row...


   ROW_NUM    COL_NUM COL_NAME                       COL_VALUE
---------- ---------- ------------------------------ -----------
         2          1 COLUMN_VALUE                   2
Press Enter to view next row...


   ROW_NUM    COL_NUM COL_NAME                       COL_VALUE
---------- ---------- ------------------------------ -----------
         3          1 COLUMN_VALUE                   3
Press Enter to view next row...


   ROW_NUM    COL_NUM COL_NAME                       COL_VALUE
---------- ---------- ------------------------------ -----------
         4          1 COLUMN_VALUE                   4
Press Enter to view next row...

   ROW_NUM    COL_NUM COL_NAME                       COL_VALUE
         5          1 COLUMN_VALUE                   5
Press Enter to view next row...


6 rows selected.
-------------------------------
SQL> select * from xt_log;

         N
----------
         1
         2
         3
         4
         5
         6
         7

7 rows selected.

So we got what we wanted!

PS. Full scripts:
1. Simple: print_table.sql
2. For big queries: print_table2.sql

oracle SQL*Plus

SQL*Plus tips. #1

Posted on March 29, 2013 by Sayan Malakshinov Posted in oracle, SQL*Plus, SQL*PLus tips 3,709 Page views 5 Comments

If you use SQL*Plus, you most likely use input (substitution) parameters. And if you omit one of them, SQL*Plus will prompt for it, like this:

SQL> get test.sql
  1  select 'Input variable 1 = &1' from dual
  2  union all
  3  select 'Input variable 2 = &2' from dual
  4  union all
  5* select 'Input variable 3 = &3' from dual
SQL> @test var1 var2
Enter value for 3:

'INPUTVARIABLE1=VAR1'
-----------------------
Input variable 1 = var1
Input variable 2 = var2
Input variable 3 =

Elapsed: 00:00:00.01

That is all right if all variables are required, but what if we don't want to press Enter for every omitted variable or specify them all (especially if the script may be run in silent, non-interactive mode), and instead want to use default values for omitted variables, or the variables are simply optional?
Strictly speaking, there are many techniques for solving this; here are some of them:

Comma-separated params

[sourcecode language="sql"]
SQL> get test1
  1  col var1 new_value var1 noprint
  2  col var2 new_value var2 noprint
  3  col var3 new_value var3 noprint
  4  set termout off
  5  with any_splitting_technique as (
  6     select *
  7     from xmltable('ora:tokenize(.,",")[position()>1]'
  8                   passing ','||'&1'
  9                   columns
 10                      i for ordinality
 11                     ,"." varchar2(30)
 12          )
 13  )
 14  select
 15     nvl("1",'default1') var1
 16    ,nvl("2",'default2') var2
 17    ,nvl("3",'default3') var3
 18  from any_splitting_technique
 19  pivot (max(".") for i in (1,2,3))
 20  /
 21  set termout on
 22* prompt var1 = &var1, var2 = &var2, var3 = &var3;
SQL> @test1 1,2,3
var1 = 1, var2 = 2, var3 = 3
SQL> @test1 1,2
var1 = 1, var2 = 2, var3 = default3
[/sourcecode]

[collapse]

With SPOOL and DEFINE

[sourcecode language="sql"]
SQL> get test2
  1  set termout off
  2  spool tmp.sql
  3  def 1
  4  def 2
  5  def 3
  6  spool off
  7  col var1 new_value var1
  8  col var2 new_value var2
  9  col var3 new_value var3
 10  with tmp as (
 11     select '
 12  @tmp.sql
 13  ' params from dual
 14  )
 15  select
 16     nvl(regexp_substr(params,'DEFINE 1\s+ = "([^"]*)',1,1,'i',1),'default1') var1
 17   , nvl(regexp_substr(params,'DEFINE 2\s+ = "([^"]*)',1,1,'i',1),'default2') var2
 18   , nvl(regexp_substr(params,'DEFINE 3\s+ = "([^"]*)',1,1,'i',1),'default3') var3
 19  from tmp
 20  ;
 21  col var1 clear;
 22  col var2 clear;
 23  col var3 clear;
 24  set termout on
 25  prompt var1 = &var1, var2 = &var2, var3 = &var3;
 26  undef 1
 27  undef 2
 28* undef 3
 29  .
SQL> @test2 1 2 3
var1 = 1, var2 = 2, var3 = 3
SQL> @test2 1 2
var1 = 1, var2 = 2, var3 = default3
[/sourcecode]

[collapse]

The last example also shows a very useful way to read a file into a variable.
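In isolation, that file-into-variable trick looks like this (a sketch; file.txt is a hypothetical file name — SQL*Plus inlines a file when a buffer line starts with @, so its contents end up inside the string literal):

[sourcecode language="sql"]
set termout off
col content new_value FILE_CONTENT noprint
select '
@file.txt
' content from dual;
set termout on
prompt &FILE_CONTENT
[/sourcecode]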
But I think the best option for initializing parameters is the solution by Vladimir Begun:

SQL> get test3
  1  set termout off
  2  COLUMN 1 NEW_VALUE 1    noprint
  3  COLUMN 2 NEW_VALUE 2    noprint
  4  COLUMN 3 NEW_VALUE 3    noprint
  5  SELECT '' "1", '' "2", '' "3" FROM dual WHERE 1=0;
  6  SELECT nvl('&1','default1') "1"
  7       , nvl('&2','default2') "2"
  8       , nvl('&3','default3') "3"
  9    FROM dual;
 10  col var1 clear;
 11  col var2 clear;
 12  col var3 clear;
 13  set termout on
 14  prompt var1 = &1, var2 = &2, var3 = &3;
 15  undef 1
 16  undef 2
 17* undef 3
 18  .
SQL> @test3 1 2 3
var1 = 1, var2 = 2, var3 = 3
SQL> @test3 1 2
var1 = 1, var2 = 2, var3 = default3

So I can create two include files, to be executed at the start and at the end of all scripts. I created the directory "inc" for include files, with these files:

inc/s_begin.sql

[sourcecode language="sql"]
store set splus_restore.sql replace
set termout off
COLUMN 1 NEW_VALUE 1 noprint
COLUMN 2 NEW_VALUE 2 noprint
COLUMN 3 NEW_VALUE 3 noprint
COLUMN 4 NEW_VALUE 4 noprint
COLUMN 5 NEW_VALUE 5 noprint
COLUMN 6 NEW_VALUE 6 noprint
COLUMN 7 NEW_VALUE 7 noprint
COLUMN 8 NEW_VALUE 8 noprint
COLUMN 9 NEW_VALUE 9 noprint
COLUMN 10 NEW_VALUE 10 noprint
COLUMN 11 NEW_VALUE 11 noprint
COLUMN 12 NEW_VALUE 12 noprint
COLUMN 13 NEW_VALUE 13 noprint
COLUMN 14 NEW_VALUE 14 noprint
COLUMN 15 NEW_VALUE 15 noprint
COLUMN 16 NEW_VALUE 16 noprint

SELECT '' "1", '' "5", '' "9",  '' "13"
      ,'' "2", '' "6", '' "10", '' "14"
      ,'' "3", '' "7", '' "11", '' "15"
      ,'' "4", '' "8", '' "12", '' "16"
FROM dual
WHERE 1=0;
set termout on;
[/sourcecode]

[collapse]

and
inc/s_end.sql

[sourcecode language=”sql”]
undef 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
@splus_restore;
[/sourcecode]

[collapse]
And, as an example, get_index.sql:
get_index.sql

[sourcecode language="sql"]
@inc/s_begin;
col table_owner format a12
col column_name format a30
col index_owner format a12
col index_name  format a30

col "#" format 99
break on table_owner on table_name on index_owner on index_name on partition_name on mbytes on bytes on blocks
select
   ic.table_owner
  ,ic.table_name
  ,ic.index_owner
  ,ic.index_name
  ,s.partition_name
  ,round(s.bytes/1024/1024) mbytes
  ,s.blocks
  ,ic.column_position "#"
  ,decode(ic.column_position,1,'',' ,')||ic.column_name column_name
from dba_ind_columns ic
    ,dba_segments    s
where
      upper(ic.table_name)  like upper('&1')
  and upper(ic.table_owner) like nvl(upper('&2'),'%')
  and s.owner        = ic.index_owner
  and s.segment_name = ic.index_name
order by
   1,2,3,4,8
/
clear break;
col "#" clear;
@inc/s_end;
[/sourcecode]

[collapse]

Sample output

[sourcecode language="sql"]
SQL> @get_indexes wrh$%tab%stat xtender
Wrote file splus_restore.sql

no rows selected

Elapsed: 00:00:05.79
SQL> @get_indexes wrh$%undostat
Wrote file splus_restore.sql

TABLE_OWNER  TABLE_NAME     INDEX_OWNER  INDEX_NAME        PARTITION_NAME  MBYTES  BLOCKS   # COLUMN_NAME
------------ -------------- ------------ ----------------- -------------- ------- ------- --- --------------------
SYS          WRH$_UNDOSTAT  SYS          WRH$_UNDOSTAT_PK                       0      16   1 BEGIN_TIME
                                                                                           2  ,END_TIME
                                                                                           3  ,DBID
                                                                                           4  ,INSTANCE_NUMBER
[/sourcecode]

[collapse]
As you can see, I can omit the owner parameter, in which case it will search all schemas.

SQL*Plus

Example of controlling “direct path reads” decision through SQL profile hints (index_stats/table_stats)

Posted on March 18, 2013 by Sayan Malakshinov Posted in adaptive serial direct path reads, CBO, oracle, undocumented 2,842 Page views 1 Comment

Previously I showed a non-obvious example with the hint "INDEX_STATS("OWNER"."TABLE_NAME", "INDEX_NAME", scale, blocks=X, rows=Y)". Strictly speaking, I don't know exactly how the CBO calculates the number of index leaf blocks in that case: in those examples it was 1981 for "blocks=1, rows=50" and 49525 for "blocks=5, rows=10".
But I do know that with "INDEX_STATS("OWNER"."TABLE_NAME", "INDEX_NAME", scale, blocks=X)" I can set the exact number of blocks.
Also, those test cases didn't show where the decision changes, so today's test will show that.

UPD: btw, previously I wrote that _very_large_object_threshold is a percentage of _db_block_buffers, and Igor Usoltsev has tested and confirmed my guess.

Test code

[sourcecode language="sql" collapse="true"]
/** SQLPLUS example of controlling adaptive serial direct path reads through SQL profiles.
*/
set serverout on termout on timing off feed off;
clear scr;
def _IF_XT_RUNSTATS_EXISTS="--"
col if_xt_runstats_exists new_value _IF_XT_RUNSTATS_EXISTS noprint;
select decode(count(*),1,' ','--') if_xt_runstats_exists
from all_objects where object_name='XT_RUNSTATS' and rownum=1;
/**
 * Main test.
 * You can use it also in other clients,
 * but in that case you have to manually
 * set the substitution variable _IF_XT_RUNSTATS_EXISTS:
 * if you do not have this package - to "--",
 * otherwise - to a space (" ").
 * The latest version of XT_RUNSTATS is available at:
 * https://github.com/xtender/xt_runstats
 */

declare

   C_SQL_TEXT     constant varchar2(300):='SELECT SUM(A) FROM XT_IFFS T';
   C_PROFILE_NAME constant varchar2(30) :='PRF_ADPR_TEST';
   v_small_table_threshold       int;
   v_db_block_buffers            int;
   v_very_large_object_threshold int;
   v__db_cache_size              int;

   procedure exec(p varchar2) is
      e_table_is_not_created_yet exception;
      e_index_is_not_created_yet exception;
      pragma exception_init(e_table_is_not_created_yet, -942);
      pragma exception_init(e_index_is_not_created_yet, -1418);
   begin
      execute immediate p;
   exception
      when e_table_is_not_created_yet
        or e_index_is_not_created_yet
      then null;
   end;

   /** Creating table and setting stats */
   procedure create_table is
   begin
      exec('drop table xt_iffs purge');
      exec('create table xt_iffs as
            with gen as(
               select level a,mod(level,10) b,lpad(1,50,1) c
               from dual
               connect by level<=1e3
            )
            select gen.*
            from gen,gen gen2'
          );
      --dbms_stats.set_table_stats('','XT_IFFS',numrows => 1e6,numblks => 5e5,avgrlen => 800);
      dbms_stats.gather_table_stats('','XT_IFFS');
   end;

   /** Recreating index and setting stats */
   procedure create_index is
   begin
      exec('drop index ix_iffs');
      exec('create index ix_iffs on xt_iffs(a,b)');
      dbms_stats.set_index_stats('','IX_IFFS',numrows => 1e6,numlblks => 1e4);
   end;

   /** Setting index numblks for the query through a SQL profile */
   procedure set_numblks(p_numblks int) is
      e_profile_not_created_yet exception;
      pragma exception_init( e_profile_not_created_yet, -13833);
   begin
      begin
         dbms_sqltune.drop_sql_profile(C_PROFILE_NAME);
      exception when e_profile_not_created_yet then null;
      end;
      dbms_sqltune.import_sql_profile(
         sql_text => C_SQL_TEXT,
         profile  => sys.sqlprof_attr(
                        'INDEX_FFS(@"SEL$1" "T"@"SEL$1" ("XT_IFFS"."A"))'
                       ,'INDEX_STATS("'||user||'"."XT_IFFS", "IX_IFFS", scale, blocks='||p_numblks||')'
                     ),
         category    => 'DEFAULT',
         name        => C_PROFILE_NAME,
         force_match => false,
         replace     => true
      );
   end;

   procedure test( p_description varchar2
                 , p_numblks     int default null) is

      type t_seg_stat is
         table of number
         index by v$segstat_name.name%type;
      -- segment statistics for the index:
      cursor c_stats is
         select sn.name,nvl(st.value,0) value
         from v$segstat_name       sn
             ,v$segment_statistics st
         where
               sn.statistic#     = st.statistic#(+)
           and st.owner(+)       = user
           and st.object_name(+) = 'IX_IFFS';
      -- var for previous stats:
      v_pre   t_seg_stat;
      v_delta number;
      n       number;
   begin
      dbms_output.put_line('-');
      dbms_output.put_line('-');
      dbms_output.put_line(lpad('-',150,'-'));
      dbms_output.put_line(lpad('-',150,'-'));
      dbms_output.put_line( '### '
                          || p_description||': '
                          ||nvl(to_char(p_numblks),'default')||' blocks');
      dbms_output.put_line('-');

      create_index;
      -- if p_numblks is null then default stats are used
      if p_numblks is not null then
         set_numblks(p_numblks);
      end if;
      execute immediate C_SQL_TEXT into n;
      exec('alter system flush buffer_cache');
      -- saving segment statistics
      for r in c_stats loop
         v_pre(r.name) := r.value;
      end loop;

      &_IF_XT_RUNSTATS_EXISTS xt_runstats.init(p_latches => false);
      -- executing query
      execute immediate C_SQL_TEXT into n;
      &_IF_XT_RUNSTATS_EXISTS xt_runstats.snap;
      &_IF_XT_RUNSTATS_EXISTS xt_runstats.print(
      &_IF_XT_RUNSTATS_EXISTS    p_stats_mask => '(reads (cache|direct)\.)|index fast full scans \((full|direct)'
      &_IF_XT_RUNSTATS_EXISTS   ,p_sta_diff_pct => 0);

      -- printing segment stats delta:
      for r in c_stats loop
         v_delta:= r.value - v_pre(r.name);
         if v_delta!=0 then
            dbms_output.put_line( rpad(r.name,40,'.')||v_delta );
         end if;
      end loop;
   end;

   procedure load_and_print_params is
   begin
      select
         max(decode(a.ksppinm, '_small_table_threshold'       ,b.ksppstvl))
        ,max(decode(a.ksppinm, '_db_block_buffers'            ,b.ksppstvl))
        ,max(decode(a.ksppinm, '_very_large_object_threshold' ,b.ksppstvl))
        ,max(decode(a.ksppinm, '__db_cache_size'              ,b.ksppstvl))
      into v_small_table_threshold,v_db_block_buffers,v_very_large_object_threshold,v__db_cache_size
      from
         sys.x$ksppi  a
        ,sys.x$ksppcv b
      where
            a.indx = b.indx
        and a.ksppinm in('_small_table_threshold','_db_block_buffers','_very_large_object_threshold','__db_cache_size');
      dbms_output.put_line('_small_table_threshold       = '||v_small_table_threshold);
      dbms_output.put_line('_db_block_buffers            = '||v_db_block_buffers);
      dbms_output.put_line('_very_large_object_threshold = '||v_very_large_object_threshold);
      dbms_output.put_line('__db_cache_size              = '||v__db_cache_size);

   end;
begin
   create_table;
   load_and_print_params;

   test( '_very_large_object_threshold + 1'
       ,v_db_block_buffers * v_very_large_object_threshold/100 + 1 );
   test( '_very_large_object_threshold'
       ,v_db_block_buffers * v_very_large_object_threshold/100 );
   test( '_very_large_object_threshold - 1'
       ,v_db_block_buffers * v_very_large_object_threshold/100 - 1 );

   test( '_db_block_buffers + 1'
       ,v_db_block_buffers + 1 );
   test( '_db_block_buffers - 1'
       ,v_db_block_buffers - 1 );

   test( '_small_table_threshold * 5 + 1'
       ,v_small_table_threshold * 5 + 1 );
   test( '_small_table_threshold * 5 - 1'
       ,v_small_table_threshold * 5 - 1 );

   test( ' 1 block ', 1);
   test( ' Default ', null);

   test( ' Again _very_large_object_threshold + 1'
       ,v_db_block_buffers * v_very_large_object_threshold/100 + 1 );
   test( ' Again _very_large_object_threshold'
       ,v_db_block_buffers * v_very_large_object_threshold/100 );

end;
/
[/sourcecode]

[collapse]

Test results

[sourcecode language="sql" highlight="9,17,19,25,31,41,52,62,73,83,94,104,115,125,136,146,157,167,178,188,199,209,221,231" ruler="true" collapse="true"]
_small_table_threshold = 166
_db_block_buffers = 8347
_very_large_object_threshold = 500
__db_cache_size = 4194304
–
–
——————————————————————————
——————————————————————————
### _very_large_object_threshold + 1: 41736 blocks
–
################ Results: ##################
Run # 01 ran in 11 hsecs
##########################################################
Statistics | Run # 1
##########################################################
physical reads cache……………….. | 1
physical reads direct………………. | 2,491
index fast full scans (full)………… | 1
index fast full scans (direct read)….. | 1
##########################################################
–
logical reads………………………2496
physical reads……………………..2492
physical read requests………………86
physical reads direct……………….2491
segment scans………………………1
–
–
——————————————————————————
——————————————————————————
### _very_large_object_threshold: 41735 blocks
–
################ Results: ##################
Run # 01 ran in 6 hsecs
##########################################################
Statistics | Run # 1
##########################################################
physical reads cache……………….. | 2,494
physical reads direct………………. | 0
index fast full scans (full)………… | 1
index fast full scans (direct read)….. | 0
##########################################################
–
logical reads………………………2496
physical reads……………………..2494
physical read requests………………95
segment scans………………………1
–
–
——————————————————————————
——————————————————————————
### _very_large_object_threshold – 1: 41734 blocks
–
################ Results: ##################
Run # 01 ran in 11 hsecs
##########################################################
Statistics | Run # 1
##########################################################
physical reads cache……………….. | 3,386
physical reads direct………………. | 1
index fast full scans (full)………… | 1
index fast full scans (direct read)….. | 0
##########################################################
–
logical reads………………………2512
physical reads……………………..2494
physical read requests………………95
segment scans………………………1
–
–
——————————————————————————
——————————————————————————
### _db_block_buffers + 1: 8348 blocks
–
################ Results: ##################
Run # 01 ran in 6 hsecs
##########################################################
Statistics | Run # 1
##########################################################
physical reads cache……………….. | 2,494
physical reads direct………………. | 0
index fast full scans (full)………… | 1
index fast full scans (direct read)….. | 0
##########################################################
–
logical reads………………………2512
physical reads……………………..2494
physical read requests………………95
segment scans………………………1
–
–
——————————————————————————
——————————————————————————
### _db_block_buffers – 1: 8346 blocks
–
################ Results: ##################
Run # 01 ran in 8 hsecs
##########################################################
Statistics | Run # 1
##########################################################
physical reads cache……………….. | 2,494
physical reads direct………………. | 0
index fast full scans (full)………… | 1
index fast full scans (direct read)….. | 0
##########################################################
–
logical reads………………………2512
physical reads……………………..2494
physical read requests………………95
segment scans………………………1
–
–
——————————————————————————
——————————————————————————
### _small_table_threshold * 5 + 1: 831 blocks
–
################ Results: ##################
Run # 01 ran in 6 hsecs
##########################################################
Statistics | Run # 1
##########################################################
physical reads cache……………….. | 2,494
physical reads direct………………. | 0
index fast full scans (full)………… | 1
index fast full scans (direct read)….. | 0
##########################################################
–
logical reads………………………2512
physical reads……………………..2494
physical read requests………………95
segment scans………………………1
–
–
——————————————————————————
——————————————————————————
### _small_table_threshold * 5 – 1: 829 blocks
–
################ Results: ##################
Run # 01 ran in 7 hsecs
##########################################################
Statistics | Run # 1
##########################################################
physical reads cache……………….. | 2,494
physical reads direct………………. | 0
index fast full scans (full)………… | 1
index fast full scans (direct read)….. | 0
##########################################################
–
logical reads………………………2496
physical reads……………………..2494
physical read requests………………95
segment scans………………………1
–
–
——————————————————————————
——————————————————————————
### 1 block : 1 blocks
–
################ Results: ##################
Run # 01 ran in 6 hsecs
##########################################################
Statistics | Run # 1
##########################################################
physical reads cache……………….. | 2,494
physical reads direct………………. | 0
index fast full scans (full)………… | 1
index fast full scans (direct read)….. | 0
##########################################################
–
logical reads………………………2512
physical reads……………………..2494
physical read requests………………95
segment scans………………………1
–
–
——————————————————————————
——————————————————————————
### Default : default blocks
–
################ Results: ##################
Run # 01 ran in 7 hsecs
##########################################################
Statistics | Run # 1
##########################################################
physical reads cache……………….. | 2,494
physical reads direct………………. | 0
index fast full scans (full)………… | 1
index fast full scans (direct read)….. | 0
##########################################################
–
logical reads………………………2496
physical reads……………………..2494
physical read requests………………95
segment scans………………………1
–
–
——————————————————————————
——————————————————————————
### Again _very_large_object_threshold + 1: 41736 blocks
–
################ Results: ##################
Run # 01 ran in 6 hsecs
##########################################################
Statistics | Run # 1
##########################################################
physical reads cache……………….. | 1
physical reads direct………………. | 2,491
index fast full scans (full)………… | 1
index fast full scans (direct read)….. | 1
##########################################################
–
logical reads………………………2496
physical reads……………………..2492
physical read requests………………86
physical reads direct……………….2491
segment scans………………………1
–
–
——————————————————————————
——————————————————————————
### Again _very_large_object_threshold: 41735 blocks
–
################ Results: ##################
Run # 01 ran in 7 hsecs
##########################################################
Statistics | Run # 1
##########################################################
physical reads cache……………….. | 2,494
physical reads direct………………. | 0
index fast full scans (full)………… | 1
index fast full scans (direct read)….. | 0
##########################################################
–
logical reads………………………2496
physical reads……………………..2494
physical read requests………………95
segment scans………………………1

PL/SQL procedure successfully completed.
[/sourcecode]

[collapse]

Old test script for Index Fast Full Scan: controlling_direct_reads_with_profiles.sql

2015-05-26. Update #2: Thanks to Advait Deo for correcting a common mistake about the lower limit for the direct path read decision: the right limit is just "_small_table_threshold", not "_small_table_threshold" * 5.
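To check these limits on your own instance, the hidden parameters can be queried from the x$ views, as in the load_and_print_params procedure above (a sketch; it requires access to SYS.x$ksppi/x$ksppcv):

[sourcecode language="sql"]
select a.ksppinm  name
      ,b.ksppstvl value
from sys.x$ksppi a join sys.x$ksppcv b on a.indx = b.indx
where a.ksppinm in ('_small_table_threshold'
                   ,'_db_block_buffers'
                   ,'_very_large_object_threshold');
[/sourcecode]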

I have simplified the test for checking this behaviour:

Simplified test(table scan)

[sourcecode language="sql"]
/** SQLPLUS Example of controlling adaptive serial direct path reads through SQL profiles.
*/
set serverout on termout on timing off feed off;
clear scr;
/**
 * Main test.
 */

declare

    C_SQL_TEXT constant varchar2(300) := 'SELECT SUM(A) FROM XT_IFFS T';
    v_small_table_threshold       int;
    v_db_block_buffers            int;
    v_very_large_object_threshold int;
    v__db_cache_size              int;

    procedure exec(p varchar2) is
        e_table_is_not_created_yet exception;
        e_index_is_not_created_yet exception;
        pragma exception_init(e_table_is_not_created_yet, -942);
        pragma exception_init(e_index_is_not_created_yet, -1418);
    begin
        execute immediate p;
    exception
        when e_table_is_not_created_yet
          or e_index_is_not_created_yet
        then null;
    end;

    /** Creating table and setting stats */
    procedure create_table is
    begin
        exec('drop table xt_iffs purge');
        exec('create table xt_iffs as
                with gen as(
                    select level a,mod(level,10) b,lpad(1,50,1) c
                    from dual
                    connect by level<=100
                )
                select gen.*
                from gen,gen gen2'
        );
        dbms_stats.gather_table_stats('','XT_IFFS',no_invalidate => false);
        dbms_stats.set_table_stats('','XT_IFFS',numrows => 1e6,numblks => 5e5,avgrlen => 800,no_invalidate => false);
    end;

    /** Setting table numblks for the test query via dbms_stats */
    procedure set_numblks(p_numblks int) is
        e_profile_not_created_yet exception;
        pragma exception_init( e_profile_not_created_yet, -13833);
    begin
        dbms_stats.set_table_stats(user,'XT_IFFS',numrows => 1e6,numblks => p_numblks, avgrlen => 800,no_invalidate => false);
    end;

    procedure test( p_description varchar2
                  , p_numblks     int default null) is
        v_delta number;
        n       number;

        type t_ses_stat is
            table of number
            index by v$segstat_name.name%type;

        -- var for previous stats:
        v_pre t_ses_stat;

        -- session statistics by statistic name:
        cursor c_stats is
            select name,value
            from v$mystat st join v$statname sn using(STATISTIC#)
            where regexp_like(name,'^physical reads (cache|direct)$');

        procedure ses_stats_save is
        begin
            for r in c_stats loop
                v_pre(r.name) := r.value;
            end loop;
        end ses_stats_save;

        procedure ses_stats_print is
        begin
            for r in c_stats loop
                v_delta := r.value - v_pre(r.name);
                if v_delta != 0 then
                    dbms_output.put_line( rpad(r.name,40,'.')||v_delta );
                end if;
            end loop;
        end ses_stats_print;

    begin
        dbms_output.put_line(chr(160));
        dbms_output.put_line(lpad('-',150,'-'));
        dbms_output.put_line( '### '
                            || p_description||': '
                            ||nvl(to_char(p_numblks),'default')||' blocks');
        dbms_output.put_line(chr(160));

        --create_index;
        -- if p_numblks is null then default stats are used
        if p_numblks is not null then
            set_numblks(p_numblks);
        end if;
        execute immediate C_SQL_TEXT into n;
        exec('alter system flush buffer_cache');
        -- saving session statistics
        ses_stats_save;

        -- executing query
        execute immediate C_SQL_TEXT into n;

        -- printing session stats delta:
        ses_stats_print;
    end;

    procedure load_and_print_params is
    begin
        select
             max(decode(a.ksppinm, '_small_table_threshold'       ,b.ksppstvl))
            ,max(decode(a.ksppinm, '_db_block_buffers'            ,b.ksppstvl))
            ,max(decode(a.ksppinm, '_very_large_object_threshold' ,b.ksppstvl))
            ,max(decode(a.ksppinm, '__db_cache_size'              ,b.ksppstvl))
        into v_small_table_threshold,v_db_block_buffers,v_very_large_object_threshold,v__db_cache_size
        from
             sys.x$ksppi  a
            ,sys.x$ksppcv b
        where
              a.indx = b.indx
          and a.ksppinm in('_small_table_threshold','_db_block_buffers','_very_large_object_threshold','__db_cache_size');
        dbms_output.put_line('_small_table_threshold       = '||v_small_table_threshold);
        dbms_output.put_line('_db_block_buffers            = '||v_db_block_buffers);
        dbms_output.put_line('_very_large_object_threshold = '||v_very_large_object_threshold);
        dbms_output.put_line('__db_cache_size              = '||v__db_cache_size);
    end;

begin
    create_table;
    load_and_print_params;

    test( '_very_large_object_threshold + 1'
        , v_db_block_buffers * v_very_large_object_threshold/100 + 1 );
    test( '_very_large_object_threshold'
        , v_db_block_buffers * v_very_large_object_threshold/100 );
    test( '_very_large_object_threshold - 1'
        , v_db_block_buffers * v_very_large_object_threshold/100 - 1 );

    test( '_db_block_buffers + 1'
        , v_db_block_buffers + 1 );
    test( '_db_block_buffers - 1'
        , v_db_block_buffers - 1 );

    test( '_small_table_threshold * 5 + 1'
        , v_small_table_threshold * 5 + 1 );
    test( '_small_table_threshold * 5 - 1'
        , v_small_table_threshold * 5 - 1 );

    test( '_small_table_threshold + 1'
        , v_small_table_threshold + 1 );
    test( '_small_table_threshold'
        , v_small_table_threshold );
    test( '_small_table_threshold - 1'
        , v_small_table_threshold - 1 );

    test( ' 1 block ', 1);
    test( ' Default ', null);

    test( ' Again _very_large_object_threshold + 1'
        , v_db_block_buffers * v_very_large_object_threshold/100 + 1 );
    test( ' Again _very_large_object_threshold'
        , v_db_block_buffers * v_very_large_object_threshold/100 );

end;
/

[/sourcecode]

[/sourcecode]

[collapse]

Results

[sourcecode language="sql" highlight="43,46,49,52,55,57"]
_small_table_threshold       = 2246
_db_block_buffers            = 112347
_very_large_object_threshold = 500
__db_cache_size              = 939524096

------------------------------------------------------------------------
### _very_large_object_threshold + 1: 561736 blocks

physical reads cache....................1
physical reads direct...................87

------------------------------------------------------------------------
### _very_large_object_threshold: 561735 blocks

physical reads cache....................1
physical reads direct...................87

------------------------------------------------------------------------
### _very_large_object_threshold - 1: 561734 blocks

physical reads cache....................1
physical reads direct...................87

------------------------------------------------------------------------
### _db_block_buffers + 1: 112348 blocks

physical reads cache....................1
physical reads direct...................87

------------------------------------------------------------------------
### _db_block_buffers - 1: 112346 blocks

physical reads cache....................1
physical reads direct...................87

------------------------------------------------------------------------
### _small_table_threshold * 5 + 1: 11231 blocks

physical reads cache....................1
physical reads direct...................87

------------------------------------------------------------------------
### _small_table_threshold * 5 - 1: 11229 blocks

physical reads cache....................1
physical reads direct...................87

------------------------------------------------------------------------
### _small_table_threshold + 1: 2247 blocks

physical reads cache....................1
physical reads direct...................87

------------------------------------------------------------------------
### _small_table_threshold - 1: 2245 blocks

physical reads cache....................88

------------------------------------------------------------------------
### 1 block : 1 blocks

physical reads cache....................88

------------------------------------------------------------------------
### Default : default blocks

physical reads cache....................88

------------------------------------------------------------------------
### Again _very_large_object_threshold + 1: 561736 blocks

physical reads cache....................1
physical reads direct...................87

------------------------------------------------------------------------
### Again _very_large_object_threshold: 561735 blocks

physical reads cache....................1
physical reads direct...................87
[/sourcecode]

[collapse]

Full test script: test_table.sql

direct path reads

Unresolved quiz: Avoiding in-list iterator

Posted on March 14, 2013 by Sayan Malakshinov Posted in CBO, oracle, query optimizing 3,631 Page views 11 Comments

A couple of days ago I came across a very interesting quiz, which is not resolved yet.
Look at this simplified query:

  select *
  from xt1,xt2
  where
       xt1.b=10
   and xt1.a=xt2.a
   and xt2.b in (1,2);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------
Plan hash value: 2715236140

----------------------------------------------------------------------------------------
| Id  | Operation                     | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |        |   100 | 36900 |   501   (0)| 00:00:07 |
|   1 |  NESTED LOOPS                 |        |       |       |            |          |
|   2 |   NESTED LOOPS                |        |   100 | 36900 |   501   (0)| 00:00:07 |
|   3 |    TABLE ACCESS BY INDEX ROWID| XT1    |   100 | 31000 |   101   (0)| 00:00:02 |
|*  4 |     INDEX RANGE SCAN          | IX_XT1 |   100 |       |     1   (0)| 00:00:01 |
|   5 |    INLIST ITERATOR            |        |       |       |            |          |
|*  6 |     INDEX RANGE SCAN          | IX_XT2 |     1 |       |     3   (0)| 00:00:01 |
|   7 |   TABLE ACCESS BY INDEX ROWID | XT2    |     1 |    59 |     4   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("XT1"."B"=10)
   6 - access("XT1"."A"="XT2"."A" AND ("XT2"."B"=1 OR "XT2"."B"=2))

Full test case

[sourcecode language="sql"]
create table xt1 as
select
level a
, mod(level,1000) b
, lpad(1,300,1) padding
from dual
connect by level<=1e5;

create index ix_xt1 on xt1(b);

create table xt2 as
select
level a
, mod(level,5) b
, lpad(1,50,1) padding
from dual
connect by level<=1e6;

alter table xt2
add constraint uq_xt2
unique (a)
using index(create index ix_xt2 on xt2(a,b));

exec dbms_stats.gather_table_stats('','XT1',cascade=>true);
exec dbms_stats.gather_table_stats('','XT2',cascade=>true);

explain plan for
select *
from xt1,xt2
where
xt1.b=10
and xt1.a=xt2.a
and xt2.b in (1,2);

@?/rdbms/admin/utlxpls.sql
[/sourcecode]

[collapse]

As you can see, in such queries the CBO always generates plans with an INLIST ITERATOR. That is reasonable when, for most values of A, there are many rows with different values of column B, and that number is much larger than the number of values in the IN list. But in a case like the one shown, it would be better to use an index range scan with access by A and a filter by B:

SQL> select *
  2  from xt1,xt2
  3  where
  4       xt1.b=10
  5   and xt1.a=xt2.a
  6   and xt2.b in (1,2);

no rows selected

Statistics
----------------------------------------------------------
        ...
        505  consistent gets
SQL> -- without inlist iterator:
SQL> select *
  2  from xt1,xt2
  3  where
  4       xt1.b=10
  5   and xt1.a=xt2.a
  6   and xt2.b+0 in (1,2);

no rows selected

Statistics
----------------------------------------------------------
        ...
        305  consistent gets

But how can we do it? I know 5 options:
1. Trace event 10157
2. Rewriting the code, for example replacing “b in (1,2)” with “b+0 in (1,2)”
3. Changing the query with “Advanced query rewrite” (DBMS_ADVANCED_REWRITE.DECLARE_REWRITE_EQUIVALENCE)
4. Recreating the index from xt2(a,b) to xt2(a,1,b)
5. Changing optimizer_mode to “rule” through a hint or a SQL profile/baseline

But unfortunately all of them are inapplicable to my real problem: for various reasons I cannot rewrite the query or change it with advanced rewrite, I cannot recreate or add an index, and I can’t change optimizer_mode, because the execution plan for the real query would become worse than the plan the CBO generates with the inlist iterator (some operations don’t exist in the RBO).
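For illustration only, option 3 could be sketched like this against the test schema above. This is my untested sketch, not from the original post: the equivalence name is arbitrary, and it assumes execute privilege on SYS.DBMS_ADVANCED_REWRITE and a session running with query_rewrite_integrity = trusted.

[sourcecode language="sql"]
-- hypothetical sketch of option 3: declare an equivalence that replaces
-- the IN-list predicate with "b+0 in (1,2)", defeating the inlist iterator:
begin
  sys.dbms_advanced_rewrite.declare_rewrite_equivalence(
    name             => 'AVOID_INLIST_ITERATOR',  -- arbitrary name
    source_stmt      => 'select * from xt1,xt2 where xt1.b=10 and xt1.a=xt2.a and xt2.b in (1,2)',
    destination_stmt => 'select * from xt1,xt2 where xt1.b=10 and xt1.a=xt2.a and xt2.b+0 in (1,2)',
    validate         => false,
    rewrite_mode     => 'TEXT_MATCH');
end;
/
-- the session must then run with:
-- alter session set query_rewrite_integrity = trusted;
[/sourcecode]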

Could anybody suggest any another solution?

UPDATE #1:
This bug is now fixed in 12.2, and patch 16516751 is available for 11.2.0.3 Solaris64.
Changes:
1. The CBO can now consider such filters
2. The NUM_INDEX_KEYS hint was fixed and works fine
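On a patched or 12.2 database the hint could look like the following sketch (the last hint argument is the number of leading index keys to use for access, so 1 restricts access to column A and leaves the IN-list on B as a filter; the exact hint syntax may vary by version):

[sourcecode language="sql"]
select --+ num_index_keys(t2 ix_xt2 1)
       *
from   xt1 t1, xt2 t2
where  t1.b = 10
  and  t1.a = t2.a
  and  t2.b in (1,2);
[/sourcecode]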

UPDATE #2:
A very interesting solution by Igor Usoltsev (in Russian):
the ignored hint USE_CONCAT(OR_PREDICATES(N)) allows avoiding the inlist iterator.
Example:

select--+ USE_CONCAT(OR_PREDICATES(32767))
 * from xt1,xt2
where
     xt1.b=10
 and xt1.a=xt2.a
 and xt2.b in (1,2)
/

Plan hash value: 2884586137          -- good plan:
 
----------------------------------------------------------------------------------------
| Id  | Operation                     | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |        |       |       |   401 (100)|          |
|   1 |  NESTED LOOPS                 |        |       |       |            |          |
|   2 |   NESTED LOOPS                |        |   100 | 36900 |   401   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| XT1    |   100 | 31000 |   101   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | IX_XT1 |   100 |       |     1   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN           | IX_XT2 |     1 |       |     2   (0)| 00:00:01 |
|   6 |   TABLE ACCESS BY INDEX ROWID | XT2    |     1 |    59 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   4 - access("XT1"."B"=10)
   5 - access("XT1"."A"="XT2"."A")
       filter(("XT2"."B"=1 OR "XT2"."B"=2)) 

From 10053 trace on nonpatched 11.2.0.3:
inlist_concat_diff_10053

inlist iterator oracle query optimization

Deterministic function vs scalar subquery caching. Part 3

Posted on March 13, 2013 by Sayan Malakshinov Posted in deterministic functions, oracle, scalar subquery caching 2,795 Page views 1 Comment

In previous parts I already pointed out that:

  1. Both mechanisms are based on hash functions.
  2. Deterministic function caching depends on the fetch size (arraysize): results are cached only within one fetch call; scalar subquery caching (SSC) has no such limitation.
  3. Hash collisions depend on the single parameter “_query_execution_cache_max_size” for both mechanisms, but they are more frequent with SSC.
  4. Oracle doesn’t keep the last result of a deterministic function as it does for scalar subquery caching.
  5. Caching of deterministic function results turns off after a certain number of failed attempts to get a value from the cache, while SSC always returns results from the cache once values are cached.
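As a reminder of what is being compared, here is the usual pair of forms (a sketch with a hypothetical function f and table t):

[sourcecode language="sql"]
-- call cached by the deterministic function cache:
select sum(f(n)) from t;

-- the same call wrapped in a scalar subquery, cached by scalar subquery caching:
select sum( (select f(n) from dual) ) from t;
[/sourcecode]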

Upd 2015-02-19:
A couple of additions about deterministic functions:

  • Deterministic functions, result_cache and operators
  • How works optimization of loops in PL/SQL in 11g: Part 1. Deterministic functions

Today’s post is just an addition to the previous topics:

I wrote about caching being turned off after many unsuccessful attempts to get a value from the cache, but I didn’t say what that number is. In fact, caching of deterministic functions also depends on two other hidden parameters:

SQL> @param_ plsql%cach

NAME                                 VALUE        DEFLT    TYPE       DESCRIPTION
------------------------------------ ------------ -------- ---------- ------------------------------------------------------------------
_plsql_cache_enable                  TRUE         TRUE     boolean    PL/SQL Function Cache Enabled
_plsql_minimum_cache_hit_percent     20           TRUE     number     plsql minimum cache hit percentage required to keep caching active

The first parameter, “_plsql_cache_enable”, simply enables or disables this caching mechanism.
The second parameter, “_plsql_minimum_cache_hit_percent”, defines the minimum percentage of cache hits required to keep caching active: when the hit ratio falls below it, caching is disabled.

I will show their effects with the example from the previous post:

-- set this parameter to big value for maximizing caching:
alter session set "_query_execution_cache_max_size" = 131072;
-- log table clearing:
truncate table t_params;
-- test with percentage = 50
alter session set "_plsql_minimum_cache_hit_percent"=50;
select sum(f_deterministic(n)) fd
from
  xmltable('1 to 10000,1 to 10000'
           columns n int path '.'
          );
 
select 10000-count(count(*)) "Count of cached results"
from t_params
group by p
having count(*)>1;
/*
Count of cached results
-----------------------
                      0
*/
-- now i change cache hit percentage parameter to 0:
alter session set "_plsql_minimum_cache_hit_percent"=0;
truncate table t_params;
select sum(f_deterministic(n)) fd
from
  xmltable('1 to 10000,1 to 10000'
           columns n int path '.'
          );
 
select 10000-count(count(*)) "Count of cached results"
from t_params
group by p
having count(*)>1;
/*
Count of cached results
-----------------------
                   2039
*/
deterministic functions oracle oracle undocumented behaviour query optimizing

How works optimization of loops in PL/SQL in 11g: Part 1. Deterministic functions

Posted on March 13, 2013 by Sayan Malakshinov Posted in deterministic functions, oracle, PL/SQL optimization 2,826 Page views 4 Comments

As you know, since 10g Oracle can optimize PL/SQL loops by moving invariant code out of the loop to reduce the number of identical executions.
If you have not read about it yet, I advise you to first read these two articles:
1. “PLSQL_OPTIMIZE_LEVEL: The optimization strategy of Oracle” by Saurabh K. Gupta
2. “PL/SQL optimisation in 10g” by Adrian Billington

But since 11g Oracle can also optimize code with deterministic functions. For this to happen, the code must meet the following conditions:
1. PLSQL_OPTIMIZE_LEVEL is greater than or equal to 2.
2. The parameters are not changed in the loop body; strictly speaking, there must not be any assignment to the parameters.
3. There are no implicit conversions of parameters in the function call, i.e. all variables/literals must be of the same types as the declared input parameters.
4. There are no calls to non-deterministic functions (except some standard SQL functions like to_date, to_char, nvl) or procedures in the loop.

Note that these rules concern only the same scope level as the loop, not inner loops or other blocks.
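A minimal sketch of a loop that satisfies all four conditions (f_det is a hypothetical function declared DETERMINISTIC with a NUMBER parameter):

[sourcecode language="sql"]
declare
  x number := 42;
  v number := 0;
begin
  for i in 1 .. 1e6 loop
    -- x is never assigned inside the loop, its type matches the declared
    -- parameter, and no non-deterministic calls are made at this scope,
    -- so the compiler may evaluate f_det(x) once before the loop:
    v := v + f_det(x);
  end loop;
end;
/
[/sourcecode]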

Let me show how it works on simple examples:

11.2 11g deterministic functions pl/sql optimization