Oracle SQL

Create Quarantine

Posted on August 16, 2018 by Roger MacNicol Posted in cell_offload, oracle, SmartScan

First, if you don’t know what an Exadata Quarantine is, read this.

Someone asked whether you can create your own Exadata Cell quarantine and, if you can, why you might ever want to do so.

The first step when you don’t know how to do something is to try HELP in CellCLI:

CellCLI> HELP
...
ALTER QUARANTINE
...
CREATE QUARANTINE
...
DROP QUARANTINE
...
LIST QUARANTINE

So we see we can create a quarantine; the next step is to use HELP again for the exact syntax.
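As a hedged illustration of where this leads (the attribute names and values below are assumptions for the sake of example and the SQL ID is just a placeholder; the authoritative syntax comes from the HELP output on your own cell):

CellCLI> HELP CREATE QUARANTINE

CellCLI> CREATE QUARANTINE quarantineType="SQLID", sqlid="5v4y3nzkbb6wz", comment="manually created test quarantine"

CellCLI> LIST QUARANTINE DETAIL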

Continue reading→

Shining some light on Database In-Memory vs the Exadata Columnar Cache in 12.1.0.2

Posted on August 3, 2018 by Roger MacNicol Posted in cell_offload, inmemory, oracle, SmartScan, trace

I posted a while back on Tracing Hybrid Columnar Compression in an offload server, so this is a quick follow-up.

  1. I have trouble remembering the syntax for setting a regular parameter in an offload server without bouncing it. Since I need to keep this written down somewhere, I thought it might be of use to support folks and DBAs.
  2. I forgot to show you how to specify which offload group to set the trace event in.

So this example should do both: 

CellCLI> alter cell offloadGroupEvents = "immediate cellsrv.cellsrv_setparam('my_parameter', 'TRUE')", offloadGroupName = "SYS_122110_160621"

This will, of course, set the parameter only until the next time the offload server is bounced; adding it to the offload group’s init.ora as well will make it persistent.


Create External Table as Select

Posted on March 9, 2018 by Roger MacNicol Posted in curious, oracle, SmartScan

I was looking through a test script and saw something I didn’t know you could do in Oracle. I mentioned it to an Oracle ACE and he didn’t know it either. I then said to one of the External Table engineers, “Oh, I see you’ve added this cool new feature,” and he replied dryly: “Yes, we added it in Oracle 10.1”. Ouch! So just in case you also didn’t know, you can create an External Table using a CTAS and the ORACLE_DATAPUMP driver.

This feature only works with the ORACLE_DATAPUMP access driver (it does NOT work with the LOADER, HIVE, or HDFS drivers), and we can use it like this:

SQL> create table cet_test organization external
  2  (
  3    type ORACLE_DATAPUMP
  4    default directory T_WORK
  5    location ('xt_test01.dmp','xt_test02.dmp')
  6  ) parallel 2
  7  as select * from lineitem;
 
Table created.

Checking the results shows us

-rw-rw---- ... 786554880 Mar 9 10:48 xt_test01.dmp 
-rw-rw---- ... 760041472 Mar 9 10:48 xt_test02.dmp

This can be a great way of creating a (redacted) sample of data to give to a developer for testing, for a bug repro to give to support, or to move data between systems.
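As a hedged sketch of the move-between-systems case (the table, file, and directory names here are made up for illustration; T_WORK must exist as a directory object on both sides, and the dump file has to be copied to the target first), the dump written by the CTAS can be read back with an ordinary external table whose columns match it:

-- on the source system: unload a small sample table to a dump file
create table t_src (id number, txt varchar2(30));
insert into t_src select level, 'row '||level from dual connect by level <= 1000;
commit;

create table t_src_unload
organization external
(
  type ORACLE_DATAPUMP
  default directory T_WORK
  location ('t_src.dmp')
) as select * from t_src;

-- on the target system: copy t_src.dmp, then define a matching external table and query it
create table t_src_copy (id number, txt varchar2(30))
organization external
(
  type ORACLE_DATAPUMP
  default directory T_WORK
  location ('t_src.dmp')
);

select count(*) from t_src_copy;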


Understanding External Table URowids

Posted on March 8, 2018 by Roger MacNicol Posted in External tables, oracle, SmartScan

I finally found time to get back to External Tables and have a list of blog posts I need to write on this topic. Here’s a brief one.

DBMS_ROWID will nicely break down a heap table’s rowid for you into file number, block number, and row number, but it doesn’t handle the rowids coming from External Tables, so let’s look at how to make sense of them. They fall under the datatype UROWID, which is a nominally opaque rowid defined by the data source. The first byte of a UROWID tells you which data source it came from and, consequently, how to deconstruct it.
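For an ordinary heap-organized table the work is done for you; a minimal sketch (the table name is just an example, and this does not apply to external table UROWIDs):

select dbms_rowid.rowid_relative_fno(rowid) file#,
       dbms_rowid.rowid_block_number(rowid) block#,
       dbms_rowid.rowid_row_number(rowid)   row#
  from scott.emp
 where rownum <= 3;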

The easiest way to see what is happening is via the SQL Dump function:

SQL> column xtrowid format a55
SQL> select c_custkey, dump(rowid,16) "XTROWID" from c_et 
  2> where c_custkey < 10;

 C_CUSTKEY XTROWID
---------- -------------------------------------------------------
         1 Typ=208 Len=17: 4,0,1,27,a2,0,0,0,0,0,0,0,0,0,0,0,1
         2 Typ=208 Len=17: 4,0,1,27,a2,0,0,0,0,0,0,0,0,0,0,0,2
         3 Typ=208 Len=17: 4,0,1,27,a2,0,0,0,0,0,0,0,0,0,0,0,3
         4 Typ=208 Len=17: 4,0,1,27,a2,0,0,0,0,0,0,0,0,0,0,0,4
         5 Typ=208 Len=17: 4,0,1,27,a2,0,0,0,0,0,0,0,0,0,0,0,5
         6 Typ=208 Len=17: 4,0,1,27,a2,0,0,0,0,0,0,0,0,0,0,0,6
         7 Typ=208 Len=17: 4,0,1,27,a2,0,0,0,0,0,0,0,0,0,0,0,7
         8 Typ=208 Len=17: 4,0,1,27,a2,0,0,0,0,0,0,0,0,0,0,0,8
         9 Typ=208 Len=17: 4,0,1,27,a2,0,0,0,0,0,0,0,0,0,0,0,9

9 rows selected.
Continue reading→

How to tell if the Exadata column cache is fully loaded

Posted on January 23, 2018 by Roger MacNicol Posted in oracle, SmartScan, statistics, troubleshooting

When background activity is happening on the cell, you typically can’t use RDBMS v$ views to monitor it in the same way. One such question is how to tell whether a segment is fully loaded into the Exadata column cache, since this does not appear in the equivalent In-Memory v$ views.

When a segment is scanned by Smart Scan sufficiently often to be eligible for the AUTOKEEP pool (typically that means at least twice an hour), the eligible 1MB chunks are written to flash in 12.1.0.2-style format and put on a background queue. Lower-priority tasks pick up the queued 1MB 12.1.0.2-format chunks from the flash cache, run them through the In-Memory loader, and rewrite the pure columnar representation in place of the old 12.1.0.2-style column cache chunks.

The easiest way that I know of to tell when this background activity has completed is to run the following query until it shows zero:

select name, sum(value) value from (
      select extractvalue(value(t),'/stat/@name') name,
            extractvalue(value(t),'/stat') value
      from v$cell_state cs,
           table(xmlsequence(extract(xmltype(cs.statistics_value),
                                     '//stats[@type="columnarcache"]/stat'))) t
     where statistics_type='CELL')
     where name in ('outstanding_imcpop_requests')
     group by name;


“Collection iterator pickler fetch”: pipelined vs simple table functions

Posted on December 13, 2017 by Sayan Malakshinov Posted in oracle, PL/SQL, PL/SQL optimization, query optimizing, SQL, troubleshooting

Alex R recently discovered an interesting thing: in SQL, pipelined table functions work much faster than simple non-pipelined table functions, so if you already have a simple non-pipelined table function and want to get its results in SQL (select * from table(fff)), it is much better to create another, pipelined, function that gets and returns its results through PIPE ROW().

A bit more detail:

Assume we need to return the collection "result" from a PL/SQL function into the SQL query "select * from table(function_f(...))".
If we create two similar functions, a pipelined f_pipe and a simple non-pipelined f_non_pipe,

create or replace function f_pipe(n int) return tt_id_value pipelined 
as
  result tt_id_value;
begin
  ...
  for i in 1..n loop
    pipe row (result(i));
  end loop;
end f_pipe;
/
create or replace function f_non_pipe(n int) return tt_id_value 
as
  result tt_id_value;
begin
  ...
  return result;
end f_non_pipe;
/

Full function definitions

create or replace type to_id_value as object (id int, value int)
/
create or replace type tt_id_value as table of to_id_value
/
create or replace function f_pipe(n int) return tt_id_value pipelined
as
  result tt_id_value;

  procedure gen is
  begin
    result := tt_id_value();
    result.extend(n);
    for i in 1..n loop
      result(i) := to_id_value(i, 1);
    end loop;
  end;
begin
  gen();
  for i in 1..n loop
    pipe row (result(i));
  end loop;
end f_pipe;
/
create or replace function f_non_pipe(n int) return tt_id_value
as
  result tt_id_value;

  procedure gen is
  begin
    result := tt_id_value();
    result.extend(n);
    for i in 1..n loop
      result(i) := to_id_value(i, 1);
    end loop;
  end;
begin
  gen();
  return result;
end f_non_pipe;
/
create or replace function f_pipe_for_nonpipe(n int) return tt_id_value pipelined
as
  result tt_id_value;
begin
  result := f_non_pipe(n);
  for i in 1..result.count loop
    pipe row (result(i));
  end loop;
end;
/
create or replace function f_udf_pipe(n int) return tt_id_value pipelined
as
  result tt_id_value;

  procedure gen is
  begin
    result := tt_id_value();
    result.extend(n);
    for i in 1..n loop
      result(i) := to_id_value(i, 1);
    end loop;
  end;
begin
  gen();
  for i in 1..n loop
    pipe row (result(i));
  end loop;
end;
/
create or replace function f_udf_non_pipe(n int) return tt_id_value
as
  result tt_id_value;

  procedure gen is
  begin
    result := tt_id_value();
    result.extend(n);
    for i in 1..n loop
      result(i) := to_id_value(i, 1);
    end loop;
  end;
begin
  gen();
  return result;
end;
/

Test queries

set echo on feed only timing on;
--alter session set optimizer_adaptive_plans=false;
--alter session set "_optimizer_use_feedback"=false;

select sum(id * value) s from table(f_pipe(&1));
select sum(id * value) s from table(f_non_pipe(&1));
select sum(id * value) s from table(f_pipe_for_nonpipe(&1));
select sum(id * value) s from table(f_udf_pipe(&1));
select sum(id * value) s from table(f_udf_non_pipe(&1));
with function f_inline_non_pipe(n int) return tt_id_value
as
  result tt_id_value;
begin
  result := tt_id_value();
  result.extend(n);
  for i in 1..n loop
    result(i) := to_id_value(i, 1);
  end loop;
  return result;
end;
select sum(id * value) s from table(f_inline_non_pipe(&1));
/
set timing off echo off feed on;


we’ll find that the function with a simple “return result” is at least twice as slow as the pipelined function:

Elapsed time (seconds):

Function             1 000 000 elements   100 000 elements
F_PIPE               2.46                  0.20
F_NON_PIPE           4.39                  0.44
F_PIPE_FOR_NONPIPE   2.61                  0.26
F_UDF_PIPE           2.06                  0.20
F_UDF_NON_PIPE       4.46                  0.44

I was really surprised that even “COLLECTION ITERATOR PICKLER FETCH” with F_PIPE_FOR_NONPIPE, which gets the result of F_NON_PIPE and returns it through PIPE ROW(), runs almost twice as fast as F_NON_PIPE itself, so I decided to analyze it using stapflame by Frits Hoogland.

I added “dbms_lock.sleep(1)” to both of these functions after the collection generation, to compare only the difference between “pipe row” in a loop and “return result”:

Modified functions

create or replace function f_pipe(n int) return tt_id_value pipelined
as
  result tt_id_value;

  procedure gen is
  begin
    result := tt_id_value();
    result.extend(n);
    for i in 1..n loop
      result(i) := to_id_value(i, 1);
    end loop;
  end;
begin
  gen();
  dbms_lock.sleep(1);
  for i in 1..n loop
    pipe row (result(i));
  end loop;
end f_pipe;
/
create or replace function f_non_pipe(n int) return tt_id_value
as
  result tt_id_value;

  procedure gen is
  begin
    result := tt_id_value();
    result.extend(n);
    for i in 1..n loop
      result(i) := to_id_value(i, 1);
    end loop;
  end;
begin
  gen();
  dbms_lock.sleep(1);
  return result;
end f_non_pipe;
/


And stapflame showed that almost all of the overhead was consumed by the function kgmpoa_Assign_Out_Arguments.

I don’t know exactly what this function does, but we can see that Oracle assigns the collection a bit later.
From the other functions in this stack (pmucpkl, kopp2isize, kopp2colsize, kopp2atsize (attribute?), kopuadt) I suspect it is some kind of preprocessing of the return arguments.
What do you think about it?

Full stapflame output:
stapflame_nonpipe
stapflame_pipe


SQL*Plus tips #8: How to read the output of dbms_output without “serveroutput on”

Posted on December 10, 2017 by Sayan Malakshinov Posted in oracle, SQL*Plus, SQL*Plus tips

When “serveroutput” is enabled, SQL*Plus executes “BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;” after each command.
That’s why I don’t like it being always enabled: it adds extra calls and round-trips, and it is inconvenient when I want to get the plan of the last executed query:

SQL> set serverout on;
SQL> select * from dual;

D
-
X

SQL> select * from table(dbms_xplan.display_cursor('','','allstats last'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------
SQL_ID  9babjv8yq8ru3, child number 0

BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;

NOTE: cannot fetch plan for SQL_ID: 9babjv8yq8ru3, CHILD_NUMBER: 0
      Please verify value of SQL_ID and CHILD_NUMBER;
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)

So usually I switch “serveroutput” on only when needed, but sometimes I forget to enable it. In such cases I use a very simple script that reads the output using dbms_output.get_lines and prints it through a refcursor:
https://github.com/xtender/xt_scripts/blob/master/output_print.sql
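A minimal sketch of the idea behind such a script (the actual script at the link may differ; note that sys.odcivarchar2list caps each line at 4000 characters):

var c refcursor
declare
  l_lines dbms_output.chararr;
  l_num   integer := 1000000;
  l_tab   sys.odcivarchar2list := sys.odcivarchar2list();
begin
  -- drain whatever is sitting in the dbms_output buffer
  dbms_output.get_lines(l_lines, l_num);
  for i in 1 .. l_num loop
    l_tab.extend;
    l_tab(l_tab.count) := l_lines(i);
  end loop;
  -- hand the lines back through a refcursor so SQL*Plus can PRINT them
  open :c for select column_value as output from table(l_tab);
end;
/
print c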

When you set “serveroutput on”, SQL*Plus also executes “dbms_output.enable”, and when you set “serverout off” it executes “dbms_output.disable”. That’s why my glogin.sql contains “call dbms_output.enable(1e6);”, and you need to execute it again after each “set serverout off” if you want to use this script.


Bug with integer literals in PL/SQL

Posted on December 9, 2017 by Sayan Malakshinov Posted in curious, oracle, PL/SQL, PL/SQL optimization, undocumented

This interesting question was posted on our Russian forum yesterday:

We have a huge PL/SQL package, and this simple function returns a wrong result when it is located at the end of the package body:

create or replace package body PKGXXX as
  ...
  function ffff return number is
  nRes number;
  begin        
    nRes :=  268435456;
    return nRes;
  end;
end;
/

But it works fine in any of the following cases:
* replace 268435456 with power(2, 28), or
* replace 268435456 with a small literal like 268, or
* move this function to the beginning of the package body

One of the interesting findings was that the returned value was equal to one of the literals in another function.
We can reproduce this bug even with an anonymous PL/SQL block. The following test case uses 32768 integer literals from 10000001 to 10032768 and then prints 5 other integers:

declare n number;
begin
  n:=10000001; -- this part
  n:=10000002; -- creates
  n:=10000003; -- 32768
   ...         -- integer
  n:=10032768; -- literals
    dbms_output.put_line('100000='||100000); -- it should print: 100000=100000
    dbms_output.put_line('32766 ='||32766);
    dbms_output.put_line('32767 ='||32767);    
    dbms_output.put_line('32768 ='||32768);
    dbms_output.put_line('32769 ='||32769);
end;

Test code

declare
  c clob := 'declare n number;begin'||chr(10);
  f varchar2(100) := 'n:=%s;'||chr(10);
  v varchar2(32767);
  n number := 32768;
begin
  for i in 1..n loop
    v := v||utl_lms.format_message(f, to_char(1e7+i));
    if length(v) > 30000 then
      c := c||v;
      v := '';
    end if;
  end loop;
  v := v||q'[
dbms_output.put_line('100000='||100000);
dbms_output.put_line('32766 ='||32766);
dbms_output.put_line('32767 ='||32767);
dbms_output.put_line('32768 ='||32768);
dbms_output.put_line('32769 ='||32769);
end;
]';
  c := c||v;
  execute immediate c;
end;
/

It produces the following output:

100000=10000001
32766 =32766
32767 =32767
32768 =10000002
32769 =10000003

This test case clearly demonstrates the wrong results:
* instead of 100000 we get 10000001, which is the value from the first line after “begin”, i.e. the 1st integer literal in the code,
* for 32766 and 32767 Oracle returns the correct values,
* instead of 32768 (== 32767+1) it returns 10000002, which is the integer from the 2nd line, i.e. the 2nd integer literal in the code,
* instead of 32769 (== 32767+2) it returns 10000003, which is the integer from the 3rd line, i.e. the 3rd integer literal in the code.
After several tests I can draw a conclusion:

  • It doesn’t matter which plsql_optimize_level or plsql_code_type you set, or whether debug was enabled or not: the behaviour is the same.
  • It seems to be a kind of PL/SQL optimization: during parsing, Oracle leaves an integer literal in place if its value is in the range -32768..32767 (a 16-bit signed int), but if the value is outside this range, Oracle adds it to an array of integer constants and replaces the literal with the index of that element in the array. But because of index overflow when the count of such integer literals becomes larger than 32768, instead of the Nth element of this array Oracle returns the Mth element, where M = mod(N, 32767).

So we can describe this behaviour using the first test case:

declare n number;
begin
  n:=10000001; -- this part
  n:=10000002; -- creates
  n:=10000003; -- 32768
   ...         -- integer
  n:=10032768; -- literals
    dbms_output.put_line('100000='||100000); -- it should print 100000, i.e. the 32768th element of the array, but prints 10000001,
                                             -- where 10000001 is the 1st element of the array (1 == mod(32768,32767))
    dbms_output.put_line('32766 ='||32766);  -- these 2 lines print correct values,
    dbms_output.put_line('32767 ='||32767);  -- because their values are in the range -32768..32767
    dbms_output.put_line('32768 ='||32768);  -- this line contains the 32769th element and prints the 2nd element of the array (2 == mod(32769,32767))
    dbms_output.put_line('32769 ='||32769);  -- this line contains the 32770th element and prints the 3rd element of the array (3 == mod(32770,32767))
end;

The following query can help you find objects that can potentially have this problem:

select
  s.owner,s.name,s.type
 ,sum(regexp_count(text,'(\W|^)3\d{4,}([^.0-9]|$)')) nums_count -- this regexp counts integer literals >= 30000
from dba_source s 
where 
    owner='&owner'
and type in ('FUNCTION','PROCEDURE','PACKAGE','PACKAGE BODY')
group by s.owner,s.name,s.type
having sum(regexp_count(text,'(\W|^)3\d{4,}([^.0-9]|$)'))>32767 -- keep only objects with more than 32767 such literals

Workaround:
You may have noticed that I wrote about INTEGER literals only, so the easiest workaround is to make them FLOAT: just add a “.” to the end of each literal:

declare n number;
begin
  n:=10000001.;
  n:=10000002.;
  n:=10000003.;
   ...
  n:=10032768.;
    dbms_output.put_line('100000='||100000.);
    dbms_output.put_line('32766 ='||32766.);
    dbms_output.put_line('32767 ='||32767.);    
    dbms_output.put_line('32768 ='||32768.);
    dbms_output.put_line('32769 ='||32769.);
end;

Fixed test cases

declare
  c clob := 'declare n number;begin'||chr(10);
  f varchar2(100) := 'n:=%s.;'||chr(10); -- note the "." added here
  v varchar2(32767);
  n number := 32768;
begin
  for i in 1..n loop
    v := v||utl_lms.format_message(f, to_char(1e7+i));
    if length(v) > 30000 then
      c := c||v;
      v := '';
    end if;
  end loop;
  v := v||q'[
dbms_output.put_line('100000='||100000.); -- "."
dbms_output.put_line('32766 ='||32766.);
dbms_output.put_line('32767 ='||32767.);
dbms_output.put_line('32768 ='||32768.);
dbms_output.put_line('32769 ='||32769.);
end;
]';
  c := c||v;
  execute immediate c;
end;
/



Oracle issues after upgrade to 12.2

Posted on November 24, 2017 by Sayan Malakshinov Posted in 12.2, bug, oracle

Sometimes it is really hard even to create a reproducible test case to send to Oracle Support, especially in the case of intermittent errors.
In such cases, I think it would be really great to have access to similar service requests or bugs of other Oracle clients.
So while my poll about knowledge sharing is still active, I want to share a couple of bugs we have faced after upgrading to 12.2 (and one bug from Eric van Roon). I am going to remove bugs from this list when they become “public” or “fixed”.
If you want to add your own findings to this list, you can add them in the comments. To make this process easier, you can provide just the symptoms, a short description, and a link to your own post with the details, and I will add it as a link.
Continue reading→


Revisiting buffer cache Very Large Object Threshold

Posted on November 15, 2017 by Roger MacNicol Posted in adaptive serial direct path reads, oracle, SmartScan

If you turn on NSMTIO tracing you will see references to VLOT:

qertbFetch:[MTT < OBJECT_SIZE < VLOT]: Checking cost to read from caches (local/remote) and checking storage reduction factors (OLTP/EHCC Comp)

I had said you could ignore VLOT and Frits Hoogland pointed out that tracing showed it had some impact, so let me clarify:

VLOT is the absolute upper bound up to which cached reads can even be considered.

This defaults to 500% of the number of buffers in the cache, i.e.

_very_large_object_threshold = 500

While this number is not used in any calculations, it is used in two places as a cutoff for whether those calculations are considered at all:

1) Can we consider using Automatic Big Table Caching (a.k.a. DWSCAN) for this object?

2) Should we do a cost analysis for Buffer Cache scan vs Direct Read scan on tables larger than the MTT? 

The logic for tables that are above the calculated medium table threshold (MTT), are NOT part of searched DMLs, and are NOT on Exadata with the statistics-based storage reduction factor enabled (_statistics_based_srf_enabled) is:

  • If _serial_direct_read == ALWAYS, use Direct Read
  • If _serial_direct_read == NEVER, use Buffer Cache
  • If _serial_direct_read == AUTO and #blocks in table < VLOT, use cost model
  • Else use Direct Read “qertbFetch:DirectRead:[OBJECT_SIZE>VLOT]”

In practice, 5X the buffer cache is so large that the cost-based decision would come to the same conclusion anyway; the default VLOT simply saves the time spent doing the analysis.

For example, I got a quick count of the number of blocks in the non-partitioned TPC-H scale 1 lineitem table:

select segment_name, sum(blocks), sum(bytes) from user_extents where segment_name='LINEITEM' group by segment_name;

and created my buffer cache to be exactly the same size. With this setup, setting _very_large_object_threshold=100 bypassed the cost model and went straight to DR scan, while setting it to 200 forced the use of the cost model. 
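As a rough way to see where a segment sits relative to that cutoff, you can compare its block count with five times the buffer cache size; this is just arithmetic against user_extents and v$buffer_pool, not the internal calculation, and the segment name is only an example:

select s.blocks                 seg_blocks,
       b.buffers                cache_buffers,
       b.buffers * 5            vlot_blocks,   -- default _very_large_object_threshold = 500 (%)
       case
         when s.blocks >= b.buffers * 5
         then 'above VLOT: direct read, no cost analysis'
         else 'below VLOT: cost model may be consulted'
       end                      decision
  from (select sum(blocks) blocks
          from user_extents
         where segment_name = 'LINEITEM') s,
       (select sum(buffers) buffers
          from v$buffer_pool) b;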

The moral of this is that the default value of VLOT rarely changes the decisions made. Only if you reduce VLOT to a much smaller multiplier of the cache size will you start to see a few more of your larger buffer cache scans move to direct read because they are no longer eligible for cost analysis. If you wish to stop some of the largest buffer cache scans from happening, you would need to set _very_large_object_threshold to less than 200.
