By carefully managing machine resources (including memory and processors), Oracle can support tens of thousands of concurrent database users. Oracle's memory management technologies have become quite complex over the years and are difficult to master. And although the automation tools introduced in recent versions greatly simplify the database administrator's work, PL/SQL developers must still understand the basic principles of memory management (especially as they relate to cursors and package variables) to avoid wasting memory.
SGA, PGA, and UGA
When a client program (say, SQL*Plus or SQL Developer) interacts with a database, three memory structures are used.
System Global Area
The SGA is a group of interrelated memory structures, or SGA components, that contain data and control information for one Oracle Database instance. All server and background processes share the SGA. Examples of data stored in the SGA are cached data blocks and shared SQL areas.
PGA (Process Global Area)
The PGA is a memory region containing data and control information used exclusively by a single server process. Oracle Database creates a PGA when an Oracle process starts.
Each server and background process has its own PGA. The instance PGA is the collection of all the individual PGAs.
Database initialization parameters set the size of the instance PGA, not of individual PGAs.
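For instance, assuming your session has access to the V$PGASTAT view (typically via SELECT_CATALOG_ROLE), you can check instance-wide PGA consumption with a query such as this sketch:

```sql
-- Instance-wide PGA usage vs. the configured target
-- (statistic names as documented for V$PGASTAT)
SELECT name, ROUND(value / 1024 / 1024) AS mbytes
  FROM v$pgastat
 WHERE name IN ('aggregate PGA target parameter', 'total PGA allocated');
```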
UGA (User Global Area)
The UGA (User Global Area) holds the data that must be retained between database calls, such as package variables and private SQL areas. In essence, the UGA preserves the session's state.
Where the UGA resides in memory depends on how the session connects to the database:
- Dedicated server. Oracle creates a dedicated server process for each session. This configuration is useful under heavy load, such as intensive computation or long-running queries. Since the UGA does not need to be accessed by other server processes, it resides in the PGA.
- Shared server. Oracle maintains a pool of shared server processes, any of which can handle a request from any session. This arrangement works well for hundreds of concurrent sessions that issue short requests separated by considerable idle time. Since every shared server process must be able to reach the session's data, the UGA resides in the SGA.
The total amount of PGA memory depends heavily on the kinds of operations the server performs for the application. For example, PL/SQL packages that fill large collections may require large amounts of UGA memory. If the application runs on shared servers, user processes must wait in a queue for service. If your user processes run long PL/SQL blocks or long SQL statements, the database administrator should either configure the server with a large number of shadow processes or run those sessions on a dedicated server.
Now let’s look at what memory looks like from the perspective of the program being executed.
Cursors and Oracle memory
You have probably already written hundreds of programs that declare and open cursors, fetch rows from them, and then close them. Neither SQL nor PL/SQL can work without cursors, and many statements implicitly perform recursive calls that open additional cursors. Every cursor, explicit or implicit, consumes memory on the database server, so tuning an Oracle configuration often involves reducing the number of cursors an application uses. Although this section is dedicated to memory management, remember that it is just one aspect of database optimization; in some situations overall performance improves even though the number of cursors grows.
Oracle associates cursors with anonymous PL/SQL blocks in much the same way as with SQL statements. When handling the first such call in a user session, Oracle creates an area in UGA memory in which to store data relevant to that call. Before executing a SQL statement or PL/SQL block, the server checks whether the library cache already contains a ready representation of the code. If it finds one, the execution kernel links the shared PL/SQL area to a private SQL area; if not, Oracle parses the statement or block. (Oracle also prepares and caches an execution plan for the anonymous PL/SQL block, which consists of a call to the PL/SQL kernel to interpret the byte code.)
Oracle interprets the simplest PL/SQL blocks using only the memory allocated to the main cursor. If the program contains nested SQL statements or PL/SQL calls, Oracle requires additional private areas in UGA memory, which PL/SQL manages on behalf of your application. This brings us to an important fact about cursor handling: a cursor can be closed in two ways. It can be closed programmatically, after which it cannot be used again in your application without being reopened; this is done with the CLOSE statement:
CLOSE cursor_name;
Alternatively, an implicit cursor is closed automatically. In either case, PL/SQL does not immediately release the memory associated with the cursor. The cursor persists as a database object so that it does not have to be reprocessed if it is reopened. If you look at the V$OPEN_CURSOR view, you will see that executing CLOSE does not reduce the number of the session's open cursors. In version 11.1 you can also select the new CURSOR_TYPE column for more information about each cursor.
PL/SQL maintains its own session cache of cursors; that is, it decides when to release which cursor. The OPEN_CURSORS initialization parameter sets the maximum number of cursors in this cache, and the cursor whose memory is released is chosen by an LRU (Least Recently Used) algorithm.
However, this internal algorithm works optimally only if your programs close their cursors immediately after they finish fetching data. So remember:
If a program explicitly opens a cursor, always explicitly close it as soon as you are done with it (but not before)! A programmer can also override Oracle's default behavior. You could, of course, end the session itself to close all its cursors, but there are less radical ways:
- reset the package state (see "Extensive collections in PL/SQL" below);
- manage cursor behavior at a low level with the DBMS_SQL package (though the performance cost and programming complexity of this approach can outweigh its benefits).
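As a sketch of the discipline described above (the table and column names here are hypothetical), an explicit cursor should be closed as soon as fetching completes, including on the error path:

```sql
DECLARE
   CURSOR emp_cur IS
      SELECT last_name FROM employees;   -- hypothetical table
   l_name employees.last_name%TYPE;
BEGIN
   OPEN emp_cur;
   LOOP
      FETCH emp_cur INTO l_name;
      EXIT WHEN emp_cur%NOTFOUND;
      NULL;                              -- process l_name here
   END LOOP;
   CLOSE emp_cur;                        -- close as soon as fetching is done
EXCEPTION
   WHEN OTHERS THEN
      IF emp_cur%ISOPEN THEN
         CLOSE emp_cur;                  -- do not leak the cursor on error
      END IF;
      RAISE;
END;
```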
Tips for saving memory
Now that you understand the theory, let's move on to practical tips you can apply in your daily work. Also take a look at the more general tips for optimizing programs (and for optimizing PL/SQL code in general). In addition, it is useful to be able to measure, from application code, how much memory a session is using at any given moment. You can query various V$ views for this; the plsql_memory package (see plsql_memory.pkg on the book's site) will help.
Command sharing
The database can let programs share compiled versions of SQL statements and anonymous blocks, even when they come from different sessions and users. The optimizer determines the execution plan during parsing, so any factor that affects parsing (including optimizer settings) also affects statement sharing. For the system to share SQL statements, several basic rules must be followed:
- Variable values must be supplied through bind variables, not literals, so that the statement text remains unchanged. The bind variables themselves must match in name and data type.
- The character case and formatting of the source code must be identical. When the same stored programs are run, this happens automatically; ad hoc "one-off" statements, however, may not match the cached text exactly.
- References to database objects must resolve to the same underlying objects.
- The database parameters that affect the SQL optimizer must match. For example, the calling sessions must use the same optimization goal (ALL_ROWS or FIRST_ROWS).
- The calling sessions must use the same National Language Support (NLS) settings.
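The last two rules concern ordinary session parameters: two sessions that differ in optimizer goal or NLS settings will not share statements. A sketch (the values shown are just examples):

```sql
-- Align the optimizer goal and an NLS setting with the other sessions
ALTER SESSION SET optimizer_mode = ALL_ROWS;
ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD';
```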
We will not dwell on the last two rules; the specific reasons that prevent a SQL statement from being shared can be found in the V$SQL_SHARED_CURSOR view. What interests us is how the first three rules apply to PL/SQL programs.
The first rule (bind variables) is so important that a separate subsection is devoted to it.
The second rule (case and formatting) is a well-known condition for statement sharing. The text must match exactly because the database computes a hash code to locate and lock the object in the library cache. Even though PL/SQL is usually case-insensitive, the following three blocks are not treated as identical:
BEGIN NULL; END;
begin null; end;
BEGIN NULL;  END;
Even though these statements are conceptually identical, they generate different hash codes and are therefore treated as distinct. However, if your anonymous blocks are short and all your "real programs" are implemented as stored code, it is much less likely that you will suppress sharing unintentionally.
Implement SQL and PL/SQL code as stored programs. In general, anonymous blocks should be as short as possible, typically a single call to a stored program. One more recommendation applies to SQL: to make sharing possible, place SQL statements in programs that are called from different places in the application. This also saves you from writing the same statements several times. The third rule states that external references (tables, procedures, and so on) must resolve to the same objects. Suppose that the user Scott and I each connect to Oracle and run this block:
BEGIN
XYZ;
END;
If the name "xyz" resolves to the same stored procedure for both of us, Oracle lets us both use the cached form of this block. If Scott's session resolves "xyz" to my copy of the procedure, say through a synonym, Oracle again allows the anonymous block to be shared. But if we own independent copies of the procedure, each of us works with our own block: even when the two copies of xyz are textually identical, Oracle caches them as different objects. In the same way, identical triggers on different tables are cached as different objects.
The conclusion: avoid creating identical copies of tables and programs under different accounts. Conventional wisdom says that to save memory you should factor out code shared by several programs (and especially by triggers) into a separate program that everyone calls. In other words, one database account owns the PL/SQL programs, and the other users who need them are granted EXECUTE privilege. While this practice is excellent for maintainability, it is unlikely to yield real memory savings: a new object instantiation is created for each caller, at an extra cost of a few kilobytes per session. Of course, this usage becomes significant only with many users.
There is another problem with this well-established approach. It shows up in environments where many users run the same PL/SQL program simultaneously. Invoking shared code requires locking and unlocking the library cache, which can cause delays; in such circumstances duplicating the code may actually help, reducing the chance of performance degradation and avoiding pointless locks. But let's return to the first rule, bind variables.
Variable binding
A bind variable is a variable in a statement whose value is supplied by the calling environment. Bind variables are essential whenever SQL statements are issued from PL/SQL, Java, SQL*Plus, or OCI. They make applications easier to scale, help prevent code injection, and promote SQL statement sharing.
For two statements to be considered identical, the bind variables must match in name, data type, and maximum length. For example, the following two statements are not considered identical:
SELECT col FROM tab1 WHERE col = :bind1;
SELECT col FROM tab1 WHERE col = :bind_1;
However, this requirement applies only to the statement text as the SQL engine sees it. As mentioned earlier, PL/SQL rewrites static SQL statements before SQL ever sees them. Example:
FUNCTION plsql_bookcount (author IN VARCHAR2)
RETURN NUMBER
IS
title_pattern VARCHAR2(10) := '%PL/SQL%';
l_count NUMBER;
BEGIN
SELECT COUNT(*) INTO l_count
FROM books
WHERE title LIKE title_pattern
AND author = plsql_bookcount.author;
RETURN l_count;
END;
After running plsql_bookcount, the V$SQLAREA view in Oracle 11g shows that PL/SQL reformulated the query as follows:
SELECT COUNT(*) FROM BOOKS WHERE TITLE LIKE :B2 AND AUTHOR = :B1
The author parameter and the local variable title_pattern have been replaced by the bind variables :B1 and :B2. Thus, with static SQL you need not worry about matching bind variable names; PL/SQL replaces each variable name with a generated bind variable name. This automatic binding applies to variables used in the WHERE and VALUES clauses of static INSERT, UPDATE, MERGE, and DELETE statements and, of course, SELECT.
Further experiments showed that changing the maximum length of a PL/SQL variable did not produce an additional statement in the SQL area, but changing the variable's data type did. You don't have to take my word for it: if you have the necessary privileges, you can run your own experiments and check whether SQL statements are being shared as you intend. Take a look at the V$SQLAREA view. Here is the result for the code above:
SQL> SELECT executions, sql_text
2 FROM v$sqlarea
3 WHERE sql_text like 'SELECT COUNT(*) FROM BOOKS%'
EXECUTIONS SQL_TEXT
---------- --------------------------------------------------
         1 SELECT COUNT(*) FROM BOOKS WHERE TITLE LIKE :B2
           AND AUTHOR = :B1
So if PL/SQL is this smart, you don't need to worry about bind variables? Not so fast: although PL/SQL automatically binds variables in static SQL, this feature does not extend to dynamic SQL, and careless programming can easily produce statements built from literals. Example:
FUNCTION count_recent_records (tablename_in IN VARCHAR2,
since_in IN VARCHAR2)
RETURN PLS_INTEGER
AS
l_count PLS_INTEGER;
BEGIN
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM '
|| DBMS_ASSERT.SIMPLE_SQL_NAME(tablename_in)
|| ' WHERE lastupdate > TO_DATE('
|| DBMS_ASSERT.ENQUOTE_LITERAL(since_in)
|| ', ''YYYYMMDD'')'
INTO l_count;
RETURN l_count;
END;
When the function executes, statements like the following are built dynamically:
SELECT COUNT(*) FROM tabname WHERE lastupdate > TO_DATE('20090315', 'YYYYMMDD')
Repeated calls with different since_in arguments may generate a large number of instructions that are unlikely to be shared:
SELECT COUNT(*) FROM tabname WHERE lastupdate > TO_DATE('20090105','YYYYMMDD')
SELECT COUNT(*) FROM tabname WHERE lastupdate > TO_DATE('20080704', 'YYYYMMDD')
SELECT COUNT(*) FROM tabname WHERE lastupdate > TO_DATE('20090101', 'YYYYMMDD')
Naturally, this leads to very wasteful use of memory and other server resources.
USING DBMS_ASSERT TO FIGHT CODE INJECTION
What are those DBMS_ASSERT calls doing in the bind variable example? Dynamic SQL that incorporates data entered directly by the user should be validated before it is executed. Calling DBMS_ASSERT helps ensure that the code receives exactly the kind of data it expects.
If you try to call the count_recent_records function with a "strange" table name like "books where 1=1;–", DBMS_ASSERT raises an exception and stops the program before it can do any harm. DBMS_ASSERT.SIMPLE_SQL_NAME verifies that the input meets the criteria for a valid SQL name.
DBMS_ASSERT.ENQUOTE_LITERAL encloses the input in quotes and checks that it contains no embedded quotes. A full description of DBMS_ASSERT can be found in the Oracle PL/SQL Packages and Types Reference.
If you rewrite the same function using a bind variable, you get this:
FUNCTION count_recent_records (tablename_in IN VARCHAR2,
since_in IN VARCHAR2)
RETURN PLS_INTEGER
AS
count_l PLS_INTEGER;
BEGIN
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM '
|| DBMS_ASSERT.SIMPLE_SQL_NAME(tablename_in)
|| ' WHERE lastupdate > :thedate'
INTO count_l
USING TO_DATE(since_in,'YYYYMMDD');
RETURN count_l;
END;
The SQL engine receives the following statement:
SELECT COUNT(*) FROM tabname WHERE lastupdate > :thedate
The second version is not only simpler and easier to understand, but for repeated calls with the same tablename_in and different since_in values it also performs significantly better. Oracle also supports the CURSOR_SHARING initialization parameter, which can provide some relief in applications containing large amounts of code without bind variables. Setting it to FORCE or SIMILAR makes the database replace SQL literals (all or some of them) with bind variables, avoiding parse costs. Unfortunately, this is one of those features that works better in theory than in practice.
On the other hand, if you consistently use real bind variables in your dynamic SQL, the effort pays off at runtime. Even if CURSOR_SHARING buys you some performance, consider it a half-measure: it cannot compare in efficiency with true bind variables, and it can produce unexpected and unwanted side effects.
If you must rely on this feature because of flaws in the software (often third-party code), do so only until you can modify the code to use true bind variables. Note also that the parameter can be set at the session level from a LOGON trigger.
Packages and efficient memory usage
When the database retrieves the byte code of a stored PL/SQL program, it reads the entire program. This applies not only to procedures and functions but also to packages: it is impossible to read in only part of a package. The first time any element of a package is referenced, even a single variable, the package's entire compiled code is loaded into the library cache.
Therefore, dividing your code among a smaller number of larger packages requires less memory (and disk space) than spreading it across many small ones. The logical grouping of package elements is thus beneficial not only architecturally but also for system performance.
Since Oracle reads a whole package into memory, group into a package only functionally related elements, that is, elements likely to be called within the same session.
Extensive collections in PL/SQL
Sharing objects is a great solution, but it is not available for everything in a program.
Even when several users run the same program in the same Oracle schema, each session has its own memory area holding data specific to its calls: the values of local and package variables, constants, and cursors.
There is no point in trying to share this session-specific data. The most typical problems arise when working with collections. Suppose a program declares an associative PL/SQL array as follows:
DECLARE
TYPE number_tab_t IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
number_tab number_tab_t;
empty_tab number_tab_t;
and then fills it with a large number of elements:
FOR i IN 1 .. 100000
LOOP
number_tab(i) := i;
END LOOP;
These elements have to be stored somewhere. Memory for the array is allocated according to the rules discussed above: data declared at the package level lives in the UGA, while data declared in an anonymous block, procedure, or top-level function lives in the call global area (CGA). Either way, working with a large collection requires a very large amount of memory.
The question is how to free this memory when you have finished working with the collection.
You need to execute one of two commands:
number_tab.DELETE;
or
number_tab := empty_tab;
In either case, Oracle places the freed memory on its free lists: memory that held a package variable is returned to the session's dynamic memory pool, and memory that held a call-level variable is returned to the CGA.
The same happens when a collection variable goes out of scope: for example, if a collection is declared in a standalone procedure, Oracle releases its memory as soon as the procedure finishes. In neither case, however, does the memory become available to other sessions, nor to the current session if it needs memory from the CGA region.
If subsequent DML operations must, say, sort a large amount of data, the application may therefore still need a huge amount of memory. Only when the session ends is the memory fully released and returned to the parent dynamic pool (heap).
It should be noted that a virtual memory manager has no trouble handling a large swap file, especially when a process keeps a large amount of inactive virtual memory in its address space. Such inactive memory occupies only hard disk space, not real memory. Still, filling the page space may sometimes be undesirable, in which case it is better to have Oracle free the memory. For such cases there is a special "garbage collection" procedure with a very simple syntax:
DBMS_SESSION.FREE_UNUSED_USER_MEMORY;
This built-in procedure finds most of the UGA memory no longer used by program variables and returns it to the parent heap: to the PGA for a dedicated server, or to the SGA for a shared server.
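Putting the pieces together, here is a minimal sketch of filling a collection, releasing its elements, and returning the memory to the parent heap (the type and loop bound mirror the earlier example):

```sql
DECLARE
   TYPE number_tab_t IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
   number_tab number_tab_t;
BEGIN
   FOR i IN 1 .. 100000 LOOP
      number_tab(i) := i;
   END LOOP;
   -- ... work with the collection ...
   number_tab.DELETE;                      -- release all the elements
   DBMS_SESSION.FREE_UNUSED_USER_MEMORY;   -- return the memory to the parent heap
END;
```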
I have thoroughly tested how memory occupied by collections is released in various situations, in particular with associative arrays and nested tables, on dedicated and shared servers, in anonymous blocks and package data. The conclusions:
- Assigning NULL to a nested table or VARRAY is not enough to free its memory. Either call the collection's DELETE method, assign it another empty but initialized collection, or let it go out of scope.
- To free unused memory and return it to the parent heap, use the DBMS_SESSION.FREE_UNUSED_USER_MEMORY procedure. When your program has filled one or more large PL/SQL tables, mark them as unused and call this procedure, and the session will not have to hold on to large memory blocks for similar operations.
- Out-of-memory errors usually occur more often in shared server mode than in dedicated server mode, because the UGA is allocated from the SGA, which has a limited size. As noted in "What to do when there is a lack of memory," this can result in an ORA-04031 error.
- In the shared server mode, it is impossible to free the memory occupied by PL/SQL tables.
In practice, the amount of memory occupied by a collection of NUMBER elements does not depend on whether the elements are NULL or, say, 38-digit numbers. For VARCHAR2 values declared with a length greater than 30 characters, however, Oracle appears to allocate memory dynamically.
When an associative array is filled in dedicated server mode, an array of a million NUMBER values takes about 38 MB; even an array of BOOLEAN elements uses almost 15 MB in Oracle9i. Multiply that by the number of users, say 100, and the total is huge, especially if you would rather not page memory out to disk for performance reasons.
To find out how much UGA and PGA memory the current session is using, run the following query:
SELECT n.name, ROUND(m.value/1024) kbytes
FROM V$STATNAME n, V$MYSTAT m
WHERE n.statistic# = m.statistic#
AND n.name LIKE 'session%memory%';
(Default privileges are not enough to read the views in this example.) The query shows the session's current and maximum memory usage. If you want to free the memory used by package collections without ending the session, call one of two built-in procedures:
- DBMS_SESSION.RESET_PACKAGE frees all memory allocated for package state. As a result, all package variables revert to their default values. For packages, this built-in does more than FREE_UNUSED_USER_MEMORY, because it does not check whether the memory is still in use.
- DBMS_SESSION.MODIFY_PACKAGE_STATE accepts one of two constants in its flags parameter: DBMS_SESSION.FREE_ALL_RESOURCES or DBMS_SESSION.REINITIALIZE. The first has the same effect as RESET_PACKAGE. The second resets state variables to their default values but does not release the memory or re-create the package data from scratch; it also programmatically closes open cursors without clearing their cache. If that is acceptable in your situation, use the second constant, because it is faster than a full package reset.
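To illustrate, either behavior is a one-line call; a sketch using the two documented constants:

```sql
BEGIN
   -- Full reset: frees all package-state memory (same effect as RESET_PACKAGE)
   DBMS_SESSION.MODIFY_PACKAGE_STATE(DBMS_SESSION.FREE_ALL_RESOURCES);

   -- Faster alternative: reset variables to defaults without freeing the memory
   -- DBMS_SESSION.MODIFY_PACKAGE_STATE(DBMS_SESSION.REINITIALIZE);
END;
```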
BULK COLLECT operations with LIMIT
BULK operations increase the efficiency of data processing, but you must keep memory usage moderate and prevent collections from growing too large.
By default, BULK COLLECT loads all fetched rows into the collection, and for large data sets the collection becomes enormous. The LIMIT clause helps in these situations.
Testing showed that, as intended, LIMIT reduces memory consumption while keeping programs fast. The following example uses a test table with one million rows.
To obtain reliable numbers for comparison, the program was first run to pre-fill the cache, then run again after reconnecting to the database. The plsql_memory package displays memory usage information (see plsql_memory.pkg on the book's site):
DECLARE
-- Preparation of collections
TYPE numtab IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
TYPE nametab IS TABLE OF VARCHAR2(4000) INDEX BY PLS_INTEGER;
TYPE tstab IS TABLE OF TIMESTAMP INDEX BY PLS_INTEGER;
CURSOR test_c IS
SELECT hi_card_nbr,hi_card_str,hi_card_ts
FROM data_test
;
nbrs numtab;
txt nametab;
tstamps tstab;
counter number := 0;
strt number;
fnsh number;
BEGIN
plsql_memory.start_analysis; -- Initialization of memory usage data output
strt := dbms_utility.get_time; -- Saving start time
OPEN test_c;
LOOP
FETCH test_c BULK COLLECT INTO nbrs,txt,tstamps LIMIT 10000;
EXIT WHEN nbrs.COUNT = 0;
FOR i IN 1 .. nbrs.COUNT LOOP
counter := counter + i; -- Data processing
END LOOP;
END LOOP;
plsql_memory.show_memory_usage;
CLOSE test_c;
fnsh := dbms_utility.get_time;
-- Converting hundredths of a second to milliseconds
DBMS_OUTPUT.PUT_LINE('Run time = '||(fnsh-strt)*10||' ms');
END;
/
The results that I received:
- Change in UGA memory: 394272 (Current = 2459840)
- Change in PGA memory: 1638400 (Current = 5807624)
- Run time = 1530 ms
We see that with a limit of 10,000 records, PGA memory usage has grown by 1,638,400 bytes. When information is output again after the PL/SQL block is finished, most of this memory is freed:
- EXEC plsql_memory.show_memory_usage;
- Change in UGA memory: 0 (Current =2394352)
- Change in PGA memory: -458752 (Current = 3907080)
Then a re-test was carried out without LIMIT so that all rows of the table were loaded at once:
- Change in UGA memory: 0 (Current = 1366000)
- Change in PGA memory: 18153472 (Current = 22519304)
As you can see, much more memory is used without LIMIT. So I strongly recommend including LIMIT in the final version of your applications.
Saving the state of objects
Usually, Oracle keeps the values of package-level constants and variables, and the state of package cursors, in the UGA until the session ends. Data declared in the declaration section of a procedure or function is the exception: its scope is limited to that module, and when the module finishes executing, the memory its data occupied is freed; the values cease to exist.
Apart from disconnecting from the server, the package state can be lost for several other reasons:
- The package was recompiled, or the database marked it invalid.
- The session executed the built-in procedure DBMS_SESSION.RESET_PACKAGE.
- The package carries the SERIALLY_REUSABLE pragma, which tells Oracle to keep its state only for the duration of a call rather than for the whole session.
- The application uses a web gateway in its default configuration.
Within these limitations, package data structures can act as globals in the PL/SQL environment; that is, PL/SQL programs running in the same session can use them to exchange data.
Public data. A data structure declared in a package specification is a public global. Any program or user with the EXECUTE privilege on the package can access it, and package variables not declared as constants can even be assigned arbitrary values by any such program. As we know, public global data is a potential source of many errors: it is convenient to declare, but careless use produces unstructured code with dangerous side effects.
Private data. Private global data structures cause far fewer problems. Since they do not appear in the package specification, no one outside the package can refer to them; such data can be used only by elements inside the package itself.
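The difference can be illustrated with a minimal package sketch; the names here are invented for the example:

```sql
CREATE OR REPLACE PACKAGE app_state IS
   -- Open (public) data: anyone with EXECUTE can read and overwrite it.
   g_counter PLS_INTEGER := 0;
   PROCEDURE increment;
   FUNCTION current_value RETURN PLS_INTEGER;
END app_state;
/
CREATE OR REPLACE PACKAGE BODY app_state IS
   -- Private data: declared only in the body, invisible outside the package.
   g_last_change DATE;

   PROCEDURE increment IS
   BEGIN
      g_counter     := g_counter + 1;
      g_last_change := SYSDATE;
   END increment;

   FUNCTION current_value RETURN PLS_INTEGER IS
   BEGIN
      RETURN g_counter;
   END current_value;
END app_state;
/
```

Both variables keep their values in the UGA until the session ends, but only g_counter can be read or corrupted by code outside the package; g_last_change is reachable solely through the package's own subprograms.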
Package data is global only within a single session or database connection; it is not shared by several sessions. If you need to share data between sessions, there are other means for this: the DBMS_PIPE package, Oracle Advanced Queuing, the UTL_TCP package… not to mention database tables!
What to do when there is a lack of memory
Take the following scenario: you are working with a database, everything is going smoothly, numerous SQL and PL/SQL commands execute, and then, like a thunderclap from a clear sky: ORA-04031: unable to allocate n bytes of shared memory.
This error is more common in shared server mode, with its increased consumption of UGA memory in the SGA. In dedicated server mode, the database can usually get more virtual memory from the operating system, but a similar error, ORA-04030, may occur there as well.
With a dedicated server, the maximum memory size is about 4 GB per session, while with a shared server you can manually assign any desired size to the pool.
There are several ways to fix this situation. If you are an application developer, try to reduce the use of shared memory. Some possible actions (approximately in order of application):
- Modify the code and ensure that most SQL commands are shared.
- Reduce the size or number of collections stored in memory.
- Reduce the size of the application code in memory.
- Change the database level settings and/or buy additional memory for the server.
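Item 1 mostly comes down to using bind variables, so that textually identical statements share a single cursor in the pool instead of flooding it with one-off SQL texts. A sketch of the idea (the table and column names are assumptions for the example):

```sql
DECLARE
   l_name VARCHAR2(100);
   l_id   NUMBER := 42;
BEGIN
   -- Bad: each id value yields a distinct SQL text and its own shared-pool entry.
   -- EXECUTE IMMEDIATE
   --    'SELECT last_name FROM employees WHERE id = ' || l_id INTO l_name;

   -- Good: the text stays constant, so every execution reuses one shared cursor.
   EXECUTE IMMEDIATE
      'SELECT last_name FROM employees WHERE id = :id'
      INTO l_name USING l_id;
END;
/
```

Note that static SQL embedded in PL/SQL binds its variables automatically; the problem arises mainly with dynamic SQL built by concatenating literals.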
Items 1 and 2 have already been discussed, so consider item 3. How can you estimate the size of program code once it has been loaded into memory? And how can you reduce it?
Before running a PL/SQL program, the database must load its entire byte code into memory. To find out how much space a program object occupies in the shared pool, ask your database administrator to run the built-in procedure DBMS_SHARED_POOL.SIZES, which lists all objects larger than a specified size.
The following example shows the memory consumed by objects in the shared pool immediately after the database is started:
SQL> SET SERVEROUTPUT ON SIZE 1000000
SQL> EXEC DBMS_SHARED_POOL.sizes(minsize => 125)
SIZE(K) KEPT NAME
------- ------ ---------------------------------------------------------------
433 SYS.STANDARD (PACKAGE)
364 SYS.DBMS_RCVMAN (PACKAGE BODY)
249 SYSMAN.MGMT_JOB_ENGINE (PACKAGE BODY)
224 SYS.DBMS_RCVMAN (PACKAGE)
221 SYS.DBMS_STATS_INTERNAL (PACKAGE)
220 SYS.DBMS_BACKUP_RESTORE (PACKAGE)
125 MERGE INTO cache_stats_1$ D USING (select * from table(dbms_sta)
ts_internal.format_cache_rows(CURSOR((select dataobj# o, st
atistic# stat, nvl(value, 0) val from gv$segstat where stat
istic# in (0, 3, 5) and obj# > 0 and inst_id = 1) union all
(select obj# o, 7 stat,nvl(sum(num_buf), 0) val from x$kcb
oqh x where inst_id = 1 group by obj#) order by o))) wh
(20B5C934,3478682418) (CURSOR)
The condition minsize => 125 means “output only objects of size 125 KB and above”. From the output, you can see that the STANDARD package occupies the most memory (433 KB).
To get rid of errors 4031 or 4030, knowing the amount of memory used by your programs is necessary but not sufficient; you also need to know the size of the shared pool and the amount of memory occupied by “recreatable” objects – objects that can become obsolete, be displaced from memory, and be reloaded when needed.
Some of this information is difficult to obtain from the database; you may need to dig into the mysterious X$ views. However, versions 9.2.0.5 and higher automatically generate a heap dump in the USER_DUMP_DEST directory when error 4031 occurs. See what you can extract from it, or simply pass it to Oracle technical support.
In addition, if the application contains a large amount of code that is not shared, it may help to share it, since this can noticeably affect memory use. PL/SQL programs translated into low-level (native) code are kept in shared library files, and the database still executes them from memory; a privileged user can manage the memory they occupy outside the database with operating system tools. Now let us turn to item 4: configuring the database or purchasing additional memory. A competent database administrator knows how to configure the shared pool using the following parameters:
- SHARED_POOL_SIZE – bytes reserved for the shared pool.
- DB_CACHE_SIZE – bytes reserved for the buffer cache of database blocks.
- LARGE_POOL_SIZE – bytes reserved for an optional pool that, among other things, stores the UGA of shared server connections.
- JAVA_POOL_SIZE – bytes used by the Java memory manager.
- STREAMS_POOL_SIZE – bytes used by Oracle Streams technology.
- SGA_TARGET – the size of the SGA, from which the database automatically allocates the caches and pools mentioned above.
- PGA_AGGREGATE_TARGET – the target total amount of memory used by all server processes of an instance. Usually equal to the server memory available to the database minus the SGA size.
- PGA_AGGREGATE_LIMIT (introduced in Oracle Database 12c) – a hard limit on the aggregate PGA memory consumed by the instance. If the limit is exceeded, the calls of the sessions consuming the most memory are aborted, with parallel queries treated as a single unit; if total PGA memory use still remains above the limit, those sessions are terminated. These actions do not apply to SYS processes and critical background processes.
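For illustration, these parameters can be changed with ALTER SYSTEM; the sizes below are arbitrary examples, not recommendations for any particular workload:

```sql
-- Let the instance manage SGA components automatically within 2 GB.
ALTER SYSTEM SET SGA_TARGET = 2G SCOPE = BOTH;

-- Target 1 GB of aggregate PGA, with a hard cap of 3 GB (12c and later).
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 1G SCOPE = BOTH;
ALTER SYSTEM SET PGA_AGGREGATE_LIMIT  = 3G SCOPE = BOTH;
```

SCOPE = BOTH applies the change to the running instance and persists it in the server parameter file, assuming the instance was started with an spfile.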
You can also ask your administrator to force PL/SQL programs, sequences, tables, or cursors into memory using the DBMS_SHARED_POOL.KEEP procedure.
For example, the following block requires that the database fixes the STANDARD package in memory:
BEGIN
   DBMS_SHARED_POOL.KEEP('SYS.STANDARD');
END;
/
Fixing in memory is especially useful for large programs that are relatively rare to run.
Without pinning, such code is likely to be forced out of the pool during long periods of inactivity, and reloading it on a later call may displace many smaller objects from the pool.
If ORA-04031 errors affect only a small subset of users or applications, try switching the offenders to dedicated server mode.
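A session can request a dedicated server regardless of the instance-wide default through its connect descriptor; the alias and host names below are hypothetical:

```sql
-- tnsnames.ora entry forcing a dedicated server process for this alias
BATCHJOB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (SERVER = DEDICATED)))
```

Pointing only the memory-hungry applications at such an alias moves their UGA out of the SGA and into per-process PGA memory, relieving pressure on the shared pool.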
About Enteros
Enteros offers a patented database performance management SaaS platform. It proactively identifies root causes of complex business-impacting database scalability and performance issues across a growing number of clouds, RDBMS, NoSQL, and machine learning database platforms.
The views expressed on this blog are those of the author and do not necessarily reflect the opinions of Enteros Inc. This blog may contain links to the content of third-party sites. By providing such links, Enteros Inc. does not adopt, guarantee, approve, or endorse the information, views, or products available on such sites.