flexible pipelined functions
Pipelined functions, type polymorphism (type hierarchies and substitution) and multi-table inserts are three features from the Oracle 9i timeframe. Pipelined functions are used primarily in ETL applications (extract, transform and load) for efficient processing of complex transformations. Type substitution is used for modelling complex data types and relationships in more object-oriented applications. Multi-table inserts (another ETL feature of Oracle) are used to load one or more tables from a single rowsource.
In this article we will combine these features by using type polymorphism to return multiple record structures from a single pipelined function. We will load the resulting dataset into multiple tables using multi-table insert.
It is assumed that readers are familiar with the technologies being described in this article.
why is this technique significant
This technique is significant on two counts.

First, it is usually the case that pipelined functions will return a single, known record structure (unless using complex ANYDATASET structures made available by Oracle's Data Cartridge). The Oracle documentation and other articles often show pipelined functions transforming a single input record into two output records, but always of the same structure. This article will show how single input records can be transformed into multiple output rows of different structures.

-- i.e. a single pipelined function can return types of different structures
Second, this technique solves a very particular, but common, performance problem. It is quite common in traditional batch environments (such as data warehouses) to receive source data in flat-files. It is equally common that some of these files contain data for more than one target table in the data warehouse. Assuming that the data transformations that must take place are complex (i.e. too complex for SQL), a typical loading approach is as follows (in pseudo-code).

-- i.e. this solves the performance problem of transforming and loading the source data multiple times
FOR rec IN (SELECT * FROM source_data) LOOP
   ...prepare table_A variables...
   ...prepare table_B variables...
   ...prepare table_C variables...
   INSERT INTO table_A VALUES (...);
   INSERT INTO table_B VALUES (...);
   INSERT INTO table_C VALUES (...);
END LOOP;
This is a simple process to code and understand, yet it is inefficient and slow. Using the techniques we will describe in this article, we will combine the efficiency of bulk SQL, parallel pipelined functions and substitutable types to change this load to something that resembles the following pseudo-code.
INSERT FIRST
   --
   WHEN (record is of table_A format)
   INTO table_A VALUES (...)
   --
   WHEN (record is of table_B format)
   INTO table_B VALUES (...)
   --
   WHEN (record is of table_C format)
   INTO table_C VALUES (...)
   --
SELECT ...
FROM   TABLE(parallel_pipelined_function(CURSOR(...)));
setup
We will begin by setting up our sample application. We are going to use a simplified investment trading model for our demonstrations. This will include:
    TRADES: this will store information on a trade that takes place between two counterparties;
    TRADE_LEGS: child of TRADES. A single trade might comprise multiple legs (e.g. a swap trade). In our application, we are going to ensure that every trade has two legs; and
    LEG_VALUATIONS: child of TRADE_LEGS. Each leg of a trade will be valued every day that it is active (i.e. not matured).
target tables
Given this simple model, we can now create our target tables. Note that all tables are defined with PARALLEL because we will exploit this later when we come to using our pipelined function.
SQL> CREATE TABLE trades
  2  ( trade_id           NUMBER
  3  , product_type       VARCHAR2(10)
  4  , counterparty       VARCHAR2(30)
  5  , trade_timestamp    TIMESTAMP
  6  , trading_book       VARCHAR2(30)
  7  , maturity_date      DATE
  8  , CONSTRAINT trades_pk
  9       PRIMARY KEY (trade_id)
 10  )
 11  PARALLEL;
Table created.
SQL> CREATE TABLE trade_legs
  2  ( trade_id           NUMBER
  3  , leg_no             NUMBER
  4  , trade_amount       NUMBER
  5  , currency           VARCHAR2(3)
  6  , trade_price        NUMBER
  7  , CONSTRAINT trade_legs_pk
  8       PRIMARY KEY (trade_id, leg_no)
  9  , CONSTRAINT trade_legs_fk01
 10       FOREIGN KEY (trade_id)
 11       REFERENCES trades (trade_id)
 12  )
 13  PARALLEL;
Table created.
SQL> CREATE TABLE leg_valuations
  2  ( trade_id           NUMBER
  3  , leg_no             NUMBER
  4  , valuation_date     DATE
  5  , market_value       NUMBER
  6  , CONSTRAINT leg_valuations_pk
  7       PRIMARY KEY (trade_id, leg_no, valuation_date)
  8  , CONSTRAINT leg_valuations_fk01
  9       FOREIGN KEY (trade_id, leg_no)
 10       REFERENCES trade_legs (trade_id, leg_no)
 11  )
 12  PARALLEL;
Table created.
source data
For our demonstration, we require some source data. This is typically provided in flat-files and loaded into a staging table (or made available directly via an external table). For simplicity, we will create a staging table and populate it with data as follows. Again, this table is defined with PARALLEL.
SQL> CREATE TABLE trades_staging
  2  ( trade_id           NUMBER
  3  , leg_no             NUMBER
  4  , valuation_date     DATE
  5  , product_type       VARCHAR2(10)
  6  , counterparty       VARCHAR2(30)
  7  , trade_timestamp    TIMESTAMP
  8  , trading_book       VARCHAR2(30)
  9  , maturity_date      DATE
 10  , trade_amount       NUMBER
 11  , currency           VARCHAR2(3)
 12  , trade_price        NUMBER
 13  , market_value       NUMBER
 14  )
 15  PARALLEL;
Table created.
SQL> INSERT INTO trades_staging
  2  SELECT object_id                                       AS trade_id
  3  ,      ROW_NUMBER() OVER
  4            (PARTITION BY object_id ORDER BY 1)          AS leg_no
  5  ,      TRUNC(SYSDATE)-1                                AS valuation_date
  6  ,      SUBSTR(object_type,1,10)                        AS product_type
  7  ,      owner                                           AS counterparty
  8  ,      TO_TIMESTAMP(timestamp,'YYYY-MM-DD:HH24:MI:SS') AS trade_timestamp
  9  ,      object_type                                     AS trading_book
 10  ,      created + MOD(object_id,500)                    AS maturity_date
 11  ,      ABS(DBMS_RANDOM.RANDOM)                         AS trade_amount
 12  ,      'GBP'                                           AS currency
 13  ,      DBMS_RANDOM.VALUE                               AS trade_price
 14  ,      ABS(DBMS_RANDOM.RANDOM)                         AS market_value
 15  FROM   all_objects
 16  ,     (SELECT NULL FROM all_objects WHERE ROWNUM <= 2)
 17  WHERE  object_id IS NOT NULL;
99716 rows created.
The source data is manufactured in such a way that there are two legs, each with a valuation, for every trade. Because of the denormalised nature of the source data, the TRADES attributes are duplicated across each record-pairing. As with the scenario described in the introduction, we will need to transform and load this source data into each of our three trading tables.
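We can verify the pairing with a quick sanity query against the staging table (an aside, not part of the setup); it should return zero.

-- count any trade that does not have exactly two staging records...
SELECT COUNT(*) AS bad_trades
FROM  (
       SELECT trade_id
       FROM   trades_staging
       GROUP  BY trade_id
       HAVING COUNT(*) <> 2
      );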
type hierarchy
Pipelined functions require two types: one to define an output record structure and one to buffer a collection of this structure. Usually, we create a single object type and corresponding collection type for this purpose. As described earlier, however, we are going to use type polymorphism to enable us to pass around record types of different structures under the guise of a single object type. To do this, we need to create a type hierarchy. Our hierarchy will comprise the following:
a single supertype, containing a single attribute that each subtype will inherit. This will be a "generic" type, used to define parameters and variables in our PL/SQL APIs; and
three subtypes, each matching one of the structures of our sample trading tables (minus the supertype attribute). Because these are defined as subtypes, they can be used in place of the supertype (i.e. substitute for the supertype).
We will begin by creating our "generic" supertype as follows. This will be the object type that the pipelined function will stream from the PIPE ROW statements.
SQL> CREATE TYPE transaction_ot AS OBJECT
  2  ( transaction_id NUMBER
  3  )
  4  NOT FINAL;
  5  /
Type created.
Note that we have declared this as NOT FINAL to indicate that we are creating a hierarchy that will be implemented through subtypes. We have included a single attribute of TRANSACTION_ID to represent a generic primary key attribute that may or may not be extended in the subtypes (in our trading model, we have a TRADE_ID as the primary transaction key). We could also define this as NOT INSTANTIABLE if we wished to ensure that no direct instances of the supertype were coded into our programs.
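For reference, the stricter declaration would look as follows (a variation we do not use in this article).

CREATE TYPE transaction_ot AS OBJECT
( transaction_id NUMBER
)
NOT INSTANTIABLE NOT FINAL;
/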
This object type defines the structure of a single record returned by our pipelined function. As with all pipelined function implementations, we must create a corresponding collection type, as follows.
SQL> CREATE OR REPLACE TYPE transaction_ntt
  2     AS TABLE OF transaction_ot;
  3  /
Type created.
In "regular" pipelined function implementations, we would be ready to code the function at this point. However, such a function would only pipe out arrays of a single record structure containing a single attribute (TRANSACTION_ID). We want to be able to pipe back multiple record structures from the function, hence we need a type hierarchy. To complete this hierarchy, we will create three further object types: one for each of our target trading tables described earlier. We will begin by creating a subtype for TRADES records as follows. Note that the TRADE_ID column will be accounted for by the supertype's generic TRANSACTION_ID attribute.
SQL> CREATE TYPE trade_ot UNDER transaction_ot
  2  ( product_type     VARCHAR2(10)
  3  , counterparty     VARCHAR2(30)
  4  , trade_timestamp  TIMESTAMP
  5  , trading_book     VARCHAR2(30)
  6  , maturity_date    DATE
  7  , CONSTRUCTOR FUNCTION trade_ot
  8       RETURN SELF AS RESULT
  9  )
 10  FINAL;
 11  /
Type created.
Note that we have defined this as a subtype of our generic supertype using the UNDER syntax. Note also that we have declared a non-default constructor function. This is purely for convenience later, when we will be able to initialise an instance of this type without having to pass a value for each attribute in the type.

-- for background on constructor functions, see the following excerpt from the Oracle documentation

Constructors and Type Evolution
The attribute value constructor function saves you the trouble of defining your own constructors for a type. However, with an attribute-value constructor, you must supply a value for every attribute declared in the type. Otherwise the constructor call will fail to compile.
This requirement of an attribute-value constructor can create a problem if you evolve the type later on—by adding an attribute, for example. When you change the attributes of a type, the type's attribute-value constructor changes, too. If you add an attribute, the updated attribute-value constructor expects a value for the new attribute as well as the old ones. As a result, all the attribute-value constructor calls in your existing code, where values for only the old number of attributes are supplied, will fail to compile.
Advantages of User-Defined Constructors
User-defined constructors avoid the problem with the attribute-value constructor because user-defined constructors do not need to explicitly set a value for every attribute of a type. A user-defined constructor can have any number of arguments, of any type, and these do not need to map directly to type attributes. In your definition of the constructor, you can initialize the attributes to any appropriate values. Any attributes for which you do not supply values are initialized by the system to NULL.
If you evolve a type—for example, by adding an attribute—calls to user-defined constructors for the type do not need to be changed. User-defined constructors, like ordinary methods, are not automatically modified when the type evolves, so the call signature of a user-defined constructor remains the same. You may, however, need to change the definition of the constructor if you do not want the new attribute to be initialized to NULL.
http://docs.oracle.com/cd/B28359_01/appdev.111/b28371/adobjadv.htm#CHDJCFAG
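To make the point concrete, the following hypothetical evolution of TRADE_OT would leave calls to our user-defined constructor untouched (the TRADER attribute is invented purely for illustration and is not part of our model).

-- add a hypothetical attribute; CASCADE propagates the change to dependents...
ALTER TYPE trade_ot ADD ATTRIBUTE (trader VARCHAR2(30)) CASCADE;

-- existing calls to trade_ot() still compile: the new TRADER attribute is
-- simply initialised to NULL. Attribute-value constructor calls, by contrast,
-- would now require a value for TRADER as well.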

We therefore need a type body, as follows.
SQL> CREATE TYPE BODY trade_ot AS
  2     CONSTRUCTOR FUNCTION trade_ot
  3        RETURN SELF AS RESULT IS
  4     BEGIN
  5        RETURN;
  6     END;
  7  END;
  8  /
Type body created.
We will now continue the pattern and complete our type hierarchy by creating the remaining types.
SQL> CREATE TYPE trade_leg_ot UNDER transaction_ot
  2  ( leg_no             NUMBER
  3  , trade_amount       NUMBER
  4  , currency           VARCHAR2(3)
  5  , trade_price        NUMBER
  6  , CONSTRUCTOR FUNCTION trade_leg_ot
  7       RETURN SELF AS RESULT
  8  )
  9  FINAL;
 10  /
Type created.
SQL> CREATE TYPE BODY trade_leg_ot AS
  2     CONSTRUCTOR FUNCTION trade_leg_ot
  3        RETURN SELF AS RESULT IS
  4     BEGIN
  5        RETURN;
  6     END;
  7  END;
  8  /
Type body created.
SQL> CREATE TYPE leg_valuation_ot UNDER transaction_ot
  2  ( leg_no           NUMBER
  3  , valuation_date   DATE
  4  , market_value     NUMBER
  5  , CONSTRUCTOR FUNCTION leg_valuation_ot
  6       RETURN SELF AS RESULT
  7  )
  8  FINAL;
  9  /
Type created.
SQL> CREATE TYPE BODY leg_valuation_ot AS
  2     CONSTRUCTOR FUNCTION leg_valuation_ot
  3        RETURN SELF AS RESULT IS
  4     BEGIN
  5        RETURN;
  6     END;
  7  END;
  8  /
Type body created.
creating the pipelined function
To recap, therefore, we have a single set of source data and three target tables to be loaded from this source data. We have described this data via a type hierarchy that we will now use in our pipelined function implementation. We will begin with a package specification, as follows.
SQL> CREATE PACKAGE trades_load AS
  2     FUNCTION trades_transform( p_source_data IN SYS_REFCURSOR )
  3        RETURN transaction_ntt
  4        PIPELINED
  5        PARALLEL_ENABLE (PARTITION p_source_data BY ANY);
  6     PROCEDURE load_trades;
  7   END trades_load;
  8 /
Package created.
Note that our package has two public programs: our pipelined function (defined as parallel-enabled) and a procedure to invoke the load itself. Note how our pipelined function returns the generic collection type as described earlier. That is, it will return multiple instances of the TRANSACTION_OT type or any other type that is allowed to substitute for it (i.e. any of our three subtypes). This is the critical point and it highlights the flexibility we achieve by using type polymorphism in this way.
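To see this substitutability in isolation, consider the following small PL/SQL sketch (illustrative only).

DECLARE
   r_txn transaction_ot;      -- a variable of the generic supertype...
BEGIN
   r_txn := trade_ot();       -- ...can hold an instance of any subtype
   r_txn.transaction_id := 1; -- supertype attributes remain accessible
   r_txn := trade_leg_ot();   -- ...and any other subtype can substitute equally
END;
/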
We will now implement our pipelined function by creating the package body. Note that the loading procedure is stubbed at this stage to minimise the code listing.
SQL> CREATE PACKAGE BODY trades_load AS
  2
  3     ------------------------------------------------------------
  4
  5     FUNCTION trades_transform( p_source_data IN SYS_REFCURSOR )
  6        RETURN transaction_ntt
  7        PIPELINED
  8        PARALLEL_ENABLE (PARTITION p_source_data BY ANY) IS
  9
 10        /* Array of input record type... */
 11        TYPE aat_source_data IS TABLE OF trades_staging%ROWTYPE
 12           INDEX BY PLS_INTEGER;
 13        aa_source_data aat_source_data;
 14        r_source_data  trades_staging%ROWTYPE;
 15
 16        /* Output record types... */
 17        r_trade         trade_ot         := trade_ot();
 18        r_trade_leg     trade_leg_ot     := trade_leg_ot();
 19        r_leg_valuation leg_valuation_ot := leg_valuation_ot();
 20
 21     BEGIN
 22
 23        LOOP
 24
 25           FETCH p_source_data BULK COLLECT INTO aa_source_data LIMIT 100;
 26
 27           FOR i IN 1 .. aa_source_data.COUNT LOOP
 28
 29              /* Work with a single record... */
 30              r_source_data := aa_source_data(i);
 31
 32              /* Reset the variables... */
 33              r_trade         := trade_ot();
 34              r_trade_leg     := trade_leg_ot();
 35              r_leg_valuation := leg_valuation_ot();
 36
 37              /* Prepare and pipe the trade record... */
 38              IF r_source_data.leg_no = 1 THEN
 39
 40                 r_trade.transaction_id  := r_source_data.trade_id;
 41                 r_trade.product_type    := r_source_data.product_type;
 42                 r_trade.counterparty    := r_source_data.counterparty;
 43                 r_trade.trading_book    := r_source_data.trading_book;
 44                 r_trade.trade_timestamp := r_source_data.trade_timestamp;
 45                 r_trade.maturity_date   := r_source_data.maturity_date;
 46
 47                 PIPE ROW (r_trade);
 48
 49              END IF;
 50
 51              /* Prepare and pipe the trade_leg record... */
 52              r_trade_leg.transaction_id := r_source_data.trade_id;
 53              r_trade_leg.leg_no         := r_source_data.leg_no;
 54              r_trade_leg.trade_amount   := r_source_data.trade_amount;
 55              r_trade_leg.currency       := r_source_data.currency;
 56              r_trade_leg.trade_price    := r_source_data.trade_price;
 57
 58              PIPE ROW (r_trade_leg);
 59
 60              /* Prepare and pipe the leg_valuation record... */
 61              r_leg_valuation.transaction_id := r_source_data.trade_id;
 62              r_leg_valuation.leg_no         := r_source_data.leg_no;
 63              r_leg_valuation.valuation_date := r_source_data.valuation_date;
 64              r_leg_valuation.market_value   := r_source_data.market_value;
 65
 66              PIPE ROW (r_leg_valuation);
 67
 68           END LOOP;
 69
 70           EXIT WHEN p_source_data%NOTFOUND;
 71
 72        END LOOP;
 73        CLOSE p_source_data;
 74
 75        RETURN;
 76
 77     END trades_transform;
 78
 79     ------------------------------------------------------------
 80
 81     PROCEDURE load_trades IS
 82     BEGIN
 83        NULL;
 84     END load_trades;
 85
 86  END trades_load;
 87  /
Package body created.
We now have a pipelined function that returns three different record structures as substitutes for the supertype. Note in particular the following:
    Lines 17-19: we have three different record variables to be populated and returned from each staging row;
    Lines 33-35: prior to processing each source row, we reset each record by invoking the non-default type constructors we created earlier;
    Lines 38-50: we only want to pipe one trade record per trade leg pairing, hence the LEG_NO test. We assign the TRADES "record" and pipe it;
    Lines 52-58: we prepare and pipe a TRADE_LEGS record for each source record;
    Lines 61-66: we prepare and pipe a LEG_VALUATIONS record for each source record.
For performance reasons, we have defined the pipelined function as parallel-enabled and are using bulk fetches from the cursor in small array sizes. A critical point to note is that, for clarity, this example deliberately excludes any of the complex transformations that would necessitate a PL/SQL solution. Given the format above, it would of course be much simpler and more efficient to use bulk SQL loading. A PL/SQL approach is therefore assumed to be a necessity.
loading from the pipelined function
We will now use our pipelined function. We will begin by demonstrating how we query the function, starting with a simple SQL statement as follows.
SQL> SELECT *
  2  FROM   TABLE(
  3            trades_load.trades_transform(
  4               CURSOR( SELECT * FROM trades_staging ) ) )
  5  WHERE  ROWNUM <= 5;
TRANSACTION_ID
--------------
         14636
         14636
         14637
         14637
         14637
5 rows selected.
First, note the syntax. We are passing in a ref cursor parameter to the pipelined function because this is a pre-requisite to enable parallel execution. Second, note how we use "SELECT *" but only receive a single column back. Remember that the pipelined function is based on our TRANSACTION_OT type (and TRANSACTION_NTT collection type). This type only contains a single attribute, so what we see above is semantically correct, even though we have piped rows of a different structure.
The reason (and solution) is simple. When using type substitution, Oracle does not downcast a supertype into its correct subtype unless we tell it to.

-- i.e. Oracle does not automatically downcast to the subtype to find its attribute values

We do this in two stages. First, we must retrieve the actual object instances from the function and not the individual attributes. In the first example, we tried to access the attributes using "SELECT *". We retrieve the actual object instances by using the VALUE function, as follows.
SQL> SELECT VALUE(nt)
  2  FROM   TABLE(
  3            trades_load.trades_transform(
  4               CURSOR( SELECT * FROM trades_staging ) ) ) nt
  5  WHERE  ROWNUM <= 5;
VALUE(NT)(TRANSACTION_ID)
-----------------------------------------------------------------------------------------
TRADE_LEG_OT(14636, 2, 386190879, 'GBP', .724850851)
LEG_VALUATION_OT(14636, 2, '12-AUG-07', 2096427733)
TRADE_OT(14637, 'SYNONYM', 'PUBLIC', '30-AUG-05 14.16.38.000000', 'SYNONYM', '14-JAN-06')
TRADE_LEG_OT(14637, 1, 292552620, 'GBP', .555342993)
LEG_VALUATION_OT(14637, 1, '12-AUG-07', 670904643)
5 rows selected.
We can now see for the first time that we have a truly flexible pipelined function! In the output above, we have three different structures being returned from the function. We can take this a stage further and decode the type of each object instance using IS OF conditions, as follows.
SQL> SELECT VALUE(nt) AS record_value
  2  ,      CASE
  3            WHEN VALUE(nt) IS OF TYPE (trade_ot)
  4            THEN 'TRADES'
  5            WHEN VALUE(nt) IS OF TYPE (trade_leg_ot)
  6            THEN 'TRADE_LEGS'
  7            ELSE 'LEG_VALUATIONS'
  8         END AS record_type
  9  FROM   TABLE(
 10            trades_load.trades_transform(
 11               CURSOR( SELECT * FROM trades_staging ) ) ) nt
 12  WHERE  ROWNUM <= 5;
RECORD_VALUE(TRANSACTION_ID)                                           RECORD_TYPE
---------------------------------------------------------------------- --------------------
TRADE_LEG_OT(14636, 2, 386190879, 'GBP', .724850851)                   TRADE_LEGS
LEG_VALUATION_OT(14636, 2, '12-AUG-07', 2096427733)                    LEG_VALUATIONS
TRADE_OT(14637, 'SYNONYM', 'PUBLIC', '30-AUG-05 14.16.38.000000',      TRADES
'SYNONYM', '14-JAN-06')
TRADE_LEG_OT(14637, 1, 292552620, 'GBP', .555342993)                   TRADE_LEGS
LEG_VALUATION_OT(14637, 1, '12-AUG-07', 670904643)                     LEG_VALUATIONS
5 rows selected.
We have now "labelled" each object instance with the table it is eventually going to be loaded into. When we build our multi-table insert statement later, it will be obvious why we have included this CASE expression. To complete the downcasting of the object instances to their correct subtypes, we require a final step. Using the TREAT function, we can attempt to cast each object instance to each of the subtypes, as follows.
SQL> SELECT CASE
  2            WHEN VALUE(nt) IS OF TYPE (trade_ot)
  3            THEN 'TRADES'
  4            WHEN VALUE(nt) IS OF TYPE (trade_leg_ot)
  5            THEN 'TRADE_LEGS'
  6            ELSE 'LEG_VALUATIONS'
  7         END AS record_type
  8  ,      TREAT(VALUE(nt) AS trade_ot)         AS trade_rec
  9  ,      TREAT(VALUE(nt) AS trade_leg_ot)     AS trade_leg_rec
 10  ,      TREAT(VALUE(nt) AS leg_valuation_ot) AS leg_valuation_rec
 11  FROM   TABLE(
 12            trades_load.trades_transform(
 13               CURSOR( SELECT * FROM trades_staging ) ) ) nt
 14  WHERE  ROWNUM <= 5;
RECORD_TYPE    TRADE_REC                      TRADE_LEG_REC                  LEG_VALUATION_REC
-------------- ------------------------------ ------------------------------ --------------------------
TRADE_LEGS                                    TRADE_LEG_OT(14636, 2, 3861908
                                              79, 'GBP', .724850851)
LEG_VALUATIONS                                                               LEG_VALUATION_OT(14636, 2,
                                                                             '12-AUG-07', 2096427733)
TRADES         TRADE_OT(14637, 'SYNONYM', 'PU
               BLIC', '30-AUG-05 14.16.38.000
               000', 'SYNONYM', '14-JAN-06')
TRADE_LEGS                                    TRADE_LEG_OT(14637, 1, 2925526
                                              20, 'GBP', .555342993)
LEG_VALUATIONS                                                               LEG_VALUATION_OT(14637, 1,
                                                                             '12-AUG-07', 670904643)
5 rows selected.
Of course, each record returned from the pipelined function is of one subtype only. On each record, therefore, two of the TREAT functions will return NULL and only one will yield the correct subtype. At this stage, however, we have successfully returned multiple record types from a single pipelined function and are now ready to access their respective attributes. We do this as follows.
SQL> SELECT ilv.record_type
  2  ,      ilv.trade_rec.transaction_id         AS trade_id
  3  ,      ilv.trade_rec.product_type           AS product_type
  4  ,      ilv.trade_leg_rec.leg_no             AS leg_no
  5  ,      ilv.leg_valuation_rec.valuation_date AS valuation_date
  6  FROM (
  7        SELECT CASE
  8                  WHEN VALUE(nt) IS OF TYPE (trade_ot)
  9                  THEN 'TRADES'
 10                  WHEN VALUE(nt) IS OF TYPE (trade_leg_ot)
 11                  THEN 'TRADE_LEGS'
 12                  ELSE 'LEG_VALUATIONS'
 13               END AS record_type
 14        ,      TREAT(VALUE(nt) AS trade_ot)         AS trade_rec
 15        ,      TREAT(VALUE(nt) AS trade_leg_ot)     AS trade_leg_rec
 16        ,      TREAT(VALUE(nt) AS leg_valuation_ot) AS leg_valuation_rec
 17        FROM   TABLE(
 18                  trades_load.trades_transform(
 19                     CURSOR( SELECT * FROM trades_staging ) ) ) nt
 20        WHERE  ROWNUM <= 5
 21       ) ilv;
RECORD_TYPE            TRADE_ID PRODUCT_TY     LEG_NO VALUATION
-------------------- ---------- ---------- ---------- ---------
TRADE_LEGS                                          2
LEG_VALUATIONS                                        12-AUG-07
TRADES                    14637 SYNONYM
TRADE_LEGS                                          1
LEG_VALUATIONS                                        12-AUG-07
5 rows selected.
This shows a small sample of the available attributes, but we now have everything we need for our multi-table load. A restriction of multi-table insert is that we cannot carry object instances up to the VALUES clauses on the inserts. Hence we must decompose the objects in the SELECT section, as we see above. Given this, we will now add the LOAD_TRADES procedure to our package body. Note that the pipelined function code is omitted for brevity.
SQL> CREATE OR REPLACE PACKAGE BODY trades_load AS
  2
  3     ------------------------------------------------------------
  4
  5     FUNCTION trades_transform( ...snip...
 78
 79     ------------------------------------------------------------
 80
 81     PROCEDURE load_trades IS
 82     BEGIN
 83
 84        INSERT FIRST
 85           WHEN record_type = 'TRADES'
 86           THEN
 87              INTO trades ( trade_id
 88                          , product_type
 89                          , counterparty
 90                          , trade_timestamp
 91                          , trading_book
 92                          , maturity_date
 93                          )
 94              VALUES ( trade_id
 95                     , product_type
 96                     , counterparty
 97                     , trade_timestamp
 98                     , trading_book
 99                     , maturity_date
100                     )
101           WHEN record_type = 'TRADE_LEGS'
102           THEN
103              INTO trade_legs ( trade_id
104                              , leg_no
105                              , trade_amount
106                              , currency
107                              , trade_price
108                              )
109              VALUES ( trade_id
110                     , leg_no
111                     , trade_amount
112                     , currency
113                     , trade_price
114                     )
115           WHEN record_type = 'LEG_VALUATIONS'
116           THEN
117              INTO leg_valuations ( trade_id
118                                  , leg_no
119                                  , valuation_date
120                                  , market_value
121                                  )
122              VALUES ( trade_id
123                     , leg_no
124                     , valuation_date
125                     , market_value
126                     )
127        SELECT ilv.record_type
128        ,      COALESCE(
129                  ilv.trade_rec.transaction_id,
130                  ilv.trade_leg_rec.transaction_id,
131                  ilv.leg_valuation_rec.transaction_id
132                  )                                    AS trade_id
133        ,      COALESCE(
134                  ilv.trade_leg_rec.leg_no,
135                  ilv.leg_valuation_rec.leg_no
136                  )                                    AS leg_no
137        ,      ilv.trade_rec.product_type              AS product_type
138        ,      ilv.trade_rec.counterparty              AS counterparty
139        ,      ilv.trade_rec.trade_timestamp           AS trade_timestamp
140        ,      ilv.trade_rec.trading_book              AS trading_book
141        ,      ilv.trade_rec.maturity_date             AS maturity_date
142        ,      ilv.trade_leg_rec.trade_amount          AS trade_amount
143        ,      ilv.trade_leg_rec.currency              AS currency
144        ,      ilv.trade_leg_rec.trade_price           AS trade_price
145        ,      ilv.leg_valuation_rec.valuation_date    AS valuation_date
146        ,      ilv.leg_valuation_rec.market_value      AS market_value
147        FROM (
148              SELECT CASE
149                        WHEN VALUE(nt) IS OF TYPE (trade_ot)
150                        THEN 'TRADES'
151                        WHEN VALUE(nt) IS OF TYPE (trade_leg_ot)
152                        THEN 'TRADE_LEGS'
153                        ELSE 'LEG_VALUATIONS'
154                     END AS record_type
155              ,      TREAT(VALUE(nt) AS trade_ot)         AS trade_rec
156              ,      TREAT(VALUE(nt) AS trade_leg_ot)     AS trade_leg_rec
157              ,      TREAT(VALUE(nt) AS leg_valuation_ot) AS leg_valuation_rec
158              FROM   TABLE(
159                        trades_load.trades_transform(
160                           CURSOR( SELECT * FROM trades_staging ) ) ) nt
161             ) ilv;
162
163        DBMS_OUTPUT.PUT_LINE( SQL%ROWCOUNT || ' rows inserted.' );
164
165     END load_trades;
166
167  END trades_load;
168  /
Package body created.
Our load is quite simple. We select the attributes from each record as described in earlier examples and label each row with its target table using a CASE expression. This "label" is used in the WHEN clauses of the INSERT FIRST section to determine which table the record represents and we insert accordingly. We now have a multi-table load using a pipelined function and type substitution.
testing the loads
We are now ready to test our loads. Remember that we enabled parallel at both table and function level. In multi-table inserts, the entire DML section is parallel-enabled if just one of the target tables is set to parallel. We will therefore enable parallel query and DML in our session as follows.
SQL> ALTER SESSION ENABLE PARALLEL QUERY;
Session altered.
SQL> ALTER SESSION ENABLE PARALLEL DML;
Session altered.
We will now execute our load, as follows.
SQL> exec trades_load.load_trades;
BEGIN trades_load.load_trades; END;
*
ERROR at line 1:
ORA-02291: integrity constraint (SCOTT.TRADE_LEGS_FK01) violated - parent key not found
ORA-06512: at "SCOTT.TRADES_LOAD", line 72
ORA-06512: at line 1
This is disappointing! Oracle has tried to load a TRADE_LEGS record before its parent TRADES record is present. The reason for this is simple: multi-table inserts do not guarantee the order of inserts, and we are seeing evidence of this. We might think that we can simply force the ordering of the input data, but this does not change the fact that Oracle has decided to load TRADE_LEGS before TRADES. Another version of the TRADES_LOAD package is included in the sample code (the revised LOAD_TRADES procedure is reproduced at the bottom of this article). This is the same as the above with the following additions, all added in a vain attempt to ensure that the INSERT FIRST section receives data in the order in which we need it to be loaded.
    an ORDER BY on the select from the pipelined function, to ensure that records are ordered in TRADES->TRADE_LEGS->LEG_VALUATIONS order; plus
    an ORDER BY on the CURSOR expression over the TRADES_STAGING table to ensure records are input into the pipelined function in TRADE_ID and LEG_NO order; plus
    a streaming clause on the pipelined function itself, which enables us to control the order in which data is piped to the consumer (sketched below).
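For reference, the streaming clause version of the function specification would look something like the following sketch. Note that HASH (or RANGE) partitioning, unlike ANY, requires a strongly typed ref cursor, so the SYS_REFCURSOR parameter would need to be replaced with an assumed strong type such as the TRADES_STAGING_RCT below.

-- assumed package-level strong ref cursor type over the staging table...
TYPE trades_staging_rct IS REF CURSOR RETURN trades_staging%ROWTYPE;

FUNCTION trades_transform( p_source_data IN trades_staging_rct )
   RETURN transaction_ntt
   PIPELINED
   PARALLEL_ENABLE (PARTITION p_source_data BY HASH (trade_id))
   ORDER p_source_data BY (trade_id, leg_no);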
Despite these three combined attempts at "brute force" ordering, we receive the same error message as above. The issue lies with the multi-table insert. We must therefore work around this problem in one of two ways:
    temporarily disable the foreign key constraints; or
    use deferrable foreign key constraints.
We will test both of these workarounds below.
disable/enable constraints
To ensure we can load all of our tables, we will disable the foreign key constraints, as follows.
SQL> ALTER TABLE trade_legs DISABLE CONSTRAINT trade_legs_fk01;
Table altered.
SQL> ALTER TABLE leg_valuations DISABLE CONSTRAINT leg_valuations_fk01;
Table altered.
We will now test our loading procedure again.
SQL> exec trades_load.load_trades;
249290 rows inserted.
PL/SQL procedure successfully completed.
SQL> COMMIT;
Commit complete.
We have successfully loaded our tables. As a quick sanity-check, we will count the records we loaded, as follows.
SQL> SELECT COUNT(*) FROM trades;
  COUNT(*)
----------
     49858
1 row selected.
SQL> SELECT COUNT(*) FROM trade_legs;
  COUNT(*)
----------
     99716
1 row selected.
SQL> SELECT COUNT(*) FROM leg_valuations;
  COUNT(*)
----------
     99716
1 row selected.
We will now enable our foreign key constraints, which will raise an exception if there are any data issues.
SQL> ALTER TABLE trade_legs ENABLE CONSTRAINT trade_legs_fk01;
Table altered.
SQL> ALTER TABLE leg_valuations ENABLE CONSTRAINT leg_valuations_fk01;
Table altered.
Success! Note that this method is possible for serialised systems, but for concurrent applications, it might not be suitable. As an alternative, we can use deferrable constraints, as demonstrated below.
deferrable constraints
Deferred constraints enable us to postpone the checking of constraint violations until the end of a transaction (i.e. a COMMIT). This enables us to load data in a state that temporarily violates one or more constraints, but which is rectified at a later stage. The deferrable property of constraints can only be set at the time of creation. We will therefore drop and re-create our two foreign keys as follows.
SQL> ALTER TABLE trade_legs DROP CONSTRAINT trade_legs_fk01;
Table altered.
SQL> ALTER TABLE leg_valuations DROP CONSTRAINT leg_valuations_fk01;
Table altered.
SQL> ALTER TABLE trade_legs ADD
  2     CONSTRAINT trade_legs_fk01
  3     FOREIGN KEY (trade_id)
  4     REFERENCES trades (trade_id)
  5     DEFERRABLE;
Table altered.
SQL> ALTER TABLE leg_valuations ADD
  2     CONSTRAINT leg_valuations_fk01
  3     FOREIGN KEY (trade_id, leg_no)
  4     REFERENCES trade_legs (trade_id, leg_no)
  5     DEFERRABLE;
Table altered.
Our foreign keys are now deferrable. To exploit this, we must set their states to deferred, as follows.
SQL> SET CONSTRAINT trade_legs_fk01 DEFERRED;
Constraint set.
SQL> SET CONSTRAINT leg_valuations_fk01 DEFERRED;
Constraint set.
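As a side note, the SET CONSTRAINT step could be avoided altogether by creating the constraints as INITIALLY DEFERRED, a variation on the DDL above (one constraint shown).

ALTER TABLE trade_legs ADD
   CONSTRAINT trade_legs_fk01
   FOREIGN KEY (trade_id)
   REFERENCES trades (trade_id)
   DEFERRABLE INITIALLY DEFERRED;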
Having cleared the sample tables of the data loaded in the previous example, we will re-run our loading procedure.
SQL> exec trades_load.load_trades;
249305 rows inserted.
PL/SQL procedure successfully completed.
Again, we manage to load the three tables successfully but we haven't validated the data yet. Deferred constraints are not checked until the end of a transaction, so we will now COMMIT our load.
SQL> COMMIT;
Commit complete.
Success! We will now check a small sample of the data we loaded.
SQL> SELECT * FROM trades WHERE ROWNUM <= 5;
  TRADE_ID PRODUCT_TYPE COUNTERPARTY  TRADE_TIMESTAMP            MATURITY_DATE
---------- ------------ ------------- -------------------------- -------------
      6774 TYPE         SYS           30-AUG-05 14.02.47.000000  31-MAY-06
      6778 VIEW         SYS           30-AUG-05 14.02.47.000000  04-JUN-06
      6782 VIEW         SYS           30-AUG-05 14.02.48.000000  08-JUN-06
      6786 VIEW         SYS           30-AUG-05 14.02.48.000000  12-JUN-06
      6790 TYPE         SYS           30-AUG-05 14.02.49.000000  16-JUN-06
5 rows selected.
SQL> SELECT * FROM trade_legs WHERE ROWNUM <= 5;
  TRADE_ID     LEG_NO TRADE_AMOUNT CURRENCY TRADE_PRICE
---------- ---------- ------------ -------- -----------
      6774          1   1624901166 GBP       .347733816
      6776          2    524881873 GBP       .904404062
      6778          1    622715309 GBP       .608247575
      6780          2    821449852 GBP       .508567497
      6782          1   1623359117 GBP       .674977682
5 rows selected.
SQL> SELECT * FROM leg_valuations WHERE ROWNUM <= 5;
  TRADE_ID     LEG_NO VALUATION_DATE MARKET_VALUE
---------- ---------- -------------- ------------
      2545          2 14-AUG-07        1042583556
      2546          1 14-AUG-07        1098505446
      2547          1 14-AUG-07          49145215
      2548          2 14-AUG-07          97502618
      2549          2 14-AUG-07         127231786
5 rows selected.
The primary advantage of this method over the disable/enable method is that concurrency is not affected by the constraint states of a single session. We can therefore adopt this method in a multi-user application that is likely to load the same target tables concurrently.
an alternative approach to consider
In this article, we have demonstrated a powerful new technique, combining three distinct features (multi-table insert, parallel pipelined functions and type substitution) to extend the ETL potential of Oracle. As a final note, however, there is a simple alternative that some readers might wish to explore. Remember that pipelined functions are usually defined by a single object type and collection of this type. We can easily remove the type hierarchy from our example but still load multiple tables from a single function. We would instead develop the following:
    a single object type that contains all of the attributes needed for all target tables. In our example, we would have a single denormalised type containing all of the attributes in TRADES, TRADE_LEGS and LEG_VALUATIONS;
    a collection of this single object type as usual;
    a pipelined function to prepare and pipe each output record with all denormalised attributes populated; and
    a multi-table insert that loads the attributes into their respective tables.
With reference to our trading examples, the alternative multi-table insert would be based on the following pseudo-code.
INSERT ALL
   WHEN leg_no = 1
   THEN
      INTO trades VALUES (...)
   WHEN 1=1
   THEN
      INTO trade_legs VALUES (...)
   WHEN 1=1
   THEN
      INTO leg_valuations VALUES (...)
SELECT trade_id
,      leg_no
,      ...all other attributes of TRADES, TRADE_LEGS and LEG_VALUATIONS...
FROM   TABLE(
          trades_load.trades_transform(
             CURSOR( SELECT * FROM trades_staging ) ) );
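For illustration, the single denormalised object type that this alternative assumes might look something like the following (a sketch; the hypothetical TRADES_FLAT_OT simply mirrors the TRADES_STAGING columns).

CREATE TYPE trades_flat_ot AS OBJECT
( trade_id           NUMBER
, leg_no             NUMBER
, valuation_date     DATE
, product_type       VARCHAR2(10)
, counterparty       VARCHAR2(30)
, trade_timestamp    TIMESTAMP
, trading_book       VARCHAR2(30)
, maturity_date      DATE
, trade_amount       NUMBER
, currency           VARCHAR2(3)
, trade_price        NUMBER
, market_value       NUMBER
);
/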
Completing this can be an exercise for the reader. The advantages of this method are that fewer records are piped from the function (i.e. several tables are loaded from each record) and the SELECT is much simpler (no type semantics are required). The disadvantage is that the records can become very wide and at the time of writing, pipelined functions perform badly with wide record structures (over 50 attributes).
revised load procedure
-- the revised LOAD_TRADES procedure, with the three ordering additions described earlier

 PROCEDURE load_trades IS
   BEGIN
      INSERT FIRST
         WHEN record_type = 'TRADES'
         THEN
            INTO trades ( trade_id
                        , product_type
                        , counterparty
                        , trade_timestamp
                        , trading_book
                        , maturity_date
                        )
            VALUES ( trade_id
                   , product_type
                   , counterparty
                   , trade_timestamp
                   , trading_book
                   , maturity_date
                   )
         WHEN record_type = 'TRADE_LEGS'
         THEN
            INTO trade_legs ( trade_id
                            , leg_no
                            , trade_amount
                            , currency
                            , trade_price
                            )
            VALUES ( trade_id
                   , leg_no
                   , trade_amount
                   , currency
                   , trade_price
                   )
         WHEN record_type = 'LEG_VALUATIONS'
         THEN
            INTO leg_valuations ( trade_id
                                , leg_no
                                , valuation_date
                                , market_value
                                )
            VALUES ( trade_id
                   , leg_no
                   , valuation_date
                   , market_value
                   )
      SELECT ilv.record_type
      ,      COALESCE(
                ilv.trade_rec.transaction_id,
                ilv.trade_leg_rec.transaction_id,
                ilv.leg_valuation_rec.transaction_id
                )                                    AS trade_id
      ,      COALESCE(
                ilv.trade_leg_rec.leg_no,
                ilv.leg_valuation_rec.leg_no
                )                                    AS leg_no
      ,      ilv.trade_rec.product_type              AS product_type
      ,      ilv.trade_rec.counterparty              AS counterparty
      ,      ilv.trade_rec.trade_timestamp           AS trade_timestamp
      ,      ilv.trade_rec.trading_book              AS trading_book
      ,      ilv.trade_rec.maturity_date             AS maturity_date
      ,      ilv.trade_leg_rec.trade_amount          AS trade_amount
      ,      ilv.trade_leg_rec.currency              AS currency
      ,      ilv.trade_leg_rec.trade_price           AS trade_price
      ,      ilv.leg_valuation_rec.valuation_date    AS valuation_date
      ,      ilv.leg_valuation_rec.market_value      AS market_value
      FROM (
            SELECT CASE
                      WHEN VALUE(nt) IS OF TYPE (trade_ot)
                      THEN 'TRADES'
                      WHEN VALUE(nt) IS OF TYPE (trade_leg_ot)
                      THEN 'TRADE_LEGS'
                      ELSE 'LEG_VALUATIONS'
                   END AS record_type
            ,      TREAT(VALUE(nt) AS trade_ot)         AS trade_rec
            ,      TREAT(VALUE(nt) AS trade_leg_ot)     AS trade_leg_rec
            ,      TREAT(VALUE(nt) AS leg_valuation_ot) AS leg_valuation_rec
            FROM   TABLE(
                      trades_load.trades_transform(
                         CURSOR( SELECT * FROM trades_staging ORDER BY trade_id, leg_no) ) ) nt
           ) ilv
      ORDER BY
            COALESCE(
               ilv.trade_rec.transaction_id,
               ilv.trade_leg_rec.transaction_id,
               ilv.leg_valuation_rec.transaction_id
               )
      ,     COALESCE(
               ilv.trade_leg_rec.leg_no,
               ilv.leg_valuation_rec.leg_no
               )
      ,     CASE ilv.record_type
               WHEN 'TRADES'
               THEN 0
               WHEN 'TRADE_LEGS'
               THEN 1
               ELSE 2
            END;
      DBMS_OUTPUT.PUT_LINE( SQL%ROWCOUNT || ' rows inserted.' );
   END load_trades;
END trades_load;
/

 

References:
http://www.oracle-developer.net/display.php?id=420
http://docs.oracle.com/cd/B28359_01/appdev.111/b28371/adobjadv.htm#CHDJCFAG

This article is original work; please credit the source and author when reposting.

If you spot any errors, corrections are welcome.

Email: czmcj@163.com
