Comparison of different SQL implementations
The goal of this page — which is a work in progress — is to gather information relevant for people who are porting SQL from one product to another and/or are interested in possibilities and limits of 'cross-product' SQL.
The following tables compare how different DBMS products handle various SQL (and related) features. Where possible, the tables also state how the implementations should do things, according to the SQL standard.
I will only write about subjects that I've worked with personally, or subjects which I anticipate to find use for in the near future. Subjects on which there are no significant implementation variances are not covered. Beta-versions of software are not examined.
I'm sorry about the colors. They are a result of wanting to mark each DBMS differently and at the same time wanting to be relatively nice to printers.
If you have corrections or suggestions, please contact me; even notifications about spelling errors are welcome.
Contents:
- Legend, definitions, and notes
- Features
- Data definition language (DDL)
- The SELECT statement
  - Ordering result sets
  - Limiting result sets (RANK() / ROW_NUMBER() / FETCH FIRST / LIMIT / TOP)
    - Simple limit
    - Top-n (quota-queries)
    - Limit—with offset, including a note about the importance of sorting on unique values
- The INSERT statement
- Data types
- Functions and operators
- Constraint handling
- Mixture of type and operations
- Automatic key generation (IDENTITY/SERIAL/AUTO_INCREMENT)
- Bulk operations
- Command line operations / metadata
- JDBC
- Other topics
- Related work
- Acknowledgments
- TODOs
Legend, definitions, and notes
The following SQL standard and implementations have been examined, if not otherwise stated:
Standard | The latest official version of SQL is SQL:2008.
I don't have access to the official ISO standard text, but Whitemarsh Information Systems Corporation provides a rather final draft as a zip-archive containing several files; the most important of these for this page is the one covering SQL/Foundation. No books cover SQL:2008 yet. Regarding the previous standard, SQL:2003, the only book covering the subject is in German, a language I was never any good at. Therefore, I also use the following book as a reference: |
PostgreSQL | PostgreSQL 8.4.1 on CentOS Linux. Documentation |
DB2 | DB2 Express-C v. 9.1 on Fedora Linux. Note that there are differences between various DB2 flavors; this page is about DB2 for "LUW" (Linux/Unix/Windows). Documentation |
MS SQL Server | MS SQL Server 2005 on Windows XP. Microsoft's SQL implementation is sometimes named Transact-SQL, or TSQL. In this document, I'll generally write MSSQL as a short-hand for Microsoft's SQL Server product. Documentation |
MySQL | MySQL Database Server 5.0.18 on Fedora Linux (i.e. MySQL AB's "classic" DBMS product—not MaxDB). Documentation |
Oracle | Oracle Database 11g Release 2 on Red Hat Enterprise Linux. Documentation |
Informix | Informix Dynamic Server Workgroup Edition v. 11.50 on Red Hat Enterprise Linux. Documentation |
The products are running with their default settings. This is important for MySQL and MSSQL: Their interpretation of SQL may be changed rather drastically by adjusting certain configuration options, potentially increasing the level of standard compliance (for MySQL, there is a dedicated documentation page about this). However, such non-default configuration options are not of great value for people writing SQL applications because the developer often cannot rely on non-default configuration settings.
Features
Views
Join types and features
All the DBMSes support basic INNER JOINs, but vary in their support for other join types.
In the following feature chart, a '✓' means yes; an empty table cell means no.
Join feature | PostgreSQL | DB2 | MSSQL | MySQL | Oracle | Informix
Natural joins (only tested: NATURAL LEFT JOIN) | ✓ | | | ✓ | ✓ |
USING-clause | ✓ | | | ✓ | ✓ |
FULL joins (see remark 1) (tested: SELECT...FULL JOIN...ON...=...) | ✓ | ✓ | ✓ | | ✓ | ✓
Explicit CROSS JOIN (cartesian product) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Remarks:
- Remark 1: FULL joins may be emulated with a union of a left and a right join.
Data definition language (DDL)
Copying structure
Objective: An existing table, t1, needs to be copied to a new table, t2, without copying data. I.e., only the structure/definition of the table is copied.
The SELECT statement
Ordering result sets
Limiting result sets
Simple limit
Objective: Want to only get n rows in the result set. Usually only makes sense in connection with an ORDER BY expression.
Note: This is not the same as a top-n query — see next section.
Note also: Some of the queries below may not be legal in all situations, such as in views or sub-queries.
Standard | The SQL standard provides three ways of performing a 'simple limit':
|
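For illustration, a sketch of the three approaches, using the person example table introduced in the top-n section below (n=3):

-- 1) The FETCH FIRST clause:
SELECT * FROM person
ORDER BY age ASC
FETCH FIRST 3 ROWS ONLY;

-- 2) The ROW_NUMBER() window function:
SELECT person_id, person_name, age
FROM (
  SELECT person_id, person_name, age,
         ROW_NUMBER() OVER (ORDER BY age ASC) AS rn
  FROM person
) AS t
WHERE rn <= 3;

-- 3) A cursor, from which the application fetches at most 3 rows.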
PostgreSQL | Supports all standards-based approaches.
In old PostgreSQL versions (versions 8.3 and older), a special PostgreSQL (and MySQL) specific method was used:
Documentation: |
DB2 | Supports all standards-based approaches.
Documentation:
|
MSSQL | Supports the ROW_NUMBER() (since MSSQL 2005) and cursor standards-based approaches; doesn't support FETCH FIRST .
MSSQL 2000 didn't support |
MySQL | Doesn't support the standard. Alternative solution:
|
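The MySQL-specific alternative is presumably the LIMIT clause; a minimal sketch:

SELECT * FROM person
ORDER BY age ASC
LIMIT 3;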
Oracle | Supports ROW_NUMBER ; doesn't support FETCH FIRST .
As Oracle doesn't allow
A reader of this page told me that using the Oracle-specific |
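For illustration, a sketch of the ROW_NUMBER() approach (Oracle tolerates a missing subquery alias), followed by the Oracle-specific ROWNUM pseudo-column approach which is presumably what the reader's suggestion refers to:

SELECT person_id, person_name, age
FROM (
  SELECT person_id, person_name, age,
         ROW_NUMBER() OVER (ORDER BY age ASC) rn
  FROM person
)
WHERE rn <= 3;

SELECT *
FROM (
  SELECT * FROM person ORDER BY age ASC
)
WHERE ROWNUM <= 3;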
Informix | Supports neither ROW_NUMBER(), nor FETCH FIRST.
Alternative solution (which is illegal in plain sub-queries): |
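The Informix-specific construct is presumably the FIRST keyword; a minimal sketch:

SELECT FIRST 3 *
FROM person
ORDER BY age ASC;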
Top-n query
Objective: Like the simple limit-query above, but include rows with tie conditions. Thus, the query may return more than n rows.
Some call this a quota-query.
The following examples are based on this table:
SELECT * FROM person ORDER BY age ASC;

+----------+-------------+-----+
|PERSON_ID | PERSON_NAME | AGE |
+----------+-------------+-----+
|        7 | Hilda       |  12 |
|        8 | Bill        |  12 |
|        4 | Joe         |  23 |
|        2 | Veronica    |  23 |
|        3 | Michael     |  27 |
|        9 | Marianne    |  27 |
|        1 | Ben         |  50 |
|       10 | Michelle    |  50 |
|        5 | Irene       |  77 |
|        6 | Vivian      |  77 |
+----------+-------------+-----+
Now, we only want the three (n=3) youngest persons displayed, i.e. a result set like this:
+----------+-------------+-----+
|PERSON_ID | PERSON_NAME | AGE |
+----------+-------------+-----+
|        7 | Hilda       |  12 |
|        8 | Bill        |  12 |
|        4 | Joe         |  23 |
|        2 | Veronica    |  23 |
+----------+-------------+-----+
Standard | With standard SQL, there are two principal ways to obtain the wanted data:
In the article Going To Extremes by Joe Celko, there is a description of yet another principle for performing quota queries, using scalar subqueries. Scalar subqueries are more tedious to write but might yield better performance on your system. |
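The 'fast standard SQL variant' mentioned in the product entries below presumably refers to the RANK() window function; a sketch which returns the four rows shown above (n=3):

SELECT person_id, person_name, age
FROM (
  SELECT person_id, person_name, age,
         RANK() OVER (ORDER BY age ASC) AS rnk
  FROM person
) AS t
WHERE rnk <= 3
ORDER BY age ASC;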
PostgreSQL | Supports the fast standard SQL variant.
In version 8.3 and older, PostgreSQL only supported the slow standard SQL query variant. In practice, a PostgreSQL-only method was used instead, in order to obtain acceptable query performance: (Change |
DB2 | Supports the fast standard SQL variant. |
MSSQL | Supports the fast standard SQL variant.
MSSQL 2000 supported the slow standard SQL variant. In practice, a MSSQL-only expression had to be used instead, in order to obtain acceptable query performance: |
MySQL | Supports the slow standard SQL solution. In practice, this MySQL-specific solution should be used instead, in order to obtain acceptable query performance:
(Change The offset-value 2 is the result of n-1 (remember: n is 3 in these examples). The second argument to the |
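Based on the description above, the MySQL-specific quota query presumably looks something like this sketch:

SELECT *
FROM person
WHERE age <= (
  SELECT age
  FROM person
  ORDER BY age ASC
  LIMIT 2, 1   -- skip n-1 (=2) rows, return 1 row: the n'th lowest age
)
ORDER BY age ASC;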
Oracle | Supports the fast standard SQL variant. However, as Oracle doesn't like "AS ... " after subqueries (and doesn't require naming of subqueries), the query has to be paraphrased slightly:
(Change |
Informix | On my TODO. |
Limit—with offset
Objective: Want to only get n rows in the result set, and we want the first skip rows in the result set discarded. Usually only makes sense in connection with an ORDER BY expression.
In the recipes below, basic ordering is ASCending, i.e. lowest-first queries. If you want the opposite, then change ASC->DESC and DESC->ASC at the emphasized places.
Standard | The SQL standard provides three ways of performing 'limit with offset':
|
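For illustration, a sketch of the approaches referred to below (skip=5, n=5, using the person table from the previous section); the third approach is a cursor from which the application discards the first skip rows:

-- OFFSET ... FETCH FIRST:
SELECT * FROM person
ORDER BY age ASC, person_id ASC
OFFSET 5 ROWS
FETCH FIRST 5 ROWS ONLY;

-- ROW_NUMBER() window function:
SELECT person_id, person_name, age
FROM (
  SELECT person_id, person_name, age,
         ROW_NUMBER() OVER (ORDER BY age ASC, person_id ASC) AS rn
  FROM person
) AS t
WHERE rn BETWEEN 6 AND 10
ORDER BY age ASC, person_id ASC;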
PostgreSQL | Supports all the standards-based approaches.
In version 8.3 and older, cursors should be used, or a special construct: Documentation: |
DB2 | Supports the window function based approach.
Regarding cursors: DB2 for Linux/Unix/Windows doesn't support Documentation: OLAP functions, the FETCH statement. |
MSSQL | Supports the window function and cursor based approaches.
MSSQL 2000 didn't support |
MySQL | Doesn't support the standard approaches. Alternative solution (see the sketch below):
In older versions of MySQL, the LIMIT-syntax is less clear: |
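A sketch of both LIMIT syntaxes (skip=5, n=5):

-- Current, clearer syntax:
SELECT * FROM person
ORDER BY age ASC, person_id ASC
LIMIT 5 OFFSET 5;

-- Older syntax: LIMIT skip, n
SELECT * FROM person
ORDER BY age ASC, person_id ASC
LIMIT 5, 5;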
Oracle | Supports ROW_NUMBER() . I'm unsure if Oracle's cursor support is standards-compliant.
As Oracle doesn't accept
A reader of this page told me that using the Oracle-specific |
Informix | Supports neither OFFSET ...FETCH FIRST nor ROW_NUMBER . Supports cursors.
An alternative to using cursors is to use an Informix-specific construct: |
FETCH FIRST/LIMIT/TOP queries with offset are often used in a result presentation context: To retrieve only—say—30 rows at a time so that the end-user isn't overwhelmed by the complete result set, but instead is offered a paginated result presentation. In this case, be careful not to (only) sort on a non-unique column.
Consider the following example (where PostgreSQL is used):
SELECT * FROM person ORDER BY age ASC;

 person_id | person_name | age
-----------+-------------+-----
         7 | Hilda       |  12
         8 | Bill        |  12
         4 | Joe         |  23
         2 | Veronica    |  23
         3 | Michael     |  27
         9 | Marianne    |  27
         1 | Ben         |  50
        10 | Michelle    |  50
         5 | Irene       |  77
         6 | Vivian      |  77
When ordering is performed on the non-unique age-value, ties may occur and it's not guaranteed that the DBMS will fetch the rows in the same order every time.
Instead of the above listing, the DBMS is allowed to return the following display order where Michael and Marianne are displayed in the opposite order compared to above:
SELECT * FROM person ORDER BY age ASC;

 person_id | person_name | age
-----------+-------------+-----
         7 | Hilda       |  12
         8 | Bill        |  12
         4 | Joe         |  23
         2 | Veronica    |  23
         9 | Marianne    |  27
         3 | Michael     |  27
         1 | Ben         |  50
        10 | Michelle    |  50
         5 | Irene       |  77
         6 | Vivian      |  77
Now, suppose the end-user wants the results displayed five rows at a time. The result set is fetched in two queries where the DBMS happens to sort differently, as above. We will use PostgreSQL's legacy syntax in the example:
SELECT * FROM person ORDER BY age ASC LIMIT 5;

 person_id | person_name | age
-----------+-------------+-----
         7 | Hilda       |  12
         8 | Bill        |  12
         4 | Joe         |  23
         2 | Veronica    |  23
         3 | Michael     |  27

SELECT * FROM person ORDER BY age ASC LIMIT 5 OFFSET 5;

 person_id | person_name | age
-----------+-------------+-----
         3 | Michael     |  27
         1 | Ben         |  50
        10 | Michelle    |  50
         5 | Irene       |  77
         6 | Vivian      |  77
Notice that Marianne was not displayed in any of the two split result set presentations.
The problem could be avoided if the result set ordering had been done in a deterministic way, i.e. where the unique person_id value is considered in case of a tie:
SELECT * FROM person ORDER BY age ASC, person_id ASC ...
This is safer than praying for the DBMS to behave in a predictable way when handling non-unique values.
Note: If the table is updated between parts of the result set pagination, then the user might still get an inconsistent presentation. If you want to guard against this, too, then you should see if use of an insensitive cursor is an option in your application. Use of cursors to paginate result sets usually requires that your application is stateful, which is not the case in many web-application settings. Alternatively, you could let the application cache the complete result set (e.g. in a session if your web application environment provides for sessions).
The INSERT statement
Inserting several rows at a time
Data types
The BOOLEAN type
Standard | The BOOLEAN type is optional (it has feature ID T031), which is a bit surprising for such a basic type. However, it seems that endless discussions of how NULL is to be interpreted for a boolean value are holding BOOLEAN back from becoming a core type.
The standard says that a BOOLEAN may be one of the following literals:
The DBMS may interpret NULL as equivalent to UNKNOWN. It is unclear from the specification whether the DBMS must support UNKNOWN, NULL or both as boolean literals. In this author's opinion, you should forget about the UNKNOWN literal in order to simplify the situation and let the normal SQL three-valued logic apply. It's defined that TRUE > FALSE (true larger than false). |
PostgreSQL | Follows the standard.
Accepts NULL as a boolean literal; doesn't accept UNKNOWN as a boolean literal. |
DB2 | Doesn't support the BOOLEAN type. Judging from various JDBC-documentation, it seems that IBM recommends a CHAR(1) field constrained to values '0' and '1' (and perhaps NULL) as the way to store boolean values. |
MSSQL | Doesn't support the BOOLEAN type.
Possible alternative type: the BIT type which may have 0 or 1 (or NULL) as value. If you insert an integer value other than these into a field of type BIT, then the inserted value will silently be converted to 1. Rudy Limeback has some notes about oddities with the MSSQL BIT type. |
MySQL | Offers a non-conforming BOOLEAN type. MySQL's BOOLEAN is one of many aliases to its TINYINT(1) type.
(Take care if you use TINYINT(1) and JDBC with MySQL and expect to get non-boolean values from it.) MySQL accepts the literals TRUE and FALSE as aliases to 1 and 0, respectively. However, you may also assign a value of — e.g. — 9 to a column of type BOOLEAN (which is non-conforming). If you use JDBC with MySQL, then BOOLEAN is the preferred type for booleans: MySQL's JDBC-driver implicitly converts between Java's boolean and MySQL's pseudo-BOOLEAN type. Side note: MySQL has a |
Oracle | Doesn't support the BOOLEAN type. Judging from various JDBC documentation and a discussion at Ask Tom, it seems that Oracle recommends NUMBER(1) as the way to store boolean values; it's probably wise to constrain such columns to values 0 and 1 (and perhaps NULL). |
Informix | On my TODO. |
Warning to JDBC users:
According to the JDBC standard, getBoolean() must convert a SQL 'value' of NULL to the false Java value. To check if the database value was really NULL, use wasNull().
The CHAR type
For the following section, I have used this test-SQL to try to illuminate differences (unfortunately, even standard SQL as simple as this has to be adjusted for some products):
Test steps:

CREATE TABLE chartest (
charval1 CHAR(10) NOT NULL,
charval2 CHAR(10) NOT NULL,
varcharval VARCHAR(30) NOT NULL
);
INSERT INTO chartest VALUES ('aaa','aaa','aaa');
INSERT INTO chartest
VALUES ('aaaaaa ','aaa','aaa'); -- should truncate to 'aaaaaa '
INSERT INTO chartest
VALUES ('aaaaaaaaaaaa','aaa','aaa'); -- should raise error
SELECT * FROM chartest; -- should show two rows
DELETE FROM chartest WHERE charval1='aaaaaa';
SELECT * FROM chartest; -- should show one row
SELECT * FROM chartest WHERE charval1=varcharval;
SELECT charval1 || 'X' AS res FROM chartest;
SELECT CHAR_LENGTH(charval1 || charval2) AS res FROM chartest;
SELECT CHAR_LENGTH(charval1) + CHAR_LENGTH(charval2)
AS res
FROM chartest;
Expected results, after CREATE and INSERTs:
SELECT * FROM chartest; -- should show two rows

CHARVAL1   CHARVAL2   VARCHARVAL
========== ========== ==============================
aaa        aaa        aaa
aaaaaa     aaa        aaa

DELETE FROM chartest WHERE charval1='aaaaaa';

SELECT * FROM chartest; -- should show one row

CHARVAL1   CHARVAL2   VARCHARVAL
========== ========== ==============================
aaa        aaa        aaa

SELECT * FROM chartest WHERE charval1=varcharval;

CHARVAL1   CHARVAL2   VARCHARVAL
========== ========== ==============================
aaa        aaa        aaa

SELECT charval1 || 'X' AS res FROM chartest;

res
===========
aaa       X

SELECT CHAR_LENGTH(charval1 || charval2) AS res FROM chartest;

res
===========
20

SELECT CHARACTER_LENGTH(charval1) + CHARACTER_LENGTH(charval2) AS res FROM chartest;

res
============
20
Standard |
|
PostgreSQL | Stores CHARs in space-padded form, but violates the standard by (conceptually) truncating trailing white-space before performing most functions, operators, and comparisons (like the CHARACTER_LENGTH function and the concatenation (||) operator).
|
DB2 | Follows the standard. |
MSSQL | Generally follows the standard, but (conceptually) truncates trailing white-space before performing some functions (at least before LEN()).
|
MySQL | Breaks the standard by silently inserting the string, truncated to specified column CHAR-length. (It's actually not completely silent, as it issues warnings if values were truncated: If you manually check for warnings, you will know that something bad happened, but not which of the rows are now invalid.) Violates the standard by effectively truncating all trailing spaces. The documentation states that MySQL truncates trailing spaces when CHAR values are retrieved. That may be true, but it seems that truncation even happens before the CHAR values are used as input in functions like CONCAT , CHAR_LENGTH , etc.
|
Oracle | Follows the standard, with a minor exception: Oracle doesn't remove trailing spaces which exceed the specified CHAR length, but raises an exception. |
Informix | On my TODO. |
Date and time
The TIMESTAMP type
Standard | Part of the Core requirements, feature ID F051-03. Stores year, month, day, hour, minute, second (with fractional seconds; default is 6 fractional digits). Extension to Core SQL (feature ID F411): TIMESTAMP WITH TIME ZONE which also stores the time zone. Examples of TIMESTAMP literals:
Examples of TIMESTAMP WITH TIME ZONE literals:
It's strange that TIMESTAMP WITH TIME ZONE literals are not represented as, e.g., |
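As a hedged illustration, literals of the two kinds look like this (the concrete date/time values are just examples):

-- TIMESTAMP (without time zone):
TIMESTAMP '2008-05-30 14:33:59'
TIMESTAMP '2008-05-30 14:33:59.123456'

-- TIMESTAMP WITH TIME ZONE (note that the literal itself still just
-- says TIMESTAMP; the offset makes it a WITH TIME ZONE value):
TIMESTAMP '2008-05-30 14:33:59+02:00'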
PostgreSQL | Follows that standard with one exception: In some cases, TIMESTAMP '2003-08-23 01:02:03 +02:00' is interpreted as a TIMESTAMP WITHOUT TIME ZONE (discarding the '+02:00 ' part)—not as a TIMESTAMP WITH TIME ZONE value. The standard may be illogical regarding this, but a standard is a standard...
Performs good sanity checks on inserted timestamp values; e.g. this will work: |
DB2 | DB2 has the TIMESTAMP data type, but not the extended TIMESTAMP WITH TIME ZONE type.
Performs good sanity checks on inserted timestamp values; e.g. this will work: |
MSSQL | Note that MSSQL's choice of words related to date and time is confusing: In MSSQL's vocabulary, datetime is a concrete data type, whereas in the SQL standard, datetime is a general term covering the DATE, TIME and TIMESTAMP types.
MSSQL has a strange pseudo-type called TIMESTAMP, but has deprecated it; don't use it in new code. The closest match to the SQL standard's TIMESTAMP type is DATETIME. This type stores the combination of date and time. It has a maximum of three fractional digits for seconds. Performs good sanity checks on inserted timestamp values; e.g. this will work: |
MySQL |
No matter what date/time data type is chosen in MySQL, storage of fractional seconds and time zones is not supported (the TIME type accepts time literals with fractional seconds, but discards the fractional part when storing the value). You will have to invent your own systems for such information. Note also that MySQL's choice of words related to date and time is confusing: In MySQL's vocabulary, datetime is a concrete data type, whereas in the SQL standard, datetime is a general term covering the DATE, TIME and TIMESTAMP types. MySQL has a type called TIMESTAMP, but it is quite different from the standard TIMESTAMP: It's a 'magic' data type with side effects in that it's automatically updated to the current date and time if some criteria are fulfilled. MySQL has a type called DATETIME. Like MySQL's TIMESTAMP type, it stores a combination of date and time without fractional seconds. There are no side effects associated with the DATETIME type—which makes it the closest match to the SQL standard's TIMESTAMP type. By default, MySQL's sanity checks with regard to dates and time are (deliberately) poor. For example, MySQL accepts DATETIME values of '2003-02-29 00:05:00' and '2003-01-32 00:00:00'. Such values yield warnings (which you must check for if you want to be warned), but result in a value of zero being stored.
Oracle | Follows the standard. Oracle has both the TIMESTAMP and the extended TIMESTAMP WITH TIME ZONE types.
A special gotcha applies, though: Oracle forbids columns of type TIMESTAMP WITH TIME ZONE as part of a unique key; this includes primary and foreign keys. Timestamps without time zone (and Oracle's special TIMESTAMP WITH LOCAL TIME ZONE) are accepted. Performs good sanity checks on inserted timestamp values; e.g. this will work: |
Informix | On my TODO. |
SQL functions
CHARACTER_LENGTH
Standard |
CHARACTER_LENGTH(argument) If the optional feature T061 is implemented, the function may be augmented with an indication of string unit: CHARACTER_LENGTH(argument USING string-unit ) string-unit may be UTF8 , UTF16 , UTF32 .
Returns NUMERIC. Returns NULL if the input is NULL. |
PostgreSQL | Follows the standard, providing CHARACTER_LENGTH (and CHAR_LENGTH ).
Note that PostgreSQL removes trailing (not leading) space from from CHAR values before counting. Note also that the behaviour of CHARACTER_LENGTH with regard to CHAR values has changed between versions 7.4 and 8.0 of PostgreSQL. |
DB2 | Has a CHARACTER_LENGTH function, but it's non-compliant because it requires indication of string unit, and DB2's string units are different from the standard's.
Provides the LENGTH function as well. Note that CHAR values are space-padded (like the standard says they should be), so the length of a value stored in a CHAR(10) column is always 10. Documentation: CHARACTER_LENGTH and LENGTH |
MSSQL | Doesn't have CHARACTER_LENGTH. Provides the LEN and DATALENGTH functions instead (the latter is especially valid for 'special' data types like the TEXT type). Note that MSSQL's LEN function removes trailing (not leading) spaces from CHAR values before counting; MSSQL's DATALENGTH doesn't discard spaces.
Documentation: LEN and DATALENGTH |
MySQL | Provides CHARACTER_LENGTH. Aliases: CHAR_LENGTH, LENGTH. Note that MySQL removes trailing (not leading) spaces from CHAR values before counting. |
Oracle | Doesn't have CHARACTER_LENGTH. Provides the LENGTH function instead.
Behaves in strange ways if the input is the empty string or NULL, because of Oracle's non-standard NULL handling (it considers NULL and the empty string identical 'values'). Note that CHAR values are space-padded (like the standard says they should be), so the length of a value stored in a CHAR(10) column is always 10. |
Informix | On my TODO. |
SUBSTRING
Standard | The standard defines two variants of the SUBSTRING function:
|
PostgreSQL | PostgreSQL provides three SUBSTRING flavors:
|
DB2 | Provides (since version 9) the SUBSTRING function, but requires you to indicate the string unit by appending "USING unit". The unit identifier may be CODEUNITS16, CODEUNITS32, or OCTETS. CODEUNITS16/CODEUNITS32 seem non-standard. The standard's CHARACTERS unit isn't supported by DB2. Example:
SELECT SUBSTRING(somecolumn FROM 3 USING OCTETS) FROM sometable
SELECT SUBSTRING(somecolumn FROM 3 FOR 2 USING OCTETS) FROM sometable
For old DB2 versions, use the non-standard DB2 doesn't provide any built-in regular expression facilities at all (but you may manually add PCRE capabilities). |
MSSQL | MSSQL has a SUBSTRING function, but its syntax differs from that of the standard. The syntax is:
where start is an integer specifying the beginning of the string, and length is a non-negative integer indicating how many characters to return. MSSQL has no regular expression functionality. |
MySQL | MySQL supports the standard's ordinary SUBSTRING function, with some twists (see below). No regular expression based substring extraction is supported. MySQL breaks the standard when negative values are used as either start-position or length:
|
Oracle | Doesn't provide the standard SUBSTRING function. Provides SUBSTR(input,start-pos[,length]) instead (i.e. length is optional). Oracle provides a number of SUBSTR-variants (SUBSTRB, SUBSTRC, SUBSTR2, SUBSTR4, same syntax as for SUBSTR), mainly for handling various kinds of non-latin-only string-types. Oracle doesn't have support for string-extraction with the special SQL-style regular expressions. Instead, it has the REGEXP_SUBSTR function which offers string extraction, using POSIX-style regular expression pattern matching. Documentation: SUBSTR and REGEXP_SUBSTR. |
Informix | On my TODO. |
Note: If you find yourself using SUBSTRING in a WHERE-expression, then consider if LIKE could be used instead: The use of LIKE will typically make your DBMS try to use an index, whereas it will typically not try to do so in connection with functions.
REPLACE
REPLACE
means a string function which searches a source string (the haystack) for occurrences of a string to be replaced (the needle) and replaces them with a new string (the replacement).
Standard | Not mentioned. May be obtained through a combination of other functions (have a look at the OVERLAY, POSITION and CHARACTER_LENGTH functions). A de facto standard seems to have emerged with regard to REPLACE: REPLACE(haystack:string, needle:string, replacement:string) which means 'replace needle with replacement in the string haystack'. Replacement is done case-sensitively unless otherwise stated. The REPLACE function may be handy for correcting spelling errors (and other situations), e.g. in an UPDATE statement like the sketch below:
|
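A minimal sketch of the spelling-correction idea (table and column names are hypothetical):

UPDATE products
SET    description = REPLACE(description, 'teh', 'the')
WHERE  description LIKE '%teh%';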
PostgreSQL | Follows de facto standard. Documentation |
DB2 | Follows de facto standard. Documentation |
MSSQL | Follows de facto standard with the exception that MSSQL by default works case insensitively. Documentation |
MySQL | Follows de facto standard. MySQL even works case sensitively.1 Note that the REPLACE -function is different from MySQL's non-standard REPLACE INTO expression. Documentation |
Oracle | Follows de facto standard. Documentation |
Informix | On my TODO. |
Note 1:
In this author's opinion, it's confusing that most (if not all) string-related functions in MySQL work case sensitively, while MySQL's default behaviour is to work case insensitively in plain WHERE-clauses involving string comparisons.
TRIM
LOCALTIMESTAMP
It's often important to get the value of current date and time. Below are the functions used to do that in the different implementations.
Concatenation
Constraint handling
The UNIQUE constraint
Standard | As the constraint name indicates, a (set of) column(s) with a UNIQUE constraint may only contain unique (combinations of) values.
A column—or a set of columns—which is subject to a UNIQUE constraint must also be subject to a NOT NULL constraint, unless the DBMS implements an optional "NULLs allowed" feature (feature ID 591). The optional feature adds some additional characteristics to the UNIQUE constraint:
|
PostgreSQL | Follows the standard, including the optional NULLs allowed feature. |
DB2 | Follows the non-optional parts of the UNIQUE-constraint. Doesn't implement the optional NULLs allowed feature.
Documentation (see the unique-constraint section of the page). |
MSSQL | Follows the standard—with a twist:
MSSQL offers the NULLs allowed feature, but allows at most one instance of a NULL-'value', if NULLs are allowed; i.e. breaks characteristic 2 in the above description of the standard. |
MySQL | Follows the standard, including the optional NULLs allowed feature. |
Oracle | Follows the standard—with a twist regarding multiple-column UNIQUE-constraints:
The optional NULLs allowed feature is implemented: If the UNIQUE-constraint is imposed on a single column, then the column may contain any number of NULLs (as expected from characteristic 2 in the above description of the standard). However, if the UNIQUE-constraint is specified for multiple columns, then Oracle sees the constraint as violated if any two rows
|
Informix | On my TODO. |
Mixture of type and operations
Automatic key generation
It's sometimes handy to have the DBMS handle generation of keys. The DBMSes offer various means for this. Note, however, that some database authorities warn against—at least some variants of—auto-generated keys; this is a classic database discourse.
Standard | The standard specifies a column attribute of: GENERATED ... AS IDENTITY (non-core feature ID T174+T175). When creating a table, an IDENTITY clause may be declared for certain types of columns (INTEGER being one), in one of two variants—see the sketch below.
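A sketch of the two variants (table and column names are made up):

-- Values are always generated by the DBMS:
CREATE TABLE t1 (
  id   INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  name VARCHAR(20) NOT NULL
);

-- Values are generated by the DBMS unless the user supplies one:
CREATE TABLE t2 (
  id   INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
  name VARCHAR(20) NOT NULL
);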
The column with the IDENTITY attribute will be given values in increasing order, possibly with 'holes' (...,3,4,7,...). A base table may at most contain one column with the IDENTITY attribute. NOT NULL is implied for an IDENTITY column. Normally, a column declared with IDENTITY will also be declared PRIMARY KEY, but it's not implied. The examples differ in their 'ALWAYS' vs. 'BY DEFAULT' clauses:
The standard specifies several extended options which may be declared for a generated IDENTITY column. |
PostgreSQL | PostgreSQL doesn't support the standard's IDENTITY attribute.
PostgreSQL's best offering for a column with auto-generated values is to declare a column of 'type' SERIAL:
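A sketch (table and column names are made up):

CREATE TABLE mytable (
  mytable_id SERIAL PRIMARY KEY,
  name       VARCHAR(20) NOT NULL
);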
'SERIAL' is a short-hand for creating a sequence and using that sequence to create unique integers for a column. If the table is dropped, PostgreSQL will drop the sequence which was created as a side-effect of using the SERIAL type. As a user may manually insert or update a value in a column created as SERIAL, this comes closest to the standard's GENERATED BY DEFAULT AS IDENTITY variant. Semantics like the standard's GENERATED ALWAYS AS IDENTITY variant require extra work.
Another option is to add the |
DB2 | Follows standard, albeit with some restrictions on how identity columns may (not) be added to an existing table, etc.
Documentation: CREATE TABLE syntax and description of identity columns. |
MSSQL | MSSQL offers IDENTITY as a column property, but with a different syntax than the standard's specification. An example of creating a table with an IDENTITY column:
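A sketch (names are made up; the (1,1) arguments—seed and increment—are optional):

CREATE TABLE mytable (
  mytable_id INT IDENTITY(1,1) PRIMARY KEY,
  name       VARCHAR(20) NOT NULL
);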
With MSSQL's IDENTITY attribute, the user cannot manually insert the value, unless the user has first run SET IDENTITY_INSERT tablename ON. I.e., MSSQL's IDENTITY type is closest to the standard's GENERATED ALWAYS AS IDENTITY variant. Documentation: The IDENTITY property and SET IDENTITY_INSERT.
MySQL | MySQL doesn't support the standard's IDENTITY attribute.
As an alternative, an integer column may be assigned the non-standard AUTO_INCREMENT attribute, as in the sketch below:
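A sketch (names are made up):

CREATE TABLE mytable (
  mytable_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name       VARCHAR(20) NOT NULL
);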
Columns with the AUTO_INCREMENT attribute will—under certain conditions—automatically be assigned a value of <largest value in column>+<at least 1>. Look in MySQL's documentation for the (rather extensive) details. A table can have at most one column with the AUTO_INCREMENT attribute; that column must be indexed (it doesn't have to be a primary key, as in the example SQL above) and cannot have a DEFAULT value attribute. It's probably not too far-fetched to think of MySQL's AUTO_INCREMENT feature as a column whose default value is <largest value in column>+1. The nice thing about this approach is that the automatic value insertion should never fail, even though some of the column's values might have been manually set—i.e. the combined advantages of the standard's GENERATED ALWAYS and GENERATED BY DEFAULT variants. The drawback is that it might result in more house-keeping: The system may need extra table locks when performing row updates/insertions to protect against ghost updates in concurrent transactions—thus slowing down the system in case of many concurrent updates/insertions.
Oracle | Oracle doesn't support the standard's IDENTITY attribute.
If you want an auto-incrementing column in Oracle, then create a sequence and use that sequence in a trigger associated with the table. Example: For the table mytable, you want the mytable_id column to be of integer type, with auto-incrementing values—see the sketch below:
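A sketch of the sequence-plus-trigger pattern (one common way to write it; details vary):

CREATE SEQUENCE mytable_seq;

CREATE TABLE mytable (
  mytable_id INTEGER PRIMARY KEY,
  name       VARCHAR2(20) NOT NULL
);

CREATE OR REPLACE TRIGGER mytable_autoinc
BEFORE INSERT ON mytable
FOR EACH ROW
BEGIN
  SELECT mytable_seq.NEXTVAL INTO :NEW.mytable_id FROM DUAL;
END;
/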
This will create an auto-incrementing column resembling the Note: If 'nice', incrementing values aren't important, you may use Oracle's SYS_GUID function as the default for a column; that way, universally unique identifiers will be assigned if you don't indicate a value for the column in new rows. Documentation: |
Informix | On my TODO. |
Note: IBM has a page comparing IDENTITY columns and sequences.
Bulk operations
TRUNCATE TABLE
Often, it's useful to be able to remove all rows from a large table in a quick way. And often, DELETE
isn't as quick as you'd like it to be. So several DBMSes implement a TRUNCATE
operation. Typically, truncating means that deletion isn't associated with triggers which may exist for the table, and typically, truncating involves little (if any) transaction log activity.
Standard | The SQL standard defines the TRUNCATE TABLE tablename statement (optional feature ID F200, new in SQL:2008) as: 'Delete all rows of a base table without causing any triggered action.' Unfortunately, the standard leaves several aspects unspecified.
|
PostgreSQL | Follows the standard.
In PostgreSQL, see the documentation for variations and restrictions. Most importantly, you need to be the owner of the table to be truncated (or work as a superuser); alternatively, you need to have the TRUNCATE privilege on the table. Note also the nice—but potentially dangerous—
DB2 | Almost follows the standard (since version 9.7). DB2 requires that the IMMEDIATE keyword be added to the ordinary TRUNCATE TABLE statement, e.g.:
TRUNCATE TABLE someschema.sometable IMMEDIATE
TRUNCATE TABLE must be the first statement in a transaction. A transaction starting with TRUNCATE TABLE may include other statements, but if the transaction is rolled back, the TRUNCATE TABLE operation is not undone. DB2's TRUNCATE TABLE operation has a number of optional arguments; see the documentation for more on this—especially, the REUSE STORAGE argument may be important for ad-hoc DBA tasks.
In DB2 versions < 9.7, you may abuse the
The Documentation: |
MSSQL | Follows the standard.
In MSSQL, you need to have at least ALTER permission on the table to be truncated.
MySQL | MySQL has a TRUNCATE TABLE statement, but it doesn't always follow the standard.
Note that in some cases, MySQL's truncate command is really the equivalent of an unrestricted DELETE command (i.e.: potentially slow and trigger-invoking). Its behaviour depends on which storage engine the table is managed by. When using InnoDB (transaction safe) tables, |
Oracle | Follows the standard.
Note, that the You may find that Needed privileges—Quoting from the documentation: |
Informix | On my TODO. |
Command line procedures / metadata
The following are not necessarily SQL operations, but rather a description of how different operations are performed in the command line interface provided by each product.
The shape of the command line interfaces in the commercial products is depressing. Vendors, please do something about it: Not all database developers like to use slow GUIs for technical stuff. And sometimes, DBMS work is performed over slow Internet lines which makes a decent command line interface vital.
Fortunately, a tool like HenPlus exists. It can be a pain to install, but once working, it's nice to work with.
Starting the command line interface
Standard | Not defined. |
PostgreSQL | Run:psql which should be in the PATH in any sensible installation. PostgreSQL's command line interface is very user friendly. It has command history (press arrow-up for previous commands) and a fairly well-working command completion feature. |
DB2 | Run: db2 -t (The -t argument tells the command line processor to use a semicolon as the statement terminator instead of the default (newline). This allows for multi-line SQL statements.)
The db2 'utility' doesn't seem to have anything resembling useful command history or command completion. DB2 also has a 'utility' called db2batch.
MSSQL | The command line interface is started by runningsqlcmd
In MSSQL 2000, the command line interface was started by running An alternative to osql—apart from HenPlus, mentioned above—is SQSH which should work on any modern open source operating system, except it doesn't seem to support Kerberos, so you need to log into the database using a database-account (not a Windows-account). |
MySQL | Run:mysql
If you need help on the optional command line options, see the man page. On platforms like Linux and FreeBSD (which have decent readline-capabilities), MySQL's command line interface is simply great; not much else to say. MySQL's command line interface is said to be rather poor on Windows, though. |
Oracle | Run:sqlplus
A unique feature of Oracle is that a web-based administration interface is provided, as a supplement to the local administration software. The URL to the interface is typically |
Informix | Informix' command line utility is called dbaccess . If run without arguments, it starts a menu system. If you simply want to shoot SQL statements off to the database, another form is more convenient, at least on unix: echo 'SELECT foo FROM bar' | dbaccess databasename
|
Getting a list of databases
Standard | Not specified, as far as I know. (By the way: The SQL standard doesn't have the concept of a database as a container of schemas; instead, the standard specifies that schemas are contained in a catalog.) |
PostgreSQL | Using SQL: SELECT datname FROM pg_catalog.pg_database
When working in the Alternative (when working from the terminal, not in Documentation: The psql tool, the pg_database catalog. |
DB2 | Offers the LIST DATABASE DIRECTORY command, but only when working in the db2 command line processor (i.e. not when working from db2batch); this command's output is human readable, but sub-optimal as a machine-readable format.
|
MSSQL |
EXEC SP_HELPDB
|
MySQL |
SHOW DATABASES
|
Oracle | In Oracle, there is a one-to-one relationship between databases and instances (unless you work with a clustered Oracle system). You can get a list of instances; the way to do it depends on the operating system which Oracle is running on:
Documentation: oratab |
Informix | Connect to the sysmaster database (all users are allowed to do this) and run: SELECT name FROM sysmaster:sysdatabases
|
Getting a list of schemas
Getting a list of tables
Standard | Part 11 of the SQL standard specifies the INFORMATION_SCHEMA schema which must be part of all database catalogues. The schema may be used like this:
or (often more relevant):
See a warning about potential case sensitivity problems below. |
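A sketch of the kind of queries meant:

-- All tables (and views) visible in the catalog:
SELECT table_name
FROM   INFORMATION_SCHEMA.TABLES;

-- Often more relevant: base tables only
SELECT table_name
FROM   INFORMATION_SCHEMA.TABLES
WHERE  table_type = 'BASE TABLE';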
PostgreSQL | Follows the standard, except for some gotchas mentioned below.
In command-line context, it's easier to use the following non-SQL command instead of querying the INFORMATION_SCHEMA: Documentation: The |
DB2 | Doesn't provide the standard INFORMATION_SCHEMA . Instead, DB2 offers the SYSCAT schema (catalog) which is somewhat compatible.
Offers what is probably a shorthand to some system catalog query: |
MSSQL | Follows the standard. Sometimes, the SP_TABLES system stored procedure is easier to use.
Documentation:
|
MySQL | Follows the standard, except that MySQL doesn't support schemas, so one might say that MySQL's INFORMATION_SCHEMA is really an 'INFORMATION_DATABASE' or 'INFORMATION_CATALOGUE'.
In command-line context, it's easier to use the following non-standard SQL: Documentation: |
Oracle | Doesn't provide the standard INFORMATION_SCHEMA. Provides a data dictionary system instead.
The quickest way to get a usable list of 'normal' tables in the current schema: |
Informix | Doesn't provide the standard INFORMATION_SCHEMA out of the box. A few of the standard's INFORMATION_SCHEMA views may be added by running a special script, though.
Informix offers a set of system catalogs instead. To get a list of tables: The above query will include views and other objects; if you want base tables only: |
Warning about a general case sensitivity gotcha
Note that there may be case sensitivity issues involved when using meta-data views like those in the INFORMATION_SCHEMA. Generally, the standard states that the names of identifiers (such as table names) are implicitly converted to uppercase, unless double-quotes are used when referring to the identifiers. The same goes for identifiers used in queries: A query like
SELECT foo FROM tablename
is implicitly converted to
SELECT FOO FROM TABLENAME
If you create your table as
CREATE TABLE testtab (id INTEGER PRIMARY KEY)
then a query like
SELECT * FROM testtab
should work fine, and
SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME='TESTTAB'
should work as well, while the following query will probably fail:
SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME='testtab'
Warning about INFORMATION_SCHEMA gotchas in PostgreSQL
Warning: PostgreSQL's case-conversion rules for unquoted identifiers (such as table names) are non-standard: PostgreSQL converts the identifiers to lower case, instead of converting to upper case. This means that you may try altering the case of identifier names used for queries in the INFORMATION_SCHEMA if you experience unexpected, empty metadata queries.
Note also that due to PostgreSQL's handling of constraint names, the INFORMATION_SCHEMA cannot safely be used to deduce referential constraints; for this, you have to use PostgreSQL's pg_catalog system-schema.
Getting a table description
Standard | Part 11 of the SQL standard specifies the INFORMATION_SCHEMA schema which must be part of all database catalogues. The schema may be used like this:
—or like this (more verbose):
To get information about constraints, involved columns and (possibly) referenced columns, a query like this may be used: If you don't care about potential namespace conflicts, you may leave out the lines commented with " See also: Warning about potential case sensitivity problems above. |
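A sketch of a basic column listing (constraint information requires joining several views and isn't shown here):

SELECT column_name,
       data_type,
       character_maximum_length,
       is_nullable
FROM   INFORMATION_SCHEMA.COLUMNS
WHERE  table_name = 'TESTTAB'
ORDER  BY ordinal_position;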
PostgreSQL | Follows the standard, except for some gotchas mentioned above.
In command-line context it's easier to use this non-SQL command: |
DB2 | Doesn't provide the standard INFORMATION_SCHEMA.
To obtain (very) basic information about a table: To get information about constraints, including involved/referred columns, a query like the following may be used, although the Documentation:
|
MSSQL | Follows the standard, except that
Often, the Documentation: |
MySQL | Follows the standard, except that
In command-line context it's easier to use this non-SQL command: Documentation: |
Oracle | Doesn't provide the standard INFORMATION_SCHEMA. Offers data dictionary views instead.
To get (very) basic information: To get information on constraints, including foreign (referred) table/column information, a query like this may be used (adjust tablename in one of the last lines): To get information on indexes on a table, a query like this may be used (adjust tablename in one of the last lines): Documentation:
|
Informix | Doesn't provide the standard INFORMATION_SCHEMA out of the box. If a special script is run, an INFORMATION_SCHEMA may be added which allows for using the most basic standards-based table description query.
In practice, an Informix-only query is used. The following query provides very basic table information, excluding constraints: Notice that the table name is in lower case. The coltype values are numeric codes which need to be looked up in order to provide meaning.
Manually telling the DBMS to collect statistics
In most DBMSes, it's possible to enable automatic statistics gathering, but sometimes, it's nice to be able to manually tell the DBMS to gather statistics for a table (or a number of tables).
Standard | Not standardized. |
PostgreSQL |
ANALYZE tablename
If the tablename parameter is left out, then statistics are gathered for all tables in the current database. |
DB2 |
RUNSTATS ON TABLE schema-name.table-name AND INDEXES ALL (many variations/options available). Documentation: |
MSSQL | First, you have to add statistics to the table: CREATE STATISTICS stats_name (The CREATE STATISTICS step is not needed for indexed columns. Thus, this step may be skipped if you are satisfied with keeping statistics on indexed columns only.)
The statistics may then be updated when needed: Having to explicitly mention tables and columns can be tedious, and in many cases, the Documentation: CREATE STATISTICS, UPDATE STATISTICS, sp_createstats, sp_updatestats |
MySQL |
ANALYZE TABLE tablename
|
Oracle | Oracle offers to estimate (quick) or compute (thorough) statistics for a database object. The quick way to do this is to use the deprecated ANALYZE command which can be used in various ways, e.g.
—Or: If you want to stay away from deprecated features (although I doubt that Oracle will remove ANALYZE...STATISTICS... any time soon), you need to use the DBMS_STATS package. |
Informix | On my TODO. |
Getting a query explanation
Standard | Not standardized. |
PostgreSQL |
EXPLAIN <query>
|
DB2 | The easiest way to get a query explanation is to save the query in a file (without a terminating semicolon), and then run a special command-line utility:db2expln -database databasename -stmtfile query.sql -terminator ';' -terminal In the above example, the query has been saved to a file called "query.sql". In some situations, you may want to use the If you prefer to get the explanation through SQL:
|
MSSQL | MSSQL can be put in a query explanation mode where queries are not actually executed, but a query explanation is returned instead:SET SHOWPLAN_TEXT ON
The query explanation mode is turned off by running SET SHOWPLAN_TEXT OFF. |
MySQL |
EXPLAIN <query>
|
Oracle |
EXPLAIN PLAN FOR <query> After the query has run, do the following to get the plan explanation: SELECT plan_table_output FROM table(dbms_xplan.display())
|
Informix | On my TODO. |
Turning on query timing
JDBC
JDBC driver jar file name, and general documentation
PostgreSQL | The PostgreSQL JDBC Driver: postgresql-postgresqlversion-jdbcbuild#.jdbc4.jar |
DB2 | IBM Data Server Driver for JDBC: db2jcc.jar (included in default DB2 client software installations; may also be downloaded separately, after registration) |
MSSQL |
Microsoft's driver: sqljdbc.jar Alternative: The open source JTDS driver: jtds-version.jar Documentation: |
MySQL | The MySQL Connector/J driver: mysql-connector-java-version-bin.jar |
Oracle | Oracle's JDBC drivers: ojdbc5.jar (for Java 5), ojdbc6.jar (for Java 6) |
Informix | IBM's Informix JDBC driver: ifxjdbc.jar (download requires registration and filling out annoying questionnaires, and an installer which only works with some JREs has to be run to unpack the driver) |
JDBC driver class name
JDBC connection URL
Other topics
Dummy table use
Some DBMSes let you perform a query like this:
SELECT 1+1
answering
2
With other DBMSes, you need to insert a dummy-table expression to obtain the same result:
SELECT 1+1 FROM dummy-table
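As far as I know, this is how the dummy-table situation plays out in the products covered on this page:

-- PostgreSQL, MySQL and MSSQL don't need a dummy table:
SELECT 1+1;

-- Oracle has the built-in DUAL table:
SELECT 1+1 FROM DUAL;

-- DB2 has the built-in SYSIBM.SYSDUMMY1 table:
SELECT 1+1 FROM SYSIBM.SYSDUMMY1;

-- In Informix, a common idiom is to use the systables catalog:
SELECT 1+1 FROM systables WHERE tabid = 1;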
Obtaining DBMS version
Standard | SELECT CHARACTER_VALUE |
PostgreSQL | Follows the standard. An alternative, non-standard function may be used: SELECT VERSION()
|
DB2 |
SELECT service_level FROM SYSIBMADM.ENV_INST_INFO
—or run the special Documentation: |
MSSQL | MSSQL's implementation of the INFORMATION_SCHEMA doesn't seem to include the SQL_IMPLEMENTATION_INFO view. Instead, you may use SELECT SERVERPROPERTY('ProductVersion') (just the version), or SELECT @@VERSION (verbose, harder to parse). Documentation: SERVERPROPERTY, @@VERSION |
MySQL | MySQL's INFORMATION_SCHEMA doesn't include the SQL_IMPLEMENTATION_INFO view.
Work-around: |
Oracle |
SELECT banner FROM v$version
|
Informix | Using SQL: SELECT dbinfo('version','full') FROM systables WHERE tabid=1
From the command line: Documentation:
|
Standard TCP/IP port
Standard | Not specified | ||
PostgreSQL | 5432 | For security reasons, PostgreSQL doesn't listen to non-local TCP interfaces by default. | Documentation |
DB2 | 50000 | ||
MSSQL | 1433 | By default, MSSQL Express Edition doesn't listen for TCP connections. | |
MySQL | 3306 | ||
Oracle | 1521 | Documentation | |
Informix | 9088 (unencrypted) | In versions prior to version 11, the default port was 1526.
Diagnostic log
Each DBMS has different ways to record diagnostic information (event logs).
Related work
- Mimer Information Technology AB (makers of the Mimer SQL DBMS) has an interesting feature comparison chart, displaying what SQL:1999 features are implemented in different commercial products. May be biased because it's created by a DBMS vendor.
Mimer also has lists of reserved words. - Wikipedia has a Comparison of relational database management systems page. And a Wikibook called SQL dialects reference is in the works.
- Chris Fehily's SQL: Visual QuickStart Guide teaches SQL by first describing the standards-based (SQL:2003) approach, and then how to adjust to the real world, using MS Access, MSSQL, Oracle, MySQL, PostgreSQL, and DB2. (Full disclosure note: I was technical editor on the second edition of the book.)
- Alessandro Tanasi: Database datatype comparison sheet.
- Peter Gulutzan (who works for MySQL AB) has written several articles related to the subject. He has also written two related books:
- SQL-99 Complete, Really (co-authored with Trudy Pelzer) is said to be good.
- SQL Performance Tuning (also co-authored with Trudy Pelzer), mentions quite a few cross-product SQL issues (primarily related to performance, of course).
-
Some DBMS evaluations performed at the Astrogrid Virtual Observatory (focus on spatial functionality):
- Various comparisons of MySQL, PostgreSQL and DB2.
- Comparison of availability and names of mathematical functions in major DBMS products.
- Autumn '03: Comparison of DB2, MySQL, and Postgres, comparing ease of use, scalability and performance of two types of spatial joins.
- Autumn '02: Comparison of PostgreSQL, MySQL, Oracle, SQL Server and DB2.
- Oracle / SQL Server / DB2 / Mckoi / MySQL Database Equivalents.
- Uday Parmar: Open Source Database Feature Comparison Matrix.
Note: Created by employees of a database vendor. - SQLite:
- Survey of NULL-handling: NULL Handling in SQLite Versus Other Database Engines.
- SQL Features That SQLite Does Not Implement
- Bowman/Emerson/Darnovsky's The Practical SQL Handbook—Using SQL Variants is OK for this subject, although it is rather out-dated and (worse) doesn't include any guidance on working with open source DBMSs.
- Kevin E. Kline's SQL in a Nutshell from O'Reilly is a good reference. O'Reilly has also published Jonathan Gennick's SQL Pocket Guide which looks good (but I haven't read it).
- The Analysis and Solutions Company:
- Building Truly Portable Database Applications in PHP includes advice on DBMS differences (some of the presentation is PHP-specific, as the title indicates).
- Database Portability: Date and Timestamp Columns.
- Jutta Horstmann OSDBmigration.
- Lewis Cunningham: A comparison of data types between various databases - Oracle, MySQL, DB2, Ingres, SQL Server, Firebird, Postgres.
- Bristle Software SQL Tips contains tips with variants for several DBMSes.
- ConnStr.net: .Net/ODBC connection strings to all sorts of database systems.
- See also my DBMS links.
Acknowledgments
The following people have provided comments, suggestions and/or fixes, resulting in content changes on this page:
- Ian Barwick
- Chester Kustarz
- Bruno Wolff III
- Carsten Pedersen
- Jürgen Auer
- Edi Stocker
- Tzvetan Tzankov
- Jess Robinson
- Gordon P. Hemsley
- Philip Nelson
- Andreas Plesner Jacobsen
- Clive Page
- Holger Jakobs
- Dennis Björklund
- Chris Fehily
- Alf-Ivar Holm
- Joseph Fuda
- J M Sykes
- Greg Sabino Mullane
- Jari Aalto
- Robert Jones
- Greg Fortune
- Dick Leone
- Neil Conway
- Markus Schaber
- James Denny
- Neal Lindsay
- David Rowley
- Erik Berglund
- Michael Swart
- Radosław Zieliński
- Jonathan Giroux
- Thomas Kellerer
- Guillaume Taglang
- Claus Samuelsen
- David Fetter
- Mike Wilson
- James Beckett
- Joachim Selke
- Radim Kolar
- Stewart Ashton
(In chronological order.)